cognitio-ml-expert

A model specialized in machine learning and AI training topics, fine-tuned from microsoft/Phi-3-mini-4k-instruct.

πŸ“Š Base Model

  • Base Model: microsoft/Phi-3-mini-4k-instruct
  • Fine-tuning: QLoRA (4-bit + LoRA)
  • Dataset: Cognitio ML Expert Dataset

🎯 Capabilities

This model is trained to answer questions about:

  • βœ… Machine Learning fundamentals
  • βœ… Deep Learning architectures (CNN, RNN, Transformer, GNN)
  • βœ… Optimization algorithms (SGD, Adam, AdamW)
  • βœ… Fine-tuning techniques (LoRA, QLoRA, PEFT)
  • βœ… Frameworks (PyTorch, TensorFlow, JAX)
  • βœ… Best practices y troubleshooting

πŸ“š Dataset

The dataset covers:

  β€’ Neural network architectures
  β€’ Optimization algorithms
  β€’ Fine-tuning techniques
  β€’ Frameworks and libraries
  β€’ Best practices

πŸš€ Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and attach the LoRA adapter weights
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
model = PeftModel.from_pretrained(base_model, "Kimberlyindiva/cognitio-ml-expert")

# Tokenize a prompt and generate
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
inputs = tokenizer("What is a Transformer?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
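Since the base model is instruction-tuned, wrapping prompts in Phi-3's chat format generally gives better answers than raw text. In practice `tokenizer.apply_chat_template` builds this for you; the sketch below constructs the format by hand so the layout is visible. The special tokens (`<|user|>`, `<|end|>`, `<|assistant|>`) are an assumption based on the Phi-3 model card and should be verified against the tokenizer's chat template.

```python
def build_phi3_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the Phi-3 chat format.

    Assumed format (verify against the tokenizer's chat template):
    a user turn, a turn-end token, then the assistant cue so the
    model continues with its answer.
    """
    return f"<|user|>\n{user_message}<|end|>\n<|assistant|>\n"


prompt = build_phi3_prompt("What is a Transformer?")
print(prompt)
```

You would then pass `prompt` to the tokenizer instead of the bare question.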

πŸ“ Fine-tuning

To fine-tune this model:

python fine_tune_starcoder_cognitio.py

See fine_tune_starcoder_cognitio.py for details.
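The QLoRA setup described above (4-bit quantized base model plus LoRA adapters) can be sketched roughly as follows. This is an illustrative configuration, not the exact one used to train this model: the rank, alpha, dropout, and target module names are assumptions, and the `target_modules` list reflects Phi-3's fused attention projections as documented upstream.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the base model to 4-bit NF4 to fit training in limited VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA adapters; hyperparameters are illustrative
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["qkv_proj", "o_proj"],  # Phi-3 attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Training then proceeds with a standard `Trainer`/`SFTTrainer` loop over the dataset; only the adapter weights are updated and saved.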

πŸ“„ License

MIT License
