# cognitio-ml-expert

A model specialized in machine learning and AI training, fine-tuned from microsoft/Phi-3-mini-4k-instruct.
## Base Model
- Base Model: microsoft/Phi-3-mini-4k-instruct
- Fine-tuning: QLoRA (4-bit quantization + LoRA adapters)
- Dataset: Cognitio ML Expert Dataset
## Capabilities

This model is trained to answer questions about:
- Machine learning fundamentals
- Deep learning architectures (CNN, RNN, Transformer, GNN)
- Optimization algorithms (SGD, Adam, AdamW)
- Fine-tuning techniques (LoRA, QLoRA, PEFT)
- Frameworks (PyTorch, TensorFlow, JAX)
- Best practices and troubleshooting
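One item above is worth a concrete illustration: LoRA (and its 4-bit variant, QLoRA) trains a pair of low-rank matrices instead of the full weight update, which is why it is so parameter-efficient. A back-of-the-envelope sketch with illustrative sizes (the dimensions and rank below are hypothetical, not this model's configuration):

```python
# LoRA replaces a full weight update (d_out x d_in parameters) with two
# low-rank factors A (d_out x r) and B (r x d_in), trained while the base
# weights stay frozen. Sizes here are for illustration only.
d_out, d_in, r = 3072, 3072, 16

full_update_params = d_out * d_in    # parameters in a full update of one matrix
lora_params = d_out * r + r * d_in   # parameters LoRA actually trains

print(full_update_params)  # 9437184
print(lora_params)         # 98304
print(round(100 * lora_params / full_update_params, 2))  # 1.04 (% of the full update)
```

At rank 16, the adapter trains roughly 1% of the parameters a full fine-tune of that matrix would require, which is what makes 4-bit + LoRA training feasible on a single GPU.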
## Dataset

The dataset covers:

- Neural network architectures
- Optimization algorithms
- Fine-tuning techniques
- Frameworks and libraries
- Best practices
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
model = PeftModel.from_pretrained(base_model, "Kimberlyindiva/cognitio-ml-expert")

# Generate a response
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
inputs = tokenizer("What is a Transformer?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Fine-tuning

To fine-tune this model, run:

```bash
python fine_tune_starcoder_cognitio.py
```

See fine_tune_starcoder_cognitio.py for details.
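The card above states the model was trained with QLoRA (4-bit base model + LoRA adapters). A minimal configuration sketch of that kind of setup using transformers, bitsandbytes, and peft; the rank, alpha, and target module names below are illustrative assumptions, not this model's exact recipe:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with its weights quantized to 4-bit (NF4)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    quantization_config=bnb_config,
)

# Attach LoRA adapters; only these small matrices are trained
lora_config = LoraConfig(
    r=16,                                   # rank of the low-rank update (assumed)
    lora_alpha=32,                          # scaling factor (assumed)
    target_modules=["qkv_proj", "o_proj"],  # assumed Phi-3 projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a small fraction of total params
```

The quantized base weights stay frozen; a Trainer (or similar loop) then updates only the adapter matrices, which is what keeps memory usage low enough for a single GPU.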
## License

MIT License