HumanPet Experiment

Goal

The goal of the HumanPet series is an LLM that talks in a more human-like way and can express itself through visual emotions. The name HumanPet reflects this goal (Human = it talks more human-like; Pet = it can display emotions like a virtual pet).

We also want to give HumanPet a certain personality, best described as happy and energetic.

Stages

We'll release multiple HumanPet models:

| Stage | Name | Purpose | Approach | Hallucinations | Note | Time |
|---|---|---|---|---|---|---|
| 1 | HumanPet X1 1.7B | Less robotic; classify emotion and intent | Existing dataset | Apparent | No effort. | Now |
| 2 | HumanPet X2 1.7B | Less robotic; limited visual emotion; start of a personality | Stage 1 + NLP | Apparent | Close to no effort, just configuring. | March 2026 |
| 3 | HumanPet Translator Semi 1.7B | Rewrite boring text into silly text with some visual emotion | Stage 2 + generated professional text as translation input | Reduced | A little effort. We use a professional LLM to rewrite silly sentences back into professional text, so the model learns to rewrite sentences rather than just swap words (as NLP does now). It is uncertain whether this stage will be made; we might use another method. | March-April 2026 |
| 4 | HumanPet X3 1.7B | Less robotic; full visual emotion; full personality | Stage 2 + auto-translated datasets (stage 3) / manual rewriting | Reduced | Auto-translated datasets: a little effort, using stage 3 to translate an existing dataset into simple silly text. Manual rewriting: big effort; will take a long time. We remove some of the hallucination by omitting information that isn't in the context. | 2026 |
| 5 | HumanPet Translator 1.7B | Rewrite boring text into accurate silly text with full visual emotion | Stage 4 | Reduced | No effort; we just reuse the dataset from stage 4. See below for an explanation of the translator. It is uncertain whether this stage will be made; we might use another method. | 2026 - early 2027 |
| 6 | HumanPet Instruct 1.7B | Less robotic; full visual emotion; full personality | Stage 4 + manual/generative instruction writing | Minimal | Medium effort. The final model, which hopefully fulfills our goal. Instructions (via system prompt) will be written by us, by the translator, or both. It is uncertain whether this stage will be made; it all depends on the results of stage 4. | 2027 |
| 7 | HumanPet Tool 1.7B | Less robotic; full visual emotion; full personality; tools | Stage 6 + translated tool datasets (stage 5) | Minimal | Low effort. Use the translator (stage 5) to create a silly tool dataset. It is uncertain whether this stage will be made; it all depends on the results of stages 5 and 6. | 2027 |
| 8 | HumanPet2 Instruct ??? B | Less robotic; full visual emotion; full personality | Stage 6 + other datasets translated (stage 5) | Close to none | Low effort. The final model (stage 6) reinforced with datasets that contain less hallucination, translated with the translator (stage 5). It is uncertain whether this stage will be made; it all depends on the results of stages 5 and 6. | 2027 - early 2028 |

Unfortunately, this project has come to an end. Note that all stages are experimental. Stages 1, 2, and 3 are done, but the remaining stages will most likely never be completed. "X1"/"X2"/etc. stands for "experimental model, stage 1", "experimental model, stage 2", and so on.

Because this is the end of the HumanPet experiment, we're releasing all model files soon.

The main reason we wanted to include a translator at stages 3 and 5 is to be able to turn other datasets into silly ones. That way we could have used tool, chain-of-thought, or other instruct datasets while keeping the personality intact.
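The pair-generation idea behind the translator can be sketched in a few lines: take silly sentences we already have, rewrite them back into plain professional text, and train on the resulting professional-to-silly pairs. In the sketch below, `rewrite_to_professional` is a hypothetical stand-in for the external professional LLM call, not the actual pipeline:

```python
import json

def rewrite_to_professional(silly: str) -> str:
    # Hypothetical stand-in for the external "professional" LLM call;
    # the real pipeline would query a strong instruct model here.
    return silly.replace("Hiyar", "Hello").replace("!!", ".")

def build_translation_pairs(silly_sentences):
    # Each training pair maps professional input -> silly target, so the
    # translator learns to rewrite whole sentences, not just swap words.
    pairs = []
    for silly in silly_sentences:
        professional = rewrite_to_professional(silly)
        pairs.append({"input": professional, "output": silly})
    return pairs

pairs = build_translation_pairs(["Hiyar, John!!"])
print(json.dumps(pairs[0]))
```

Because the silly side is the ground truth and only the professional side is generated, the translator's target style stays exactly as authored.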

The roadmap was:

| Stage | Name | Time |
|---|---|---|
| 1 | HumanPet X1 1.7B | Now |
| 2 | HumanPet X2 1.7B | March 2026 |
| 3 | HumanPet Translator Semi 1.7B | March-April 2026 |
| 4 | HumanPet X3 1.7B (canceled) | 2026 |
| 6 | HumanPet Instruct 1.7B (canceled) | 2027 |

Progress

Progress per asset:

| Type | Asset | Status |
|---|---|---|
| Dataset | HumanPet X1 4.3k | 🟢 Done |
| Model | HumanPet X1 1.7B | 🟢 Done |
| Dataset | HumanPet X2 4.3k | 🟢 Done |
| Model | HumanPet X2 1.7B | 🔴 Done, mistakes found |
| Dataset | Re-train: HumanPet X2.1 4.3k | 🟢 Done |
| Model | Re-train: HumanPet X2.1 1.7B | 🔴 Done, mistakes found |
| Dataset | Re-train: HumanPet X2.2 4.3k | 🟢 Done |
| Model | Re-train: HumanPet X2.2 1.7B | 🟢 Done |
| Dataset | HumanPet Translator Semi 4.3k | 🟢 Done |
| Model | HumanPet Translator Semi 1.7B | 🔴 Done, mistakes found |
| Dataset | HumanPet X3 ??? k | 🔴 Canceled |
| Model | HumanPet X3 1.7B | 🔴 Canceled |
| Dataset | HumanPet Translator ??? k | 🔴 Canceled |
| Model | HumanPet Translator 1.7B | 🔴 Canceled |
| Dataset | HumanPet Instruct ??? k | 🔴 Canceled |
| Model | HumanPet Instruct 1.7B | 🔴 Canceled |
| Dataset | HumanPet Tool ??? k | 🔴 Canceled |
| Model | HumanPet Tool 1.7B | 🔴 Canceled |
| Dataset | HumanPet2 Instruct ??? k | 🔴 Canceled |
| Model | HumanPet2 Instruct ??? B | 🔴 Canceled |

(31% done.)

Findings

HumanPet X1 1.7B

  • Very human tone and preferences
  • Still uses formal, work-related phrasing in some situations
  • Some IRL hallucinations (responds as if speaking to the user in real life)

HumanPet X2 1.7B

  • "That's what he told me" became "That's what muehehe told me": the NLP step converts any run of "hehe" (including the bare word "he") to "muehehe"
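A stricter pattern avoids this class of bug: match only laughter runs of two or more "he" syllables as whole words, so the lone pronoun "he" never triggers the rule. A minimal sketch (the regex is our assumption, not the actual NLP config):

```python
import re

# Match runs of two or more "he" syllables ("hehe", "hehehe", ...)
# as whole words; the lone pronoun "he" is too short to match.
LAUGH = re.compile(r"\b(?:he){2,}\b", re.IGNORECASE)

def muehehe_rule(text: str) -> str:
    return LAUGH.sub("muehehe", text)

print(muehehe_rule("That's what he told me"))  # unchanged
print(muehehe_rule("hehe, that was funny"))    # muehehe, that was funny
```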

HumanPet X2.1 1.7B

  • "Hi, John!" became "Hiyar!! John!" in NLP; it should be "Hiyar, John!!" (the extra exclamation marks were placed after the greeting instead of at the end of the sentence)
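The fix is to swap the greeting word in place and only touch sentence-final punctuation. A hedged sketch of what the corrected rule could look like (the exact rewrite rules here are assumptions for illustration):

```python
import re

def greetify(text: str) -> str:
    # Swap the greeting word in place, leaving the comma where it is.
    text = re.sub(r"\bHi\b", "Hiyar", text)
    # Double only an exclamation mark at a sentence boundary,
    # instead of inserting "!!" right after the greeting.
    return re.sub(r"!(?=\s|$)", "!!", text)

print(greetify("Hi, John!"))  # Hiyar, John!!
```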

HumanPet X2.2 1.7B

  • Uses <think>...</think> tags incorrectly (probably too few epochs/params)
  • Hallucinates emote-tag usage and other non-existent tags (probably too few epochs/params)
  • "OK" became "OKI": because "OK" is all caps, the NLP step upper-cases the whole replacement; it should be "Oki"
  • "It's" became "It 's" in NLP (the contraction split off by tokenization was never re-joined)
  • Strong bias towards being female/transgender
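The casing and contraction bugs are both fixable at the rule level: mirror the source word's casing only when that makes sense, and re-attach clitics that tokenization split off. A minimal sketch, assuming a tiny rewrite list and regex of our own (not the actual NLP config):

```python
import re

def rewrite_word(word: str) -> str:
    repl = {"ok": "oki"}.get(word.lower())
    if repl is None:
        return word
    # Copy all-caps casing only for same-length swaps; otherwise just
    # capitalise the first letter, so "OK" becomes "Oki", not "OKI".
    if word.isupper() and len(repl) == len(word):
        return repl.upper()
    if word[0].isupper():
        return repl.capitalize()
    return repl

def detokenize(text: str) -> str:
    # Re-attach clitics the tokenizer split off ("It 's" -> "It's").
    return re.sub(r"\s+('(?:s|t|re|ve|ll|d|m))\b", r"\1", text)

print(rewrite_word("OK"))        # Oki
print(detokenize("It 's fine"))  # It's fine
```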

HumanPet Translator Semi 1.7B

  • Still uses formal wording sometimes
  • Not super accurate