🚀 Tema_Q-R-0.4B-GGUF

🔥 Model Overview

Tema_Q-R-0.4B (天馬求) is an improved large language model (LLM) for Japanese and English, based on Liquid AI's high-performance open model LFM2.5 350M.

It is designed to generate more flexible and useful responses, even to prompts that are difficult for the standard LFM2.5 350M to answer. It is ideal for users who want to maximize the potential of AI in all areas, including creative writing and knowledge exploration.

  • Base model: Liquid AI LFM2.5 350M
  • Model name: Tema_Q-R-0.4B
  • Supported languages: Japanese (JA), English (EN)
  • Model size: 0.4 billion parameters
  • License: Follows the LFM2.5 license
  • Developed by: Tema_Q development team
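As a GGUF checkpoint, this model is typically run through llama.cpp-compatible runtimes, which expect a fully formatted chat prompt rather than a raw message list. LFM2-family models generally use a ChatML-style template, but the exact special tokens below are an assumption — verify them against the base model's `tokenizer_config.json` before relying on them. A minimal sketch of a prompt builder:

```python
# Minimal sketch of a ChatML-style prompt builder for LFM2-family GGUF models.
# ASSUMPTION: the <|im_start|>/<|im_end|> tokens match this model's chat
# template; check tokenizer_config.json in the base model repo to confirm.
def build_chatml_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts as one prompt string."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "自己紹介してください。"},
])
print(prompt)
```

The resulting string can be passed as the prompt to a GGUF runtime such as llama-cpp-python or the llama.cpp CLI.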

🛡️ Responsible AI Use and Training Data Safety

⚠️ Commitment to Responsible Use

  • User responsibility: Users of this model must ensure that all generated content fully complies with applicable laws and regulations, as well as Hugging Face's Terms of Service and Content Policy.
  • Prohibited uses: Using this model for discrimination, harassment, violence, illegal activity, or any other harmful purpose is strictly prohibited.
GGUF details:
  • Architecture: lfm2
  • Model size: 0.4B parameters
  • Available quantizations: 4-bit, 16-bit
