Instructions for using kawasumi/Tema_Q-R12.2-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use kawasumi/Tema_Q-R12.2-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="kawasumi/Tema_Q-R12.2-GGUF",
    filename="Tema_Q-R12.2-Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use kawasumi/Tema_Q-R12.2-GGUF with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M
```
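Whichever install route you choose, `llama-server` exposes an OpenAI-compatible HTTP API once it is running. As a rough sketch of how a client might target it from Python (assuming the default port 8080; `build_chat_request` is a hypothetical helper for illustration, not part of llama.cpp):

```python
import json

# llama-server listens on port 8080 by default; adjust if you passed --port.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str) -> dict:
    """Build the JSON body for a single-turn OpenAI-style chat completion."""
    return {
        "model": "kawasumi/Tema_Q-R12.2-GGUF",
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("What is the capital of France?")
print(json.dumps(body, indent=2))
# With the server running, POST this body to SERVER_URL (for example with
# urllib.request, or by pointing an OpenAI client at the local base URL).
```

The same request shape works against any of the OpenAI-compatible servers in this document.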
Use Docker
```shell
docker model run hf.co/kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use kawasumi/Tema_Q-R12.2-GGUF with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "kawasumi/Tema_Q-R12.2-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "kawasumi/Tema_Q-R12.2-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M
```
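The curl call above can also be made from Python. A minimal sketch with the standard library, assuming the vLLM server from `vllm serve` is listening on its default port 8000 (the `ask` helper is illustrative, not part of vLLM):

```python
import json
import urllib.request

API_URL = "http://localhost:8000/v1/chat/completions"

def ask(prompt: str) -> str:
    """Send one chat message to the vLLM server and return the reply text."""
    payload = {
        "model": "kawasumi/Tema_Q-R12.2-GGUF",
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Responses follow the OpenAI chat-completion schema.
    return data["choices"][0]["message"]["content"]

# Requires the server to be running:
# print(ask("What is the capital of France?"))
```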
- Ollama
How to use kawasumi/Tema_Q-R12.2-GGUF with Ollama:
```shell
ollama run hf.co/kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M
```
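Besides the interactive `ollama run` session, Ollama exposes a REST API on localhost:11434. A minimal sketch of the `/api/chat` request body (`build_ollama_chat` is a hypothetical helper for illustration; `"stream": false` requests a single JSON response instead of a stream):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/chat"

def build_ollama_chat(prompt: str) -> dict:
    """Build the request body for Ollama's /api/chat endpoint."""
    return {
        "model": "hf.co/kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

print(json.dumps(build_ollama_chat("What is the capital of France?")))
# POST this body to OLLAMA_URL while Ollama is running; the reply text
# arrives under response["message"]["content"].
```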
- Unsloth Studio
How to use kawasumi/Tema_Q-R12.2-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for kawasumi/Tema_Q-R12.2-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for kawasumi/Tema_Q-R12.2-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for kawasumi/Tema_Q-R12.2-GGUF to start chatting
```
- Docker Model Runner
How to use kawasumi/Tema_Q-R12.2-GGUF with Docker Model Runner:
```shell
docker model run hf.co/kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M
```
- Lemonade
How to use kawasumi/Tema_Q-R12.2-GGUF with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull kawasumi/Tema_Q-R12.2-GGUF:Q4_K_M
```
Run and chat with the model
```shell
lemonade run user.Tema_Q-R12.2-GGUF-Q4_K_M
```
List all available models
```shell
lemonade list
```
🚀 Tema_Q-R12.2
🔥 Model Overview
Tema_Q-R12.2 (天馬求) is an improved large language model (LLM) for Japanese and English, built on Gemma 2 9B, a high-performance open model developed by Google.
It is designed to generate more flexible and useful responses, even for prompts that the standard Gemma 2 finds difficult to answer. It is ideal for users who want to get the most out of AI across a wide range of fields, including creative writing, complex programming tasks, and deep knowledge exploration. The model employs an extremely powerful reasoning framework.
| Item | Details |
|---|---|
| Base model | Google Gemma 2 9B |
| Model name | Tema_Q-R12.2 |
| Supported languages | Japanese (JA), English (EN) |
| Model size | 9 billion parameters |
| License | Follows the Gemma 2 license |
| Developers | KY, TY, HY, KK |
🛡️ Responsible AI Use and Training Data Safety
⚠️ Responsible Use
- User responsibility: Users of this model are fully responsible for ensuring that generated content complies with applicable laws, regulations, and the Hugging Face Terms of Service / Content Policy.
- Prohibited uses: Using this model for any form of discrimination, harassment, violence, illegal activity, or other harmful purpose is strictly prohibited.