---
title: README
emoji: π
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
---
<!-- header start -->
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://github.com/PrunaAI/pruna/raw/main/docs/assets/images/logo.png"
alt="PrunaAI"
style="width: 50%; min-width: 400px; display: block; margin: 0;">
</a>
<!-- header end -->
----
# Join the Pruna AI community!
[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI/pruna)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.com/invite/JFQmtFKCjd)
[Reddit](https://www.reddit.com/r/PrunaAI/)
----
# Simply make AI models faster, cheaper, smaller, greener!
[Pruna AI](https://www.pruna.ai/) makes AI models faster, cheaper, smaller, greener with the `pruna` package.
- It supports **various models including CV, NLP, audio, graphs for predictive and generative AI**.
- It supports **various hardware including GPU, CPU, Edge**.
- It supports **various compression algorithms including quantization, pruning, distillation, caching, recovery, compilation** that can be **combined**.
- You can either **experiment on your own** with smash/compression configurations or **let the smashing/compressing agent** find the optimal configuration **[Pro]**.
- You can **evaluate reliable quality and efficiency metrics** of your base vs smashed/compressed models.
You can set it up in minutes and compress your first models in a few lines of code!
----
# How to get started?
You can smash your own models by installing pruna with pip:
```
pip install pruna
```
or directly [from source](https://github.com/PrunaAI/pruna).
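Once installed, smashing a model takes only a few lines. The sketch below is illustrative, based on the `SmashConfig`/`smash` API described in the [Pruna AI documentation](https://docs.pruna.ai/); the exact configuration keys (here `"cacher"` with `"deepcache"`) and supported values depend on your installed version and hardware:

```python
# Illustrative sketch of the pruna workflow; the configuration key/value
# ("cacher" = "deepcache") is an example and may differ across versions.
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

# Load any supported base model.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Pick the compression algorithms to combine.
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"

# Smash it: the compressed model keeps the same inference interface.
smashed_pipe = smash(model=pipe, smash_config=smash_config)
image = smashed_pipe("a photo of a cat").images[0]
```

The smashed model is a drop-in replacement for the original, so existing inference code does not need to change.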
You can start with these simple notebooks to experience the efficiency gains:
| Use Case | Free Notebooks |
|------------------------------------------------------------|----------------------------------------------------------------|
| **3x Faster Stable Diffusion Models** | [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/sd_deepcache.ipynb) |
| **Making your LLMs 4x smaller** | [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/llms.ipynb) |
| **Smash your model with a CPU only** | [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/cv_cpu.ipynb) |
| **Transcribe 2 hours of audio in less than 2 minutes with Whisper** | [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/asr_tutorial.ipynb) |
| **100% faster Whisper transcription** | [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/asr_whisper.ipynb) |
| **Run your Flux model without an A100** | [Smash for free](https://githubtocolab.com/PrunaAI/pruna/blob/1d68f74c132bd4045f2af55bb1e5c03bf2dde6a9/docs/tutorials/flux_small.ipynb) |
| **2x smaller Sana in action** | [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/sana_diffusers_int8.ipynb) |
For more details about installation, free tutorials and Pruna Pro tutorials, you can check the [Pruna AI documentation](https://docs.pruna.ai/).
----