Update README.md
README.md (CHANGED)

</div>
<!-- header end -->

# Join the Pruna AI community!

[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.com/invite/rskEr4BZJx)
[Reddit](https://www.reddit.com/r/PrunaAI/)

# Simply make AI models faster, cheaper, smaller, greener!

[Pruna AI](https://www.pruna.ai/) makes AI models faster, cheaper, smaller, and greener with the `pruna` package.

- It supports **a wide range of models, including CV, NLP, audio, and graph models for predictive and generative AI**.
- It supports **a wide range of hardware, including GPU, CPU, and edge devices**.
- You can **evaluate reliable quality and efficiency metrics** for your base vs. smashed/compressed models.

You can set it up in minutes and compress your first models in a few lines of code!

# How to get started?

You can smash your own models by installing pruna with:

```
pip install pruna[gpu]==0.1.3 --extra-index-url https://prunaai.pythonanywhere.com/
```

For more details about installation and tutorials, you can check the Pruna AI documentation.
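
Once installed, smashing a model generally follows a load-configure-smash pattern. The snippet below is only a rough sketch of that workflow, not code from this README: the `SmashConfig` keys, the exact `smash(...)` arguments, and whether a Pruna token is required vary between `pruna` versions, so check the Pruna AI documentation and the notebooks below for the exact API of the version you install.

```
# Rough sketch of the smash workflow -- the config key and smash() signature
# below are assumptions for illustration; consult the Pruna docs for your version.
import torch
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash  # import path assumed

# 1. Load the base model you want to compress.
base_model = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# 2. Describe the compression methods to apply (placeholder key/value).
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"

# 3. Smash it and use the result as a drop-in replacement for the base model.
smashed_model = smash(model=base_model, smash_config=smash_config)
image = smashed_model("a photo of an astronaut riding a horse").images[0]
```

The free notebooks in the table below walk through these steps end to end for each use case.
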
| Use Case | Free Notebooks |
|------------------------------------------------------------|----------------------------------------------------------------|
| **3x Faster Stable Diffusion Models** | ▶️ [Smash for free](https://colab.research.google.com/drive/1BZm6NtCsF2mBV4UYlRlqpTIpTmQgR0iQ?usp=sharing) |
| **Turbocharge Stable Diffusion Video Generation** | ▶️ [Smash for free](https://colab.research.google.com/drive/1m1wvGdXi-qND-2ys0zqAaMFZ9DbMd5jW?usp=sharing) |
| **Making your LLMs 4x smaller** | ▶️ [Smash for free](https://colab.research.google.com/drive/1jQgwhmoPz80qRf5NdRJcY_pAr7Oj5Ftv?usp=sharing) |
| **Blazingly fast Computer Vision Models** | ▶️ [Smash for free](https://colab.research.google.com/drive/1GkzxTQW-2yCKXc8omE6Sa4SxiETMi8yC?usp=sharing) |
| **Smash your model with a CPU only** | ▶️ [Smash for free](https://colab.research.google.com/drive/19iLNVSgbx_IoCgduXPhqKq7rCoxegnZO?usp=sharing) |
| **Transcribe 2 hours of audio in less than 2 minutes with Whisper** | ▶️ [Smash for free](https://colab.research.google.com/drive/1dc6fb8_GD8eshznthBSpGpRu4WPW7xuZ?usp=sharing) |
| **100% faster Whisper Transcription** | ▶️ [Smash for free](https://colab.research.google.com/drive/1kCJ4-xmo7y8VS6smzaV0207A5rONHPXu?usp=sharing) |
| **Flux generation in a heartbeat, literally** | ▶️ [Smash for free](https://colab.research.google.com/drive/18_iG0UXhD7OQR_CxSSsKFC8TLDsRw_9m?usp=sharing) |
| **Run your Flux model without an A100** | ▶️ [Smash for free](https://colab.research.google.com/drive/1i1iSITNgiOpschV-Nu5mfX-effwYV9sn?usp=sharing) |