Update README.md
README.md CHANGED
@@ -5,4 +5,17 @@ datasets:
base_model:
- google/siglip2-base-patch16-224
library_name: transformers
---

# SigLip2 Math

This version of SigLIP 2 is fine-tuned on `shiwk24/MathCanvas-Imagen`, using the `code_derived_captions` split.
I trained for 1 epoch on 4M math images with a batch size of 640, randomly selecting either the TikZ code or the caption as the text paired with each image.
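To make that text-selection scheme concrete, here is a minimal data-preparation sketch, not the actual training script. The column names (`tikz_code`, `caption`) and treating `code_derived_captions` as the `split` argument are assumptions; check the dataset card for the real schema.

```python
import random
from datasets import load_dataset

# Hypothetical sketch: field names and whether `code_derived_captions`
# is a split or a config are assumptions -- see the dataset card.
ds = load_dataset("shiwk24/MathCanvas-Imagen", split="code_derived_captions")

def pick_text(example):
    # Randomly pair each image with either its TikZ source or its caption,
    # mirroring the 50/50 selection described above.
    example["text"] = random.choice([example["tikz_code"], example["caption"]])
    return example

ds = ds.map(pick_text)
```

During the actual fine-tuning the choice was presumably re-sampled on the fly rather than fixed once with `map`, but the idea is the same.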

This is not a classification model: the training objective was a pairwise contrastive loss.
Embedding extraction, or training a downstream classifier on top of the embeddings, is the recommended use.
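For embedding extraction, the usual `transformers` SigLIP interface applies. A minimal sketch; the repo id `your-username/siglip2-math` and the file `diagram.png` are placeholders, not real identifiers:

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_id = "your-username/siglip2-math"  # placeholder: replace with this checkpoint's repo id
model = AutoModel.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("diagram.png")  # placeholder image path
texts = ["a plot of a parabola", "a right triangle with labeled sides"]

with torch.no_grad():
    image_emb = model.get_image_features(**processor(images=image, return_tensors="pt"))
    text_emb = model.get_text_features(**processor(text=texts, padding="max_length", return_tensors="pt"))

# L2-normalize and compare image/text embeddings with cosine similarity.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
print(image_emb @ text_emb.T)
```

SigLIP checkpoints are typically used with `padding="max_length"` on the text side, so that is kept here.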
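For downstream classification, one option is a linear probe on frozen image embeddings. A hypothetical sketch reusing `model` and `processor` from the snippet above; `train_images` and `train_labels` stand in for whatever labeled data your task provides:

```python
import torch
from sklearn.linear_model import LogisticRegression

def embed_images(images, batch_size=32):
    # Encode a list of PIL images into frozen SigLIP2 image embeddings.
    feats = []
    for i in range(0, len(images), batch_size):
        inputs = processor(images=images[i:i + batch_size], return_tensors="pt")
        with torch.no_grad():
            feats.append(model.get_image_features(**inputs))
    return torch.cat(feats).numpy()

# train_images / train_labels are placeholders for your own labeled data.
X_train = embed_images(train_images)
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
```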