Instructions for using google/owlvit-base-patch16 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/owlvit-base-patch16 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch16")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

processor = AutoProcessor.from_pretrained("google/owlvit-base-patch16")
model = AutoModelForZeroShotObjectDetection.from_pretrained("google/owlvit-base-patch16")
```
- Notebooks
- Google Colab
- Kaggle
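A minimal end-to-end sketch of running the pipeline loaded above. The image URL and candidate labels here are illustrative assumptions for the example, not part of the model card; the pipeline returns a list of detections, each with a label, score, and bounding box.

```python
from transformers import pipeline

# Zero-shot object detection: query an image with free-text labels.
pipe = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch16")

# Example image and labels (assumed for illustration).
results = pipe(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    candidate_labels=["a cat", "a remote control"],
)

for det in results:
    print(det["label"], round(det["score"], 3), det["box"])
```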
Update README.md
README.md CHANGED
```diff
@@ -57,7 +57,6 @@ text = texts[i]
     boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]

     # Print detected objects and rescaled box coordinates
-    score_threshold = 0.1
     for box, score, label in zip(boxes, scores, labels):
         box = [round(i, 2) for i in box.tolist()]
         print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
```
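The commit above removes a `score_threshold` variable that was assigned but never used in the loop. If you do want to drop low-confidence detections, one way is to filter before printing; the sketch below uses made-up detection values and the same 0.1 threshold from the removed line, purely for illustration.

```python
# Hypothetical detections, shaped like the pipeline/post-processing output.
detections = [
    {"label": "a cat", "score": 0.92, "box": [10.0, 20.0, 110.0, 220.0]},
    {"label": "a remote control", "score": 0.05, "box": [5.0, 5.0, 50.0, 60.0]},
]

# Keep only detections at or above the confidence threshold.
score_threshold = 0.1
kept = [d for d in detections if d["score"] >= score_threshold]

for d in kept:
    print(f"Detected {d['label']} with confidence {d['score']} at location {d['box']}")
```

With these values, only the 0.92-confidence detection survives the filter.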