AI & ML interests

None defined yet.

Recent Activity

danielhanchen 
posted an update 4 days ago
unmodeled-tyler 
posted an update 5 days ago
RESULTS ARE IN!

- Videos of each evaluation: https://www.youtube.com/playlist?list=PLkDBfeR-zsShiZ2HpcscFDH-36uDwsl5W
- Link to repo: https://github.com/unmodeled-tyler/vessel-browser

Finally just wrapped up a comparative analysis of my new open source AI browser, Vessel, against Claude Chrome from Anthropic.

The test evaluates both web navigation harnesses for speed and efficiency on a simple real-world e-commerce task. Opus 4.6 was used for each of the 3 evaluations, and the results show that Opus 4.6 was AT LEAST 2X FASTER when using Vessel Browser for web navigation in place of Claude Chrome.

Results (in order, fastest to slowest)

1. Claude Code + Vessel Browser: 3 minutes and 10s

2. Hermes Agent + Vessel Browser: 4 minutes and 13s

3. Claude Code + Claude Chrome: 7 minutes and 57s

Vessel Browser is open source, designed explicitly for agents from the ground up (it is not a fork of a human browser with AI features bolted on), and supports a local MCP server for agent control or BYOK custom OpenAI-compatible endpoints. Check it out for yourself!
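Since the post mentions BYOK OpenAI-compatible endpoints without showing Vessel's actual configuration, here is a generic sketch of the request shape such an endpoint expects. The model name and prompt are placeholders, not part of Vessel's real setup:

```python
import json


def build_chat_request(model: str, prompt: str) -> dict:
    # Minimal OpenAI-compatible /v1/chat/completions payload. Any BYOK
    # endpoint that speaks this schema should accept a POST with this body.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


# Hypothetical model name; substitute whatever your endpoint serves.
payload = build_chat_request("my-local-model", "Open the cart page and read the total.")
print(json.dumps(payload))
```

The same payload works against any server implementing the OpenAI chat-completions schema, which is what makes the bring-your-own-key approach portable.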
qgallouedec 
posted an update 6 days ago
TRL v1.0 is out!

Hugging Face's TRL library is downloaded 3 million times a month. Over 130k models trained with it are public on the Hub, and major projects like @unsloth and @axolotl-ai-co build directly on top of it. v1.0 is the moment we acknowledged that responsibility explicitly, with a real stability contract.

The field hasn't settled. Building stable software in a domain that keeps invalidating its own assumptions is the actual problem we're solving. The answer is a design that can absorb the next shift without breaking what people rely on.

What's in v1.0: deep Hugging Face integration, low infrastructure burden.
What's next: asynchronous GRPO, better scaling support, and making training legible enough that agents can inspect and steer it.

pip install --upgrade trl


Read more: hf.co/blog/trl-v1
danielhanchen 
posted an update 6 days ago
A new way to use Unsloth.

Coming soon...
unmodeled-tyler 
posted an update 11 days ago
Hey Hugging Face!

PRODUCT HUNT LINK: https://www.producthunt.com/products/quanta-intellect?utm_source=other&utm_medium=social

I've been sharing my new AI browser Vessel the last few days and I've gotten some great feedback/interest from a lot of you!

I'm excited to announce that Vessel Browser is now live on Product Hunt! If this is the first you've heard of it, check it out! Vessel is an open source AI browser built specifically for agents on Linux. It's not a fork of an existing browser, and it doesn't assume that the human is the primary operator.

If you've already tried Vessel Browser, feel free to leave a review on Product Hunt of what you thought - or if you'd prefer, send me an email directly or reach out on twitter if you have any questions about it. I'm perpetually online & happy to chat 😀

I'm committed to building the best open source AI browser out there, and Vessel is only going to improve as time goes on!
danielhanchen 
posted an update 12 days ago
You don’t need to set LLM parameters anymore! 🚀

llama.cpp uses only the context length and compute your local setup needs, and Unsloth auto-applies the correct model settings.

Try in Unsloth Studio - now with precompiled llama.cpp binaries.

GitHub: https://github.com/unslothai/unsloth
unmodeled-tyler 
posted an update 13 days ago
PSA: LiteLLM has been compromised on PyPI - if you have it installed, CHECK NOW.

LiteLLM is used as a dependency in A LOT of AI tooling, so there's a pretty good chance that you have it installed somewhere on your machine (my instance was part of Hermes Agent, but I was unaffected by the hack).

Versions 1.82.7 & 1.82.8 on PyPI have been compromised with a multi-stage credential stealer.

- Version 1.82.8 uses a .pth file that executes on EVERY python process startup. You don't even need to import litellm. Just having it installed is enough.
- The payload harvests SSH keys, .env files, AWS/GCP/Azure credentials, Kubernetes configs, database passwords, crypto wallets, shell history - basically every secret on your machine.
- Stolen data is encrypted with a hardcoded RSA key and exfiltrated to a domain that is NOT part of legitimate LiteLLM infrastructure.
- If you're running Kubernetes, it attempts lateral movement across the entire cluster.
- The C2 is hosted on the Internet Computer blockchain, making it essentially impossible to take down.

This is part of a coordinated campaign by a threat actor called TeamPCP, who have also hit Trivy (Aqua Security), Checkmarx KICS, and multiple npm packages in the last week ALONE.

What to do:

1. Run 'pip show litellm' in every environment you have.
2. If you're on 1.82.7 or 1.82.8, rotate EVERY secret on that machine immediately.
3. Check for persistence artifacts: ~/.config/sysmon/sysmon.py & ~/.config/systemd/user/sysmon.service
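The steps above can be sketched as a quick Python self-check. It only inspects the single environment it runs in, so step 1's "every environment" still applies; the version numbers and artifact paths are the ones named in this post:

```python
from importlib import metadata
from pathlib import Path

# Versions named in the advisory.
COMPROMISED = {"1.82.7", "1.82.8"}

# Persistence artifacts listed in the post.
ARTIFACTS = [
    Path.home() / ".config/sysmon/sysmon.py",
    Path.home() / ".config/systemd/user/sysmon.service",
]


def classify_litellm_version(version):
    # Map an installed version string (or None) to a verdict.
    if version is None:
        return "not-installed"
    return "compromised" if version in COMPROMISED else "ok"


def installed_litellm_version():
    # Returns None when litellm is absent from this environment.
    try:
        return metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return None


if __name__ == "__main__":
    print(f"litellm: {classify_litellm_version(installed_litellm_version())}")
    for path in ARTIFACTS:
        if path.exists():
            print(f"WARNING: persistence artifact found: {path}")
```

A "compromised" or artifact warning here means secrets on that machine should be treated as burned, per step 2.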

I was lucky in this case that my litellm version was out of date, but if you've installed litellm as a dependency in ANY package within the last 24ish hours, you're gonna want to check.

SOURCES
https://futuresearch.ai/blog/litellm-pypi-supply-chain-attack/

Same group, different attack a couple of days ago: https://www.stepsecurity.io/blog/canisterworm-how-a-self-propagating-npm-worm-is-spreading-backdoors-across-the-ecosystem
etemiz 
in blog-explorers/README 14 days ago

accidental exit (#14, opened 16 days ago by etemiz)