Z Li
csroyli
6 followers · 3 following
AI & ML interests
None yet
Recent Activity
reacted to SeanLee97's post 6 days ago
Our lab recently released a paper introducing ShadowPEFT, a new Parameter-Efficient Fine-Tuning (PEFT) paradigm tailored for edge computing scenarios. Traditional approaches such as LoRA and its variants inject trainable parameters directly into the Transformer's weights, which tightly couples them to the backbone. ShadowPEFT instead enhances the frozen base model with a lightweight, centralized, pretrainable, and detachable Shadow network. This shadow network operates in parallel with the base model, delivering learned corrections to each decoder layer. Because the shadow module is architecturally decoupled from the backbone, it can be trained, stored, and deployed independently, which benefits edge computing and edge-cloud collaborative computing.
- HF Paper: https://huggingface.co/papers/2604.19254
- GitHub: https://github.com/ShadowLLM/shadow-peft
- HF Collection: https://huggingface.co/collections/shadow-llm/shadow-peft-models
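A minimal PyTorch sketch of the idea described in the post: a small parallel network holds all trainable parameters and emits one additive correction per decoder layer of a frozen backbone. The `ShadowNetwork` class, the bottleneck design, and `shadow_dim` below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ShadowNetwork(nn.Module):
    """Lightweight parallel network producing one additive correction
    per decoder layer of a frozen base model (illustrative sketch only)."""

    def __init__(self, num_layers: int, hidden_dim: int, shadow_dim: int = 64):
        super().__init__()
        # One small bottleneck block per decoder layer; the backbone
        # itself is never modified, matching the "detachable" property.
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, shadow_dim),
                nn.GELU(),
                nn.Linear(shadow_dim, hidden_dim),
            )
            for _ in range(num_layers)
        )

    def forward(self, hidden: torch.Tensor, layer_idx: int) -> torch.Tensor:
        # Learned correction delivered to decoder layer `layer_idx`.
        return self.blocks[layer_idx](hidden)

# Toy usage: the backbone stays frozen; only the shadow trains.
shadow = ShadowNetwork(num_layers=12, hidden_dim=768)
hidden = torch.randn(2, 16, 768)                  # (batch, seq, hidden)
corrected = hidden + shadow(hidden, layer_idx=0)  # add correction to layer output
```

Because all trainable parameters live in one detachable module, the shadow can be checkpointed and shipped separately from the backbone, which is the property the post credits for the edge-cloud deployment benefits.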
replied to SeanLee97's post 8 days ago (the same ShadowPEFT announcement quoted above)
csroyli's activity
liked a model about 2 years ago
WhereIsAI/UAE-Large-V1
Feature Extraction · Updated Jul 29, 2025 · 1.32M downloads · 237 likes