dataset
string
model_name
string
model_links
list
paper_title
string
paper_date
timestamp[ns]
paper_url
string
code_links
list
metrics
string
table_metrics
list
prompts
string
paper_text
string
compute_hours
float64
num_gpus
int64
reasoning
string
trainable_single_gpu_8h
string
verified
string
modality
string
paper_title_drop
string
paper_date_drop
string
code_links_drop
string
num_gpus_drop
int64
dataset_link
string
time_and_compute_verification
string
link_to_colab_notebook
string
run_possible
string
notes
string
PDBbind
BAPULM
[]
BAPULM: Binding Affinity Prediction using Language Models
2024-11-06T00:00:00
https://arxiv.org/abs/2411.04150v1
[ "https://github.com/radh55sh/BAPULM" ]
{'RMSE': '0.898±0.0172'}
[ "RMSE" ]
Given the following paper and codebase: Paper: BAPULM: Binding Affinity Prediction using Language Models Codebase: https://github.com/radh55sh/BAPULM Improve the BAPULM model on the PDBbind dataset. The result should improve on the following metrics: {'RMSE': '0.898±0.0172'}. You must use only the code...
BAPULM: Binding Affinity Prediction using Language Models Radheesh Sharma Meda† and Amir Barati Farimani∗,‡,¶,†,§ †Department of Chemical Engineering, Carnegie Mellon University, 15213, USA ‡Department of Mechanical Engineering, Carnegie Mellon University, 15213, USA ¶Department of Biomedical Engineering, Carnegie Mello...
1
1
The model uses ProtT5-XL-U50 and MolFormer architectures, which are large transformer-based models. Given that training on an Nvidia RTX 2080 Ti took approximately 4 minutes, and assuming training occurs over a reduced dataset with 100k sequences, with a complex architecture having a moderate number of parameters, a si...
yes
Yes
Bioinformatics
BAPULM: Binding Affinity Prediction using Language Models
2024-11-06 0:00:00
https://github.com/radh55sh/BAPULM
1
https://huggingface.co/datasets/radh25sh/BAPULM/resolve/main/prottrans_molformer_tensor_dataset100k.json?download=true
16sec * 60 epochs = 16 minutes
https://colab.research.google.com/drive/1--rNlCN01wUgN_6cTTuiVcusqSP9vGlG?usp=sharing
Yes
-- No PDBbind dataset; specifies use of the ProtTrans/MolFormer tensor dataset.
Digital twin-supported deep learning for fault diagnosis
DANN
[]
A domain adaptation neural network for digital twin-supported fault diagnosis
2025-05-27T00:00:00
https://arxiv.org/abs/2505.21046v1
[ "https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis" ]
{'Accuracy': '80.22'}
[ "Accuracy" ]
Given the following paper and codebase: Paper: A domain adaptation neural network for digital twin-supported fault diagnosis Codebase: https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis Improve the DANN model on the Digital twin-supported deep learning for fault diagnosis dataset. The result ...
A domain adaptation neural network for digital twin-supported fault diagnosis Zhenling Chen, CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, France; Haiwei Fu, CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, France; Zhiguo Zeng, Chair on Risk and Resilience of Complex Systems, Laboratoire Gen...
2
1
The DANN model employs a CNN architecture with two convolutional layers. Given the specified batch size of 32 and 250 training epochs on a dataset with 3,600 samples (360 samples per class for 9 distinct labels, plus a significantly smaller test set of 90 samples), the total iterations required for training would be (3...
yes
Yes
Time Series
A domain adaptation neural network for digital twin-supported fault diagnosis
2025-05-27T00:00:00.000Z
[https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis]
1
Included in Repo
3 Hours
Copy of train_ai_pytorch_DANN.ipynb
Yes
It starts and runs successfully
MNIST
GatedGCN+
[]
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00
https://arxiv.org/abs/2502.09263v1
[ "https://github.com/LUOyk1999/GNNPlus" ]
{'Accuracy': '98.712 ± 0.137'}
[ "Accuracy" ]
Given the following paper and codebase: Paper: Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Codebase: https://github.com/LUOyk1999/GNNPlus Improve the GatedGCN+ model on the MNIST dataset. The result should improve on the following metrics: {'Accur...
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Yuankai Luo1,2, Lei Shi*1, Xiao-Ming Wu*2 Abstract Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expressiveness, issues like over-smoothing and over-squashing, and challenges in captu...
4
1
The GNN models (GCN, GIN, and GatedGCN) enhanced with GNN+ have approximately 500K parameters each, which is moderate for graph neural networks. The datasets used involve a variety of sizes, but the mentioned ones have a maximum of around 500K graphs (like the OGB datasets). Given the average training time of these mod...
yes
Yes
Graph
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00.000Z
[https://github.com/LUOyk1999/GNNPlus]
1
https://data.pyg.org/datasets/benchmarking-gnns/MNIST_v2.zip
Approx. 9 hours (200 epochs × avg 157.2 sec)
https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing
Yes
null
ogbg-molhiv
GatedGCN+
[]
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00
https://arxiv.org/abs/2502.09263v1
[ "https://github.com/LUOyk1999/GNNPlus" ]
{'Test ROC-AUC': '0.8040 ± 0.0164', 'Validation ROC-AUC': '0.8329 ± 0.0158', 'Number of params': '1076633', 'Ext. data': 'No'}
[ "Test ROC-AUC", "Ext. data", "Validation ROC-AUC", "Number of params" ]
Given the following paper and codebase: Paper: Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Codebase: https://github.com/LUOyk1999/GNNPlus Improve the GatedGCN+ model on the ogbg-molhiv dataset. The result should improve on the following metrics: {...
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Yuankai Luo1,2, Lei Shi*1, Xiao-Ming Wu*2 Abstract Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expressiveness, issues like over-smoothing and over-squashing, and challenges in captu...
4
1
The paper describes training across 14 well-known graph-level datasets with a mean parameter count of approximately 500K for classic GNNs, which is manageable for modern GPUs. Assuming training occurs over 2000 epochs, the time per epoch for the enhanced GNNs is reported to be less than that for SOTA GTs, suggesting it...
yes
Yes
Graph
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00.000Z
[https://github.com/LUOyk1999/GNNPlus]
1
http://snap.stanford.edu/ogb/data/graphproppred/csv_mol_download/hiv.zip
Approx. 40 min (100 epochs × 22.8 s)
https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing
Yes
null
Fashion-MNIST
Continued fraction of straight lines
[]
Real-valued continued fraction of straight lines
2024-12-16T00:00:00
https://arxiv.org/abs/2412.16191v1
[ "https://github.com/grasshopper14/Continued-fraction-of-straight-lines/blob/main/continued_fraction_reg.py" ]
{'Accuracy': '84.12', 'Trainable Parameters': '7870', 'NMI': '74.4'}
[ "Percentage error", "Accuracy", "Trainable Parameters", "NMI", "Power consumption" ]
Given the following paper and codebase: Paper: Real-valued continued fraction of straight lines Codebase: https://github.com/grasshopper14/Continued-fraction-of-straight-lines/blob/main/continued_fraction_reg.py Improve the Continued fraction of straight lines model on the Fashion-MNIST dataset. The result...
Real-valued continued fraction of straight lines Vijay Prakash S Alappuzha, Kerala, India. prakash.vijay.s@gmail.com Abstract In an unbounded plane, straight lines are used extensively for mathematical analysis. They are tools of convenience. However, those with high slope values become unbounded at a faster rate tha...
4
1
The model is trained on the Fashion-MNIST dataset, which consists of 60,000 training images and 10,000 testing images, each with a size of 28x28 pixels (784 input features). The training procedure described in the paper involves mini-batch gradient descent with 100 batches of 600 samples each for 50 iterations (or epoc...
yes
Yes
CV
Real-valued continued fraction of straight lines
2024-12-16T00:00:00.000Z
[https://github.com/grasshopper14/Continued-fraction-of-straight-lines/blob/main/continued_fraction_reg.py]
1
https://github.com/zalandoresearch/fashion-mnist
20 min
https://colab.research.google.com/drive/1LNMCRLMIWN5U_9WDeRxYmcbnAgaNadSd?usp=sharing
Yes
Yes, everything is running successfully.
Traffic
GLinear
[]
Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series Prediction
2025-01-02T00:00:00
https://arxiv.org/abs/2501.01087v3
[ "https://github.com/t-rizvi/GLinear" ]
{'MSE': '0.3222'}
[ "MSE" ]
Given the following paper and codebase: Paper: Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series Prediction Codebase: https://github.com/t-rizvi/GLinear Improve the GLinear model on the Traffic dataset. The result should improve on the following metrics...
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE JOURNAL, 2025 1 Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series Prediction Syed Tahir Hussain Rizvi1, Neel Kanwal1, Muddasar Naeem2, Alfredo Cuzzocrea3∗ and Antonio Coronato2 1Department of Electrica...
4
1
The GLinear model, being a simplified architecture without complex components like Transformers, should have a relatively low parameter count compared to complex models. The datasets used are of manageable sizes, with the largest having around 50,000 time steps and multiple channels, which is well within the processing...
yes
Yes
Time Series
Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series Prediction
2025-01-02 0:00:00
https://github.com/t-rizvi/GLinear
1
Inside the repo in dataset folder
193 sec * 4 = 12.9 minutes
https://colab.research.google.com/drive/1sI72VSxjN4cyQR7UrueWfBXwoFi9Y9Qr?usp=sharing
Yes
-- Training on all datasets is included in the scripts/EXP-LookBackWindow_\&_LongForecasting/Linear_LookBackWindow.sh file. To run only the Traffic dataset, I have included the conda command.
BTAD
URD
[]
Unlocking the Potential of Reverse Distillation for Anomaly Detection
2024-12-10T00:00:00
https://arxiv.org/abs/2412.07579v1
[ "https://github.com/hito2448/urd" ]
{'Segmentation AUROC': '98.1', 'Detection AUROC': '93.9', 'Segmentation AUPRO': '78.5', 'Segmentation AP': '65.2'}
[ "Detection AUROC", "Segmentation AUROC", "Segmentation AP", "Segmentation AUPRO" ]
Given the following paper and codebase: Paper: Unlocking the Potential of Reverse Distillation for Anomaly Detection Codebase: https://github.com/hito2448/urd Improve the URD model on the BTAD dataset. The result should improve on the following metrics: {'Segmentation AUROC': '98.1', 'Detection AUROC':...
Unlocking the Potential of Reverse Distillation for Anomaly Detection Xinyue Liu1, Jianyuan Wang2*, Biao Leng1, Shuo Zhang3 1School of Computer Science and Engineering, Beihang University 2School of Intelligence Science and Technology, University of Science and Technology Beijing 3Beijing Key Lab of Traffic Data Analys...
4
1
The proposed method utilizes a WideResNet50 architecture as a teacher network which typically has about 68 million parameters. Given the dataset size of around 5354 images with a training batch size of 16, the model is expected to go through multiple epochs for convergence, likely around 100 epochs based on common prac...
yes
Yes
CV
Unlocking the Potential of Reverse Distillation for Anomaly Detection
2024-12-10 0:00:00
https://github.com/hito2448/urd
1
https://www.mydrive.ch/shares/38536/3830184030e49fe74747669442f0f282/download/420938113-1629952094/mvtec_anomaly_detection.tar.xz; https://www.robots.ox.ac.uk/~vgg/data/dtd/download/dtd-r1.0.1.tar.gz
8 hours for one folder; there are 11 folders (≈88 hours total).
https://drive.google.com/file/d/1OLbo3FifM1a7-wbCtfpjZrZLr0K5bS87/view?usp=sharing
Yes
-- Just need to change num_workers in train.py according to the system.
York Urban Dataset
DT-LSD
[]
DT-LSD: Deformable Transformer-based Line Segment Detection
2024-11-20T00:00:00
https://arxiv.org/abs/2411.13005v1
[ "https://github.com/SebastianJanampa/DT-LSD" ]
{'sAP5': '30.2', 'sAP10': '33.2', 'sAP15': '35.1'}
[ "sAP5", "sAP10", "sAP15", "FH" ]
Given the following paper and codebase: Paper: DT-LSD: Deformable Transformer-based Line Segment Detection Codebase: https://github.com/SebastianJanampa/DT-LSD Improve the DT-LSD model on the York Urban Dataset dataset. The result should improve on the following metrics: {'sAP5': '30.2', 'sAP10': '33.2...
DT-LSD: Deformable Transformer-based Line Segment Detection Sebastian Janampa The University of New Mexico sebasjr1966@unm.edu Marios Pattichis The University of New Mexico pattichi@unm.edu Abstract Line segment detection is a fundamental low-level task in computer vision, and improvements in this task can impact more...
4
1
The proposed DT-LSD model has a relatively small batch size of 2 and uses a single Nvidia RTX A5500 GPU, which has sufficient memory (24 GB) to handle the model's parameters and intermediate activations. With a total of 24 epochs and leveraging the efficient Line Contrastive Denoising training technique, the training t...
yes
Yes
CV
DT-LSD: Deformable Transformer-based Line Segment Detection
2024-11-20 0:00:00
https://github.com/SebastianJanampa/DT-LSD
1
script to download is provided in colab file.
Uses CPU to train for some reason; ~8 hr per epoch.
https://colab.research.google.com/drive/1XPiW-hDq6q8HNZ4yVP0oAn-3a1_ay5rG?usp=sharing
Yes
-- Trains, but uses the CPU for some reason.
UCR Anomaly Archive
KAN
[]
KAN-AD: Time Series Anomaly Detection with Kolmogorov-Arnold Networks
2024-11-01T00:00:00
https://arxiv.org/abs/2411.00278v1
[ "https://github.com/issaccv/KAN-AD" ]
{'AUC ROC': '0.7489'}
[ "Average F1", "AUC ROC" ]
Given the following paper and codebase: Paper: KAN-AD: Time Series Anomaly Detection with Kolmogorov-Arnold Networks Codebase: https://github.com/issaccv/KAN-AD Improve the KAN model on the UCR Anomaly Archive dataset. The result should improve on the following metrics: {'AUC ROC': '0.7489'}. You must...
KAN-AD: Time Series Anomaly Detection with Kolmogorov–Arnold Networks Quan Zhou*, Changhua Pei, Haiming Zhang, Gaogang Xie, Jianhui Li† Computer Network Information Center Chinese Academy of Science zhouquan,chpei,hai,xie,lijh@cnic.cn Fei Sun Institution of Computing Technology Chinese Academy of Science sunfei@ict.ac.c...
4
1
The KAN-AD model is based on a novel architecture that leverages Fourier series for anomaly detection in time series, which would imply a moderate computational overhead given the 1D CNN architecture with stacked layers for coefficient learning. The training dataset size varies per dataset, with the largest (KPI) conta...
yes
Yes
Time Series
KAN-AD: Time Series Anomaly Detection with Kolmogorov-Arnold Networks
2024-11-01 0:00:00
https://github.com/issaccv/KAN-AD
1
Downloaded when running prepeare_env.sh from the repository; uses the UTS dataset, https://github.com/CSTCloudOps/datasets
There are 5 folders. May take around 2 hours or more; the time was not specified, but training was fast.
https://colab.research.google.com/drive/1sE1mKwy3n9yameE-JG27Oa_HI-q8lFn9?usp=sharing
Yes
-- After installing the environment via environment.sh, I changed a line of code to run matplotlib on Colab and need to fix the typo in the .bin filename, which I have mentioned in the Colab file. It takes about 10 minutes to install the environment on Colab with the requirements.
Chameleon
CoED
[]
Improving Graph Neural Networks by Learning Continuous Edge Directions
2024-10-18T00:00:00
https://arxiv.org/abs/2410.14109v1
[ "https://github.com/hormoz-lab/coed-gnn" ]
{'Accuracy': '79.69±1.35'}
[ "Accuracy" ]
Given the following paper and codebase: Paper: Improving Graph Neural Networks by Learning Continuous Edge Directions Codebase: https://github.com/hormoz-lab/coed-gnn Improve the CoED model on the Chameleon dataset. The result should improve on the following metrics: {'Accuracy': '79.69±1.35'}. You mus...
Preprint IMPROVING GRAPH NEURAL NETWORKS BY LEARNING CONTINUOUS EDGE DIRECTIONS Seong Ho Pahng1,2 & Sahand Hormoz3,2,4 1Department of Chemistry and Chemical Biology, Harvard University 2Department of Data Science, Dana-Farber Cancer Institute 3Department of Systems Biology, Harvard Medical School 4Broad Institute o...
4
1
The proposed CoED GNN is a graph neural network architecture that utilizes a complex-valued Laplacian with directed edges. Given the nature of GNNs and from insights into existing literature, the complexity of the model estimates that a typical training session could be reasonably completed in under 8 hours. The model ...
yes
Yes
Graph
Improving Graph Neural Networks by Learning Continuous Edge Directions
2024-10-18 0:00:00
https://github.com/hormoz-lab/coed-gnn
1
Specified in classification.py; the script handles the download itself.
2 min
https://colab.research.google.com/drive/1FiCFbVmQhjIqcCdViYynfEb9mWtJkB09?usp=sharing
Yes
-- I have put the best parameters on the advice of "Gemini"; change them as needed.
California Housing Prices
Binary Diffusion
[]
Tabular Data Generation using Binary Diffusion
2024-09-20T00:00:00
https://arxiv.org/abs/2409.13882v2
[ "https://github.com/vkinakh/binary-diffusion-tabular" ]
{'Parameters(M)': '1.5', 'RF Mean Squared Error': '0.39', 'LR Mean Squared Error': '0.55', 'DT Mean Squared Error': '0.45'}
[ "Parameters(M)", "RF Mean Squared Error", "DT Mean Squared Error", "LR Mean Squared Error" ]
Given the following paper and codebase: Paper: Tabular Data Generation using Binary Diffusion Codebase: https://github.com/vkinakh/binary-diffusion-tabular Improve the Binary Diffusion model on the California Housing Prices dataset. The result should improve on the following metrics: {'Parameters(M)': ...
Tabular Data Generation using Binary Diffusion Vitaliy Kinakh Department of Computer Science University of Geneva Geneva, Switzerland vitaliy.kinakh@unige.ch Slava Voloshynovskiy Department of Computer Science University of Geneva Geneva, Switzerland Abstract Generating synthetic tabular data is critical in machine lear...
4
1
The proposed Binary Diffusion model has fewer than 2 million parameters, making it lightweight compared to contemporary models that often exceed 100 million parameters. Given its focus on binary data, the model architecture is likely simpler, which will lead to faster training times. The training is performed on benchm...
yes
Yes
Tabular
Tabular Data Generation using Binary Diffusion
2024-09-20 0:00:00
https://github.com/vkinakh/binary-diffusion-tabular
1
inside the project repo
around 2 hours
https://drive.google.com/file/d/154F-06anE1dsOik9zkn3uBqcw9t3Lz53/view?usp=sharing
Yes
-- I put some lines of code in the Colab to make sure it runs. Please check the Colab file for more info.
Kvasir-SEG
Yolo-SAM 2
[]
Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model
2024-09-14T00:00:00
https://arxiv.org/abs/2409.09484v1
[ "https://github.com/sajjad-sh33/yolo_sam2" ]
{'mean Dice': '0.866', 'mIoU': '0.764'}
[ "mean Dice", "Average MAE", "S-Measure", "max E-Measure", "mIoU", "FPS", "F-measure", "Precision", "Recall" ]
Given the following paper and codebase: Paper: Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model Codebase: https://github.com/sajjad-sh33/yolo_sam2 Improve the Yolo-SAM 2 model on the Kvasir-SEG dataset. The result should improve on the following metrics: {'mean Dice': '0.8...
SELF-PROMPTING POLYP SEGMENTATION IN COLONOSCOPY USING HYBRID YOLO-SAM 2 MODEL Mobina Mansoori†, Sajjad Shahabodini†, Jamshid Abouei††, Konstantinos N. Plataniotis‡, and Arash Mohammadi† †Intelligent Signal & Information Processing (I-SIP) Lab, Concordia University, Canada ‡Edward S. Rogers Sr. Department of Electrical ...
4
1
The YOLOv8 medium model has 25 million parameters and the SAM 2 large model has 224.4 million parameters. With a batch size of 64 and an input image size of 680, it is fairly demanding but feasible on a single GPU, especially since the paper states they used an A100 GPU (40 GB). Given the datasets involved (around 5,00...
yes
Yes
CV
Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model
2024-09-14 0:00:00
https://github.com/sajjad-sh33/yolo_sam2
1
Downloaded from Kaggle: https://www.kaggle.com/datasets/debeshjha1/kvasirseg
40 sec × 50 epochs = 33.33 minutes
https://colab.research.google.com/drive/1_iOHO7njejU5yFtKPoF2477d_H0Cw4tf?usp=sharing
Yes
-- Fine-tunes the model. I have patched the code and also put instructions on how to prepare the data and fix the Python file for the Kvasir dataset.
Office-31
EUDA
[]
EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer
2024-07-31T00:00:00
https://arxiv.org/abs/2407.21311v1
[ "https://github.com/a-abedi/euda" ]
{'Accuracy': '92'}
[ "Accuracy", "Avg accuracy" ]
Given the following paper and codebase: Paper: EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer Codebase: https://github.com/a-abedi/euda Improve the EUDA model on the Office-31 dataset. The result should improve on the following metrics: {'Accuracy': '92'}. You ...
PREPRINT 1 EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer Ali Abedi, Graduate Student Member, IEEE, Q. M. Jonathan Wu, Senior Member, IEEE, Ning Zhang, Senior Member, IEEE, Farhad Pourpanah, Senior Member, IEEE Abstract —Unsupervised domain adaptation (UDA) aims to mitigate the...
4
1
The EUDA framework utilizes a frozen DINOv2 feature extractor (self-supervised Vision Transformer) and incorporates a bottleneck of fully connected layers. Given the efficiency improvements stated (42% to 99.7% fewer parameters than prior ViT models), it is likely in the range of hundreds of millions of parameters, sim...
yes
Yes
CV
EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer
2024-07-31 0:00:00
https://github.com/a-abedi/euda
1
https://drive.usercontent.google.com/download?id=0B4IapRTv9pJ1WGZVd1VDMmhwdlE&export=download&authuser=0&resourcekey=0-gNMHVtZfRAyO_t2_WrOunA
2000 steps × 2.05 sec/step = 4100 seconds ≈ 68 minutes ≈ 1 hour 8 minutes
https://drive.google.com/file/d/1woeCrW4aU_I6LUR6K2N7bh_uUPn5rkAK/view?usp=sharing
Yes
-- Need to fix some lines of code, which I have included in the Colab file.
WiGesture
CSI-BERT
[]
Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing
2024-03-19T00:00:00
https://arxiv.org/abs/2403.12400v1
[ "https://github.com/rs2002/csi-bert" ]
{'Accuracy (%)': '93.94'}
[ "Accuracy (%)" ]
Given the following paper and codebase: Paper: Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing Codebase: https://github.com/rs2002/csi-bert Improve the CSI-BERT model on the WiGesture dataset. The result should improve on the following metrics: {'Accuracy (% ...
Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing Zijian Zhao∗†, Tingwei Chen∗, Fanyi Meng∗‡, Hang Li∗, Xiaoyang Li∗, Guangxu Zhu∗ ∗Shenzhen Research Institute of Big Data †School of Computer Science and Engineering, Sun Yat-sen University ‡School of Science and Engineering, Th...
4
1
The CSI-BERT model has approximately 2.11 million parameters, similar in scale to other models like BERT-base, which has around 110 million parameters. Given that the dataset entails wireless Channel State Information (CSI) samples collected at 100Hz, with an average of 14.51% loss rate, we estimate the dataset is mana...
yes
Yes
Signal Processing
Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing
2024-03-19 0:00:00
https://github.com/rs2002/csi-bert
1
http://www.sdp8.net/Dataset?id=5d4ee7ca-d0b0-45e3-9510-abb6e9cdebf9
around 2 hours estimated.
https://colab.research.google.com/drive/1ijfudC_ZodlZSMvHtHgcLEvLwfWVF6-i?usp=sharing
Yes
-- Log in and download the dataset, or use the copy present inside the repo.
Astock
SRL&Factors
[]
FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model
2024-03-05T00:00:00
https://arxiv.org/abs/2403.02647v1
[ "https://github.com/frinkleko/finreport" ]
{'Accuracy': '69.48', 'F1-score': '69.28', 'Recall': '69.41', 'Precision': '69.54'}
[ "Accuracy", "F1-score", "Recall", "Precision" ]
Given the following paper and codebase: Paper: FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model Codebase: https://github.com/frinkleko/finreport Improve the SRL&Factors model on the Astock dataset. The result should improve on the following metrics: {'Accuracy': '69.48',...
FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model Xiangyu Li∗ 65603605lxy@gmail.com South China University of Technology Xinjie Shen∗ frinkleko@gmail.com South China University of Technology Yawen Zeng yawenzeng11@gmail.com ByteDance AI Lab Xiaofen Xing† xfxing@scut.edu.cn South China Univ...
4
1
The model includes multiple modules (news factorization, return forecasting, risk assessment) but seems to utilize established architectures like RoBERTa for the news factorization, which could have a manageable parameter count. The dataset, Astock, has a significant amount of historical data over more than three years...
yes
Yes
NLP
FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model
2024-03-05 0:00:00
https://github.com/frinkleko/finreport
1
Inside the repo.
under 5 minutes
https://colab.research.google.com/drive/1G6z0MNnOdpYGIu6F2wPc69cd-fbUWjsr?usp=sharing
Yes
-- Just run this Colab file. I have included the data-extraction process from the repo and passed the path in correctly. This ipynb file is downloaded from the repo itself.
Fashion-MNIST
ENERGIZE
[]
Towards Physical Plausibility in Neuroevolution Systems
2024-01-31T00:00:00
https://arxiv.org/abs/2401.17733v1
[ "https://github.com/rodriguesGabriel/energize" ]
{'Percentage error': '9.8', 'Accuracy': '0.902', 'Power consumption': '71.92'}
[ "Percentage error", "Accuracy", "Trainable Parameters", "NMI", "Power consumption" ]
Given the following paper and codebase: Paper: Towards Physical Plausibility in Neuroevolution Systems Codebase: https://github.com/rodriguesGabriel/energize Improve the ENERGIZE model on the Fashion-MNIST dataset. The result should improve on the following metrics: {'Percentage error': '9.8', 'Accurac...
arXiv:2401.17733v1 [cs.NE] 31 Jan 2024 Towards Physical Plausibility in Neuroevolution Systems Gabriel Cortês [0000-0001-6318-8520], Nuno Lourenço [0000-0002-2154-0642], and Penousal Machado [0000-0002-6308-6484] University of Coimbra, CISUC/LASI – Centre for Informatics and Systems of the University of Coimbra, D...
4
1
The study utilizes Fast-DENSER on the Fashion-MNIST dataset, which has 60,000 training images. Given the detailed architecture modifications for training two separate models simultaneously, I estimate the training time based on the complexity of multiple evolutionary computations and a default training time of 10 minut...
yes
Yes
CV
Towards Physical Plausibility in Neuroevolution Systems
2024-01-31 0:00:00
https://github.com/rodriguesGabriel/energize
1
Downloaded by the training script.
Max runtime = generations × population_size × train_time_per_individual = 150 × 4 × 300 seconds = 180,000 seconds = 50 hours (plus overhead for evaluation, logging, mutation, etc.)
https://drive.google.com/file/d/1ToU-VDe6i5AXDihxb_T3v7gNC6iEP9ng/view?usp=sharing
Yes
-- Straightforward: just change -d when calling the training script. I have included the arguments for the train file in the Colab.
Fashion-MNIST
GECCO
[]
A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification
2024-02-01T00:00:00
https://arxiv.org/abs/2402.00564v6
[ "https://github.com/geccoproject/gecco" ]
{'Percentage error': '11.91', 'Accuracy': '88.09'}
[ "Percentage error", "Accuracy", "Trainable Parameters", "NMI", "Power consumption" ]
Given the following paper and codebase: Paper: A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification Codebase: https://github.com/geccoproject/gecco Improve the GECCO model on the Fashion-MNIST dataset. The result should improve on the following metrics: {'Percentage erro...
A SINGLE GRAPH CONVOLUTION IS ALL YOU NEED: EFFICIENT GRAYSCALE IMAGE CLASSIFICATION Jacob Fein-Ashley†, Sachini Wickramasinghe†, Bingyi Zhang†, Rajgopal Kannan∗, Viktor Prasanna† †University of Southern California,∗DEVCOM Army Research Office ABSTRACT Image classifiers for domain-specific tasks like Synthetic Aperture...
4
1
The GECCO model is lightweight with a relatively low number of parameters (approx. 5.08M) and uses simple architecture elements (single GCN layer and MLP). The MSTAR dataset consists of 2747 training samples and 2425 testing samples of 128x128 pixels, and the CXR dataset consists of 5216 training samples and requires l...
yes
Yes
CV
A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification
2024-02-01 0:00:00
https://github.com/geccoproject/gecco
1
Downloaded by the training script.
20 s × 1000 epochs ≈ 5.5 hr
https://drive.google.com/file/d/1b72abDo06zMcoMYcEnbhx-eryDxMP2G0/view?usp=sharing
Yes
-- Need to make some fixes for Fashion-MNIST. I have included the changes in the Colab file; please follow it.
MNIST
rKAN
[]
rKAN: Rational Kolmogorov-Arnold Networks
2024-06-20T00:00:00
https://arxiv.org/abs/2406.14495v1
[ "https://github.com/alirezaafzalaghaei/rkan" ]
{'Accuracy': '99.293'}
[ "Percentage error", "Accuracy", "Trainable Parameters", "Cross Entropy Loss", "Epochs", "Top 1 Accuracy" ]
Given the following paper and codebase: Paper: rKAN: Rational Kolmogorov-Arnold Networks Codebase: https://github.com/alirezaafzalaghaei/rkan Improve the rKAN model on the MNIST dataset. The result should improve on the following metrics: {'Accuracy': '99.293'}. You must use only the codebase provided....
rKAN: Rational Kolmogorov-Arnold Networks Alireza Afzal Aghaei Independent Researcher Email: alirezaafzalaghaei@gmail.com June 21, 2024 Abstract The development of Kolmogorov-Arnold networks (KANs) marks a significant shift from traditional multi-layer perceptrons in deep learning. Initially, KANs employed B-spline cur...
4
1
The model described (rKAN) is similar to existing neural network architectures in complexity. It has a manageable architecture (1-10-1 for regression tasks) and a batch size of 512 for the MNIST classification task. Training on the MNIST dataset (60,000 training images) for 30 epochs with a relatively simple architectu...
yes
Yes
CV
rKAN: Rational Kolmogorov-Arnold Networks
2024-06-20 0:00:00
https://github.com/alirezaafzalaghaei/rkan
1
In Code
1
cnn.ipynb
Yes
null
Tiny ImageNet Classification
MANO-tiny
[]
Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
2025-07-03T00:00:00
https://arxiv.org/abs/2507.02748
[ "https://github.com/AlexColagrande/MANO" ]
{'Validation Acc': '87.52'}
[ "Validation Acc" ]
Given the following paper and codebase: Paper: Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics Codebase: https://github.com/AlexColagrande/MANO Improve the MANO-tiny model on the Tiny ImageNet Classification dataset. The result should improve on the followin...
arXiv:2507.02748v1 [cs.CV] 3 Jul 2025 Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics Alex Colagrande1, Paul Caillon1, Eva Feillet1, Alexandre Allauzen1,2 1Miles Team, LAMSADE, Université Paris Dauphine-PSL, Paris, France 2ESPCI PSL, Paris, France {name}.{surname}@dauphine...
5
1
The model described has approximately 28 million parameters, which is comparable to other lightweight vision transformer models known to train in a reasonable timeframe. Considering the architecture's efficiency with linear complexity for attention, and the fact it utilizes a modified Swin Transformer backbone, a typic...
yes
Yes
CV
Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
2025-07-03T00:00:00.000Z
[https://github.com/AlexColagrande/MANO]
1
Code Downloads Dynamically upon naming
same
Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics.ipynb
Yes
It starts and runs successfully
Food-101
MANO-tiny
[]
Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
2025-07-03T00:00:00
https://arxiv.org/abs/2507.02748
[ "https://github.com/AlexColagrande/MANO" ]
{'Accuracy (%)': '82.48'}
[ "Accuracy (%)", "Accuracy" ]
Given the following paper and codebase: Paper: Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics Codebase: https://github.com/AlexColagrande/MANO Improve the MANO-tiny model on the Food-101 dataset. The result should improve on the following metrics: {'Accurac...
arXiv:2507.02748v1 [cs.CV] 3 Jul 2025 Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics Alex Colagrande1, Paul Caillon1, Eva Feillet1, Alexandre Allauzen1,2 1Miles Team, LAMSADE, Université Paris Dauphine-PSL, Paris, France 2ESPCI PSL, Paris, France {name}.{surname}@dauphine...
5
1
The MANO model is based on the 'Tiny' version of the Swin Transformer V2, which has approximately 28.47M parameters, leading to a manageable memory footprint on modern GPUs. Given that the training is conducted on the ImageNet-1k dataset and several other benchmarks for a total of 50 epochs, the expected total training...
yes
Yes
CV
Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
2025-07-03T00:00:00.000Z
[https://github.com/AlexColagrande/MANO]
1
Code Downloads Dynamically upon naming
same
Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics.ipynb
Yes
It starts and runs successfully
Gowalla
RLAE-DAN
[]
Why is Normalization Necessary for Linear Recommenders?
2025-04-08T00:00:00
https://arxiv.org/abs/2504.05805v2
[ "https://github.com/psm1206/dan" ]
{'Recall@20': '0.1922', 'nDCG@20': '0.1605'}
[ "nDCG@20", "Recall@20", "HR@10", "HR@100", "PSP@10", "nDCG@10", "nDCG@100" ]
Given the following paper and codebase: Paper: Why is Normalization Necessary for Linear Recommenders? Codebase: https://github.com/psm1206/dan Improve the RLAE-DAN model on the Gowalla dataset. The result should improve on the following metrics: {'Recall@20': '0.1922', 'nDCG@20': '0.1605'}. You must u...
Why is Normalization Necessary for Linear Recommenders? Seongmin Park Sungkyunkwan University Suwon, Republic of Korea psm1206@skku.edu Mincheol Yoon Sungkyunkwan University Suwon, Republic of Korea yoon56@skku.edu Hye-young Kim Sungkyunkwan University Suwon, Republic of Korea khyaa3966@skku.edu Jongwuk Lee∗ Sungkyunkwan ...
5
1
The model described in the paper is a linear autoencoder (LAE) based on existing LAE architectures, which typically have a smaller parameter count compared to non-linear models. The datasets used for training (like ML-20M, Netflix, and others) are substantial but manageable for a single GPU. Given the simpler architect...
yes
Yes
Graph
Why is Normalization Necessary for Linear Recommenders?
2025-04-08T00:00:00.000Z
[https://github.com/psm1206/dan]
1
https://github.com/psm1206/DAN/tree/main/data
15 min
https://colab.research.google.com/drive/1euiNcqVAl4SgDK75YJJEP_DGBXQzxd08?usp=sharing
Yes
Everything is working fine.
Weather (192)
xPatch
[]
xPatch: Dual-Stream Time Series Forecasting with Exponential Seasonal-Trend Decomposition
2024-12-23T00:00:00
https://arxiv.org/abs/2412.17323v2
[ "https://github.com/stitsyuk/xpatch" ]
{'MSE': '0.189', 'MAE': '0.227'}
[ "MSE", "MAE", "Accuracy" ]
Given the following paper and codebase: Paper: xPatch: Dual-Stream Time Series Forecasting with Exponential Seasonal-Trend Decomposition Codebase: https://github.com/stitsyuk/xpatch Improve the xPatch model on the Weather (192) dataset. The result should improve on the following metrics: {'MSE': '0.189...
xPatch: Dual-Stream Time Series Forecasting with Exponential Seasonal-Trend Decomposition Artyom Stitsyuk1, Jaesik Choi1,2 1Korea Advanced Institute of Science and Technology (KAIST), South Korea 2INEEJI, South Korea {stitsyuk, jaesik.choi}@kaist.ac.kr Abstract In recent years, the application of transformer-based mod...
5
1
The xPatch model employs a dual-stream architecture leveraging MLP and CNN components for time series forecasting. Given the nature of time series data, an estimated dataset size around 100,000 to 1,000,000 samples (typical for LTSF tasks) is reasonable. The model is likely to have a moderate parameter count, roughly c...
yes
Yes
Time Series
xPatch: Dual-Stream Time Series Forecasting with Exponential Seasonal-Trend Decomposition
2024-12-23T00:00:00.000Z
[https://github.com/stitsyuk/xpatch]
1
https://drive.usercontent.google.com/download?id=1NF7VEefXCmXuWNbnNe858WvQAkJ_7wuP&export=download&authuser=0
30 min
https://colab.research.google.com/drive/1JaT0PQUcJJLSUpemXlylsIULjRvMCW1G?usp=sharing
Yes, successfully run
It runs successfully and works fine
CNN/Daily Mail
Claude Instant + SigExt
[]
Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization
2024-10-03T00:00:00
https://arxiv.org/abs/2410.02741v2
[ "https://github.com/amazon-science/SigExt" ]
{'ROUGE-1': '42', 'ROUGE-L': '26.6'}
[ "ROUGE-1", "ROUGE-2", "ROUGE-L" ]
Given the following paper and codebase: Paper: Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization Codebase: https://github.com/amazon-science/SigExt Improve the Claude Instant + SigExt model on the CNN/Daily Mail dataset. The result should improve on the following ...
Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization Lei Xu1, Mohammed Asad Karim2†, Saket Dingliwal1, Aparna Elangovan1 1Amazon AWS AI Labs 2Carnegie Mellon University {leixx, skdin, aeg}@amazon.com mkarim2@cs.cmu.edu Abstract Large language models (LLMs) can generate fluent summari...
5
1
The paper describes SigExt, which uses a fine-tuned Longformer with 433M parameters. The model is trained on a dataset consisting of 1000 to 10000 examples (for the general-purpose variant), with an unspecified batch size, but given the model's size, it can be assumed that a moderate batch size like 16-32 can be used e...
yes
Yes
NLP
Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization
2024-10-03 0:00:00
https://github.com/amazon-science/SigExt
1
run the script to download and process data inside the repo
10 min * 10 = 1hr 40 min
https://colab.research.google.com/drive/1Wzlo_ybMDNuEVs4wDC4GJq93kFwX6rMJ?usp=sharing
Yes
-- Just need to change the argument when calling the Python script and add a few lines of code to the data-processing script. I have included everything in the Colab file.
ETTh1 (720) Multivariate
SparseTSF
[]
SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters
2024-05-02T00:00:00
https://arxiv.org/abs/2405.00946v2
[ "https://github.com/lss-1138/SparseTSF" ]
{'MSE': '0.426'}
[ "MSE", "MAE" ]
Given the following paper and codebase: Paper: SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters Codebase: https://github.com/lss-1138/SparseTSF Improve the SparseTSF model on the ETTh1 (720) Multivariate dataset. The result should improve on the following metrics: {'MSE': '0.426...
SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters Shengsheng Lin1 Weiwei Lin1,2 Wentai Wu3 Haojun Chen1 Junjie Yang1 Abstract This paper introduces SparseTSF, a novel, extremely lightweight model for Long-term Time Series Forecasting (LTSF), designed to address the challenges of modeling complex tem...
5
1
The SparseTSF model has less than 1,000 parameters, making it significantly lighter than most deep learning models typically trained on time series data. Given the dataset sizes (up to 26,304 timesteps for the Electricity dataset with multiple channels), it's reasonable to estimate that training may require about 5 hou...
yes
Yes
Time Series
SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters
2024-05-02 0:00:00
https://github.com/lss-1138/SparseTSF
1
https://drive.google.com/drive/folders/1ZOYpTUa82_jCcxIdTmyr0LXQfvaM9vIy
2min
https://colab.research.google.com/drive/1OgVLdCqFrODVu7AdHeI2_N-1qGZe2AcD?usp=sharing
Yes
-- Just download the dataset and run.
Peptides-struct
GCN+
[]
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00
https://arxiv.org/abs/2502.09263v1
[ "https://github.com/LUOyk1999/GNNPlus" ]
{'MAE': '0.2421 ± 0.0016'}
[ "MAE" ]
Given the following paper and codebase: Paper: Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Codebase: https://github.com/LUOyk1999/GNNPlus Improve the GCN+ model on the Peptides-struct dataset. The result should improve on the following metrics: {'...
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Yuankai Luo1,2 Lei Shi*1 Xiao-Ming Wu*2 Abstract Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expressiveness, issues like over-smoothing and over-squashing, and challenges in captu...
6
2
The model architectures described in the paper (GNN+, GCN, GIN, GatedGCN) are enhanced versions of classic GNNs with parameter counts estimated around 100K to 500K. Given that they are trained on 14 datasets with various sizes—each containing multiple graphs with hundreds of nodes and edges—it's reasonable to estimate ...
yes
Yes
Graph
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00.000Z
[https://github.com/LUOyk1999/GNNPlus]
1
https://www.dropbox.com/s/ol2v01usvaxbsr8/peptide_multi_class_dataset.csv.gz?dl=1, https://www.dropbox.com/s/j4zcnx2eipuo0xz/splits_random_stratified_peptide.pickle?dl=1
ETA: under 1 hour per the model description, approx. 0.8 hours
https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing
Yes
- Clone the repo and install the requirements; the dataset is downloaded from Dropbox when the training code runs.
Tiny-ImageNet
PRO-DSC
[]
Exploring a Principled Framework for Deep Subspace Clustering
2025-03-21T00:00:00
https://arxiv.org/abs/2503.17288v1
[ "https://github.com/mengxianghan123/PRO-DSC" ]
{'Accuracy': '0.698', 'NMI': '0.805'}
[ "Accuracy", "NMI", "ARI" ]
Given the following paper and codebase: Paper: Exploring a Principled Framework for Deep Subspace Clustering Codebase: https://github.com/mengxianghan123/PRO-DSC Improve the PRO-DSC model on the Tiny-ImageNet dataset. The result should improve on the following metrics: {'Accuracy': '0.698', 'NMI': '0.8...
Published as a conference paper at ICLR 2025 EXPLORING A PRINCIPLED FRAMEWORK FOR DEEP SUBSPACE CLUSTERING Xianghan Meng†, Zhiyuan Huang† & Wei He Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China {mengxianghan,huangzhiyuan,wei.he}@bupt.edu.cn Xianbiao Qi & Rong Xiao Intellifusion, Shenzhen...
6
1
The paper describes a framework for deep subspace clustering with potentially large models considering the complexity of the tasks (e.g., high-dimensional images from datasets like CIFAR and ImageNet). Given that experiments were conducted on multiple datasets, it's reasonable to estimate that training may require inte...
yes
Yes
CV
Exploring a Principled Framework for Deep Subspace Clustering
2025-03-21T00:00:00.000Z
[https://github.com/mengxianghan123/PRO-DSC]
1
Dataset found at: [https://drive.google.com/drive/folders/1C4qlqYOW4-YulIwgkNfqMM7dZ2O5-BK_], [https://drive.google.com/drive/folders/1L9jH8zRF3To6Hb_B0UZ6PbknhgusWm5_]
20
https://colab.research.google.com/drive/1D4PwvmROZazdEKuhZj7QkfBKOqY9Jb0r?usp=sharing
Yes, successfully run
Everything is fine; runs successfully
CIFAR-100
PRO-DSC
[]
Exploring a Principled Framework for Deep Subspace Clustering
2025-03-21T00:00:00
https://arxiv.org/abs/2503.17288v1
[ "https://github.com/mengxianghan123/PRO-DSC" ]
{'Accuracy': '0.773', 'NMI': '0.824'}
[ "Accuracy", "NMI", "ARI", "Train Set", "Backbone" ]
Given the following paper and codebase: Paper: Exploring a Principled Framework for Deep Subspace Clustering Codebase: https://github.com/mengxianghan123/PRO-DSC Improve the PRO-DSC model on the CIFAR-100 dataset. The result should improve on the following metrics: {'Accuracy': '0.773', 'NMI': '0.824'}...
Published as a conference paper at ICLR 2025 EXPLORING A PRINCIPLED FRAMEWORK FOR DEEP SUBSPACE CLUSTERING Xianghan Meng†, Zhiyuan Huang† & Wei He Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China {mengxianghan,huangzhiyuan,wei.he}@bupt.edu.cn Xianbiao Qi & Rong Xiao Intellifusion, Shenzhen...
6
1
The proposed framework is based on deep learning techniques which typically require significant computational resources. The paper mentions extensive experiments on multiple datasets including CIFAR-10, CIFAR-20, and CIFAR-100, which are standard benchmarks in the field, often used in deep learning model training. Give...
yes
Yes
CV
Exploring a Principled Framework for Deep Subspace Clustering
2025-03-21T00:00:00.000Z
[https://github.com/mengxianghan123/PRO-DSC]
1
Dataset found at: [https://drive.google.com/drive/folders/1C4qlqYOW4-YulIwgkNfqMM7dZ2O5-BK_], [https://drive.google.com/drive/folders/1L9jH8zRF3To6Hb_B0UZ6PbknhgusWm5_]
20
https://colab.research.google.com/drive/1D4PwvmROZazdEKuhZj7QkfBKOqY9Jb0r?usp=sharing
Yes, successfully run
Everything is fine; runs successfully
FB15k-237
DaBR
[]
Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation
2024-12-05T00:00:00
https://arxiv.org/abs/2412.04076v2
[ "https://github.com/llqy123/dabr" ]
{'MRR': '0.373', 'Hits@10': '0.572', 'Hits@3': '0.410', 'Hits@1': '0.247', 'MR': '83'}
[ "Hits@1", "Hits@3", "Hits@10", "MRR", "MR", "training time (s)", "Hit@1", "Hit@10" ]
Given the following paper and codebase: Paper: Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation Codebase: https://github.com/llqy123/dabr Improve the DaBR model on the FB15k-237 dataset. The result should improve on the following metrics: {'MRR': '0.373', 'Hits@10': '0...
Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation Weihua Wang1,2,3, *, Qiuyu Liang1, Feilong Bao1,2,3, Guanglai Gao1,2,3 1College of Computer Science, Inner Mongolia University, Hohhot, China 2National and Local Joint Engineering Research Center of Intelligent Information Processing Tec...
6
1
The DaBR model has a unique architecture involving quaternion embeddings and bidirectional rotations, likely making it smaller than multi-layered transformer models but larger than simple embeddings. The paper does not mention exact parameter counts, but it's implied that the embedding size can vary (300-500) and relat...
yes
Yes
Graph
Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation
2024-12-05T00:00:00.000Z
[https://github.com/llqy123/dabr]
1
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
20 min
https://colab.research.google.com/drive/1nML0U1finrLk-EkU2gHBF3GiLpUh6rLK?usp=sharing
Yes
We fixed some issues and it runs successfully.
MM-Vet
FlashSloth-HD
[]
FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression
2024-12-05T00:00:00
https://arxiv.org/abs/2412.04317v1
[ "https://github.com/codefanw/flashsloth" ]
{'GPT-4 score': '49.0', 'Params': '3.2B'}
[ "GPT-4 score", "Params" ]
Given the following paper and codebase: Paper: FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression Codebase: https://github.com/codefanw/flashsloth Improve the FlashSloth-HD model on the MM-Vet dataset. The result should improve on the following metrics: {'GPT-4 score...
FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression Bo Tong1, Bokai Lai1, Yiyi Zhou1*, Gen Luo3, Yunhang Shen2, Ke Li2, Xiaoshuai Sun1, Rongrong Ji1 1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P....
6
2
The FlashSloth model is based on a smaller-scale architecture with about 2-3 billion parameters (similar to other tiny MLLMs mentioned such as Qwen2-VL-2B). Given the complexity of multimodal tasks, it may require more time than training single-modality models. The dataset size is assumed to be large as it is optimized...
yes
Yes
Multimodal
FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression
2024-12-05T00:00:00.000Z
[https://github.com/codefanw/flashsloth]
2
https://github.com/codefanw/FlashSloth/tree/main/scripts/eval
20 min
https://colab.research.google.com/drive/1EbXpI0FmQ27nGKgRtKQCVz3m1EpBKiDY?usp=sharing
Yes
Successfully run
CIFAR-10
ABNet-2G-R0
[]
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
2024-11-28T00:00:00
https://arxiv.org/abs/2411.19213v1
[ "https://github.com/dvssajay/New_World" ]
{'Percentage correct': '94.118'}
[ "Percentage correct", "Top-1 Accuracy", "Accuracy", "Parameters", "Top 1 Accuracy", "F1", "Cross Entropy Loss" ]
Given the following paper and codebase: Paper: ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities Codebase: https://github.com/dvssajay/New_World Improve the ABNet-2G-R0 model on the CIFAR-10 dataset. The result should improve on the following metrics: {'Percentage correct': '9...
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities Venkata Satya Sai Ajay Daliparthi Blekinge Institute of Technology Karlskrona, Sweden venkatasatyasaiajay.daliparthi@bth.se Abstract Inspired by Many-Worlds Interpretation (MWI), this work introduces a novel neural network architecture that spl...
6
1
The ANDHRA Bandersnatch architecture implemented with a branching factor of 2 at three levels results in 8 heads, with a total of 15 convolutional layers based on the geometric formula presented in the paper. Given that the model is intended to be used on the CIFAR-10/100 datasets, which consist of 60,000 (for CIFAR-10...
yes
Yes
CV
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
2024-11-28T00:00:00.000Z
[https://github.com/dvssajay/New_World]
1
dataset or example for training or testing found at: [https://github.com/dvssajay/New_World]
20
https://colab.research.google.com/drive/16oyFcqCzN797OOwZbD6L9uZm818KurD6?usp=sharing
Yes, successfully run
But only the training run completes successfully; on the testing side the code needs changes, or something such as the model checkpoint is missing.
FB15k-237
DaBR
[]
Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation
2024-12-05T00:00:00
https://arxiv.org/abs/2412.04076v2
[ "https://github.com/llqy123/dabr" ]
{'MRR': '0.373', 'Hits@10': '0.572', 'Hits@3': '0.410', 'Hits@1': '0.247', 'MR': '83'}
[ "Hits@1", "Hits@3", "Hits@10", "MRR", "MR", "training time (s)", "Hit@1", "Hit@10" ]
Given the following paper and codebase: Paper: Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation Codebase: https://github.com/llqy123/dabr Improve the DaBR model on the FB15k-237 dataset. The result should improve on the following metrics: {'MRR': '0.373', 'Hits@10': '0...
Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation Weihua Wang1,2,3, *, Qiuyu Liang1, Feilong Bao1,2,3, Guanglai Gao1,2,3 1College of Computer Science, Inner Mongolia University, Hohhot, China 2National and Local Joint Engineering Research Center of Intelligent Information Processing Tec...
6
1
The DaBR model has a unique architecture involving quaternion embeddings and bidirectional rotations, likely making it smaller than multi-layered transformer models but larger than simple embeddings. The paper does not mention exact parameter counts, but it's implied that the embedding size can vary (300-500) and relat...
yes
Yes
Graph
Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation
2024-12-05 0:00:00
https://github.com/llqy123/dabr
1
Dataset inside benchmark folder.
About 4.5 days: 10,000 epochs, each taking ~42 sec.
https://drive.google.com/file/d/1XLeWvyV4sdoLDoVBzAMB6czlAbOAhB0W/view?usp=sharing
Yes
-- Straightforward: clone and just run the train_FB15k-237 file.
CIFAR-10
ABNet-2G-R0
[]
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
2024-11-28T00:00:00
https://arxiv.org/abs/2411.19213v1
[ "https://github.com/dvssajay/New_World" ]
{'Percentage correct': '94.118'}
[ "Percentage correct", "Top-1 Accuracy", "Accuracy", "Parameters", "Top 1 Accuracy", "F1", "Cross Entropy Loss" ]
Given the following paper and codebase: Paper: ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities Codebase: https://github.com/dvssajay/New_World Improve the ABNet-2G-R0 model on the CIFAR-10 dataset. The result should improve on the following metrics: {'Percentage correct': '9...
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities Venkata Satya Sai Ajay Daliparthi Blekinge Institute of Technology Karlskrona, Sweden venkatasatyasaiajay.daliparthi@bth.se Abstract Inspired by Many-Worlds Interpretation (MWI), this work introduces a novel neural network architecture that spl...
6
1
The ANDHRA Bandersnatch architecture implemented with a branching factor of 2 at three levels results in 8 heads, with a total of 15 convolutional layers based on the geometric formula presented in the paper. Given that the model is intended to be used on the CIFAR-10/100 datasets, which consist of 60,000 (for CIFAR-10...
yes
Yes
CV
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
2024-11-28 0:00:00
https://github.com/dvssajay/New_World
1
The CIFAR-10 download is embedded inside the file.
200 epochs * 75 sec = 4.2 hours
https://drive.google.com/file/d/1RvV1o-KRUtLVHUwzpcTIPZYB6vpTVmHy/view?usp=sharing
Yes
-- Run the New_World/mainAB2GR0_10_1.py file. Each model has its own code.
5-Datasets
CODE-CL
[]
CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning
2024-11-21T00:00:00
https://arxiv.org/abs/2411.15235v2
[ "https://github.com/mapolinario94/CODE-CL" ]
{'Average Accuracy': '93.32', 'BWT': '-0.25'}
[ "Average Accuracy", "BWT" ]
Given the following paper and codebase: Paper: CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning Codebase: https://github.com/mapolinario94/CODE-CL Improve the CODE-CL model on the 5-Datasets dataset. The result should improve on the following metrics: {'Average Accuracy': '93.32...
CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning Marco P. E. Apolinario Sakshi Choudhary Kaushik Roy Elmore Family School of Electrical and Computer Engineering Purdue University, West Lafayette, IN 47906 mapolina@purdue.edu, choudh23@purdue.edu, kaushik@purdue.edu Abstract Continual learning ...
6
1
Considering a 5-layer AlexNet model with a typical parameter count around 5 million. The dataset CIFAR100 with 60,000 images (train + test) divided into 10 tasks suggests 6,000 images per task, trained for 200 epochs with a batch size of 64. This leads to significant computational overhead, but not excessive by modern ...
yes
Yes
CV
CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning
2024-11-21 0:00:00
https://github.com/mapolinario94/CODE-CL
1
downloaded automatically when running script
3.5 to 4.5 hours - each epoch takes 15 ms and there are 100 epochs; for the 5 datasets the total is 3.5 to 4.5 hrs.
https://colab.research.google.com/drive/1-kzSIjBoDKKhnP0x_UUcWJFSE3muxGCC?usp=sharing
Yes
-- Need to pass the arguments. Dependencies were installed accordingly. Everything is in the Google Colab file.
ISTD+
RASM
[]
Regional Attention for Shadow Removal
2024-11-21T00:00:00
https://arxiv.org/abs/2411.14201v1
[ "https://github.com/CalcuLuUus/RASM" ]
{'RMSE': '2.53'}
[ "RMSE", "PSNR", "SSIM", "LPIPS" ]
Given the following paper and codebase: Paper: Regional Attention for Shadow Removal Codebase: https://github.com/CalcuLuUus/RASM Improve the RASM model on the ISTD+ dataset. The result should improve on the following metrics: {'RMSE': '2.53'}. You must use only the codebase provided.
Regional Attention for Shadow Removal Hengxing Liu chrisliu.jz@gmail.com Tianjin University Tianjin, China Mingjia Li mingjiali@tju.edu.cn Tianjin University Tianjin, China Xiaojie Guo* xj.max.guo@gmail.com Tianjin University Tianjin, China Figure 1: (a) Performance comparison with previous SOTA methods. Our method achie...
6
1
The model proposed, RASM, has a lightweight architecture aiming for efficiency, suggesting a lower parameter count than bulkier models. Given that it adopts a U-shaped encoder-decoder architecture with a feature embedding dimension of 32 and focuses on regional attention rather than full-scale attention, I estimate app...
yes
Yes
CV
Regional Attention for Shadow Removal
2024-11-21 0:00:00
https://github.com/CalcuLuUus/RASM
1
https://drive.usercontent.google.com/download?id=1I0qw-65KBA6np8vIZzO6oeiOvcDBttAY&export=download&authuser=0
6min 23 sec * 1000 = 4.4 days
https://colab.research.google.com/drive/1OqVyOBRCgHGl5p0_lPBeW1xMLYVuZys7?usp=sharing
Yes
-- I have included all the paths and commands in the Colab file. You can change the number of epochs to reduce the training time.
Training and validation dataset of capsule vision 2024 challenge.
BiomedCLIP+PubmedBERT
[]
A Multimodal Approach For Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT
2024-10-25T00:00:00
https://arxiv.org/abs/2410.19944v3
[ "https://github.com/Satyajithchary/MedInfoLab_Capsule_Vision_2024_Challenge" ]
{'Total Accuracy': '97.75'}
[ "Total Accuracy" ]
Given the following paper and codebase: Paper: A Multimodal Approach For Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT Codebase: https://github.com/Satyajithchary/MedInfoLab_Capsule_Vision_2024_Challenge Improve the BiomedCLIP+PubmedBERT model on the Training and validation dataset of cap...
A MULTIMODAL APPROACH FOR ENDOSCOPIC VCE IMAGE CLASSIFICATION USING BiomedCLIP-PubMedBERT A PREPRINT Dr. Nagarajan Ganapathy∗ Department of Biomedical Engineering Indian Institute of Technology Hyderabad Sangareddy, Hyderabad, India gnagarajan@bme.iith.ac.in Podakanti Satyajith Chary Department of Biomedical Engineerin...
6
1
The BiomedCLIP model utilizes a Vision Transformer (ViT) and PubMedBERT, both known for their large parameter counts. Given the complexity of multimodal tasks (vision and language), a standard transformer model could have around 300 million parameters. Training with 37,607 frames and a batch size of 32 results in about...
yes
Yes
Multimodal
A Multimodal Approach For Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT
2024-10-25 0:00:00
https://github.com/Satyajithchary/MedInfoLab_Capsule_Vision_2024_Challenge
1
https://github.com/misahub2023/Capsule-Vision-2024-Challenge.
1 hr * 3 epochs = 3 hours
https://colab.research.google.com/drive/19Y7kge6PwOugIf_jdkhXoxjUYkU3iqSG?usp=sharing
Yes
-- The dataset is downloaded using the script provided on GitHub. Then change the dataset path in the Colab file. Download the medinfolab-capsule-vision-2024-challenge.ipynb file from the repo, or just run the Colab file linked above.
Electricity (192)
CycleNet
[]
CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns
2024-09-27T00:00:00
https://arxiv.org/abs/2409.18479v2
[ "https://github.com/ACAT-SCUT/CycleNet" ]
{'MSE': '0.144', 'MAE': '0.237'}
[ "MSE", "MAE" ]
Given the following paper and codebase: Paper: CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns Codebase: https://github.com/ACAT-SCUT/CycleNet Improve the CycleNet model on the Electricity (192) dataset. The result should improve on the following metrics: {'MSE': '0.144',...
CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns Shengsheng Lin1, Weiwei Lin1,2,∗, Xinyi Hu3, Wentai Wu4, Ruichao Mo1, Haocheng Zhong1 1School of Computer Science and Engineering, South China University of Technology, China 2Pengcheng Laboratory, China 3Department of Computer Science and E...
6
1
The CycleNet model has a minimal parameter count (around 472.9K for MLP and 123.7K for Linear variations), which suggests a relatively lightweight architecture, allowing efficient training on a single GPU. The datasets utilized are reasonably sized, with the largest being the Electricity dataset (26,304 timesteps with ...
yes
Yes
Time Series
CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns
2024-09-27 0:00:00
https://github.com/ACAT-SCUT/CycleNet
1
https://drive.usercontent.google.com/download?id=1bNbw1y8VYp-8pkRTqbjoW-TA-G8T0EQf&export=download&authuser=0
25 s * 30 epochs = 12.5 min for each seq length. There are multiple seq lengths.
https://drive.google.com/file/d/18IdZY2MOml8pmTVAoEcMoWWuU_1fI8aT/view?usp=sharing
Yes
-- Tested only for Electricity. I have included the commands in the Colab file. A 192 sequence length was not available, so I used 336, which was the nearest. It works; just inspect run_main.sh.
PeMSD4
PM-DMNet(R)
[]
Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction
2024-08-12T00:00:00
https://arxiv.org/abs/2408.07100v1
[ "https://github.com/wengwenchao123/PM-DMNet" ]
{'12 steps MAE': '18.37', '12 steps RMSE': '30.68', '12 steps MAPE': '12.01'}
[ "12 steps MAE", "12 steps MAPE", "12 steps RMSE" ]
Given the following paper and codebase: Paper: Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction Codebase: https://github.com/wengwenchao123/PM-DMNet Improve the PM-DMNet(R) model on the PeMSD4 dataset. The result should improve on the following metrics: {'12 steps MAE': '18.37',...
Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction Wenchao Weng, Mei Wu, Hanyu Jiang, Wanzeng Kong, Senior Member, IEEE, Xiangjie Kong, Senior Member, IEEE, and Feng Xia, Senior Member, IEEE Abstract—In recent years, deep learning has increasingly gained attention in the field of traffic pred...
6
1
The PM-DMNet model employs a dynamic memory network with reduced computational complexity of O(N) compared to existing methods. Given the complexity of the architecture and typical dataset sizes in traffic prediction tasks, an estimated training time of 6 hours is reasonable assuming a moderate dataset size of around 1...
yes
Yes
Graph
Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction
2024-08-12 0:00:00
https://github.com/wengwenchao123/PM-DMNet
1
https://drive.usercontent.google.com/download?id=1Q8boyeVNmZTz_HASN_57qd9wX1JZeGem&export=download&authuser=0
35 s avg * 500 epochs ≈ 5 hours
https://colab.research.google.com/drive/1MGEsXeIEGO7AKBMZ6DBZoEt73bQaCTe2?usp=sharing
Yes
-- Fairly easy one. I have included the pip installation in the Colab file. This repo does not contain a requirements.txt file.
Kvasir-SEG
EffiSegNet-B5
[]
EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder
2024-07-23T00:00:00
https://arxiv.org/abs/2407.16298v1
[ "https://github.com/ivezakis/effisegnet" ]
{'mean Dice': '0.9488', 'mIoU': '0.9065', 'F-measure': '0.9513', 'Precision': '0.9713', 'Recall': '0.9321'}
[ "mean Dice", "Average MAE", "S-Measure", "max E-Measure", "mIoU", "FPS", "F-measure", "Precision", "Recall" ]
Given the following paper and codebase: Paper: EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder Codebase: https://github.com/ivezakis/effisegnet Improve the EffiSegNet-B5 model on the Kvasir-SEG dataset. The result should improve...
EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder Ioannis A. Vezakis TECREANDO B.V . Amsterdam, The Netherlands 0000-0003-4976-4901Konstantinos Georgas Biomedical Engineering Laboratory School of Electrical and Computer Engineering National Techni...
6
2
The EffiSegNet model has multiple variants with EfficientNet as backbone, ranging from 4.0M to 63.8M parameters. Given the Kvasir-SEG dataset of 1000 images and training for 300 epochs with a batch size of 8, the number of parameters suggests a training time of around 6 hours using 2 GPUs. Each image requires resizing ...
yes
Yes
CV
EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder
2024-07-23 0:00:00
https://github.com/ivezakis/effisegnet
2
Inside the repo
45 sec * 300 epochs ≈ 4 hours
https://colab.research.google.com/drive/1YzKf-VnfFVZW67_SYj2295KmuwYAFgUB?usp=sharing
Yes
Fairly easy; just create the env and run.
clintox
BiLSTM
[]
Accelerating Drug Safety Assessment using Bidirectional-LSTM for SMILES Data
2024-07-08T00:00:00
https://arxiv.org/abs/2407.18919v1
[ "https://github.com/kvrsid/toxic" ]
{'AUC': '0.97'}
[ "AUC" ]
Given the following paper and codebase: Paper: Accelerating Drug Safety Assessment using Bidirectional-LSTM for SMILES Data Codebase: https://github.com/kvrsid/toxic Improve the BiLSTM model on the clintox dataset. The result should improve on the following metrics: {'AUC': '0.97'}. You must use only t...
393 Vol. 21, No. 1 , (2024) ISSN: 1005 -0930 Accelerating Drug Safety Assessment using Bidirectional -LSTM for SMILES Data K. Venkateswara Rao1, Dr. Kunjam Nageswara Rao2, Dr. G. Sita Ratnam3 1 Research Scholar, 2 Professor Department of Computer Science and Systems Engineering, Andhra University College of Engineering...
6
1
The proposed model employs a bidirectional LSTM architecture, which typically has a reasonable number of parameters compared to other complex models like transformers or very deep networks. Given the structure described, a rough estimate of 6 hours of training time on a standard single GPU is appropriate, taking into a...
yes
Yes
Bioinformatics
Accelerating Drug Safety Assessment using Bidirectional-LSTM for SMILES Data
2024-07-08 0:00:00
https://github.com/kvrsid/toxic
1
inside the repo as clintox.csv
Total 5 min on 100 epochs.
https://drive.google.com/file/d/1ut_cYbQzf3Pov5Xdu24TxA5WEEMucV-z/view?usp=sharing
Yes
-- I fixed 2 lines of code; the fixes are commented in the Colab file.
ImageNet-10
DPAC
[]
Deep Online Probability Aggregation Clustering
2024-07-07T00:00:00
https://arxiv.org/abs/2407.05246v2
[ "https://github.com/aomandechenai/deep-probability-aggregation-clustering" ]
{'Accuracy': '0.97', 'NMI': '0.925', 'ARI': '0.935', 'Backbone': 'ResNet-34'}
[ "NMI", "Accuracy", "ARI", "Backbone", "Image Size" ]
Given the following paper and codebase: Paper: Deep Online Probability Aggregation Clustering Codebase: https://github.com/aomandechenai/deep-probability-aggregation-clustering Improve the DPAC model on the ImageNet-10 dataset. The result should improve on the following metrics: {'Accuracy': '0.97', 'N...
Deep Online Probability Aggregation Clustering Yuxuan Yan, Na Lu⋆, and Ruofan Yan Systems Engineering Institute, Xi’an Jiaotong University yan1611@stu.xjtu.edu.cn, lvna2009@mail.xjtu.edu.cn, yanruofan@stu.xjtu.edu.cn Abstract. Combining machine clustering with deep models has shown remarkable superiority in deep cluste...
6
1
The DPAC model uses a deep learning framework that incorporates modern neural network architectures suitable for deep clustering tasks, similar to models typical in the domain (like SimCLR). The datasets used (CIFAR-10, CIFAR-100, etc.) are of moderate size, and the paper mentions various datasets with up to 60,000 sam...
yes
Yes
CV
Deep Online Probability Aggregation Clustering
2024-07-07 0:00:00
https://github.com/aomandechenai/deep-probability-aggregation-clustering
1
Downloads the CIFAR-10 dataset during the preprocessing step
36 hours just for pre-training.
https://drive.google.com/file/d/1-nXU0RbPPY9WObax53y0CrfOoQ-6cry4/view?usp=sharing
Yes
-- Need to change some lines in pre_train.py; the changed code is in a comment in the Colab. Takes 36 hours for just 1 epoch, as shown during training. May run with enough resources.
CAT2000
SUM
[]
SUM: Saliency Unification through Mamba for Visual Attention Modeling
2024-06-25T00:00:00
https://arxiv.org/abs/2406.17815v2
[ "https://github.com/Arhosseini77/SUM" ]
{'KL': '0.27'}
[ "KL" ]
Given the following paper and codebase: Paper: SUM: Saliency Unification through Mamba for Visual Attention Modeling Codebase: https://github.com/Arhosseini77/SUM Improve the SUM model on the CAT2000 dataset. The result should improve on the following metrics: {'KL': '0.27'}. You must use only the code...
SUM: Saliency Unification through Mamba for Visual Attention Modeling Alireza Hosseini*,1Amirhossein Kazerouni*,2,3,4Saeed Akhavan1 Michael Brudno2,3,4Babak Taati2,3,4 1University of Tehran2University of Toronto3Vector Institute 4University Health Network {arhosseini77, s.akhavan }@ut.ac.ir, {amirhossein, brudno }@cs.t...
6
1
The SUM model utilizes a U-Net architecture integrated with Mamba, which is known for efficiency due to its linear complexity. While the total parameter count isn't specified, similar models in this domain typically range from 30M to 800M. Given the complexity, a reasonable estimate for training time is 6 hours based o...
yes
Yes
CV
SUM: Saliency Unification through Mamba for Visual Attention Modeling
2024-06-25 0:00:00
https://github.com/Arhosseini77/SUM
1
https://drive.usercontent.google.com/download?id=1Mdk97UB0phYDZv8zgjBayeC1I1_QcUmh&export=download&authuser=0
4 min * 30 epoch = 2 hr
https://colab.research.google.com/drive/1jdVKL-KYdo1CgCdOCzzKMFSmDBqrc8RX?usp=sharing
Yes
-- Don't run requirements.txt as it will produce a dependency error. I have included the pip install command; for running on CAT2000, small changes are needed, which I have included in the Colab file. Also need a small matplotlib command to run on Colab.
SumMe
CSTA
[]
CSTA: CNN-based Spatiotemporal Attention for Video Summarization
2024-05-20T00:00:00
https://arxiv.org/abs/2405.11905v2
[ "https://github.com/thswodnjs3/CSTA" ]
{"Kendall's Tau": '0.246', "Spearman's Rho": '0.274'}
[ "F1-score (Canonical)", "F1-score (Augmented)", "Kendall's Tau", "Spearman's Rho" ]
Given the following paper and codebase: Paper: CSTA: CNN-based Spatiotemporal Attention for Video Summarization Codebase: https://github.com/thswodnjs3/CSTA Improve the CSTA model on the SumMe dataset. The result should improve on the following metrics: {"Kendall's Tau": '0.246', "Spearman's Rho": '0.2...
CSTA: CNN-based Spatiotemporal Attention for Video Summarization Jaewon Son, Jaehun Park, Kwangsu Kim* Sungkyunkwan University {31z522x4,pk9403,kim.kwangsu }@skku.edu Abstract Video summarization aims to generate a concise repre- sentation of a video, capturing its essential content and key moments while reducing its o...
6
1
The model, CSTA, is based on a CNN architecture (GoogleNet) and integrates attention mechanisms. Given that model architectures like this usually have around 5-10 million parameters, I estimate approximately 6 hours of training time assuming a dataset with 50 videos (TVSum) and around 25 videos (SumMe), which is standa...
yes
Yes
CV
CSTA: CNN-based Spatiotemporal Attention for Video Summarization
2024-05-20 0:00:00
https://github.com/thswodnjs3/CSTA
1
https://github.com/e-apostolidis/PGL-SUM/tree/master/data
5 min for the SumMe dataset for 50 epochs
https://colab.research.google.com/drive/1zMK8TRHtdhQB7dkwkxA3ImblIstiU9ob?usp=sharing
Yes
-- Runs perfectly on SumMe, but crashes on the TVSum dataset with an out-of-GPU-memory error.
HME100K
ICAL
[]
ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition
2024-05-15T00:00:00
https://arxiv.org/abs/2405.09032v4
[ "https://github.com/qingzhenduyu/ical" ]
{'ExpRate': '69.06'}
[ "ExpRate" ]
Given the following paper and codebase: Paper: ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition Codebase: https://github.com/qingzhenduyu/ical Improve the ICAL model on the HME100K dataset. The result should improve on the following metrics: {'ExpRate...
ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition Jianhua Zhu1[0009 −0000−3982−2739], Liangcai Gao1( ), and Wenqi Zhao1 Wangxuan Institute of Computer Technology, Peking University, Beijing, China zhujianhuapku@pku.edu.cn gaoliangcai@pku.edu.cn wenqizhao@stu.pku.edu.cn...
6
2
The model uses a DenseNet encoder with multiple layers and a Transformer decoder, which suggests a moderate to high complexity. Given that DenseNet and Transformers are known to have significant memory and computational demands, along with the dataset sizes (8,836 training samples for CROHME with about 300,000 characte...
yes
Yes
CV
ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition
2024-05-15 0:00:00
https://github.com/qingzhenduyu/ical
2
https://disk.pku.edu.cn/anyshare/en-us/link/AAF10CCC4D539543F68847A9010C607139/EF71051AA2314E3AA921F528C70BF712/A2D37D1699B54529BA80157162294FA5?_tb=none
1 hr per epoch * 120 epochs = 120 hours
https://colab.research.google.com/drive/1ojkqF09KgeqtsgyPSDS0ya64VPddIgiz?usp=sharing
Yes
-- Cannot download the data directly into Colab. Need to store it locally and upload it to Colab, or use Google Drive to unzip the content into Colab.
Kvasir-SEG
EMCAD
[]
EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation
2024-05-11T00:00:00
https://arxiv.org/abs/2405.06880v1
[ "https://github.com/sldgroup/emcad" ]
{'mean Dice': '0.928'}
[ "mean Dice", "Average MAE", "S-Measure", "max E-Measure", "mIoU", "FPS", "F-measure", "Precision", "Recall" ]
Given the following paper and codebase: Paper: EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation Codebase: https://github.com/sldgroup/emcad Improve the EMCAD model on the Kvasir-SEG dataset. The result should improve on the following metrics: {'mean Dice': '0...
EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation Md Mostafijur Rahman, Mustafa Munir, and Radu Marculescu The University of Texas at Austin Austin, Texas, USA mostafijur.rahman, mmunir, radum@utexas.edu Abstract An efficient and effective decoding mechanism is crucial in medi...
6
1
The EMCAD model has approximately 1.91 million parameters and 0.381G FLOPs for a standard encoder, making it relatively lightweight compared to larger models like UNet and TransUNet, which have tens of millions of parameters and significantly higher FLOP counts. Given the medical image segmentation task, a typical trai...
yes
Yes
CV
EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation
2024-05-11 0:00:00
https://github.com/sldgroup/emcad
1
https://drive.google.com/drive/folders/1ACJEoTp-uqfFJ73qS3eUObQh52nGuzCd
21 hours for the Synapse dataset.
https://colab.research.google.com/drive/1jYDic29ht3AjFGx5rXY_Fp5hcoxoRP6M?usp=sharing
Yes
-- No instructions on how to run on the Kvasir-SEG data; a separate dataloader is needed for the Kvasir-SEG input. But it ran on the Synapse dataset provided as their default training set.
ETTh1 (336) Multivariate
SOFTS
[]
SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion
2024-04-22T00:00:00
https://arxiv.org/abs/2404.14197v3
[ "https://github.com/secilia-cxy/softs" ]
{'MSE': '0.480', 'MAE': '0.452'}
[ "MSE", "MAE" ]
Given the following paper and codebase: Paper: SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion Codebase: https://github.com/secilia-cxy/softs Improve the SOFTS model on the ETTh1 (336) Multivariate dataset. The result should improve on the following metrics: {'MSE': '0.480...
SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion Lu Han∗, Xu-Yang Chen∗, Han-Jia Ye†, De-Chuan Zhan School of Artificial Intelligence, Nanjing University, China National Key Laboratory for Novel Software Technology, Nanjing University, China {hanlu, chenxy, yehj, zhandc}@lamda.nju.edu.cn Ab...
6
1
The SOFTS architecture is based on an MLP and is designed to have linear complexity in terms of the number of channels, while the datasets reported have around 170 to 883 channels. A reasonable estimate for the number of parameters in this model, given the MLP structure, is in the order of 1-5 million parameters. Consi...
yes
Yes
Time Series
SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion
2024-04-22 0:00:00
https://github.com/secilia-cxy/softs
1
https://drive.google.com/drive/folders/1ZOYpTUa82_jCcxIdTmyr0LXQfvaM9vIy
30s for 336 seq
https://colab.research.google.com/drive/14p_kyKxFS9780yR-GJpq4foiumZUznJ4?usp=sharing
Yes
-- I have listed the requirements to install in a Colab cell. Just need to comment out some lines to run for only the 336 seq.
FER2013
VGG based
[]
IdentiFace : A VGG Based Multimodal Facial Biometric System
2024-01-02T00:00:00
https://arxiv.org/abs/2401.01227v2
[ "https://github.com/MahmoudRabea13/IdentiFace" ]
{'5-class test accuracy': '66.13%'}
[ "5-class test accuracy" ]
Given the following paper and codebase: Paper: IdentiFace : A VGG Based Multimodal Facial Biometric System Codebase: https://github.com/MahmoudRabea13/IdentiFace Improve the VGG based model on the FER2013 dataset. The result should improve on the following metrics: {'5-class test accuracy': '66.13%'}. ...
IdentiFace: A VGGNet -Based Multimodal Facial Biometric System Mahmoud Rabea, Hanya Ahmed, Sohaila Mahmoud, Nourhan Sayed Systems and Biomedical Department, Faculty of Engineering Cairo University Abstract - The development of facial biometric systems has contributed greatly to the development of the computer vision fi...
6
1
The model described is based on a simplified VGG-16 architecture with a lower number of layers and parameters compared to the original model. Given that this architecture has several layers and parameters, I estimate around 6 hours of training time based on the size of the datasets involved and the computational cost o...
yes
Yes
CV
IdentiFace : A VGG Based Multimodal Facial Biometric System
2024-01-02 0:00:00
https://github.com/MahmoudRabea13/IdentiFace
1
https://www.kaggle.com/datasets/msambare/fer2013
30s * 40 epoch = 20 min
https://drive.google.com/file/d/1NLLV2fLLpzBI3IQlCa6xac_SSr6q7ofN/view?usp=sharing
Yes
-- The training code is included in /Notebooks/Emotion/FER Dataset/Model.ipynb inside the repo. I have linked the repo with proper fixes; just run the Colab file I have linked here.
ETTh1 (336) Multivariate
AMD
[]
Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting
2024-06-06T00:00:00
https://arxiv.org/abs/2406.03751v1
[ "https://github.com/troubadour000/amd" ]
{'MSE': '0.418', 'MAE': '0.427'}
[ "MSE", "MAE" ]
Given the following paper and codebase: Paper: Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting Codebase: https://github.com/troubadour000/amd Improve the AMD model on the ETTh1 (336) Multivariate dataset. The result should improve on the following metrics: {'MSE': '0.418', 'MAE...
Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting Yifan Hu1,∗Peiyuan Liu3,∗Peng Zhu1Dawei Cheng1,BTao Dai2 1Tongji University2Shenzhen University 3Tsinghua Shenzhen International Graduate School {pengzhu, dcheng}@tongji.edu.cn {huyf0122, peiyuanliu.edu, daitao.edu}@gmail.com Abstract Transformer-...
6
1
The model proposed in the paper is an MLP-based Adaptive Multi-Scale Decomposition (AMD) framework, which likely has fewer parameters than Transformer-based models. The paper indicates a memory usage of 1349 MB, implying a moderate model size appropriate for single-GPU training. The training time of 17 ms/iteration sug...
yes
Yes
Time Series
Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting
2024-06-06 0:00:00
https://github.com/troubadour000/amd
1
Autoformer
1
Untitled12.ipynb
Yes
Works as expected after manually downloading and uploading the data
ZINC
NeuralWalker
[]
Learning Long Range Dependencies on Graphs via Random Walks
2024-06-05T00:00:00
https://arxiv.org/abs/2406.03386v2
[ "https://github.com/borgwardtlab/neuralwalker" ]
{'MAE': '0.065 ± 0.001'}
[ "MAE" ]
Given the following paper and codebase: Paper: Learning Long Range Dependencies on Graphs via Random Walks Codebase: https://github.com/borgwardtlab/neuralwalker Improve the NeuralWalker model on the ZINC dataset. The result should improve on the following metrics: {'MAE': '0.065 ± 0.001'}. You must us...
Learning Long Range Dependencies on Graphs via Random Walks Dexiong Chen Till Hendrik Schulz Karsten Borgwardt Max Planck Institute of Biochemistry 82152 Martinsried, Germany {dchen, tschulz, borgwardt}@biochem.mpg.de Abstract Message-passing graph neural networks (GNNs) excel at capturing local relation- ships but str...
6
1
The proposed NeuralWalker architecture leverages random walks and message-passing mechanisms, which typically have a moderate level of complexity. It combines local and long-range dependencies, making it a compact yet powerful model. Given the extensive graphs and nodes it claims to handle (up to 1.6M nodes), one can e...
yes
Yes
Graph
Learning Long Range Dependencies on Graphs via Random Walks
2024-06-05 0:00:00
https://github.com/borgwardtlab/neuralwalker
1
In Code
1
Untitled13.ipynb
Yes
Works after installing micromamba first
MalNet-Tiny
GatedGCN+
[]
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00
https://arxiv.org/abs/2502.09263v1
[ "https://github.com/LUOyk1999/GNNPlus" ]
{'Accuracy': '94.600±0.570'}
[ "Accuracy", "MCC" ]
Given the following paper and codebase: Paper: Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Codebase: https://github.com/LUOyk1999/GNNPlus Improve the GatedGCN+ model on the MalNet-Tiny dataset. The result should improve on the following metrics: {...
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Yuankai Luo1 2Lei Shi*1Xiao-Ming Wu*2 Abstract Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expres- siveness, issues like over-smoothing and over- squashing, and challenges in captu...
8
1
The training process involves using 3 classic GNN architectures (GCN, GIN, GatedGCN) enhanced by the GNN+ framework. Given that each model has around 500K parameters and will be trained on 14 datasets with different sizes (ZINC has 12K graphs while ogbg-code2 has over 450K). The average time per epoch as reported is lo...
yes
Yes
Graph
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00.000Z
[https://github.com/LUOyk1999/GNNPlus]
1
http://malnet.cc.gatech.edu/graph-data/malnet-graphs-tiny.tar.gz,http://malnet.cc.gatech.edu/split-info/split_info_tiny.zip
1 hour - (avg 24 sec * 150 epochs)
https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing
Yes
null
STL-10, 40 Labels
SemiOccam
[]
ViTSGMM: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels
2025-06-04T00:00:00
https://arxiv.org/abs/2506.03582v1
[ "https://github.com/Shu1L0n9/SemiOccam" ]
{'Accuracy': '95.43'}
[ "Accuracy" ]
Given the following paper and codebase: Paper: ViTSGMM: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels Codebase: https://github.com/Shu1L0n9/SemiOccam Improve the SemiOccam model on the STL-10, 40 Labels dataset. The result should improve on the following metrics: {'Accuracy': '...
Rui et al. Harbin Engineering University VITSGMM: A R OBUST SEMI-SUPERVISED IMAGE RECOGNITION NETWORK USING SPARSE LABELS Rui Yann∗ Shu1L0n9@gmail.comXianglei Xing† xingxl@hrbeu.edu.cn General Artificial Intelligence Laboratory College of Intelligent Systems Science and Engineering Harbin Engineering University Harbin,...
8
1
The ViTSGMM model utilizes the Vision Transformer architecture (likely ViT-base or ViT-large), which has approximately 86 million parameters for ViT-base and over 300 million for ViT-large. Considering the CIFAR-10 dataset has 60,000 images and STL-10 has about 13,000 images, the added computational complexity from sem...
yes
Yes
CV
ViTSGMM: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels
2025-06-04T00:00:00.000Z
[https://github.com/Shu1L0n9/SemiOccam]
1
Code downloads dynamically after changing the dataset name
3 Hours
Copy of experiment.ipynb
Yes
It starts and runs successfully
CIFAR-10
ResNet18 (FSGDM)
[]
On the Performance Analysis of Momentum Method: A Frequency Domain Perspective
2024-11-29T00:00:00
https://arxiv.org/abs/2411.19671v6
[ "https://github.com/yinleung/FSGDM" ]
{'Percentage correct': '95.66'}
[ "Percentage correct", "Top-1 Accuracy", "Accuracy", "Parameters", "Top 1 Accuracy", "F1", "Cross Entropy Loss" ]
Given the following paper and codebase: Paper: On the Performance Analysis of Momentum Method: A Frequency Domain Perspective Codebase: https://github.com/yinleung/FSGDM Improve the ResNet18 (FSGDM) model on the CIFAR-10 dataset. The result should improve on the following metrics: {'Percentage correct'...
Published as a conference paper at ICLR 2025 ON THE PERFORMANCE ANALYSIS OF MOMENTUM METHOD : A F REQUENCY DOMAIN PERSPECTIVE Xianliang Li∗1,2, Jun Luo∗1,2, Zhiwei Zheng∗3, Hanxiao Wang2,4, Li Luo5, Lingkun Wen2,6, Linlong Wu7, Sheng Xu†1 1Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences 2Univers...
8
1
The paper involves training two models: ResNet50 on CIFAR-100 and VGG16 on CIFAR-10. ResNet50 has approximately 25.6 million parameters, while VGG16 has around 138 million parameters. Both datasets (CIFAR-10 and CIFAR-100) are relatively small (60,000 images total for CIFAR-10 and 100,000 images for CIFAR-100) and are ...
yes
Yes
CV
On the Performance Analysis of Momentum Method: A Frequency Domain Perspective
2024-11-29T00:00:00.000Z
[https://github.com/yinleung/FSGDM]
1
dataset or example for train found at: [https://github.com/yinleung/FSGDM/tree/main/examples/CIFAR100]
10
https://colab.research.google.com/drive/1rYHru1icUH3Yj4kvEvVuriMhdqM--kCS?usp=sharing
Yes, ran successfully!
But needed to change the code a little and optimize it. Training on an example takes too much time.
CIFAR-10
ResNet18 (FSGDM)
[]
On the Performance Analysis of Momentum Method: A Frequency Domain Perspective
2024-11-29T00:00:00
https://arxiv.org/abs/2411.19671v6
[ "https://github.com/yinleung/FSGDM" ]
{'Percentage correct': '95.66'}
[ "Percentage correct", "Top-1 Accuracy", "Accuracy", "Parameters", "Top 1 Accuracy", "F1", "Cross Entropy Loss" ]
Given the following paper and codebase: Paper: On the Performance Analysis of Momentum Method: A Frequency Domain Perspective Codebase: https://github.com/yinleung/FSGDM Improve the ResNet18 (FSGDM) model on the CIFAR-10 dataset. The result should improve on the following metrics: {'Percentage correct'...
Published as a conference paper at ICLR 2025 ON THE PERFORMANCE ANALYSIS OF MOMENTUM METHOD : A F REQUENCY DOMAIN PERSPECTIVE Xianliang Li∗1,2, Jun Luo∗1,2, Zhiwei Zheng∗3, Hanxiao Wang2,4, Li Luo5, Lingkun Wen2,6, Linlong Wu7, Sheng Xu†1 1Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences 2Univers...
8
1
The paper involves training two models: ResNet50 on CIFAR-100 and VGG16 on CIFAR-10. ResNet50 has approximately 25.6 million parameters, while VGG16 has around 138 million parameters. Both datasets (CIFAR-10 and CIFAR-100) are relatively small (60,000 images total for CIFAR-10 and 100,000 images for CIFAR-100) and are ...
yes
Yes
CV
On the Performance Analysis of Momentum Method: A Frequency Domain Perspective
2024-11-29 0:00:00
https://github.com/yinleung/FSGDM
1
Inside the repo, in the examples/CIFAR100 folder
300 epochs * 2.5 min = 12.5 hours
https://drive.google.com/file/d/1grWsTDyc3MOwfbwob2EbMbL7GPmjsKfI/view?usp=sharing
Yes
-- Run by going into examples/CIFAR100/main.py
ogbl-ddi
GCN (node embedding)
[]
Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods
2024-11-22T00:00:00
https://arxiv.org/abs/2411.14711v1
[ "https://github.com/astroming/GNNHE" ]
{'Test Hits@20': '0.9549 ± 0.0073', 'Validation Hits@20': '0.9098 ± 0.0294', 'Number of params': '5125250', 'Ext. data': 'No'}
[ "Ext. data", "Test Hits@20", "Validation Hits@20", "Number of params" ]
Given the following paper and codebase: Paper: Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods Codebase: https://github.com/astroming/GNNHE Improve the GCN (node embedding) model on the ogbl-ddi dataset. The result should improve on the following metrics: {'Te...
Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods Shuming Liang*1, Yu Ding2, Zhidong Li1, Bin Liang1, Siqi Zhang3, Yang Wang1, Fang Chen1 1University of Technology Sydney first name.last name@uts.edu.au 2University of Wollongong dyu@uow.edu.au 3Zhejiang University siqizhang@zju....
8
1
The experiments utilize OGB datasets which are well-known benchmarks in link prediction tasks. Based on similar models in the literature, training one of these GNNs on datasets like ogbl-collab or ogbl-ddi typically ranges from 4 to 8 hours on a single GPU when standard hyperparameters are used. Given the dataset sizes...
yes
Yes
Graph
Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods
2024-11-22 0:00:00
https://github.com/astroming/GNNHE
1
Inside the /GNNHE/ogbl-ddi_95.49_10runs/dataset folder of repo
30 sec * 2000 = 16.7 hours
https://colab.research.google.com/drive/1LD0gm45pSoZMyKFWrm23d4s0_DQpgtx8?usp=sharing
Yes
-- Since the requirements were vague, I used Grok to fix the dependency issue; the installation process is recorded in the Colab notebook.
TXL-PBC: a freely accessible labeled peripheral blood cell dataset
yolov5n
[]
TXL-PBC: a freely accessible labeled peripheral blood cell dataset
2024-07-18T00:00:00
https://arxiv.org/abs/2407.13214v1
[ "https://github.com/lugan113/TXL-PBC_Dataset" ]
{'mAP50': '0.958'}
[ "mAP50" ]
Given the following paper and codebase: Paper: TXL-PBC: a freely accessible labeled peripheral blood cell dataset Codebase: https://github.com/lugan113/TXL-PBC_Dataset Improve the yolov5n model on the TXL-PBC: a freely accessible labeled peripheral blood cell dataset dataset. The result should improve ...
TXL-PBC: A FREELY ACCESSIBLE LABELED PERIPHERAL BLOOD CELL DATASET Lu Gan Northern Arizona University Flagstaff, AZ, USA lg2465@nau.eduXi Li Independent Researcher Chengdu, China reilixi723@gmail.com ABSTRACT In a recent study, we found that publicly BCCD and BCD datasets have significant issues such as labeling errors...
8
1
The TXL-PBC dataset has 1,008 training samples, with a batch size of 16 and an image resolution of 320x320 pixels. Training with YOLOv8n for 100 epochs means that there are a total of 1,008/16 = 63 iterations per epoch, resulting in approximately 6,300 total iterations. Given the complexity of YOLOv8n and the semi-auto...
yes
Yes
CV
TXL-PBC: a freely accessible labeled peripheral blood cell dataset
2024-07-18 0:00:00
https://github.com/lugan113/TXL-PBC_Dataset
1
Inside the repo, in the TXL-PBC folder
17 s * 100 epochs ≈ 29 minutes
https://drive.google.com/file/d/1NdhlcOZdyojbL8kctTFOo8eFA03PMWdl/view?usp=sharing
Yes
-- I have fixed the train.py file with the correct arguments and file path. I have commented the fixes in the Colab file.
ZJU-RGB-P
CSFNet-2
[]
CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes
2024-07-01T00:00:00
https://arxiv.org/abs/2407.01328v1
[ "https://github.com/Danial-Qashqai/CSFNet" ]
{'mIoU': '91.40', 'Frame (fps)': '75 (3090)'}
[ "mIoU", "Frame (fps)" ]
Given the following paper and codebase: Paper: CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes Codebase: https://github.com/Danial-Qashqai/CSFNet Improve the CSFNet-2 model on the ZJU-RGB-P dataset. The result should improve on the following metric...
CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB -X Semantic Segmentation of Driving Scenes Danial Qashqaia,*, Emad Mousaviana, Shahriar B. Shokouhia, Sattar Mirzakuchakia aDepartment of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran Abstract Semantic segmentation, as a cruc...
8
1
The CSFNet model described in the paper is utilizing a dual and single-branch architecture with low complexity, specifically designed for faster inference. Given that it is trained on Cityscapes, MFNet, and ZJU datasets with a moderate number of parameters (around 11.31M to 19.37M), and considering the training setting...
yes
Yes
CV
CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes
2024-07-01 0:00:00
https://github.com/Danial-Qashqai/CSFNet
1
https://drive.google.com/file/d/1TugQ16fcxbmPBJD0EPMHHmjdK9IE4SAO/view
34 s * 600 epochs = 5.67 hours
https://drive.google.com/file/d/12nPSCuyG-9-eA3bAaqDBWAzJGoddQyDn/view?usp=sharing
Yes
-- Just need to change the path argument when calling the training script. All the data, in the proper structure, is in the Colab file. The backbone has also been downloaded and added.
MIMIC-III
FLD
[]
Functional Latent Dynamics for Irregularly Sampled Time Series Forecasting
2024-05-06T00:00:00
https://arxiv.org/abs/2405.03582v2
[ "https://github.com/kloetergensc/functional-latent_dynamics" ]
{'MSE': '0.444 ± 0.027'}
[ "MSE", "NegLL" ]
Given the following paper and codebase: Paper: Functional Latent Dynamics for Irregularly Sampled Time Series Forecasting Codebase: https://github.com/kloetergensc/functional-latent_dynamics Improve the FLD model on the MIMIC-III dataset. The result should improve on the following metrics: {'MSE': '0.4...
Functional Latent Dynamics for Irregularly Sampled Time Series Forecasting Christian Kl¨ otergens( )12, Vijaya Krishna Yalavarthi1, Maximilian Stubbemann12, and Lars Schmidt-Thieme12 1ISMLL, University of Hildesheim, Germany {kloetergens, yalavarthi, stubbemann, schmidt-thieme }@ismll.de 2VWFS Data Analytics Research C...
8
1
The Functional Latent Dynamics (FLD) model employs a multi-head attention mechanism and a feedforward neural network for its architecture, which entails a moderate level of complexity. Considering the datasets used (which have varying sample sizes and sparsity), realistic estimates suggest that with diverse datasets li...
yes
Yes
Time Series
Functional Latent Dynamics for Irregularly Sampled Time Series Forecasting
2024-05-06 0:00:00
https://github.com/kloetergensc/functional-latent_dynamics
1
https://physionet.org/content/mimiciii/1.4/, Goodwin dataset inside the repo.
4 min for 100 epochs on the Goodwin dataset
https://colab.research.google.com/drive/1c3AQIu4CXDrXGjt_Ft_W2B3OMPepaQ97?usp=sharing
Yes
-- The MIMIC-III dataset requires completing a training course on their website to gain access. But the model runs on the Goodwin dataset.
Stanford Cars
ProMetaR
[]
Prompt Learning via Meta-Regularization
2024-04-01T00:00:00
https://arxiv.org/abs/2404.00851v1
[ "https://github.com/mlvlab/prometar" ]
{'Harmonic mean': '76.72'}
[ "Harmonic mean" ]
Given the following paper and codebase: Paper: Prompt Learning via Meta-Regularization Codebase: https://github.com/mlvlab/prometar Improve the ProMetaR model on the Stanford Cars dataset. The result should improve on the following metrics: {'Harmonic mean': '76.72'}. You must use only the codebase pro...
Prompt Learning via Meta-Regularization Jinyoung Park, Juyeon Ko, Hyunwoo J. Kim* Department of Computer Science and Engineering, Korea University {lpmn678, juyon98, hyunwoojkim }@korea.ac.kr Abstract Pre-trained vision-language models have shown impres- sive success on various computer vision tasks with their zero-sho...
8
1
The proposed ProMetaR framework builds upon existing vision-language models (VLMs) like CLIP, which are pre-trained on millions of image-text pairs. Given the extensive experiments mentioned, it's reasonable to assume a substantial dataset for fine-tuning, likely similar to CLIP's 400 million pairs. Fine-tuning such mo...
yes
Yes
CV
Prompt Learning via Meta-Regularization
2024-04-01 0:00:00
https://github.com/mlvlab/prometar
1
!git clone https://github.com/jhpohovey/StanfordCars.git !mv StanfordCars/stanford_cars ./stanford_cars
2hr 10min for 10 epochs according to logs
https://drive.google.com/file/d/1gthiYFsffpGbuJcRv9QhtjyG8CRBi-rt/view?usp=sharing
Yes
-- Official website is down, but a Git repo was found for the dataset.
CIFAR-10-LT (ρ=50)
SURE(ResNet-32)
[]
SURE: SUrvey REcipes for building reliable and robust deep networks
2024-03-01T00:00:00
https://arxiv.org/abs/2403.00543v1
[ "https://github.com/YutingLi0606/SURE" ]
{'Error Rate': '9.78'}
[ "Error Rate" ]
Given the following paper and codebase: Paper: SURE: SUrvey REcipes for building reliable and robust deep networks Codebase: https://github.com/YutingLi0606/SURE Improve the SURE(ResNet-32) model on the CIFAR-10-LT (ρ=50) dataset. The result should improve on the following metrics: {'Error Rate': '9.78...
SURE: SUrvey REcipes for building reliable and robust deep networks Yuting Li1,2, Yingyi Chen3, Xuanlong Yu4,5, Dexiong Chen†6, and Xi Shen†1 1Intellindust, China 2China Three Gorges University, China 3ESAT-STADIUS, KU Leuven, Belgium 4SATIE, Paris-Saclay University, France 5U2IS, ENSTA Paris, Institut Polytechnique de...
8
1
The paper mentions using ResNet architectures and training over 200 epochs with a batch size of 128 on datasets like CIFAR-10 and CIFAR-100. Given that CIFAR-10 has 60,000 images (with a standard resolution of 32x32) and CIFAR-100 has 60,000 images as well, training these relatively smaller datasets with modern archite...
yes
Yes
CV
SURE: SUrvey REcipes for building reliable and robust deep networks
2024-03-01 0:00:00
https://github.com/YutingLi0606/SURE
1
downloaded through script
1.5 min * 200 epochs = 5 hours
https://drive.google.com/file/d/1vEVFbmY0jFyW0WQj34SxupBXznFySsPL/view?usp=sharing
Yes
-- Just change the requirements.yml file as noted, rename the folder, and run the script.