# Model Card for RankMistral
RankMistral is fine-tuned from Mistral-7B-v0.3 on the rank_llm dataset.
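A minimal loading sketch with Hugging Face Transformers is shown below. The repo id `your-org/RankMistral` is a placeholder for this card's actual model id, and the listwise ranking prompt is an assumption modeled on the general RankLLM style, not a verified training format.

```python
# Minimal sketch: load the checkpoint and run one listwise ranking prompt.
# "your-org/RankMistral" is a placeholder repo id, and the prompt below is
# an assumed RankLLM-style template, not the verified training format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/RankMistral"  # placeholder: substitute the real model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

query = "what is query performance prediction?"
passages = [
    "[1] Query performance prediction estimates retrieval quality without relevance judgments.",
    "[2] Mistral-7B is a 7-billion-parameter decoder-only language model.",
]
prompt = (
    f"I will provide you with {len(passages)} passages, each indicated by a "
    f"numerical identifier. Rank the passages based on their relevance to "
    f"the search query: {query}\n\n" + "\n".join(passages) +
    f"\n\nSearch Query: {query}\n"
    "Rank the passages above. The output format should be [] > [], e.g., [2] > [1]."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Greedy decoding so the predicted ordering is deterministic for a given checkpoint.
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```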
## Results

Results are reported in *QPP-RA: Aggregating Large Language Model Rankings Using the Rank LLM Library*; see the citation below.
## Citation
If you use this model, please cite:
```bibtex
@inproceedings{10.1145/3731120.3744575,
  author = {Betello, Filippo and Russo, Matteo and D\"{u}tting, Paul and Leonardi, Stefano and Silvestri, Fabrizio},
  title = {QPP-RA: Aggregating Large Language Model Rankings},
  year = {2025},
  isbn = {9798400718618},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3731120.3744575},
  doi = {10.1145/3731120.3744575},
  booktitle = {Proceedings of the 2025 International ACM SIGIR Conference on Innovative Concepts and Theories in Information Retrieval (ICTIR)},
  pages = {103--114},
  numpages = {12},
  keywords = {llm, query performance prediction, rank aggregation},
  location = {Padua, Italy},
  series = {ICTIR '25}
}
```