
# PEFT-Bench


A unified benchmark for parameter-efficient fine-tuning (PEFT) methods.

This repository contains all files necessary to run PEFT-Bench and to replicate our paper:

*PEFT-Bench: A Parameter-Efficient Fine-Tuning Methods Benchmark*

## Installation and Setup

First, clone this repository and change into its directory:

```shell
git clone https://github.com/kinit-sk/PEFT-Bench.git
cd PEFT-Bench
```

After that, install the peftfactory framework, which runs all of the training:

```shell
pip install peftfactory
```

## Wandb logging

You can optionally install wandb. If it is installed, a wandb project named `peft-factory-multiple-{peft-method}` will be created automatically for each PEFT method.
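If you want Weights & Biases logging, a minimal setup could look like the following (this assumes a standard wandb client installation; you authenticate with an API key from your wandb.ai account):

```shell
# Install the wandb client and authenticate (requires a wandb.ai account).
pip install wandb
wandb login
```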

## Run full benchmark

This runs `./scripts/run_exp.py`, which sequentially performs training and evaluation over 27 datasets, 7 PEFT methods, 5 seeds, and 1 model.

⚠️⚠️ This will launch 945 training runs and 945 evaluations. ⚠️⚠️

```shell
./run_full_benchmark.py
```
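As a sanity check, the experiment grid implied above (27 datasets × 7 PEFT methods × 5 seeds × 1 model) can be enumerated with a short sketch. The dataset and method names below are placeholders, not the benchmark's actual configuration:

```python
from itertools import product

# Placeholder identifiers -- the real benchmark defines 27 datasets and 7 PEFT methods.
datasets = [f"dataset_{i}" for i in range(27)]
peft_methods = [f"peft_method_{i}" for i in range(7)]
seeds = [0, 1, 2, 3, 4]
models = ["model"]

# Each combination corresponds to one training run plus one evaluation.
runs = list(product(datasets, peft_methods, seeds, models))
print(len(runs))  # 27 * 7 * 5 * 1 = 945
```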

## Run full benchmark with Slurm

First, configure your Slurm settings in `scripts/peftbench/slurm/run_train_eval.sh`. This script runs a single training and evaluation and will later be submitted via `sbatch`:

```shell
vim scripts/peftbench/slurm/run_train_eval.sh
```
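For reference, a typical Slurm preamble for such a script might look like the sketch below. Every `#SBATCH` value here (job name, partition, GPU count, time limit, memory) is a placeholder to adapt to your cluster, not the repository's actual configuration:

```shell
#!/bin/bash
#SBATCH --job-name=peftbench-train-eval   # placeholder job name
#SBATCH --partition=gpu                   # placeholder partition; use your cluster's
#SBATCH --gres=gpu:1                      # one GPU per training/evaluation run
#SBATCH --time=24:00:00                   # placeholder wall-time limit
#SBATCH --mem=32G                         # placeholder memory request

# ...single training and evaluation commands follow here...
```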

Then simply run the launcher script.

⚠️⚠️ This will create 945 Slurm jobs. ⚠️⚠️

```shell
./run_full_benchmark_slurm.sh
```

## Citation

```bibtex
@article{belanec2025peft,
  title={PEFT-Bench: A Parameter-Efficient Fine-Tuning Methods Benchmark},
  author={Belanec, Robert and Pecher, Branislav and Srba, Ivan and Bielikova, Maria},
  journal={arXiv preprint arXiv:2511.21285},
  year={2025}
}
```
