# PQuantML

PQuant is a library for training compressed machine learning models, developed at CERN as part of the Next Generation Triggers project. It is designed to bridge the gap between high-performance ML models and hardware constraints by providing efficient tools for quantization and pruning.
## Key Features
- Dual Framework Support: Works seamlessly with both PyTorch and TensorFlow.
- Compression Layers: Replaces standard layers with “Compressed” (for weights) or “Quantized” (for activations) variants; a conceptual sketch follows this list.
- Advanced Pruning: Supports various pruning methods with customizable pre-training, training, and fine-tuning steps.
- Hardware-Aware: Optimizes models for deployment on resource-constrained hardware such as FPGAs.
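
The sketch below illustrates the two ideas behind the compression layers, weight pruning and activation quantization, in plain PyTorch. It is a conceptual example only, not PQuant's own API: `TinyQuantizedMLP` and `fake_quantize` are hypothetical names introduced here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def fake_quantize(x: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Round activations onto 2**bits uniform levels over their observed range."""
    qmin, qmax = 0, 2 ** bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = x.min()
    # Quantize, then immediately dequantize ("fake" quantization keeps float dtype).
    q = torch.round((x - zero_point) / scale).clamp(qmin, qmax)
    return q * scale + zero_point


class TinyQuantizedMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 32)
        self.fc2 = nn.Linear(32, 4)

    def forward(self, x):
        x = fake_quantize(torch.relu(self.fc1(x)))  # quantized activations
        return self.fc2(x)


model = TinyQuantizedMLP()
# Prune 50% of fc1's weights by L1 magnitude (the "compressed weights" idea).
prune.l1_unstructured(model.fc1, name="weight", amount=0.5)
out = model(torch.randn(8, 16))
print(out.shape, "fc1 sparsity:", float((model.fc1.weight == 0).float().mean()))
```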
## Installation
pip install pquant-ml
# Or with specific backend support:
pip install pquant-ml[torch]
pip install pquant-ml[tensorflow]
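
After installing, a quick smoke test is to check that the package is visible to your interpreter. Note that the top-level module name `pquant` is an assumption here and may differ from the distribution name `pquant-ml`.

```python
import importlib.util

# The import path "pquant" is assumed; adjust if the installed module name differs.
print("pquant importable:", importlib.util.find_spec("pquant") is not None)
```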
## Authors
Developed by Roope Niemi, Anastasiia Petrovych, Arghya Ranjan Das, and others at CERN and partner institutions.

Hi, I am a Ph.D. student at Purdue University, currently based at Fermilab as an LPC G&V Fellow working on the CMS experiment. My current work focuses on Di-Higgs searches and on developing ML solutions for real-time detector readout and the Outer Tracker upgrade.
I am also interested in theoretical astrophysics, cosmology, and high-energy physics.