Learning to Reason in 13 Parameters

AI
LLMs
reasoning
generative AI
fine-tuning
links
TinyLoRA: an 8B Qwen2.5 reaches 91% on GSM8K with only 13 trained bf16 parameters — 26 bytes of learned weights.
Author: synesis

Published: March 31, 2026

Figure 1 from the TinyLoRA paper.

TinyLoRA pushes low-rank adaptation down to almost nothing [1].
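The post doesn't describe the mechanism, but one way to get a trainable footprint that small is to freeze random low-rank directions and train only a tiny shared vector of mixing coefficients (in the spirit of VeRA-style adapters). The sketch below is an illustration under that assumption, not the paper's recipe; `TinyAdapterLinear`, its `coeffs` vector, and the scaling constant are hypothetical names introduced here for clarity.

```python
import torch
import torch.nn as nn

class TinyAdapterLinear(nn.Module):
    """Frozen linear layer plus a low-rank update whose only trainable
    part is a small vector of mixing coefficients shared across layers.

    Hypothetical sketch: the frozen random factors A (r x in) and
    B (out x r) define r fixed rank-1 directions; `coeffs` (length r)
    scales them. With r = 13 the trainable footprint is 13 scalars,
    matching the headline number, but this illustrates the general idea
    of extreme low-rank adaptation, not TinyLoRA's exact method.
    """

    def __init__(self, base: nn.Linear, coeffs: nn.Parameter, scale: float = 1e-2):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # pretrained weights stay frozen
        r = coeffs.numel()
        out_f, in_f = base.out_features, base.in_features
        # Fixed random low-rank factors: buffers, so they are never trained.
        self.register_buffer("A", torch.randn(r, in_f) / in_f ** 0.5)
        self.register_buffer("B", torch.randn(out_f, r) / r ** 0.5)
        self.coeffs = coeffs                 # the only trainable tensor
        self.scale = scale

    def forward(self, x):
        # y = base(x) + scale * (x A^T diag(coeffs)) B^T
        delta = (x @ self.A.t()) * self.coeffs
        return self.base(x) + self.scale * (delta @ self.B.t())


if __name__ == "__main__":
    torch.manual_seed(0)
    shared = nn.Parameter(torch.zeros(13))   # starts as a no-op; 13 trainable scalars total
    layers = [TinyAdapterLinear(nn.Linear(64, 64), shared) for _ in range(4)]
    model = nn.Sequential(*layers)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(trainable)                          # -> 13
```

Because the random factors are buffers rather than parameters, only the 13 coefficients receive gradients, which is what makes a 26-byte learned checkpoint (13 bf16 values at 2 bytes each) possible in principle.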


References

[1] “Learning to Reason in 13 Parameters.” arXiv. https://arxiv.org/abs/2602.04118

Originally posted on LinkedIn.