arXiv:2604.20933v1 Announce Type: new Abstract: Self-play fine-tuning enables large language models to improve beyond supervised fine-tuning, without additional human annotations, by contrasting annotated responses with self-generated ones. Many existing methods rely on a fixed divergence regime: SPIN is closely related to a KL-based regime, and SPACE to a Jensen-Shannon-style objective via noise contrastive estimation…
IRIS: Interpolative Rényi Iterative Self-play for Large Language Model Fine-Tuning
Wenjie Liao, Like Wu, Liangjie Zhao, Shihui Xu, Shigeru Fujimura (arXiv cs.LG)
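The abstract is truncated, but the Rényi divergence family named in the title is standard. A minimal sketch (illustrative only, not code from the paper) of the order-α Rényi divergence, which interpolates between divergence regimes and recovers the KL divergence in the limit α → 1:

```python
import math

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) for discrete distributions.

    D_alpha(P||Q) = 1/(alpha - 1) * log( sum_i p_i^alpha * q_i^(1 - alpha) ).
    As alpha -> 1 this converges to the KL divergence sum_i p_i * log(p_i / q_i).
    """
    if abs(alpha - 1.0) < 1e-12:
        # KL divergence as the alpha -> 1 limit
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    total = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log(total) / (alpha - 1)

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]

kl = renyi_divergence(p, q, 1.0)        # exact KL
near_kl = renyi_divergence(p, q, 0.999) # Rényi order close to 1 approximates KL
```

Varying α is what makes such an objective "interpolative": different orders weight the contrast between annotated and self-generated responses differently, with KL recovered as a special case.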