Hidden Reliability Risks in Large Language Models: Systematic Identification of Precision-Induced Output Disagreements

Yifei Wang, Tianlin Li, Xiaohan Zhang, Xiaoyu Zhang, Wei Ma, Mingfei Cheng, Li Pan (arXiv cs.AI)

arXiv:2604.19790v1 Announce Type: new. Abstract: Large language models (LLMs) are increasingly deployed under diverse numerical precision configurations, including standard floating-point formats (e.g., bfloat16 and float16) and quantized integer formats (e.g., int16 and int8), to meet efficiency and resource constraints. However, minor inconsistencies between LLMs of different precisions are diffi…
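To make the abstract's premise concrete, here is a minimal sketch of how a precision change can flip an output decision. It simulates bfloat16 by truncating a float32 bit pattern to bfloat16's 7-bit mantissa (real hardware typically rounds to nearest even rather than truncating, and the two logit values are hypothetical, not from the paper):

```python
import struct

def to_bf16(x: float) -> float:
    """Round x to float32, then truncate the mantissa to bfloat16's 7 bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

# Hypothetical logits for two candidate tokens, distinguishable in float32.
logits_fp32 = [1.010, 1.008]
logits_bf16 = [to_bf16(v) for v in logits_fp32]

# In full precision token 0 strictly wins; after bfloat16 truncation both
# logits collapse to the same representable value (1 + 1/128 = 1.0078125),
# so the greedy argmax now depends on tie-breaking and may disagree.
print(logits_fp32[0] > logits_fp32[1])   # True
print(logits_bf16[0] == logits_bf16[1])  # True
```

In a full model such rounding errors accumulate across thousands of matrix multiplications, which is why disagreements between precision configurations are subtle and hard to find by inspection.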