Now available in preview, DeepSeek V4 cuts inference costs to a fraction of R1

Chinese AI darling DeepSeek is back with a new open-weights large language model that promises performance to rival the best proprietary American LLMs. Perhaps more importantly, it claims to dramatically reduce inference costs, and it extends support for Huawei's Ascend family of AI accelerators.…
DeepSeek's new models are so efficient they'll run on a toaster ... by which we mean Huawei's NPUs
Tobias Mann · The Register · 1 min read
Continue reading on The Register for the complete story.