What if you could train massive machine learning models in half the time without compromising performance? For researchers and developers tackling the ever-growing complexity of AI, this isn’t just a ...
OpenAI researchers have introduced a novel method that acts as a "truth serum" for large language models (LLMs), compelling them to self-report their own misbehavior, hallucinations, and policy ...
Artificial intelligence data annotation startup Encord, officially known as Cord Technologies Inc., wants to break down barriers to training multimodal AI models. To do that, it has just released what ...
DeepSeek debuted Manifold-Constrained Hyper-Connections, or mHCs, a way to scale LLMs without incurring huge costs. The company postponed the release of its R2 model in mid-2025. Just ...
OpenAI trained GPT-5 Thinking to confess to misbehavior. It's an early study, but it could lead to more trustworthy LLMs, since models often hallucinate or cheat due to mixed objectives. OpenAI is ...
Utkarsh Amitabh says he definitely wasn't in the market for a new job in January 2025, when data labeling startup micro1 approached him about joining its network of human experts who help companies ...
For much of 2025, the frontier of open-weight language models has been defined not in Silicon Valley or New York City, but in Beijing and Hangzhou. Now, one small U.S. company is pushing back. Today, ...
The rapid advancement of artificial intelligence — particularly the training of large-scale models that are used to power many of today’s widely used applications — is driving renewed growth in ...
High school students gain PhD-led mentorship, publish original research, and build real-world AI models through ...