The robotics industry has a consensus problem. Nearly every startup and major research lab training robots to handle objects has converged on the same method: ...
Figure AI has unveiled HELIX, a pioneering Vision-Language-Action (VLA) model that integrates vision, language comprehension, and action execution into a single neural network. This innovation allows ...
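This excerpt does not describe HELIX's internals, so the following is only a conceptual sketch of the VLA idea it names: one network that consumes camera pixels and a tokenized instruction and emits continuous joint-space actions. Every module name, dimension, and the mean-pooled action head below are illustrative assumptions, not Figure AI's design.

```python
# Hypothetical sketch of a Vision-Language-Action (VLA) forward pass.
# Not Figure AI's HELIX: modules, dimensions, and the fusion scheme
# are illustrative assumptions only.
import torch
import torch.nn as nn

class ToyVLA(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, num_joints=35):
        super().__init__()
        # Vision encoder: patchify a 224x224 RGB frame into tokens.
        self.patch_embed = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        # Language encoder: embed the tokenized instruction.
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # One shared transformer fuses both modalities in a single network.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        # Action head: regress continuous joint targets from the fused state.
        self.action_head = nn.Linear(d_model, num_joints)

    def forward(self, image, instruction_ids):
        img_tokens = self.patch_embed(image).flatten(2).transpose(1, 2)
        txt_tokens = self.text_embed(instruction_ids)
        fused = self.backbone(torch.cat([img_tokens, txt_tokens], dim=1))
        # Pool the fused sequence and emit one action vector per control step.
        return self.action_head(fused.mean(dim=1))

model = ToyVLA()
image = torch.randn(1, 3, 224, 224)            # one camera frame
instruction = torch.randint(0, 32000, (1, 12)) # tokenized command
actions = model(image, instruction)            # (1, 35) joint targets
```

The appeal of this single-network design is that perception, instruction grounding, and control share one set of weights, so the model can generalize to new phrasings and objects without a hand-built pipeline between separate modules.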
What if a robot could not only see and understand the world around it but also respond to your commands with the precision and adaptability of a human? Imagine instructing a humanoid robot to “set the ...
AI² Robotics, a Chinese humanoid robot startup, has secured over CNY1 billion (USD145 million) to enhance its embodied ...
Microsoft has unveiled Rho-alpha, a next-generation robotics AI model designed to help machines understand the physical world, make decisions in ...
Hugging Face Inc. today open-sourced SmolVLM-256M, a new vision language model with the lowest parameter count in its category. The algorithm’s small footprint allows it to run on devices such as ...
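For a sense of what that small footprint means in practice, a 256M-parameter VLM can be loaded and queried on an ordinary laptop CPU through the standard transformers pipeline. A minimal sketch follows, assuming the published Hub id HuggingFaceTB/SmolVLM-256M-Instruct and the generic vision-to-sequence API; check the model card for the exact prompt format.

```python
# A minimal sketch of running SmolVLM-256M locally with transformers.
# Assumptions: the Hub id "HuggingFaceTB/SmolVLM-256M-Instruct" and the
# standard vision-to-sequence API; verify against the model card.
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("photo.jpg")  # any local image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image briefly."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

At roughly a gigabyte of weights in float32, the model fits comfortably in laptop memory; half-precision or quantized variants shrink that further for phones and other edge devices.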
Just when you thought the pace of change in AI models couldn’t get any faster, it accelerates yet again. In the popular news media, the introduction of DeepSeek in January 2025 created a moment that ...
Cohere For AI, AI startup Cohere’s nonprofit research lab, this week released Aya Vision, a multimodal “open” AI model that the lab claims is best-in-class. Aya Vision can perform tasks like writing ...
As I highlighted in my last article, two decades after the DARPA Grand Challenge, the autonomous vehicle (AV) industry is still waiting for breakthroughs—particularly in addressing the “long tail ...
COPENHAGEN, Denmark—Milestone Systems, a provider of data-driven video technology, has released an advanced vision language model (VLM) specializing in traffic understanding and powered by NVIDIA ...