Today, we’re proud to introduce Maia 200, a breakthrough inference accelerator engineered to dramatically improve the ...
Read strategies and leadership insights to help government agencies protect sensitive data and strengthen overall ...
Discover how aligning AI transformation with sustainability can boost efficiency, resilience, and long‑term competitiveness ...
The Official Microsoft Blog on MSN
How Microsoft is empowering Frontier Transformation with Intelligence + Trust
At Microsoft Ignite in November, we introduced Frontier Transformation, a holistic reimagining of business that aligns AI with human ambition to help organizations achieve their highest aspirations and ...
Once upon a time, Microsoft pledged to replenish more water than it consumes by 2030. Those efforts seem to have gone up in ...
Microsoft Unveils 365 Premium, Its New Top-Tier AI and Productivity Bundle
Microsoft has launched Microsoft 365 Premium, a new subscription plan that blends its well-known ...
UPDATE (May 8, 2025, 5:20 p.m. EDT): In a statement to Mashable, Microsoft confirmed that it will no longer sell the $999.99 configurations of its flagship Surface Laptop 7 and Surface Pro 11 with ...
Energy regulators and advocates are cautiously optimistic about Microsoft's promise to "pay its own way" for the power needed to serve its AI footprint.
Microsoft acquires data analytics startup Osmos to fuel push into 'autonomous data engineering'
Microsoft announced Monday that it acquired Osmos, a Seattle startup that helps companies automate data engineering work. Terms of the deal were not disclosed. Osmos’ team will join the engineering ...
The tech giant is responding to concerns that data centers are driving up electricity costs in some communities.
Microsoft says it is laying off nearly 3% of its entire workforce. The tech giant didn’t disclose the total number of jobs cut, but it will amount to about 6,000 people. Microsoft employed 228,000 ...
Maia 200 packs 140+ billion transistors, 216 GB of HBM3E, and a massive 272 MB of on-chip SRAM to tackle the efficiency crisis in real-time inference. Hyperscalers prioritiz ...