NVIDIA Open Sources Aerial Software to Advance AI-Native 6G Networks
October 28, 2025 - NVIDIA has announced that its Aerial software suite will be made open source, marking a significant development in the telecom sector's shift toward AI-native 5G and 6G network innovation.
The Aerial software will be available across NVIDIA platforms, including the DGX Spark desktop supercomputer. This move gives researchers and developers the ability to move from rapid prototyping to real-world deployment in hours rather than months, removing previous barriers tied to proprietary hardware and licensed software.
By adopting an open-source model, NVIDIA aims to accelerate collaborative innovation in wireless networking, setting the stage for building next-generation mobile networks at AI speed.
Transforming Wireless Innovation Through Open Source
The company’s commitment to open-source contributions is not new. NVIDIA's earlier release, the Sionna software, has already seen over 200,000 downloads and more than 500 citations. Building on that momentum, the Aerial software suite — including Aerial CUDA-Accelerated RAN, Aerial Omniverse Digital Twin (AODT), and the new Aerial Framework — will be released under Apache 2.0 licensing on GitHub.
The Aerial software will be open-sourced in December 2025, with AODT to follow in March 2026. These tools will allow developers to create complete AI-native 5G and 6G RAN stacks without restrictions.
Key features of the Aerial open-source release include:
Aerial Framework to convert Python into high-performance CUDA code.
AI-powered neural models to enhance wireless capabilities, such as advanced channel estimation.
A decentralized app (dApp) framework enabling secure, real-time access to physical layer data via APIs.
Modular pipelines allowing full-stack customization and integration of developer-authored code.
These components have already powered the first U.S.-built AI-native wireless stack, supporting use cases such as spectrum agility and integrated sensing with communication.
DGX Spark Supercharges Wireless R&D
NVIDIA’s DGX Spark, the world’s smallest AI supercomputer, now supports both the Aerial and Sionna software suites. It provides the necessary performance to build, train, and deploy full-stack AI-native wireless networks within a compact and cost-efficient footprint.
The Sionna Research Kit, now compatible with DGX Spark and NVIDIA Jetson AGX Orin, functions as an all-in-one portable lab for AI-native 6G prototyping. This setup enables researchers to establish live 5G networks in a matter of hours and test ML algorithms on real radio environments without being tied to a physical lab.
The Aerial Testbed, which also supports NVIDIA GH200 Grace Hopper Superchips, facilitates seamless integration between network digital twins and live environments, accelerating product development cycles.
Dell Technologies has introduced the Dell Pro Max with GB10 — a DGX Spark-powered platform designed for high-performance telecom workloads. It delivers stable and scalable infrastructure for testing, simulation, and validation of 5G and 6G technologies.
Wider Collaboration on AI-Native 6G
Thousands of researchers worldwide are already leveraging NVIDIA's AI Aerial ecosystem. U.S. institutions including Northeastern University, Virginia Tech, Arizona State University, DeepSig, and MIT’s WINSLab and LIDS are all conducting research that could influence 6G standards and implementations.
Additionally, the AI-RAN Alliance — comprising over 100 telecom industry leaders — is actively shaping AI-native wireless network architectures. Many of its live demos and benchmarks have been powered by NVIDIA’s AI Aerial tools.
“With NVIDIA’s open-source Aerial software and DGX Spark, developers can create modular, software-defined wireless systems and experiment freely — from labs to live environments,” said Alex Jinsung Choi, Chairman of the AI-RAN Alliance. “This is a critical enabler for fueling AI-RAN innovations that boost spectrum efficiency, enhance network performance and power new AI applications — at a pace the industry has never experienced.”
Redefining Telecom Innovation in the AI Era
NVIDIA’s strategic move to open source its advanced wireless software and tools reflects a broader commitment to inclusivity and collaboration. By inviting developers beyond traditional telecom sectors, the company is setting the stage for rapid advancements in AI-powered 5G and 6G networks.
This milestone marks a new chapter for the telecom industry — one driven by open innovation, global collaboration, and software-defined infrastructure that evolves at the speed of AI.
Source: https://blogs.nvidia.com/blog/open-source-aerial-ai-native-6g/
Nvidia’s Expanding AI Empire: Inside Its Biggest Startup Bets
October 12, 2025 - No company has benefited more dramatically from the generative AI boom than Nvidia. Since the debut of ChatGPT over two years ago, Nvidia’s financials have soared—reflected in its skyrocketing revenue, profits, and cash reserves. The company’s stock performance has mirrored this growth, lifting its market capitalization to $4.5 trillion.
With its dominant position as a GPU provider for AI applications, Nvidia has rapidly increased its venture investment activity. In 2025 alone, the company has already completed 50 venture capital deals, surpassing the 48 deals it recorded in all of 2024. These figures exclude investments from its corporate VC arm, NVentures, which has also expanded its activity—engaging in 21 deals this year compared to just one in 2022.
Nvidia has stated that its investment strategy is focused on growing the AI ecosystem by backing what it calls “game changers and market makers.”
Below is a rundown of startups that have raised over $100 million since 2023 with Nvidia listed as an investor, ranked by size of the funding round.
Billion-Dollar Rounds
OpenAI: Nvidia made its first investment in OpenAI in October 2024, contributing $100 million to a $6.6 billion round that valued the company at $157 billion. While it did not join the $40 billion round in March 2025, Nvidia announced a strategic commitment to invest up to $100 billion in OpenAI to help build out AI infrastructure.
xAI: Despite OpenAI’s attempts to deter investors from backing rivals, Nvidia participated in a $6 billion round for Elon Musk’s xAI in December 2024. The company is also expected to invest up to $2 billion in a future $20 billion equity round aimed at hardware procurement.
Mistral AI: Nvidia joined a €1.7 billion (approximately $2 billion) Series C round for Mistral AI in September, marking its third investment in the French LLM developer. The round valued the company at €11.7 billion ($13.5 billion).
Reflection AI: In October 2025, Nvidia led a $2 billion round in Reflection AI, a year-old U.S. startup valued at $8 billion. The company is positioning itself as an open-source alternative to closed-source LLM providers.
Thinking Machines Lab: Nvidia participated in a $2 billion seed round for the AI startup founded by former OpenAI CTO Mira Murati. The funding valued the company at $12 billion.
Inflection: Nvidia was a lead investor in Inflection’s $1.3 billion raise in June 2023. Less than a year later, Microsoft acquired the team and a non-exclusive license for $620 million, leaving Inflection with a smaller workforce and less clear direction.
Nscale: Following Nscale's $1.1 billion raise in September, Nvidia took part in a $433 million SAFE round in October. Nscale is building AI-focused data centers in the UK and Norway for OpenAI's Stargate project.
Wayve: Nvidia joined a $1.05 billion round in May 2024 for Wayve, a UK-based autonomous driving startup. A further $500 million investment is expected. The company is testing in the UK and the San Francisco Bay Area.
Figure AI: Nvidia participated in Figure AI’s $1 billion+ Series C round in September, valuing the robotics company at $39 billion. It previously invested in Figure’s $675 million Series B in early 2024.
Scale AI: In May 2024, Nvidia joined a $1 billion round for Scale AI, valuing the data labeling company at nearly $14 billion. In June, Meta bought a 49% stake for $14.3 billion and hired away its CEO and several executives.
High-Value Rounds ($100M–$999M)
Commonwealth Fusion: Nvidia joined an $863 million round in August 2025. The nuclear fusion company was valued at $3 billion.
Crusoe: Nvidia participated in a $686 million raise in November 2024 for Crusoe, a startup building AI-focused data centers.
Cohere: Nvidia invested in Cohere’s $500 million Series D in August, valuing the LLM company at $6.8 billion. It has been an investor since 2023.
Perplexity: Nvidia first invested in Perplexity in November 2023 and joined several subsequent rounds, including a $500 million raise in December 2024. It did not participate in the September 2025 $200 million round.
Poolside: Nvidia backed a $500 million round in October 2024, valuing the AI code assistant startup at $3 billion.
Lambda: Nvidia participated in Lambda’s $480 million Series D in February. The AI cloud firm, valued at $2.5 billion, rents Nvidia GPU-powered servers.
CoreWeave: Nvidia invested in CoreWeave in April 2023 when it raised $221 million. It remains a major shareholder even after CoreWeave went public.
Together AI: Nvidia took part in a $305 million Series B in February, valuing the cloud-based AI infrastructure provider at $3.3 billion.
Firmus Technologies: In September, Nvidia invested in a $215 million round for Firmus, which is building a green AI data center in Tasmania.
Sakana AI: Nvidia participated in a $214 million Series A in September 2024 for the Japanese generative AI firm, valued at $1.5 billion.
Nuro: In August, Nvidia joined a $203 million round for Nuro. The self-driving delivery company’s valuation dropped to $6 billion from a 2021 peak of $8.6 billion.
Imbue: Nvidia participated in a $200 million round in September 2023 for Imbue, which is developing reasoning and coding AI systems.
Waabi: In June 2024, Nvidia invested in a $200 million Series B for autonomous trucking startup Waabi.
Ayar Labs: In December, Nvidia joined a $155 million funding round for Ayar Labs, a startup focused on optical interconnects for AI compute.
Kore.ai: Nvidia invested in a $150 million raise in December 2023 for Kore.ai, an enterprise AI chatbot provider.
Sandbox AQ: In April, Nvidia joined a $150 million investment into Sandbox AQ, bringing its Series E to $450 million and valuation to $5.75 billion.
Hippocratic AI: Nvidia joined a $141 million Series B in January, valuing the healthcare-focused AI company at $1.64 billion.
Weka: In May 2024, Nvidia invested in Weka’s $140 million raise, which valued the data management startup at $1.6 billion.
Runway: Nvidia participated in a $308 million round in April for Runway, a generative AI media company now valued at $3.55 billion.
Bright Machines: In June 2024, Nvidia invested in a $126 million Series C for Bright Machines, which develops AI-powered robotics.
Enfabrica: Nvidia joined a $125 million Series B in September 2023. It did not participate in the company’s next round in November.
Reka AI: In July, Nvidia backed a $110 million raise for Reka AI, which tripled its valuation to over $1 billion.
Source: https://techcrunch.com/2025/10/12/nvidias-ai-empire-a-look-at-its-top-startup-investments/
Rivals Intensify Efforts to Challenge Nvidia's AI Chip Supremacy
October 06, 2025 - As artificial intelligence continues to transform global industries, Nvidia's competitors are accelerating their efforts to compete with the chipmaker's dominant position in the AI hardware market. Despite mounting challenges from tech giants like Google and Amazon, as well as Chinese firms, analysts say Nvidia's lead remains firmly intact.
Nvidia’s Market Leadership
Once a lesser-known player, Nvidia has surged to become the highest-revenue chipmaker globally, thanks to its powerful graphics processing units (GPUs)—the essential hardware behind AI models such as ChatGPT. While not the first company to develop GPUs, Nvidia was an early mover in the field during the late 1990s and leveraged that head start into unmatched expertise.
According to Dylan Patel, head of consultancy SemiAnalysis, Nvidia is more than a chip designer—it's "a three-headed dragon" that integrates hardware, networking, and software into a unified infrastructure.
"Nvidia can satisfy every level of need in the datacenter with world-class product," said Jon Peddie of Jon Peddie Research.
Emerging Competitors
Despite Nvidia holding an estimated 80% market share, challengers are stepping up. AMD, long considered its primary U.S. rival, still focuses primarily on CPUs—less suitable for AI tasks—and may struggle to divert resources.
Major cloud service providers have responded by building their own processors. Google began deploying its Tensor Processing Units (TPUs) nearly ten years ago, while Amazon Web Services introduced Trainium in 2020. Combined, they now command over 10% of the market and, according to SemiAnalysis's Jordan Nanos, have surpassed AMD in key areas including performance and scalability.
While Google has reportedly begun offering its chips to external clients, Amazon does not yet sell Trainium chips beyond its own ecosystem.
China’s Position
China, the only country close to rivaling the U.S. in this domain, is racing to bridge the gap despite export restrictions limiting access to advanced U.S. semiconductors. Huawei is seen as one of Nvidia’s strongest global competitors, potentially surpassing AMD, according to Nanos.
Chinese tech firms Baidu and Alibaba are also investing in proprietary AI chips. However, Jon Peddie notes that technical parity remains elusive due to reliance on domestic fabrication capabilities. Still, with large-scale investment and a highly skilled workforce, experts expect China to eventually develop cutting-edge chipmaking capacity.
Outlook for Nvidia
For now, analysts agree Nvidia’s dominance is secure. "Nvidia underpins the vast majority of AI applications today," said John Belton, analyst at Gabelli Funds. "And despite their lead, they keep their foot on the gas by launching a product every year, a pace that will be difficult for competitors to match."
In September, Nvidia unveiled its next-generation chip architecture, Rubin, set for release in late 2026. The chip is projected to deliver AI performance 7.5 times greater than the current Blackwell series—further raising the bar for would-be challengers.
Source: https://economictimes.indiatimes.com/tech/technology/competition-heats-up-to-challenge-nvidias-ai-chip-dominance/articleshow/124329646.cms
Nvidia Commits Up to $100B Investment in OpenAI for AI Data Center Expansion
September 22, 2025 - Nvidia announced plans Monday to invest as much as $100 billion in OpenAI, aiming to build expansive data centers to support advanced AI model training and deployment. The two companies have signed a letter of intent to deploy 10 gigawatts of Nvidia-powered systems — enough energy to supply millions of households — for OpenAI’s next-generation AI infrastructure.
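The "millions of households" comparison can be sanity-checked with back-of-envelope arithmetic. The average household draw used below is an assumption for illustration, not a figure from the announcement:

```python
# Back-of-envelope check on the "millions of households" comparison.
# Assumption (not from the announcement): an average U.S. household
# draws roughly 1.2 kW of continuous power.
deployment_gw = 10
avg_household_kw = 1.2  # assumed average continuous draw

households = deployment_gw * 1_000_000 / avg_household_kw
print(f"about {households / 1e6:.1f} million households")
```

At that assumed draw, 10 gigawatts works out to roughly 8 million homes, consistent with the "millions of households" framing.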
This strategic move could help OpenAI diversify beyond Microsoft, its largest investor and primary cloud services provider. In January, Microsoft modified its agreement with OpenAI, granting the AI company the flexibility to collaborate with additional infrastructure partners. Since then, OpenAI has entered into various AI data center ventures, including the Stargate project.
According to Nvidia, the investment will complement OpenAI’s current collaborations with Microsoft, Oracle, and SoftBank. OpenAI stated it will partner with Nvidia as its “preferred strategic compute and networking partner” for scaling its AI factories.
Details of the $100 billion investment — whether in the form of GPUs, cloud credits, capital, or other assets — have not yet been disclosed.
Source: https://techcrunch.com/2025/09/22/nvidia-plans-to-invest-up-to-100b-in-openai/
Nvidia Introduces Rubin CPX GPU for Ultra-Long Context AI Tasks
September 9, 2025 - At the AI Infrastructure Summit on Tuesday, Nvidia unveiled its latest GPU innovation, the Rubin CPX, specifically engineered for handling context windows exceeding 1 million tokens.
The Rubin CPX is part of Nvidia’s upcoming Rubin series and is optimized for processing extensive sequences, supporting a “disaggregated inference” model designed to scale complex workloads. This architecture aims to deliver enhanced performance for long-context applications such as video generation and software development.
The announcement comes amid Nvidia’s continued momentum in the AI hardware space. The company recently posted $41.1 billion in data center revenue for the most recent quarter—highlighting its dominant position in the market.
The Rubin CPX is expected to become commercially available by the end of 2026.
Source: https://techcrunch.com/2025/09/09/nvidia-unveils-new-gpu-designed-for-long-context-inference/
Two Unnamed Customers Accounted for 39% of Nvidia's Q2 Revenue
August 30, 2025 - Nvidia disclosed that two unnamed clients were responsible for a combined 39% of its second-quarter revenue, according to a recent SEC filing.
The chipmaker posted record-breaking revenue of $46.7 billion for the quarter ending July 27, marking a 56% increase year-over-year, fueled largely by soaring demand in the AI data center market. However, the financial filing reveals that a significant portion of this growth came from just a few major customers.
Nvidia stated that one customer contributed 23% of its Q2 revenue, while another accounted for 16%. The company did not name these entities, referring to them only as "Customer A" and "Customer B."
For the first half of the fiscal year, Customer A and Customer B represented 20% and 15% of total revenue, respectively. Additionally, four other customers accounted for 14%, 11%, 11%, and 10% of Q2 revenue.
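The concentration is easy to quantify from the percentages disclosed in the filing. A quick check, using only the Q2 figures reported above:

```python
# Combined revenue share of the six disclosed direct customers,
# using the Q2 percentages reported in Nvidia's filing.
top_two = 23 + 16            # Customer A + Customer B
other_four = 14 + 11 + 11 + 10

print(top_two)               # matches the reported 39% figure
print(top_two + other_four)  # six direct customers combined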
These are categorized as "direct" customers in Nvidia's report, including original equipment manufacturers (OEMs), system integrators, or distributors who buy chips directly from Nvidia. The company clarified that indirect customers — such as cloud providers and consumer internet firms — typically purchase through these direct channels.
This suggests it's unlikely that major cloud players like Microsoft, Amazon, Google, or Oracle are directly represented as Customer A or B, although they may be indirectly responsible for the bulk purchasing.
Nvidia CFO Colette Kress told CNBC that large cloud service providers were responsible for 50% of Nvidia's data center revenue, which itself comprised 88% of the company’s total revenue.
Commenting on the revenue concentration, Gimme Credit analyst Dave Novosel told Fortune, “Concentration of revenue among such a small group of customers does present a significant risk,” but added, “these customers have bountiful cash on hand, generate massive amounts of free cash flow, and are expected to spend lavishly on data centers over the next couple of years.”
Source: https://techcrunch.com/2025/08/30/nvidia-says-two-mystery-customers-accounted-for-39-of-q2-revenue/
Nvidia Posts Record Revenue Amid Surging AI Demand
August 27, 2025 - Nvidia, currently the world’s most valuable company, announced record-breaking quarterly earnings on Wednesday, reporting $46.7 billion in revenue—a 56% year-over-year increase. This surge was largely driven by the company’s booming data center segment, heavily influenced by continued AI expansion, which alone contributed $41.1 billion in revenue.
Net income also saw a notable rise, reaching $26.4 billion for the second quarter—up 59% from the same period last year. A significant portion of this came from Nvidia’s cutting-edge Blackwell chips, which generated $27 billion in sales.
“Blackwell is the AI platform the world has been waiting for,” said CEO Jensen Huang in a statement. “The AI race is on, and Blackwell is the platform at its center.”
Huang also projected that global AI infrastructure investments could hit $3 to $4 trillion by the decade's end. “$3 to 4 trillion is fairly sensible for the next five years,” he told analysts.
Nvidia highlighted its involvement in the recent launch of OpenAI’s open-source gpt-oss models. According to the company, the rollout achieved a throughput of “1.5 million tokens per second on a single Nvidia Blackwell GB200 NVL72 rack-scale system.”
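The rack-level figure can be translated into a per-GPU number, since a GB200 NVL72 system contains 72 Blackwell GPUs:

```python
# Per-GPU throughput implied by the gpt-oss benchmark:
# a GB200 NVL72 rack-scale system contains 72 Blackwell GPUs.
rack_tokens_per_sec = 1_500_000
gpus_per_rack = 72

per_gpu = rack_tokens_per_sec / gpus_per_rack
print(f"about {per_gpu:,.0f} tokens/s per GPU")
```

That works out to roughly 20,800 tokens per second per GPU at the rack level.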
However, the report also shed light on Nvidia’s challenges in the Chinese market. The company confirmed zero sales of its H20 chip to customers in China last quarter, although $650 million worth of H20 units were sold to a non-Chinese client.
U.S. restrictions on advanced GPU exports to China remain in place, but a new policy under President Trump allows sales if companies pay a 15% export tax to the U.S. Treasury. Legal scholars have criticized the policy as a potential constitutional overreach.
Nvidia CFO Colette Kress addressed the issue during the earnings call, stating, “While a select number of our China-based customers have received licenses over the past few weeks, we have not shipped any H20 devices based on those licenses.”
Compounding the issue, Chinese authorities have advised domestic firms against using Nvidia hardware, prompting the company to reportedly pause H20 production earlier this month.
Looking ahead, Nvidia projects $54 billion in revenue for the third quarter. The forecast, which includes a possible 2% variance, does not factor in any H20 shipments to China.
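The stated 2% variance implies a guidance band around the $54 billion midpoint:

```python
# Implied Q3 guidance range: $54B midpoint with a ±2% variance.
midpoint_b = 54.0
variance = 0.02

low = midpoint_b * (1 - variance)
high = midpoint_b * (1 + variance)
print(f"${low:.2f}B to ${high:.2f}B")
```

That is, guidance spans roughly $52.9 billion to $55.1 billion before any H20 shipments to China.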
Source: https://techcrunch.com/2025/08/27/nvidia-reports-record-sales-as-the-ai-boom-continues/
Nvidia Reportedly Developing New AI Chip for China Market Amid Export Challenges
August 19, 2025 - Nvidia, the world's most valuable semiconductor company, is reportedly working on a new AI chip specifically for the Chinese market, according to Reuters. The chip, known internally as the B30A, is said to deliver half the power of Nvidia's flagship B300 Blackwell GPU.
Despite its lower performance, the B30A would offer an upgrade over the H20 GPUs currently permitted for sale in China. Unlike the dual-die architecture of the B300, the B30A will feature a single-die design. It will still include capabilities such as high-bandwidth memory, NVLink support, and fast data transmission — similar to the H20.
Reuters noted that the B30A is a separate initiative from another AI chip Nvidia is believed to be developing for China.
"We evaluate a variety of products for our roadmap, so that we can be prepared to compete to the extent that governments allow. Everything we offer is with the full approval of the applicable authorities and designed solely for beneficial commercial use," Nvidia stated via email.
The report comes amid shifting U.S. policy on AI chip exports. The Trump administration has recently softened its restrictions on high-performance AI chip shipments to China, but Reuters sources indicated that the B30A's approval for export is still uncertain.
As geopolitical tensions around AI development intensify, some U.S. policymakers argue for stricter control over tech exports to maintain a competitive edge over China. Meanwhile, Nvidia and other chipmakers contend that retreating from the lucrative Chinese market could open the door for rivals such as Huawei to dominate.
Source: https://techcrunch.com/2025/08/19/nvidia-said-to-be-developing-new-more-powerful-ai-chip-for-sale-in-china/
Nvidia Launches New Cosmos AI Models and Infrastructure for Robotics and Physical AI
August 11, 2025 - Nvidia has introduced a fresh lineup of world AI models, developer libraries, and infrastructure tools aimed at advancing robotics and physical AI applications. Leading the announcement is Cosmos Reason, a 7-billion-parameter vision-language model designed to enhance “reasoning” capabilities in robots and other physical AI systems.
Joining the existing Cosmos model suite are Cosmos Transfer-2, which speeds up synthetic data generation from 3D simulation environments or spatial control inputs, and a streamlined, faster version of Cosmos Transfer optimized for performance.
Revealed at the SIGGRAPH conference on Monday, Nvidia stated that these models enable the creation of synthetic text, image, and video datasets for training both robots and AI agents. According to the company, Cosmos Reason leverages memory and physics-based understanding to “serve as a planning model to reason what steps an embodied agent might take next.” Potential uses include data curation, robotic planning, and video analytics.
Nvidia also rolled out new neural reconstruction libraries, including a rendering technique that transforms sensor data into realistic 3D simulations. This technology is being integrated into the open-source CARLA simulator, a widely used platform for developers. Additionally, updates were announced for the Omniverse software development kit.
On the hardware front, Nvidia unveiled the RTX Pro Blackwell Server, a unified architecture designed for robotics development workloads, and Nvidia DGX Cloud, a cloud-based platform for managing such projects.
These moves mark Nvidia’s continued push into robotics, positioning its AI GPUs for applications beyond data center operations.
Source: https://techcrunch.com/2025/08/11/nvidia-unveils-new-cosmos-world-models-other-infra-for-physical-applications-of-ai/
Deadline Extended: Submit a Project G-Assist Plug-In for a Chance to Win NVIDIA RTX GPUs and Laptops
July 15, 2025 - The deadline for submissions to NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon has been extended to Sunday, July 20, at 11:59 PM PT. Participants can leverage resources from RTX AI Garage to create innovative plug-ins for a chance to win top-tier NVIDIA hardware.
The hackathon challenges developers to expand the functionality of Project G-Assist, an experimental AI assistant in the NVIDIA App that helps users control and optimize GeForce RTX systems.
Winners will receive prizes including a GeForce RTX 5090 laptop, NVIDIA GeForce RTX 5080 and RTX 5070 Founders Edition graphics cards, and NVIDIA Deep Learning Institute credits. Finalists could also be featured on NVIDIA’s social media channels.
Project G-Assist: AI for Seamless Control
Project G-Assist allows users to manage RTX GPUs and system settings with natural language commands. Powered by a small on-device language model, it is accessible directly through the NVIDIA overlay in the NVIDIA App—no need to leave applications or disrupt workflows.
Developers can enhance G-Assist via plug-ins, connecting it to agentic frameworks such as Langflow. Plug-ins can be built in Python for rapid prototyping, in C++ for performance-intensive tasks, or tailored for custom hardware and OS automation.
System Requirements
Project G-Assist supports GeForce RTX 50, 40, or 30 Series desktop GPUs with at least 12GB of VRAM, Windows 10 or 11, a compatible CPU (Intel Pentium G Series and above or AMD Ryzen 3 and higher), recent GeForce Game Ready or NVIDIA Studio drivers, and specific storage requirements.
Get Ready to Submit
As the submission deadline nears, NVIDIA has provided key resources:
On-demand webinar: NVIDIA senior software engineer Sydney Altobell shares tips on building G-Assist plug-ins, available on the NVIDIA Developer YouTube channel.
Developer support: Engage with NVIDIA’s engineering team and fellow developers in the NVIDIA Developer Discord.
GitHub repository: Access step-by-step guides, documentation, and sample plug-ins, including integrations for Discord, IFTTT, and Google Gemini.
ChatGPT Plug-In Builder: Simplify development with OpenAI’s GPT builder for generating plug-in code.
For a detailed walkthrough of plug-in architecture, check out NVIDIA’s technical blog, which uses a Twitch integration as an example.
Visit the Hackathon entry page for submission requirements and details.
Stay connected via NVIDIA AI PC on Facebook, Instagram, TikTok, and X, or subscribe to the RTX AI PC newsletter for updates. Follow NVIDIA Workstation on LinkedIn and X to explore more community innovations.
Source: https://blogs.nvidia.com/blog/rtx-ai-garage-g-assist-hackathon-plug-in-last-chance/