      News

      Nvidia Reportedly Demands Upfront Payment from Chinese Buyers for H200 AI Chips

      January 8, 2026 - Nvidia is reportedly implementing a strict new payment policy for its H200 AI chips in China, now requiring full payment upfront, according to Reuters, which cited unnamed sources.

The company is said to be offering no options for refunds or order modifications under the new terms. While a few clients might be allowed to use asset collateral or commercial insurance, the conditions are significantly tougher than Nvidia's previous practice of sometimes accepting partial deposits, Reuters noted.

Nvidia declined to comment on the matter.

      According to Bloomberg, China is expected to greenlight the sale of Nvidia’s H200 chips within its borders. However, Beijing aims to ensure these chips are not utilized by military entities, state-run enterprises, or critical infrastructure sectors.

      Despite the regulatory uncertainty, demand in China for Nvidia’s H200 remains high. Local companies have reportedly ordered more than two million units of the chip for 2026, leading the firm to boost its production capacity.

      The U.S.-based semiconductor giant continues to navigate a delicate geopolitical landscape, balancing surging Chinese demand with evolving regulatory frameworks in both Washington and Beijing. Nvidia previously faced a significant financial blow when the Trump administration required licenses for H20 chip exports to China, forcing a $5.5 billion inventory write-down.

      Source: https://techcrunch.com/2026/01/08/nvidias-reportedly-asking-chinese-customers-to-pay-upfront-its-for-h200-ai-chips/

      NVIDIA GeForce NOW Reportedly Gaining Native Linux Support This Year

January 5, 2026 - Native Linux support may soon become a reality for NVIDIA's cloud gaming service, GeForce NOW, potentially boosting the Linux gaming community.

      According to a report by VideoCardz, NVIDIA is expected to introduce full native Linux support for GeForce NOW later in 2026. Currently, Linux users rely on unofficial applications or browser-based workarounds, which are often unreliable and break following updates. The anticipated update would allow desktop Linux users to stream games directly through officially supported channels.

      While exact details and timelines remain unclear, VideoCardz notes that additional information may be revealed at CES 2026. As of now, NVIDIA has not made any official announcements regarding the matter.

      Industry observers speculate that native Linux support for GeForce NOW could increase the platform's Linux user base. This shift may be further influenced by the end of Windows 10 support in late 2025, prompting some users to explore alternative operating systems. Valve's latest survey indicates that Linux gamers currently represent around 3.2% of the gaming market.

      An announcement at CES would also serve a strategic public relations purpose for NVIDIA, as the company recently implemented a 100-hour monthly cap on all GeForce NOW subscriptions effective January 1, 2026. While casual users may be unaffected, the limit could impact more frequent gamers.

      As always, this report should be viewed cautiously until NVIDIA issues an official confirmation. Fortunately, with CES 2026 just around the corner, definitive details may be available soon.

      Source: https://80.lv/articles/nvidia-s-geforce-now-will-allegedly-get-native-linux-support

      Nvidia Licenses Groq's AI Chip Technology, Brings On CEO and Key Executives

      December 24, 2025 - Nvidia has signed a non-exclusive licensing agreement with AI chip startup Groq and will onboard Groq CEO Jonathan Ross, President Sunny Madra, and additional team members as part of the strategic move.

      According to CNBC, Nvidia is acquiring select assets from Groq in a transaction valued at $20 billion. While Nvidia confirmed to TechCrunch that the arrangement does not constitute a full acquisition, the scope of the deal remains undisclosed. If CNBC’s valuation proves accurate, this would represent the largest transaction in Nvidia’s history, significantly expanding its AI chip portfolio.

      As the demand for AI computing surges, Nvidia’s GPUs have become central to the industry. Groq, however, has taken a different approach by developing the LPU (Language Processing Unit), a chip architecture it claims can operate large language models (LLMs) up to 10 times faster and with just one-tenth the power consumption compared to existing technologies.

      Groq’s CEO Jonathan Ross, known for co-creating Google’s Tensor Processing Unit (TPU), is recognized for his contributions to AI hardware innovation. His leadership is expected to bring additional momentum to Nvidia’s ongoing chip development efforts.

      Groq has seen rapid growth over the past year. The company raised $750 million in September at a valuation of $6.9 billion. It also reported a substantial increase in its developer base, stating it now supports AI applications for over 2 million developers, a sharp rise from 356,000 the previous year.

      Source: https://techcrunch.com/2025/12/24/nvidia-acquires-ai-chip-challenger-groq-for-20b-report-says/

      Nvidia Responds to Report That China’s DeepSeek Is Using Its Banned Blackwell AI Chips

      December 10, 2025 - Nvidia issued a response Wednesday following a report that Chinese AI startup DeepSeek is utilizing smuggled Blackwell AI chips to power its forthcoming model.

      The Information reported that DeepSeek is developing its next-generation AI model using Nvidia’s Blackwell chips, which were allegedly brought into China despite U.S. export restrictions. The chips, among Nvidia’s most advanced, are banned from export to China under U.S. regulations aimed at maintaining a technological edge in the AI sector.

      "We haven’t seen any substantiation or received tips of 'phantom data centers' constructed to deceive us and our [original equipment manufacturer] partners, then deconstructed, smuggled and reconstructed somewhere else," an Nvidia spokesperson said. "While such smuggling seems far-fetched, we pursue any tip we receive."

      The Blackwell chips are central to Nvidia’s dominance in the AI hardware space, powering GPUs used in training complex models and managing large-scale workloads. As such, the company’s interactions with China have become a focal point of political scrutiny in the U.S.

      Earlier this week, President Donald Trump announced that Nvidia would be permitted to sell its H200 chips to "approved customers" in China and other countries, provided 25% of those sales are returned to the U.S. The decision has drawn criticism from some Republican lawmakers.

      DeepSeek had previously rattled the U.S. tech sector in January with the release of its reasoning model, R1, which rapidly climbed app store rankings and performance charts. Notably, R1 was developed at a significantly lower cost compared to U.S.-based models, according to analyst estimates.

      In August, DeepSeek suggested that domestic Chinese chips capable of supporting future AI models were in development, signaling Beijing’s intent to reduce reliance on foreign hardware.

      Source: https://www.cnbc.com/2025/12/10/nvidia-report-china-deepseek-ai-blackwell-chips.html

      Should Nvidia's AI Market Dominance Concern Investors? Jensen Huang's 21 Words Provide a Clear Perspective

      December 7, 2025 - Nvidia (NASDAQ: NVDA), once best known for its video game graphics processors, has redefined itself as a powerhouse in artificial intelligence (AI) chip technology. This strategic shift has paid off, propelling the company's annual revenue by a staggering 2,500% over the last ten years.

      By entering the AI chip space early and continuously innovating, Nvidia cemented its market leadership. However, with new competitors on the horizon—including some of its own clients—investors are increasingly cautious. For example, Amazon has begun producing and deploying its own AI chips alongside offerings from various suppliers.

A Strategic Early Move in AI

Nvidia recognized the potential of AI early on, developing GPUs that suited the specific demands of machine learning and data processing. It has since expanded its portfolio to include a wide array of products and services, helping push annual revenue past $130 billion in the latest fiscal year. The company also boasts high profit margins, typically exceeding 70%.

      Still, Nvidia is not alone in this space. Competitors such as Advanced Micro Devices (AMD) and Broadcom, as well as major cloud providers like Amazon Web Services (AWS) and Google Cloud (owned by Alphabet), have entered the AI chip race. These companies have reported strong demand for their own chip solutions.

The Role of GPUs in LLMs

Initially, Nvidia's GPUs were used mainly to train large language models (LLMs), providing the computational power to teach these systems how to interpret and respond to information. Now, these GPUs are also central to the "inference" phase—the process by which LLMs apply what they've learned to generate real-time responses.

      Nvidia believes inference will be a significant driver of future AI chip demand. Its latest Blackwell GPUs have demonstrated exceptional performance in this area. According to tests by MLCommons, Blackwell outperformed previous Hopper chips by delivering 10 times better performance per watt and a 10x reduction in cost per token when applied to the DeepSeek R1 model.
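Benchmark comparisons like the MLCommons results cited above boil down to simple ratios. The sketch below shows how performance per watt and energy cost per token are typically derived; all numeric inputs are hypothetical placeholders, not published Blackwell or Hopper figures.

```python
# Illustrative only: how performance-per-watt and cost-per-token
# comparisons are computed. All figures are hypothetical, not
# actual MLCommons benchmark numbers.

def perf_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Throughput normalized by power draw (tokens/s per watt)."""
    return tokens_per_second / power_watts

def energy_cost_per_token(power_watts: float, tokens_per_second: float,
                          usd_per_kwh: float) -> float:
    """Electricity cost attributed to each generated token (USD)."""
    kwh_per_second = power_watts / 1000 / 3600
    return kwh_per_second * usd_per_kwh / tokens_per_second

# Hypothetical figures for an older vs. newer accelerator at equal power.
old = perf_per_watt(tokens_per_second=1_000, power_watts=700)
new = perf_per_watt(tokens_per_second=10_000, power_watts=700)

print(f"perf/watt improvement: {new / old:.1f}x")  # 10.0x with these inputs
```

Note that a 10x perf/watt gain at constant power implies a 10x drop in energy cost per token, since both metrics scale inversely with throughput.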

Jensen Huang on Nvidia's Edge

CEO Jensen Huang addressed concerns about competition during the company's recent earnings call:

      "It's gonna take a long time before somebody is able to take that on," Huang said. "And our leadership there is surely multiyear."

      This highlights Nvidia's continued edge in providing high-performance AI computing, which remains attractive to customers focused on speed, efficiency, and long-term cost savings. While competitors may carve out their own niches—particularly in cases where the most powerful GPU isn't required—Nvidia remains the go-to for cutting-edge performance.

      Given the growing importance of inference in AI, Nvidia appears well-positioned to maintain its lead.

Bottom Line: Should Investors Worry?

Despite growing competition, Jensen Huang's confident assertion provides a compelling answer: no. Nvidia remains on course to dominate the AI chip space, particularly in inference, for years to come.

      Source: https://finviz.com/news/247793/should-you-worry-about-nvidias-ai-market-leadership-21-words-from-jensen-huang-offer-a-strikingly-clear-answer

      Nvidia Unveils Open-Source Vision Language Model for Autonomous Driving at NeurIPS 2025

      December 1, 2025 - Nvidia has introduced a new suite of open AI tools and models designed to support the development of physical AI technologies, including autonomous vehicles and robotics capable of perceiving and interacting with their environment.

      At the NeurIPS AI conference in San Diego, Nvidia revealed Alpamayo-R1, an open-source reasoning vision language model tailored specifically for autonomous driving research. Described by the company as the first vision language action model in this domain, Alpamayo-R1 enables vehicles to interpret both visual and textual inputs, improving their decision-making capabilities in real-time driving scenarios.

      The new model is built on Nvidia's Cosmos-Reason, part of the Cosmos model family first introduced in January 2025, with subsequent releases in August. Cosmos-Reason is designed to simulate human-like reasoning by evaluating options before executing a response.

      According to Nvidia, such models are essential for achieving level 4 autonomous driving, which allows vehicles to operate without human intervention within defined conditions. The company emphasized that reasoning-driven AI can give self-driving systems the "common sense" necessary for handling complex and subtle driving situations more like a human driver.
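The "evaluate options before executing a response" behavior described for Cosmos-Reason can be illustrated with a toy decision loop. This is not Nvidia's API; every name and scoring rule below is hypothetical, meant only to show the pattern of scoring candidate maneuvers against the scene before committing to one.

```python
# Toy sketch (not Nvidia's API) of the "reason before acting" pattern:
# score every candidate maneuver against the current scene, then act.
# All class names, actions, and scores are hypothetical.

from dataclasses import dataclass


@dataclass
class Scene:
    pedestrian_ahead: bool
    gap_on_left: bool


def score(action: str, scene: Scene) -> float:
    """Hypothetical scoring: penalize unsafe actions, reward progress."""
    if action == "brake":
        return 1.0 if scene.pedestrian_ahead else 0.2
    if action == "change_lane_left":
        return 0.8 if scene.gap_on_left and not scene.pedestrian_ahead else 0.0
    if action == "continue":
        return 0.0 if scene.pedestrian_ahead else 0.9
    return 0.0


def decide(scene: Scene) -> str:
    """Evaluate all options first, then commit — reasoning precedes action."""
    options = ["brake", "change_lane_left", "continue"]
    return max(options, key=lambda a: score(a, scene))


print(decide(Scene(pedestrian_ahead=True, gap_on_left=False)))   # brake
print(decide(Scene(pedestrian_ahead=False, gap_on_left=True)))   # continue
```

A real vision language action model replaces the hand-written `score` function with learned reasoning over camera frames and text, but the structural idea — deliberate over alternatives before emitting a control action — is the same.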

      Alpamayo-R1 is now available for developers via GitHub and Hugging Face.

      In conjunction with the model release, Nvidia also launched the Cosmos Cookbook—a comprehensive collection of guides and tools on GitHub. This resource includes detailed instructions on inference, post-training workflows, data curation, synthetic data generation, and model evaluation to help developers tailor Cosmos models to their specific applications.

      The announcement underscores Nvidia's strategic push into physical AI, an area the company sees as the next major frontier for AI-enabled technologies. CEO and co-founder Jensen Huang has consistently described physical AI as the next phase of AI evolution. Echoing this, Nvidia Chief Scientist Bill Dally told TechCrunch earlier this year, "I think eventually robots are going to be a huge player in the world and we want to basically be making the brains of all the robots. To do that, we need to start developing the key technologies."

      These developments come as Nvidia continues to position its AI GPUs and infrastructure at the center of the rapidly growing physical AI ecosystem.

      Source: https://techcrunch.com/2025/12/01/nvidia-announces-new-open-ai-models-and-tools-for-autonomous-driving-research/

Nvidia Plays Down Google Chip Threat Concerns

November 26, 2025 - Nvidia has asserted that it remains “a generation ahead” of competitors in the artificial intelligence (AI) space, pushing back firmly against growing speculation that a rival could challenge its dominant market position and multi‑trillion‑dollar valuation.

      The chip heavyweight’s shares dropped on Tuesday after reports emerged that Meta is preparing to invest billions in AI chips developed by Google for its data centres.

      On X, Nvidia — now the world’s most valuable company — stated that it alone offers a platform that “runs every AI model and does it everywhere computing is done.” Meanwhile, Google responded by voicing its commitment to “supporting both” its own and Nvidia’s chips.

      Nvidia’s processors form a crucial foundation for many of the leading AI services today, including tools like ChatGPT. In October, the company made history as the first ever to reach a valuation of $5 trillion (£3.8 trillion).

      Recently, the US firm has broadened its reach, announcing in October that it will supply some of its most advanced AI chips to South Korea’s government, as well as to major industry players such as Samsung, LG and Hyundai.

      “Healthy” competition?

      Google rents out access to its tensor processing units (TPUs) via Google Cloud for AI development — a service traditionally reserved for its internal data centres. Should the company decide to sell these chips externally, as recent reports suggest, it would mark a significant strategic shift.

      Following the news, Nvidia’s stock tumbled nearly 6%, while shares in Alphabet — Google’s parent company — climbed by roughly the same margin.

      In the hours after the drop, Nvidia took to X to reiterate that it continues to deliver “greater performance” and “versatility” compared with the chips Google is currently producing.

      Over the past year, other tech titans such as Amazon and Microsoft have also announced plans to develop their own AI chips — signaling rising competition in the hardware arena.

      Commenting on the prospective partnership between Google and Meta, Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton, described the developments as “healthy” for the market.

      “Investment is pouring into this area,” she said.

      “At the moment there is no real return on that investment except for Nvidia.”

      Source: https://www.bbc.com/news/articles/c7vme6rrqz5o

      Nvidia Deepens AI Alliances with Hyundai, Samsung, SK, and Naver in South Korea

      October 31, 2025 - Nvidia CEO Jensen Huang is visiting South Korea for the first time in 15 years, unveiling a series of expanded collaborations with leading Korean technology firms — including Hyundai Motor, Samsung, SK Group, and Naver. The announcements, made during this week’s APEC Summit 2025, mark a significant step in enhancing South Korea’s AI capabilities through a strategic partnership between Nvidia and the South Korean government.

      The initiative follows recent tech agreements between the U.S., Japan, and South Korea, aimed at strengthening cooperation on emerging technologies such as AI, semiconductors, quantum computing, biotechnology, and 6G.

      On Friday, the South Korean government confirmed it will acquire more than 260,000 of Nvidia’s advanced GPUs to address growing AI demands. Of these, approximately 50,000 GPUs will support public-sector projects, including national AI data centers and domestic foundation model development. The remainder will go to major corporations like Samsung, SK, Hyundai Motor Group, and Naver to advance AI-driven manufacturing and develop industry-specific AI models.

      Samsung and Nvidia Launch AI Mega-Factory, Collaborate on 6G AI-RAN

      Samsung revealed plans to build an AI mega-factory in collaboration with Nvidia, integrating AI throughout its manufacturing processes for semiconductors, robotics, and mobile devices. This facility will utilize over 50,000 Nvidia GPUs and the Omniverse platform to create a real-time, intelligent production network capable of analysis, prediction, and optimization.

      The two firms — long-time partners with over 25 years of collaboration — are also working on HBM4, the next-gen high-bandwidth memory tailored for future AI applications.

      According to South Korea’s Ministry of Science and ICT, Nvidia will team up with Samsung, telecom operators SK Telecom, KT, and LG Uplus, as well as ETRI, to jointly develop AI-RAN technology. This system merges AI with mobile base stations to enhance network efficiency and reduce energy usage. Under a new agreement, the parties will also build a global AI-RAN testbed.

      Hyundai and Nvidia to Advance Autonomous Mobility and Physical AI

      Hyundai Motor and Nvidia are partnering to build robust AI infrastructure and develop physical AI technologies with a focus on autonomous mobility, robotics, and smart manufacturing. The alliance includes a supply of 50,000 Nvidia Blackwell GPUs for training, validating, and deploying AI models. Additionally, both companies will establish AI research centers in South Korea to bolster the nation’s physical AI sector.

      “AI is revolutionizing every facet of every industry, and in transportation alone — from vehicle design and manufacturing to robotics and autonomous driving — Nvidia’s AI and computing platforms are transforming how the world moves,” said Jensen Huang. “Together with Hyundai Motor Group — Korea’s industrial powerhouse and one of the world’s top mobility solutions providers — we’re building intelligent cars and factories that will shape the future of the multitrillion-dollar mobility industry.”

      SK Group Builds AI Cloud; Naver Develops Physical AI Platform

      SK Group, the parent of SK Hynix, is partnering with Nvidia to launch Asia’s first enterprise-led manufacturing AI cloud. This cloud platform will utilize Nvidia’s digital twin and simulation technologies and will be accessible to government entities, public institutions, and domestic startups.

      Naver Cloud, the cloud division of South Korea’s leading search engine, is collaborating with Nvidia to create a “Physical AI” platform that bridges the digital and physical worlds. The platform will support AI deployments across sectors such as semiconductors, shipbuilding, energy, and biotech, aiming to fast-track real-world industrial AI integration.

      “Just as the automotive industry is transitioning to SDVs, the era of ‘Physical AI,’ where AI operates directly within real industrial sites and systems, is unfolding,” said Naver founder Hae-jin Lee in a company statement.

      Nvidia’s growing network of partnerships with South Korea’s tech titans — spanning Samsung’s AI factories, Hyundai’s smart mobility, SK’s AI cloud services, and Naver’s industrial AI infrastructure — underscores a global push to merge AI with hardware innovation across multiple sectors.

      Earlier this week, Nvidia also revealed new alliances with firms such as Eli Lilly, Palantir, Uber, Samsung, Hyundai, Joby Aviation, and the U.S. Department of Energy. These moves helped boost Nvidia’s stock, propelling it to become the first publicly traded company to exceed a $5 trillion market cap.

      Source: https://techcrunch.com/2025/10/31/nvidia-expands-ai-ties-with-hyundai-samsung-sk-naver/

      NVIDIA Open Sources Aerial Software to Advance AI-Native 6G Networks

      October 28, 2025 - NVIDIA has announced that its Aerial software suite will be made open source, marking a significant development in the telecom sector's shift toward AI-native 5G and 6G network innovation.

      The Aerial software will be available across NVIDIA platforms, including the DGX Spark desktop supercomputer. This move gives researchers and developers the ability to move from rapid prototyping to real-world deployment in hours rather than months, removing previous barriers tied to proprietary hardware and licensed software.

      By adopting an open-source model, NVIDIA aims to accelerate collaborative innovation in wireless networking, setting the stage for building next-generation mobile networks at AI speed.

      Transforming Wireless Innovation Through Open Source

      The company’s commitment to open-source contributions is not new. NVIDIA's earlier release, the Sionna software, has already seen over 200,000 downloads and more than 500 citations. Building on that momentum, the Aerial software suite — including Aerial CUDA-Accelerated RAN, Aerial Omniverse Digital Twin (AODT), and the new Aerial Framework — will be released under Apache 2.0 licensing on GitHub.

      The Aerial software will be open-sourced in December 2025, with AODT to follow in March 2026. These tools will allow developers to create complete AI-native 5G and 6G RAN stacks without restrictions.

      Key features of the Aerial open-source release include:

      Aerial Framework to convert Python into high-performance CUDA code.

      AI-powered neural models to enhance wireless capabilities, such as advanced channel estimation.

      A decentralized app (dApp) framework enabling secure, real-time access to physical layer data via APIs.

      Modular pipelines allowing full-stack customization and integration of developer-authored code.
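One of the listed features, neural models for advanced channel estimation, is typically benchmarked against the classical least-squares (LS) baseline. The pure-Python toy below shows that baseline for a flat complex channel; all parameters (channel gain, pilot count, noise level) are hypothetical and unrelated to Aerial's actual implementation.

```python
# Illustrative only: classical least-squares pilot-based channel
# estimation — the baseline that learned channel estimators aim to
# improve on. Flat-channel toy; all parameters are hypothetical.

import cmath
import random


def ls_channel_estimate(pilots, received):
    """Average the per-pilot ratio y/x — the LS estimate for a flat channel."""
    ratios = [y / x for x, y in zip(pilots, received)]
    return sum(ratios) / len(ratios)


random.seed(0)
h_true = cmath.rect(0.8, 0.3)                    # unknown complex channel gain
pilots = [cmath.rect(1.0, 0.0)] * 64             # known transmitted pilot symbols
noise = [complex(random.gauss(0, 0.05), random.gauss(0, 0.05)) for _ in pilots]
received = [h_true * x + n for x, n in zip(pilots, noise)]

h_hat = ls_channel_estimate(pilots, received)
print(abs(h_hat - h_true))  # small residual error after averaging 64 pilots
```

Averaging over many pilots suppresses the noise; neural estimators target the harder regimes (few pilots, frequency-selective fading) where this simple baseline degrades.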

      These components have already powered the first U.S.-built AI-native wireless stack, supporting use cases such as spectrum agility and integrated sensing with communication.

      DGX Spark Supercharges Wireless R&D

      NVIDIA’s DGX Spark, the world’s smallest AI supercomputer, now supports both the Aerial and Sionna software suites. It provides the necessary performance to build, train, and deploy full-stack AI-native wireless networks within a compact and cost-efficient footprint.

      The Sionna Research Kit, now compatible with DGX Spark and NVIDIA Jetson AGX Orin, functions as an all-in-one portable lab for AI-native 6G prototyping. This setup enables researchers to establish live 5G networks in a matter of hours and test ML algorithms on real radio environments without being tied to a physical lab.

      The Aerial Testbed, which also supports NVIDIA GH200 Grace Hopper Superchips, facilitates seamless integration between network digital twins and live environments, accelerating product development cycles.

      Dell Technologies has introduced the Dell Pro Max with GB10 — a DGX Spark-powered platform designed for high-performance telecom workloads. It delivers stable and scalable infrastructure for testing, simulation, and validation of 5G and 6G technologies.

      Wider Collaboration on AI-Native 6G

      Thousands of researchers worldwide are already leveraging NVIDIA's AI Aerial ecosystem. U.S. institutions including Northeastern University, Virginia Tech, Arizona State University, DeepSig, and MIT’s WINSLab and LIDS are all conducting research that could influence 6G standards and implementations.

      Additionally, the AI-RAN Alliance — comprising over 100 telecom industry leaders — is actively shaping AI-native wireless network architectures. Many of its live demos and benchmarks have been powered by NVIDIA’s AI Aerial tools.

      “With NVIDIA’s open-source Aerial software and DGX Spark, developers can create modular, software-defined wireless systems and experiment freely — from labs to live environments,” said Alex Jinsung Choi, Chairman of the AI-RAN Alliance. “This is a critical enabler for fueling AI-RAN innovations that boost spectrum efficiency, enhance network performance and power new AI applications — at a pace the industry has never experienced.”

      Redefining Telecom Innovation in the AI Era

      NVIDIA’s strategic move to open source its advanced wireless software and tools reflects a broader commitment to inclusivity and collaboration. By inviting developers beyond traditional telecom sectors, the company is setting the stage for rapid advancements in AI-powered 5G and 6G networks.

      This milestone marks a new chapter for the telecom industry — one driven by open innovation, global collaboration, and software-defined infrastructure that evolves at the speed of AI.

      Source: https://blogs.nvidia.com/blog/open-source-aerial-ai-native-6g/

      Nvidia’s Expanding AI Empire: Inside Its Biggest Startup Bets

      October 12, 2025 - No company has benefited more dramatically from the generative AI boom than Nvidia. Since the debut of ChatGPT over two years ago, Nvidia’s financials have soared—reflected in its skyrocketing revenue, profits, and cash reserves. The company’s stock performance has mirrored this growth, lifting its market capitalization to $4.5 trillion.

      With its dominant position as a GPU provider for AI applications, Nvidia has rapidly increased its venture investment activity. In 2025 alone, the company has already completed 50 venture capital deals, surpassing the 48 deals it recorded in all of 2024. These figures exclude investments from its corporate VC arm, NVentures, which has also expanded its activity—engaging in 21 deals this year compared to just one in 2022.

      Nvidia has stated that its investment strategy is focused on growing the AI ecosystem by backing what it calls “game changers and market makers.”

      Below is a rundown of startups that have raised over $100 million since 2023 with Nvidia listed as an investor, ranked by size of the funding round.

      Billion-Dollar Rounds

      OpenAI: Nvidia made its first investment in OpenAI in October 2024, contributing $100 million to a $6.6 billion round that valued the company at $157 billion. While it did not join the $40 billion round in March 2025, Nvidia announced a strategic commitment to invest up to $100 billion in OpenAI to help build out AI infrastructure.

      xAI: Despite OpenAI’s attempts to deter investors from backing rivals, Nvidia participated in a $6 billion round for Elon Musk’s xAI in December 2024. The company is also expected to invest up to $2 billion in a future $20 billion equity round aimed at hardware procurement.

      Mistral AI: Nvidia joined a €1.7 billion (approximately $2 billion) Series C round for Mistral AI in September, marking its third investment in the French LLM developer. The round valued the company at €11.7 billion ($13.5 billion).

      Reflection AI: In October 2025, Nvidia led a $2 billion round in Reflection AI, a year-old U.S. startup valued at $8 billion. The company is positioning itself as an open-source alternative to closed-source LLM providers.

      Thinking Machines Lab: Nvidia participated in a $2 billion seed round for the AI startup founded by former OpenAI CTO Mira Murati. The funding valued the company at $12 billion.

      Inflection: Nvidia was a lead investor in Inflection’s $1.3 billion raise in June 2023. Less than a year later, Microsoft acquired the team and a non-exclusive license for $620 million, leaving Inflection with a smaller workforce and less clear direction.

      Nscale: After a $1.1 billion raise in September, Nvidia took part in a $433 million SAFE round in October. Nscale is building AI-focused data centers in the UK and Norway for OpenAI's Stargate project.

      Wayve: Nvidia joined a $1.05 billion round in May 2024 for Wayve, a UK-based autonomous driving startup. A further $500 million investment is expected. The company is testing in the UK and the San Francisco Bay Area.

      Figure AI: Nvidia participated in Figure AI’s $1 billion+ Series C round in September, valuing the robotics company at $39 billion. It previously invested in Figure’s $675 million Series B in early 2024.

      Scale AI: In May 2024, Nvidia joined a $1 billion round for Scale AI, valuing the data labeling company at nearly $14 billion. In June, Meta bought a 49% stake for $14.3 billion and hired away its CEO and several executives.

      High-Value Rounds ($100M–$999M)

      Commonwealth Fusion: Nvidia joined an $863 million round in August 2025. The nuclear fusion company was valued at $3 billion.

      Crusoe: Nvidia participated in a $686 million raise in November 2024 for Crusoe, a startup building AI-focused data centers.

      Cohere: Nvidia invested in Cohere’s $500 million Series D in August, valuing the LLM company at $6.8 billion. It has been an investor since 2023.

      Perplexity: Nvidia first invested in Perplexity in November 2023 and joined several subsequent rounds, including a $500 million raise in December 2024. It did not participate in the September 2025 $200 million round.

      Poolside: Nvidia backed a $500 million round in October 2024, valuing the AI code assistant startup at $3 billion.

      Lambda: Nvidia participated in Lambda’s $480 million Series D in February. The AI cloud firm, valued at $2.5 billion, rents Nvidia GPU-powered servers.

      CoreWeave: Nvidia invested in CoreWeave in April 2023 when it raised $221 million. It remains a major shareholder even after CoreWeave went public.

      Together AI: Nvidia took part in a $305 million Series B in February, valuing the cloud-based AI infrastructure provider at $3.3 billion.

      Firmus Technologies: In September, Nvidia invested in a $215 million round for Firmus, which is building a green AI data center in Tasmania.

      Sakana AI: Nvidia participated in a $214 million Series A in September 2024 for the Japanese generative AI firm, valued at $1.5 billion.

      Nuro: In August, Nvidia joined a $203 million round for Nuro. The self-driving delivery company’s valuation dropped to $6 billion from a 2021 peak of $8.6 billion.

      Imbue: Nvidia participated in a $200 million round in September 2023 for Imbue, which is developing reasoning and coding AI systems.

      Waabi: In June 2024, Nvidia invested in a $200 million Series B for autonomous trucking startup Waabi.

      Ayar Labs: In December, Nvidia joined a $155 million funding round for Ayar Labs, a startup focused on optical interconnects for AI compute.

      Kore.ai: Nvidia invested in a $150 million raise in December 2023 for Kore.ai, an enterprise AI chatbot provider.

      Sandbox AQ: In April, Nvidia joined a $150 million investment into Sandbox AQ, bringing its Series E to $450 million and valuation to $5.75 billion.

      Hippocratic AI: Nvidia joined a $141 million Series B in January, valuing the healthcare-focused AI company at $1.64 billion.

      Weka: In May 2024, Nvidia invested in Weka’s $140 million raise, which valued the data management startup at $1.6 billion.

      Runway: Nvidia participated in a $308 million round in April for Runway, a generative AI media company now valued at $3.55 billion.

      Bright Machines: In June 2024, Nvidia invested in a $126 million Series C for Bright Machines, which develops AI-powered robotics.

      Enfabrica: Nvidia joined a $125 million Series B in September 2023. It did not participate in the company’s next round in November.

      Reka AI: In July, Nvidia backed a $110 million raise for Reka AI, which tripled its valuation to over $1 billion.

      Source: https://techcrunch.com/2025/10/12/nvidias-ai-empire-a-look-at-its-top-startup-investments/

      Rivals Intensify Efforts to Challenge Nvidia's AI Chip Supremacy

      October 06, 2025 - As artificial intelligence continues to transform global industries, Nvidia's competitors are accelerating their efforts to compete with the chipmaker's dominant position in the AI hardware market. Despite mounting challenges from tech giants like Google and Amazon, as well as Chinese firms, analysts say Nvidia's lead remains firmly intact.

      Nvidia’s Market Leadership

      Once a lesser-known player, Nvidia has surged to become the highest-revenue chipmaker globally, thanks to its powerful graphics processing units (GPUs)—the essential hardware behind AI models such as ChatGPT. While not the first company to develop GPUs, Nvidia was an early mover in the field during the late 1990s and leveraged that head start into unmatched expertise.

      According to Dylan Patel, head of consultancy SemiAnalysis, Nvidia is more than a chip designer—it's "a three-headed dragon" that integrates hardware, networking, and software into a unified infrastructure.

      "Nvidia can satisfy every level of need in the datacenter with world-class product," said Jon Peddie of Jon Peddie Research.

      Emerging Competitors

      Despite Nvidia holding an estimated 80% market share, challengers are stepping up. AMD, long considered Nvidia's closest U.S. rival, still focuses primarily on CPUs—less suitable for AI tasks—and may struggle to divert resources.

      Major cloud service providers have responded by building their own processors. Google began deploying its Tensor Processing Units (TPUs) nearly ten years ago, while Amazon Web Services introduced Trainium in 2020. Combined, they now command over 10% of the market and, according to SemiAnalysis's Jordan Nanos, have surpassed AMD in key areas including performance and scalability.

      While Google has reportedly begun offering its chips to external clients, Amazon does not yet sell Trainium chips beyond its own ecosystem.

      China’s Position

      China, the only country close to rivaling the U.S. in this domain, is racing to bridge the gap despite export restrictions limiting access to advanced U.S. semiconductors. Huawei is seen as one of Nvidia’s strongest global competitors, potentially surpassing AMD, according to Nanos.

      Chinese tech firms Baidu and Alibaba are also investing in proprietary AI chips. However, Jon Peddie notes that technical parity remains elusive due to reliance on domestic fabrication capabilities. Still, with large-scale investment and a highly skilled workforce, experts expect China to eventually develop cutting-edge chipmaking capacity.

      Outlook for Nvidia

      For now, analysts agree Nvidia’s dominance is secure. "Nvidia underpins the vast majority of AI applications today," said John Belton, analyst at Gabelli Funds. "And despite their lead, they keep their foot on the gas by launching a product every year, a pace that will be difficult for competitors to match."

      In September, Nvidia unveiled its next-generation chip architecture, Rubin, set for release in late 2026. The chip is projected to deliver AI performance 7.5 times greater than the current Blackwell series—further raising the bar for would-be challengers.

      Source: https://economictimes.indiatimes.com/tech/technology/competition-heats-up-to-challenge-nvidias-ai-chip-dominance/articleshow/124329646.cms

      Nvidia Commits Up to $100B Investment in OpenAI for AI Data Center Expansion

      September 22, 2025 - Nvidia announced plans Monday to invest as much as $100 billion in OpenAI, aiming to build expansive data centers to support advanced AI model training and deployment. The two companies have signed a letter of intent to deploy 10 gigawatts of Nvidia-powered systems — enough power to supply millions of households — for OpenAI’s next-generation AI infrastructure.

      This strategic move could help OpenAI diversify beyond Microsoft, its largest investor and primary cloud services provider. In January, Microsoft modified its agreement with OpenAI, granting the AI company the flexibility to collaborate with additional infrastructure partners. Since then, OpenAI has entered into various AI data center ventures, including the Stargate project.

      According to Nvidia, the investment will complement OpenAI’s current collaborations with Microsoft, Oracle, and SoftBank. OpenAI stated it will partner with Nvidia as its “preferred strategic compute and networking partner” for scaling its AI factories.

      Details of the $100 billion investment — whether in the form of GPUs, cloud credits, capital, or other assets — have not yet been disclosed.

      Source: https://techcrunch.com/2025/09/22/nvidia-plans-to-invest-up-to-100b-in-openai/

      Nvidia Introduces Rubin CPX GPU for Ultra-Long Context AI Tasks

      September 9, 2025 - At the AI Infrastructure Summit on Tuesday, Nvidia unveiled its latest GPU innovation, the Rubin CPX, specifically engineered for handling context windows exceeding 1 million tokens.

      The Rubin CPX is part of Nvidia’s upcoming Rubin series and is optimized for processing extensive sequences, supporting a “disaggregated inference” model designed to scale complex workloads. This architecture aims to deliver enhanced performance for long-context applications such as video generation and software development.

      The announcement comes amid Nvidia’s continued momentum in the AI hardware space. The company recently posted $41.1 billion in data center revenue for the most recent quarter—highlighting its dominant position in the market.

      The Rubin CPX is expected to become commercially available by the end of 2026.

      Source: https://techcrunch.com/2025/09/09/nvidia-unveils-new-gpu-designed-for-long-context-inference/

      Two Unnamed Customers Accounted for 39% of Nvidia's Q2 Revenue

      August 30, 2025 - Nvidia disclosed that two unnamed clients were responsible for a combined 39% of its second-quarter revenue, according to a recent SEC filing.

      The chipmaker posted record-breaking revenue of $46.7 billion for the quarter ending July 27, marking a 56% increase year-over-year, fueled largely by soaring demand in the AI data center market. However, the financial filing reveals that a significant portion of this growth came from just a few major customers.

      Nvidia stated that one customer contributed 23% of its Q2 revenue, while another accounted for 16%. The company did not name these entities, referring to them only as "Customer A" and "Customer B."

      For the first half of the fiscal year, Customer A and Customer B represented 20% and 15% of total revenue, respectively. Additionally, four other clients accounted for 14%, 11%, 11%, and 10% of Q2 revenue, respectively.
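The disclosed shares can be tallied as a quick consistency check; the percentages and the $46.7 billion quarterly total are the figures reported above, and the dollar estimate is just their product:

```python
# Consistency check on the disclosed Q2 revenue shares (figures as
# reported in the filing summary above).
q2_total = 46.7e9      # Q2 revenue, USD
customer_a = 0.23      # Customer A's share of Q2 revenue
customer_b = 0.16      # Customer B's share of Q2 revenue

combined = customer_a + customer_b
print(f"Combined A+B share: {combined:.0%}")  # 39%, matching the headline figure
print(f"Approx. combined dollars: ${combined * q2_total / 1e9:.1f}B")  # $18.2B
```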

      These are categorized as "direct" customers in Nvidia's report, including original equipment manufacturers (OEMs), system integrators, or distributors who buy chips directly from Nvidia. The company clarified that indirect customers — such as cloud providers and consumer internet firms — typically purchase through these direct channels.

      This suggests it's unlikely that major cloud players like Microsoft, Amazon, Google, or Oracle are directly represented as Customer A or B, although they may be indirectly responsible for the bulk of the purchasing.

      Nvidia CFO Colette Kress told CNBC that large cloud service providers were responsible for 50% of Nvidia's data center revenue, which itself comprised 88% of the company’s total revenue.

      Commenting on the revenue concentration, Gimme Credit analyst Dave Novosel told Fortune, “Concentration of revenue among such a small group of customers does present a significant risk,” but added, “these customers have bountiful cash on hand, generate massive amounts of free cash flow, and are expected to spend lavishly on data centers over the next couple of years.”

      Source: https://techcrunch.com/2025/08/30/nvidia-says-two-mystery-customers-accounted-for-39-of-q2-revenue/

      Nvidia Posts Record Revenue Amid Surging AI Demand

      August 27, 2025 - Nvidia, currently the world’s most valuable company, announced record-breaking quarterly earnings on Wednesday, reporting $46.7 billion in revenue—a 56% year-over-year increase. This surge was largely driven by the company’s booming data center segment, heavily influenced by continued AI expansion, which alone contributed $41.1 billion in revenue.

      Net income also saw a notable rise, reaching $26.4 billion for the second quarter—up 59% from the same period last year. A significant share of the quarter's revenue came from Nvidia’s cutting-edge Blackwell chips, which generated $27 billion in sales.

      “Blackwell is the AI platform the world has been waiting for,” said CEO Jensen Huang in a statement. “The AI race is on, and Blackwell is the platform at its center.”

      Huang also projected that global AI infrastructure investments could hit $3 to $4 trillion by the decade's end. “$3 to 4 trillion is fairly sensible for the next five years,” he told analysts.

      Nvidia highlighted its involvement in the recent launch of OpenAI’s open source gpt-oss models. According to the company, the rollout achieved a throughput of “1.5 million tokens per second on a single Nvidia Blackwell GB200 NVL72 rack-scale system.”

      However, the report also shed light on Nvidia’s challenges in the Chinese market. The company confirmed zero sales of its H20 chip to customers in China last quarter, although $650 million worth of H20 units were sold to a non-Chinese client.

      U.S. restrictions on advanced GPU exports to China remain in place, but a new policy under President Trump allows sales if companies pay a 15% export tax to the U.S. Treasury. Legal scholars have criticized the policy as a potential constitutional overreach.

      Nvidia CFO Colette Kress addressed the issue during the earnings call, stating, “While a select number of our China-based customers have received licenses over the past few weeks, we have not shipped any H20 devices based on those licenses.”

      Compounding the issue, Chinese authorities have advised domestic firms against using Nvidia hardware, prompting the company to reportedly pause H20 production earlier this month.

      Looking ahead, Nvidia projects $54 billion in revenue for the third quarter. The forecast, given with a margin of plus or minus 2%, does not factor in any H20 shipments to China.
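Applying the stated 2% margin to the $54 billion guidance gives the implied revenue range:

```python
# Back-of-the-envelope range for the Q3 guidance: $54B, plus or minus 2%.
guidance = 54.0e9
low, high = guidance * 0.98, guidance * 1.02
print(f"${low / 1e9:.2f}B - ${high / 1e9:.2f}B")  # $52.92B - $55.08B
```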

      Source: https://techcrunch.com/2025/08/27/nvidia-reports-record-sales-as-the-ai-boom-continues/

      Nvidia Reportedly Developing New AI Chip for China Market Amid Export Challenges

      August 19, 2025 - Nvidia, the world's most valuable semiconductor company, is reportedly working on a new AI chip specifically for the Chinese market, according to Reuters. The chip, known internally as the B30A, is said to deliver half the computing power of Nvidia's flagship B300 Blackwell GPU.

      Despite its lower performance, the B30A would offer an upgrade over the H20 GPUs currently permitted for sale in China. Unlike the dual-die architecture of the B300, the B30A will feature a single-die design. It will still include capabilities such as high-bandwidth memory, NVLink support, and fast data transmission — similar to the H20.

      Reuters noted that the B30A is a separate initiative from another AI chip Nvidia is believed to be developing for China.

      "We evaluate a variety of products for our roadmap, so that we can be prepared to compete to the extent that governments allow. Everything we offer is with the full approval of the applicable authorities and designed solely for beneficial commercial use," Nvidia stated via email.

      The report comes amid shifting U.S. policy on AI chip exports. The Trump administration has recently softened its restrictions on high-performance AI chip shipments to China, but Reuters sources indicated that the B30A's approval for export is still uncertain.

      As geopolitical tensions around AI development intensify, some U.S. policymakers argue for stricter control over tech exports to maintain a competitive edge over China. Meanwhile, Nvidia and other chipmakers contend that retreating from the lucrative Chinese market could open the door for rivals such as Huawei to dominate.

      Source: https://techcrunch.com/2025/08/19/nvidia-said-to-be-developing-new-more-powerful-ai-chip-for-sale-in-china/

      Nvidia Launches New Cosmos AI Models and Infrastructure for Robotics and Physical AI

      August 11, 2025 - Nvidia has introduced a fresh lineup of world models, developer libraries, and infrastructure tools aimed at advancing robotics and physical AI applications. Leading the announcement is Cosmos Reason, a 7-billion-parameter vision-language model designed to enhance “reasoning” capabilities in robots and other physical AI systems.

      Joining the existing Cosmos model suite are Cosmos Transfer-2, which speeds up synthetic data generation from 3D simulation environments or spatial control inputs, and a streamlined, faster version of Cosmos Transfer optimized for performance.

      Revealed at the SIGGRAPH conference on Monday, Nvidia stated that these models enable the creation of synthetic text, image, and video datasets for training both robots and AI agents. According to the company, Cosmos Reason leverages memory and physics-based understanding to “serve as a planning model to reason what steps an embodied agent might take next.” Potential uses include data curation, robotic planning, and video analytics.

      Nvidia also rolled out new neural reconstruction libraries, including a rendering technique that transforms sensor data into realistic 3D simulations. This technology is being integrated into the open-source CARLA simulator, a widely used platform for developers. Additionally, updates were announced for the Omniverse software development kit.

      On the hardware front, Nvidia unveiled the RTX Pro Blackwell Server, a unified architecture designed for robotics development workloads, and Nvidia DGX Cloud, a cloud-based platform for managing such projects.

      These moves mark Nvidia’s continued push into robotics, positioning its AI GPUs for applications beyond data center operations.

      Source: https://techcrunch.com/2025/08/11/nvidia-unveils-new-cosmos-world-models-other-infra-for-physical-applications-of-ai/

      Deadline Extended: Submit a Project G-Assist Plug-In for a Chance to Win NVIDIA RTX GPUs and Laptops

      July 15, 2025 - The deadline for submissions to NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon has been extended to Sunday, July 20, at 11:59 PM PT. Participants can leverage resources from RTX AI Garage to create innovative plug-ins for a chance to win top-tier NVIDIA hardware.

      The hackathon challenges developers to expand the functionality of Project G-Assist, an experimental AI assistant in the NVIDIA App that helps users control and optimize GeForce RTX systems.

      Winners will receive prizes including a GeForce RTX 5090 laptop, NVIDIA GeForce RTX 5080 and RTX 5070 Founders Edition graphics cards, and NVIDIA Deep Learning Institute credits. Finalists could also be featured on NVIDIA’s social media channels.

      Project G-Assist: AI for Seamless Control

      Project G-Assist allows users to manage RTX GPUs and system settings with natural language commands. Powered by a small on-device language model, it is accessible directly through the NVIDIA overlay in the NVIDIA App—no need to leave applications or disrupt workflows.

      Developers can enhance G-Assist via plug-ins, connecting it to agentic frameworks such as Langflow. Plug-ins can be built in Python for rapid prototyping, in C++ for performance-intensive tasks, or tailored for custom hardware and OS automation.
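To give a feel for the kind of Python plug-in the hackathon calls for, here is a minimal sketch of a command dispatcher. Note this is illustrative only: the function name `get_gpu_temp`, the JSON message shape, and the `handle_request` entry point are assumptions for the example, not the actual G-Assist plug-in API (see NVIDIA's GitHub repository for the real interface).

```python
import json

def get_gpu_temp(params):
    # Placeholder handler: a real plug-in would query the GPU or an SDK here.
    return {"success": True, "message": "GPU temperature: 62C"}

# Map of command names to handlers; a plug-in would register its own commands.
COMMANDS = {"get_gpu_temp": get_gpu_temp}

def handle_request(raw):
    """Parse a JSON request, dispatch to a handler, and return a JSON reply."""
    try:
        req = json.loads(raw)
    except json.JSONDecodeError:
        return json.dumps({"success": False, "message": "malformed request"})
    handler = COMMANDS.get(req.get("func"))
    if handler is None:
        return json.dumps({"success": False, "message": "unknown command"})
    return json.dumps(handler(req.get("params", {})))

if __name__ == "__main__":
    print(handle_request('{"func": "get_gpu_temp"}'))
```

The dispatch-table pattern shown here is a common way to keep each command self-contained, which suits the hackathon's emphasis on small, composable plug-ins.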

      System Requirements

      Project G-Assist supports GeForce RTX 50, 40, or 30 Series desktop GPUs with at least 12GB of VRAM, Windows 10 or 11, a compatible CPU (Intel Pentium G Series and above or AMD Ryzen 3 and higher), recent GeForce Game Ready or NVIDIA Studio drivers, and specific storage requirements.

      Get Ready to Submit

      As the submission deadline nears, NVIDIA has provided key resources:

      On-demand webinar: NVIDIA senior software engineer Sydney Altobell shares tips on building G-Assist plug-ins, available on the NVIDIA Developer YouTube channel.

      Developer support: Engage with NVIDIA’s engineering team and fellow developers in the NVIDIA Developer Discord.

      GitHub repository: Access step-by-step guides, documentation, and sample plug-ins, including integrations for Discord, IFTTT, and Google Gemini.

      ChatGPT Plug-In Builder: Simplify development with OpenAI’s GPT builder for generating plug-in code.

      For a detailed walkthrough of plug-in architecture, check out NVIDIA’s technical blog, which uses a Twitch integration as an example.

      Visit the Hackathon entry page for submission requirements and details.

      Stay connected via NVIDIA AI PC on Facebook, Instagram, TikTok, and X, or subscribe to the RTX AI PC newsletter for updates. Follow NVIDIA Workstation on LinkedIn and X to explore more community innovations.

      Source: https://blogs.nvidia.com/blog/rtx-ai-garage-g-assist-hackathon-plug-in-last-chance/
