Las Vegas, United States – January 7, 2026 – Lenovo has announced a strategic partnership with NVIDIA aimed at helping cloud providers and enterprises rapidly deploy large-scale AI data centers, as demand for accelerated computing infrastructure continues to surge globally. The collaboration was unveiled at CES 2026 in Las Vegas and centers on a new initiative called the Lenovo AI Cloud Gigafactory with NVIDIA.
Under the partnership, Lenovo will combine its end-to-end data center infrastructure capabilities with NVIDIA's accelerated computing platforms to deliver what the companies describe as "AI-ready factories" that can move from design to deployment in weeks rather than months. The offering is designed to address mounting challenges around speed, scale, power density, and cooling as AI workloads grow more complex and compute-intensive.
Lenovo said the program brings together its manufacturing scale, lifecycle services, and Neptune liquid cooling technology with NVIDIA’s latest GPU and networking platforms. These include systems built around NVIDIA’s newest accelerated computing architectures intended for training and inference workloads across cloud, enterprise, and sovereign AI environments. The companies said the approach is aimed at simplifying data center construction and integration while reducing time to production for AI services.
Speaking at the launch, Lenovo Chairman and CEO Yang Yuanqing said the initiative “sets a new benchmark for scalable AI factory design,” enabling customers to deploy advanced AI infrastructure at unprecedented speed. NVIDIA founder and CEO Jensen Huang said the collaboration delivers “full-stack AI platforms” that integrate computing, networking, and software to power next-generation AI systems across data centers and edge environments.
Independent reporting noted that the Lenovo AI Cloud Gigafactory model is built around pre-integrated building blocks, reference architectures, and industrialized deployment processes. These elements are intended to help customers standardize AI infrastructure while retaining flexibility for custom configurations, including high-density liquid-cooled clusters and specialized AI workloads.
The announcement comes as data center operators face increasing pressure to scale capacity quickly while managing power availability, cooling constraints, and rising capital costs. Analysts say partnerships between infrastructure providers and chipmakers are becoming more critical as AI systems demand tighter integration between hardware, software, and facility design.
Lenovo and NVIDIA have worked together for years across AI and high-performance computing platforms, but the companies said this latest initiative represents a deeper operational alignment focused specifically on accelerating data center deployment timelines. The collaboration also reflects a broader industry shift toward turnkey AI infrastructure solutions as enterprises and cloud providers race to support generative AI, large language models, and real-time inference at scale.
With the launch of the AI Cloud Gigafactory, Lenovo is positioning itself as a central player in the global AI infrastructure buildout, while NVIDIA continues to extend its influence beyond chips into full-stack data center platforms. Together, the companies aim to help customers bring large-scale AI data centers online faster as competition for computing capacity intensifies worldwide.