HPE & NVIDIA expand AI portfolio to deliver secure and scalable AI
HPE has announced an expansion of its NVIDIA AI Computing by HPE portfolio, targeting the deployment and scaling of secure, private AI infrastructure for governments, regulated industries, and enterprises.
Organisations across many sectors continue to face challenges with fragmented AI strategies and data silos. According to HPE's 2025 Architecting an AI Advantage report, nearly 60 per cent of organisations have disjointed AI goals, and a similar proportion lack comprehensive data management for their AI initiatives. HPE and NVIDIA have taken a unified approach to infrastructure, introducing a suite of solutions to help companies develop holistic AI environments and strategies.
Fidelma Russo, Executive Vice President and General Manager, Hybrid Cloud and CTO at HPE, commented:
"To accelerate widespread AI adoption in enterprises, technology must directly address the core challenges that organisations face around complex deployments and fragmented, highly sensitive data. Together with NVIDIA, we offer a different approach with full-stack, private AI factories that simplify operations and help enterprises and governments scale quickly while staying compliant."
Justin Boitano, Vice President, Enterprise Software at NVIDIA, also spoke about the collaboration:
"AI factories are the new infrastructure of the intelligence era - built to generate tokens of intelligence at massive scale. NVIDIA and HPE are building these full-stack systems - integrating NVIDIA Blackwell, NVIDIA networking software and AI Data Platform reference designs - to power agentic AI, unlock automation and accelerate digital transformation for every industry."
Secure and scalable AI factories
The latest expansion brings the second generation of HPE Private Cloud AI, co-developed with NVIDIA, to customers in a smaller form factor, aiming to shorten the time to value from AI deployments. The updated platform now offers HPE ProLiant Compute DL380a Gen12 servers with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, a combination HPE says delivers a threefold price-performance improvement for enterprise AI workloads. The DL380a Gen12 has achieved seven top rankings in MLPerf Inference v5.1 benchmarks, including a record result with the RTX PRO 6000.
For government use, HPE Private Cloud AI will support the NVIDIA AI Factory for Government. This is a full-stack, end-to-end reference design intended to meet compliance requirements for high-assurance organisations deploying multiple AI workloads either on-premises or in a hybrid cloud.
Security enhancements include air-gapped management, which enables the creation of isolated cloud environments to meet the secure, compliant deployment needs of governments and regulated industries.
HPE Services is also offering new capabilities using HPE Private Cloud AI and NVIDIA NeMo to deliver digital avatar assistants, designed to enhance customer interaction and support across sectors such as smart cities, retail, manufacturing, healthcare, banking and finance, and education.
To aid with faster AI project outcomes, HPE now provides a system adoption accelerator service for its Private Cloud Developer Edition, including post-installation testing, sample pipelines, and knowledge transfer sessions for data science teams.
The Town of Vail has become a lighthouse customer, launching the HPE Agentic Smart City Solution. This initiative aims to scale citywide smart infrastructure using secure, integrated AI systems, with use cases spanning accessibility compliance, permitting, and wildfire detection. Partners such as SHI and NVIDIA are working with the Town of Vail on this programme.
Agentic AI data management
HPE's new unified data layer now incorporates agentic AI and supports unstructured data storage even in air-gapped environments. To speed up the AI data lifecycle, the solution leverages the global namespace of HPE Data Fabric Software and HPE Alletra Storage MP X10000. Combined with the NVIDIA AI Data Platform, it supplies AI-ready data to applications, models, and agents.
The X10000 storage system can be managed in air-gapped environments, offering cloud-like management where external network access must be limited. It now supports NVIDIA S3 over RDMA, which enables up to twice the throughput through direct memory transfers between GPUs, system memory, and the storage platform.
HPE Data Fabric brings agentic AI-powered governance using Model Context Protocol (MCP), enabling secure model-to-data interactions and consistent compliance enforcement. This federates data of all types to provide "data without borders" for AI pipelines.
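For readers unfamiliar with MCP: it is an open standard built on JSON-RPC 2.0, in which a client asks a server which tools it exposes and then invokes them with structured arguments. The sketch below builds two MCP-style requests to illustrate the message shape; the method names come from the public MCP specification, but the tool name `query_records` is a hypothetical example, not an HPE Data Fabric API.

```python
import json

def mcp_request(method, params=None, msg_id=1):
    """Build a JSON-RPC 2.0 request of the kind MCP clients send."""
    msg = {"jsonrpc": "2.0", "id": msg_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask a server which tools (data operations) it exposes.
list_tools = mcp_request("tools/list")

# Invoke a hypothetical governed data tool with arguments.
call_tool = mcp_request(
    "tools/call",
    params={"name": "query_records",
            "arguments": {"dataset": "sales", "limit": 10}},
    msg_id=2,
)

print(list_tools)
print(call_tool)
```

Because every model-to-data interaction flows through requests like these, a governance layer can inspect, authorise, or log each tool call before any data is touched, which is the pattern the article describes.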
Expanded support
The NVIDIA AI Computing by HPE portfolio has been expanded to cater for developers, enterprises, and sovereign entities looking to scale their AI deployments. The HPE ProLiant Compute XD685 now accommodates eight NVIDIA Blackwell Ultra B300 GPUs in a 5U, direct-liquid cooled HGX chassis, featuring built-in security and integrated cluster management for large, secure AI clusters.
The NVIDIA GB300 NVL72 by HPE, aimed at supporting AI models with over 1 trillion parameters, is now available for order, with shipping expected in December 2025.
The HPE ProLiant Compute DL380 Gen12 Server Premier Solution for Azure Local is set to deliver secure hybrid cloud for enterprises wishing to run Azure services anywhere, accelerating AI innovation with direct datacentre deployment and support for NVIDIA RTX PRO 6000 GPUs.
Support for NVIDIA GPUs has expanded across eight HPE ProLiant platforms, giving customers in sectors such as research, healthcare, finance, and retail the flexibility for scalable, high-performance AI workloads. HPE ProLiant and HPE Cray servers are expected to be among the first to support the latest NVIDIA Rubin CPX and Vera Rubin platforms, as well as the NVIDIA ConnectX-9 SuperNIC and BlueField-4 DPU. BlueField-4 delivers 800 Gb/s throughput, multi-tenant networking, secure AI runtimes, and high-performance data access and inference processing for large-scale AI projects.