HPE & NVIDIA update GreenLake, setting AI benchmarks
Wed, 20th Mar 2024

Hewlett Packard Enterprise (HPE), in collaboration with NVIDIA, has announced a major update to HPE GreenLake for File Storage. The update introduces high-density all-flash storage designed for the most demanding AI and data lake workloads, with a focus on operational efficiency and performance. These enhancements aim to set new benchmarks in AI and offer a glimpse of where the technology is heading.

HPE also unveiled a series of updates across its AI-native portfolio to advance the operationalisation of generative AI, deep learning, and machine learning applications. These include the availability of two full-stack generative AI solutions co-engineered by HPE and NVIDIA, a preview of HPE Machine Learning Inference Software, a retrieval-augmented generation (RAG) reference architecture, and support to develop future products based on the new NVIDIA Blackwell platform.

"To deliver on the promise of generative AI and effectively address the full AI lifecycle, solutions must be hybrid by design," stated Antonio Neri, President and CEO at HPE. He further added, "HPE and NVIDIA have a long history of collaborative innovation, and we will continue to deliver co-designed AI software and hardware solutions that help our customers accelerate the development and deployment of generative AI from concept into production."

The newly announced solution enables large enterprises, research institutions, and government entities to streamline model development with an AI/ML software stack that accelerates generative AI and deep learning projects. The turnkey offering is aimed at AI research centres and large enterprises seeking to improve time-to-value and speed up training.

HPE's enterprise computing solution for generative AI is now available to customers directly or through GreenLake with a flexible and scalable pay-per-use model. This full-stack solution allows businesses to tailor foundational models using private data and deploy generative AI applications within a hybrid cloud model.

Featuring a high-performance AI compute cluster and software from HPE and NVIDIA, the solution is ideal for lightweight fine-tuning of models, RAG, and scale-out inference. Inference time also decreases linearly with node count, improving business productivity with AI applications such as virtual assistants, intelligent chatbots, and enterprise search.
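
As a rough illustration of that scaling claim (the figures below are invented, not from HPE), splitting a fixed batch of inference requests evenly across more nodes shortens wall-clock time in proportion to the node count, ignoring real-world batching, scheduling, and network overheads:

```python
# Back-of-the-envelope illustration of linear scale-out (figures invented):
# a fixed batch of inference requests split evenly across nodes finishes in
# time inversely proportional to the node count. Real clusters lose some
# efficiency to batching, scheduling, and network overheads.
REQUESTS = 10_000
REQUESTS_PER_NODE_PER_SECOND = 50  # assumed per-node throughput

for nodes in (1, 2, 4, 8):
    seconds = REQUESTS / (nodes * REQUESTS_PER_NODE_PER_SECOND)
    print(f"{nodes} node(s): ~{seconds:.0f} s to clear the batch")
```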

To help enterprises facing the AI skills gap, HPE Services experts will assist in designing, deploying, and managing the solution, including the application of model tuning techniques. HPE and NVIDIA have also collaborated on software solutions geared towards turning AI and ML proofs-of-concept into production applications. These solutions will integrate with NVIDIA NIM to deliver optimised foundation models via pre-built containers.
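
For context, NIM packages optimised models in containers that expose a standard, OpenAI-compatible HTTP endpoint. The short sketch below shows roughly what querying such a container could look like; the local URL, port, API key, and model identifier are illustrative assumptions rather than details from HPE's or NVIDIA's announcement.

```python
# Hypothetical sketch of querying a NIM container through its OpenAI-compatible
# endpoint. The local URL, port, API key, and model identifier are assumptions
# made for illustration; consult NVIDIA's NIM documentation for real values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed address of a locally running container
    api_key="not-needed-for-local-testing",
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # illustrative model identifier
    messages=[{"role": "user", "content": "Summarise this quarter's support tickets."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```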

To support enterprises that need to quickly build and deploy generative AI applications that use private data, HPE developed a reference architecture for RAG. This offering provides a comprehensive data foundation for businesses to create customised chatbots, generators, or copilots.
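
For readers unfamiliar with the pattern, RAG retrieves the private documents most relevant to a query and passes them to a model as context for its answer. The toy sketch below illustrates that flow; it uses made-up documents, a bag-of-words stand-in for a real embedding model, and a stubbed-out model call, and is a generic illustration rather than HPE's reference architecture.

```python
# Minimal illustration of the retrieval-augmented generation (RAG) pattern:
# retrieve the most relevant private documents, then pass them to a model as
# context. Toy sketch only; the generate() stub stands in for whatever
# inference endpoint a real deployment would call.
from collections import Counter
import math

documents = [
    "HPE GreenLake offers file storage with a pay-per-use model.",
    "The enterprise computing solution supports fine-tuning and inference.",
    "Support tickets are handled through the customer portal.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank the private documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for a call to a hosted model (e.g. an inference microservice)."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

query = "How is GreenLake file storage billed?"
context = "\n".join(retrieve(query))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}"))
```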

The update also points to continued collaboration between HPE and NVIDIA, with HPE planning to develop future products based on the newly announced NVIDIA Blackwell platform, which is designed to accelerate generative AI workloads.