Dell is rolling out a new set of services designed to help businesses build their own generative AI systems and host them in on-premises data centers.
The computer giant worked with longtime partner Nvidia to expand on the Project Helix offering it debuted earlier this year to help companies run large language models (LLMs) from on-site servers.
The new additions include infrastructure and software for setting up an LLM or image-generation model, and workstations equipped with tools that let data scientists and developers build AI models locally before deploying them at scale.
The rollout comes as businesses of all sizes have rushed to figure out ways the much-hyped generative AI technology can fit into their operations and internal workflows. Dell’s announcement of Project Helix mentioned use cases like digital assistants, coding help, customer service support, and healthcare research.
“It’s very clear in our conversations with customers that there is a unique sense of urgency that organizations of all sizes, verticals, and geographies are facing right now in terms of adopting,” Varun Chhabra, Dell’s SVP of infrastructure and telecom marketing, said in a call with reporters and analysts.
Much of this activity so far has taken place on major cloud platforms; Google CEO Sundar Pichai recently claimed that 70% of generative AI unicorns are Google Cloud customers. But many companies have concerns about how their private data might be exposed when using public tools. Some would prefer to host their AI operations within their own servers, according to Chhabra.
“How do you minimize the risks? The risks of sharing company data on tools that have mixed privacy records at best, [and] unclear road maps around what happens with any data that’s entered in many of the tools that are out there today, as well as the risk of the data being leaked,” Chhabra said. “These are all things that organizations are grappling with as they consider how to tackle generative AI.”
Chhabra said the new suite of generative AI services and hardware can help companies either build an LLM from scratch or fine-tune an existing foundation model on internal company data; the latter is by far the easier and more common option for enterprises.
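To make the distinction concrete, fine-tuning typically means freezing a pretrained model's weights and training only a small number of new parameters on internal data, rather than training everything from scratch. The sketch below illustrates the idea with a LoRA-style low-rank adapter on a toy PyTorch layer; the model, data, and sizes are hypothetical stand-ins for illustration, not Dell's or Nvidia's actual stack.

```python
# Minimal sketch of parameter-efficient fine-tuning (LoRA-style): the base
# "foundation" weights are frozen, and only a small adapter trains on the
# organization's data. Toy model and random data stand in for real ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained foundation-model layer: weights stay frozen.
base = nn.Linear(16, 16)
for p in base.parameters():
    p.requires_grad = False

# Small low-rank adapter: the only part that trains.
rank = 2
lora_a = nn.Parameter(torch.randn(16, rank) * 0.01)
lora_b = nn.Parameter(torch.zeros(rank, 16))

def forward(x):
    # Frozen base output plus the trainable low-rank correction.
    return base(x) + x @ lora_a @ lora_b

# Placeholder for "internal company data".
x, y = torch.randn(64, 16), torch.randn(64, 16)
opt = torch.optim.Adam([lora_a, lora_b], lr=1e-2)

loss0 = nn.functional.mse_loss(forward(x), y).item()
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(forward(x), y)
    loss.backward()
    opt.step()
final_loss = loss.item()

trainable = sum(p.numel() for p in [lora_a, lora_b])
frozen = sum(p.numel() for p in base.parameters())
print(f"trainable={trainable} frozen={frozen} loss {loss0:.3f}->{final_loss:.3f}")
```

The point of the pattern is the parameter count: only the adapter's few dozen weights update, which is why adapting an existing model to internal data is far cheaper than pretraining one.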
While Chhabra said Dell does offer options for businesses looking to deploy on cloud platforms, the services announced this week focus specifically on on-premises and edge development and deployment, as well as on clients interested in porting AI systems built in the cloud into private data centers.
“This is meant to help customers that are looking to bring either portions or all of the generative AI workloads on-prem or start their initiatives on-prem because of data privacy, data security concerns,” Chhabra said. “So it’s really meant for on-prem but it can work for customers who are taking pre-existing models or doing some testing in the public cloud, but then want to deploy it on-prem at scale.”