ServiceNow, a vendor known for automating enterprise workflows, is making a move with generative AI to transform traditionally slow business processes.
At its ongoing Knowledge 23 conference, the Santa Clara, California-based company said it is partnering with Nvidia to develop custom generative AI models for various enterprise functions, starting with IT workflows.
“IT is the nervous system of every modern enterprise in every industry. Our collaboration to build super-specialized generative AI for enterprises will boost the capability and productivity of IT professionals worldwide using the ServiceNow platform,” Nvidia founder and CEO Jensen Huang said while making the announcement with ServiceNow president and COO CJ Desai.
The partnership will see ServiceNow leverage Nvidia’s software, services and accelerated infrastructure. It comes at a time when global enterprises continue to explore the potential of generative AI for driving efficiencies.
The need for custom generative AI
While generative AI models like the popular GPT series perform well on general tasks, they are trained primarily on publicly available data. For enterprise use cases, this can limit their effectiveness, as the models have not been exposed to internal company data.
For instance, if an employee asks about connecting to a company’s VPN or about an internal policy, public models may not be able to answer the question accurately.
With this partnership with Nvidia, ServiceNow is looking to address this gap by building custom generative AI models for enterprises, fine-tuned to learn from a company’s vocabulary and provide accurate, domain-specific answers.
“It will all start with the IT domain, using Nvidia’s NeMo foundational models as the starting point as well as Nvidia GPUs. Upon these, the capabilities will be built,” Rama Akkiraju, VP of AI/ML for IT at Nvidia, said in a press briefing. The custom generative AI models will be provided via ServiceNow’s Now platform, which already offers AI functions to automate enterprise workflows across departments.
Planned use cases
With custom LLMs, Akkiraju said, ServiceNow will allow its customers to target multiple use cases within IT service management and IT operations management, including support ticket summarization and resolution, incident severity prediction, and semantic search for IT policies (and other documentation) through a central chatbot experience.
Moving ahead, the same technology could also come in handy in improving employee experiences by surfacing growth opportunities. For instance, the model could deliver customized learning and development recommendations, such as courses, based on natural language queries and information from an employee’s profile.
As a customer of ServiceNow, Nvidia also plans to share its data for initial research and development of custom models aimed at handling IT-specific use cases. The companies are starting off with ticket summarization, a process that takes about seven to eight minutes when done manually by agents but could be instantly handled by AI models.
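Neither company has published code for this workflow, but the core pattern it implies — assembling structured ticket fields into a prompt that a fine-tuned model can summarize — can be sketched in a few lines. This is an illustrative sketch only; the `Ticket` class and `build_summary_prompt` function are hypothetical, not part of any ServiceNow or Nvidia API.

```python
from dataclasses import dataclass, field

# Hypothetical representation of an IT support ticket; real ServiceNow
# records carry many more fields. For illustration only.
@dataclass
class Ticket:
    ticket_id: str
    short_description: str
    work_notes: list[str] = field(default_factory=list)
    resolution: str = ""

def build_summary_prompt(ticket: Ticket) -> str:
    """Assemble ticket fields into a prompt a fine-tuned LLM could summarize."""
    notes = "\n".join(f"- {n}" for n in ticket.work_notes)
    return (
        "Summarize the following IT support ticket in two sentences.\n"
        f"Ticket {ticket.ticket_id}: {ticket.short_description}\n"
        f"Work notes:\n{notes}\n"
        f"Resolution: {ticket.resolution}"
    )

ticket = Ticket(
    ticket_id="INC0012345",
    short_description="User cannot connect to corporate VPN",
    work_notes=["Verified credentials", "Reset MFA token"],
    resolution="Reissued VPN certificate; user confirmed access restored.",
)
prompt = build_summary_prompt(ticket)
print(prompt)
```

The value of the custom model in this scenario comes from fine-tuning on a company's own resolved tickets, so that the generated summaries use the organization's vocabulary rather than generic phrasing.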
For this task, ServiceNow is using Nvidia AI Foundations cloud services and the Nvidia AI Enterprise software platform, which includes the NeMo framework and NeMo guardrails. The custom models, Nvidia said, will be running on hybrid-cloud infrastructure consisting of Nvidia DGX Cloud and on-premises Nvidia DGX SuperPOD AI supercomputers.
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.