
LivePerson Launches Generative AI Products and Tools

Deployment of New AI Tools Includes a Focus on Customer Safety and Control

The hype surrounding the use of generative AI and large language models (LLMs) has been hard to avoid, with many CX platform vendors and conversational technology providers quickly issuing press releases highlighting their products’ use of these new tools. However, few vendors have taken as aggressive an approach at the product level as LivePerson, which announced on May 2 the availability of its Conversational Cloud platform with new generative AI and LLM-driven capabilities, as well as a roadmap of future capabilities.

Representatives from LivePerson, a provider of conversational experiences to B2B companies, said at the launch event that the deployment of generative AI and LLM tools will focus on an extensive range of CX- and EX-focused use cases: empowering agents with guided workflows; providing automated summaries or delivering fully automated voice conversations; streamlining employee engagement by automating HR and other business workflows; providing customer insights and business intelligence from conversational data; and allowing individuals to create their own personal AI bots via LivePerson’s Bella AI platform.

“One of our most significant new features will be conversational insights,” explains Alex Kroman, EVP Product and Technology at LivePerson. “This is an improved conversational intelligence experience that will enable you to create more effective automations and better understand your customers. We are also working on enhanced integrations, which will enable conversations to trigger thousands of business actions by integrating with commonly used business platforms.”

LivePerson is fully leaning into these new AI tools, reflecting the company’s confidence in the guardrails incorporated into its platform to ensure their safe and fair use.

“The primary determinant of what your AI wants to and is able to talk about is the data you expose to it,” says Joe Bradley, Chief Data Scientist at LivePerson, noting that LLM responses are restricted to a curated collection of knowledge content that is managed within LivePerson’s controlled environment.
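The grounding approach Bradley describes can be illustrated with a minimal sketch: retrieve relevant snippets from a curated knowledge base and constrain the model's prompt to that content. All names and the keyword-matching retrieval here are illustrative assumptions; LivePerson's actual implementation is not public.

```python
# Hypothetical curated knowledge base, standing in for content managed
# within a controlled environment.
CURATED_KB = {
    "returns": "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval over the curated knowledge base."""
    q = question.lower()
    return [text for topic, text in CURATED_KB.items() if topic in q]

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to answer only from retrieved snippets."""
    snippets = retrieve(question)
    if not snippets:
        # Nothing relevant in the curated content: force a refusal.
        return "Reply exactly: 'I don't have information on that.'"
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using ONLY the knowledge below. If the answer is not "
        f"covered, say you don't know.\n\nKnowledge:\n{context}\n\n"
        f"Question: {question}"
    )
```

In a real deployment the keyword lookup would be replaced by semantic search over managed knowledge content, but the control point is the same: the data exposed to the model determines what it can talk about.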

Bradley adds that customers will be able to choose between more permissive and stricter governors of the generative AI, in the form of prompts and other controls tested on hundreds of use cases. Brands can also simply use these tools behind the scenes in support of a human agent, helping the agent work faster and more efficiently.

LivePerson also noted that the platform allows customers to test the technology extensively before exposing it to their own customers, and will include tools to measure and handle conversation and answer quality regression to make sure new versions of a bot are safer than older ones. The platform also includes monitoring and interruption capabilities, which can be set up to ensure that the platform does not go off the rails.
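The regression-testing idea can be sketched as a simple gate: score a new bot version against a fixed evaluation set and block rollout if it performs worse than the current version. The word-overlap scoring and all names below are illustrative assumptions, not LivePerson's actual tooling.

```python
def overlap_score(answer: str, reference: str) -> float:
    """Fraction of reference words present in the answer (a crude quality proxy)."""
    ref = set(reference.lower().split())
    ans = set(answer.lower().split())
    return len(ref & ans) / len(ref) if ref else 0.0

def passes_regression(eval_set, old_bot, new_bot) -> bool:
    """Gate a release: eval_set is a list of (question, reference_answer) pairs,
    and old_bot / new_bot are callables mapping a question to an answer."""
    old = sum(overlap_score(old_bot(q), ref) for q, ref in eval_set)
    new = sum(overlap_score(new_bot(q), ref) for q, ref in eval_set)
    return new >= old  # ship only if the new version is no worse

# Usage: a new version that drops reference content fails the gate.
eval_set = [("return window?", "returns accepted within 30 days")]
old_bot = lambda q: "returns accepted within 30 days"
new_bot = lambda q: "we accept returns"
```

A production version would use stronger quality metrics and safety checks, but the pattern — compare versions on the same conversational data before release — is what the announcement describes.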

“You’ll also have access to real-time sensors that can identify new types of problems inherent with generative AI like hallucination, prompt abuse, and critically, you’ll have the ability to train and refine your own sensors like these with your data and your human agent feedback,” Bradley says. “You’ll even have a separate generative AI bot that can test your AI with thousands of conversations using your conversational data to simulate your own customer’s behavior with your AI.”

Perhaps most importantly for customers, LivePerson clearly laid out its roadmap for the incorporation of generative AI and LLMs within its platform. This approach stands in contrast to many other CX platforms and technology providers, which have been far less transparent in their plans for incorporating generative AI and LLMs into their products.

With the initial platform launch, generative AI capabilities powered by LLMs will be available to help agents deliver a better omnichannel experience (providing recommended answers, content summarization, and the creation of no-code virtual assistants that can interact via voice or digital channels). Also available at launch are LLM-powered voice bots that can handle phone conversations and direct customers to the channel best suited to help them based on intent, sentiment, and specific needs, along with VoiceBase analytics for training and improvement. The self-service Bella AI service is also available now, allowing the automated creation of conversational experiences without the need for complex setup processes or programming expertise.

Later in the summer, LivePerson will launch conversational insights, enhanced employee engagement templates for IT and HR use cases, and 1,500+ integrations connecting LLMs with automated content curation, enabling the resolution of actions across Voice or Messaging AI. Additionally, LLM functionality across the three initial launch categories (Generative AI, Voice AI, and Bella AI) will be expanded, including enhanced safety features, self-service options, and insights capabilities.

Author Information

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.

