By Steve Astorino, VP of Development, Data and AI and Canada Lab Director Software, IBM
“Open the pod bay doors, HAL.”
“I’m sorry, Dave. I’m afraid I can’t do that.”
Those are two classic lines from the 1968 Stanley Kubrick movie “2001: A Space Odyssey”.
Over the years, many movies have attempted to introduce us to Artificial Intelligence (AI). But a general lack of understanding, transparency, and explainability around AI has spread mistrust.
AI continues to demonstrate its value by enabling organizations to gain deeper insights into their data. It can help identify new trends and business opportunities that would otherwise be missed. It can help accelerate time to value when bringing new products and services to market, help predict outcomes more accurately, quickly, and consistently, and help prescribe actions – all of which create the potential for smarter business outcomes.
According to the McKinsey report “The state of AI in 2022—and a half decade in review”, AI adoption and AI capabilities in organizations have doubled since 2017, and budget allocations for AI as part of a digital transformation strategy have grown from 5% in 2018 to 52% in 2022.
I see the market being split into two groups: organizations that successfully embrace and use AI for business, and those that do not. The latter may struggle to compete and may eventually become extinct.
AI for the Masses
In recent months, there has been a lot of buzz around public generative AI engines that automatically produce text from written prompts in a way that appears remarkably advanced, creative, and conversational. This technology uses large language models trained on data from the internet, with an interface simple enough for the public to use.
Putting AI into the hands of the masses is exciting. However, the answers from these seemingly impressive conversational AI systems raise many issues, including but not limited to:
- Accuracy of the answers generated – the internet contains misinformation, inaccurate data, conspiracies, hate speech, etc.
- Accountability issues – answers often reference resources and scientific papers that may or may not exist
- Lack of transparency and explainability – as to why these systems arrived at the answers they did
Trustworthy, Explainable AI – Enterprise Ready
An increasing number of enterprises are transitioning their data integration and analytics capabilities to “AI-first” capabilities (prediction, automation, machine learning).
The impact of AI is being felt across industries: it is used everywhere for customer service, personal-assistant applications, and automated customer support. With continued advancements in natural-language text understanding, even more applications will embrace this capability.
Organizations need their AI systems to deliver accurate insights on their ever-changing and ever-growing enterprise data. They must fully understand their AI decision-making processes, with model monitoring and accountability, rather than trusting those systems blindly.
Accountability and explainability help build trustworthy AI. Only by embedding ethical principles into AI applications and processes can we build systems based on trust. For reference, IBM has laid out its perspective on AI Ethics.
IBM has spent decades building a portfolio of business-ready tools, applications, and solutions designed to help reduce the hurdles of AI adoption while optimizing for outcomes and responsible use.
IBM actively engages clients through its ecosystems to incorporate deep industry knowledge and technical expertise to meet the business needs of an organization. And researchers continue to invest in developing the next big advances in software and hardware to bring frictionless, cloud-native development, and use of foundation models to enterprise AI.
AI for Business
Watson Assistant is designed to learn as it goes, improving automatically over time by gaining knowledge from every conversation through a process called autolearning. Designed to accurately recognize what users want, Watson Assistant comes out of the box with the latest NLP techniques. It understands the flow of natural language, helping organizations build robust assistants that understand natural conversations. It can learn the vocabulary of an industry and internal terminology unique to an organization, and it can be customized to understand nuances like regional dialects and colloquialisms.
For ‘answer generation’, Watson Assistant can extend organizations’ conversational AI capabilities with an offering called NeuralSeek by Cerebral Blue. Queries asked in Watson Assistant are used to retrieve content via Watson Discovery. Generative pre-trained transformer technology is then deployed to generate a response based on the retrieved content, the query, and the full context of the conversation.
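The retrieve-then-generate flow described above can be sketched in a few lines of Python. This is a minimal illustration, not Watson code: `retrieve_documents` is a naive keyword matcher standing in for a search service like Watson Discovery, and `build_generation_prompt` stands in for the step where a generative model receives the query, retrieved content, and conversation context. All names here are hypothetical.

```python
# Hedged sketch of a retrieval-augmented "answer generation" flow.
# Function names are illustrative stand-ins, not Watson APIs.

def retrieve_documents(query, corpus):
    """Naive keyword retrieval: score each document by how many
    query terms it shares, standing in for a real search service."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored if score > 0]

def build_generation_prompt(query, retrieved, history):
    """Assemble the input a generative model would receive:
    conversation context, retrieved content, and the user query."""
    return "\n".join([
        "Conversation so far: " + " | ".join(history),
        "Retrieved content: " + " | ".join(retrieved),
        "Question: " + query,
    ])

corpus = [
    "Refunds are processed within 5 business days.",
    "Store hours are 9am to 5pm on weekdays.",
]
history = ["User asked about returning an item."]
query = "How long do refunds take?"

docs = retrieve_documents(query, corpus)
prompt = build_generation_prompt(query, docs, history)
# In a real deployment, `prompt` would be sent to a large language
# model, which generates the final grounded answer.
```

The key design point is that the model's answer is grounded in retrieved enterprise content rather than generated from internet-trained knowledge alone, which is what lets a business keep context and domain language in the responses.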
I view this as a positive way to leverage other generative AI technologies, because a business is still able to capture context and relevant domain language within the responses generated.
Summary and Next Steps
Generative AI, while impressive, is just one element of the bigger AI landscape. When used within an enterprise business setting, organizations must have Trustworthy AI. Accuracy, accountability, explainability, and ethics become the foundation of successful AI-based applications that the public can trust.