How to Accelerate the Adoption of a Transparent and Adaptable AI?
Introduction
ChatGPT has been an unprecedented game changer, breaking product lifecycle records even as it landed in the trough of disillusionment before its full potential could be realized. Meanwhile, enterprises of all sizes are investing time and energy to quickly develop and release their own Large Language Models (LLMs). A recent Gartner poll of 2,500 executives found that 45% have increased investment in AI due to ChatGPT, 70% are exploring Gen AI in their organizations, and 19% are in pilot or production mode. According to Bloomberg, the global Gen AI market is projected to reach a staggering USD 1.3 trillion by 2032, which highlights its immense capability and potential.
Fig: Expected GenAI revenue
A recent McKinsey report notes that nearly 75% of the value generated by Gen AI use cases lies in four key areas: customer operations, marketing and sales, software engineering, and R&D.
Fig: Transformation enabled by Gen AI
However, the Stanford Center for AI Safety recently concluded in a white paper that most developers are not fully aware of the characteristics of the AI models they build on. Additionally, these models face three major challenges:
- Hallucination, in which the model produces output that is fictitious or factually false
- Generating biased, toxic, and harmful responses
- Inability to cite the right source information
Though there is no immediate panacea for these challenges, the broader ecosystem comprising research institutions, private entities, and governments is collaborating on innovative solutions to address them efficiently and effectively. One radical improvement that has caught the attention of experts in recent times is Constitutional AI. This cutting-edge approach, proposed by Anthropic in its research paper, holds huge potential to revolutionize AI governance by using an empirical approach to building safe AI.
What is Constitutional AI?
Constitutional AI (CAI) aims to align AI models with constitution-like principles of law and governance. At its core, it attempts to augment the decision-making process of models by analyzing large bodies of legal frameworks, historical precedents, and social dynamics to ensure fair and unbiased governance. By leveraging robust Machine Learning (ML) algorithms, CAI can process vast amounts of data, enabling users to make well-informed decisions in real time.
Fig: Anthropic’s CAI approach to training models
In the broader sense, CAI refers to the use of AI techniques to supervise and manage the behavior of AI models, including LLMs like GPT-3.5/4. It establishes rules, principles, and ethical guidelines and sets constraints on the outputs generated by AI models to ensure they align with societal values, prevent harmful biases, and maintain ethical standards. Thus, it aims to govern AI models to uphold human rights, privacy, and fairness.
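The supervision loop described above can be sketched in a few lines of code. The snippet below is a minimal, illustrative toy, not Anthropic's actual method: the `model` function is a stand-in for a real LLM API call, and the principles, the keyword-based `violates` check, and the `constitutional_reply` helper are all hypothetical names invented for this example. In a real CAI pipeline, the model itself critiques and revises its responses against the constitution.

```python
# Toy sketch of a constitution-guided critique-and-revision loop.
# All names here are hypothetical; `model` stands in for an LLM call.

PRINCIPLES = [
    "Do not include insults or harassing language.",
    "Do not provide instructions for harmful activity.",
]

def model(prompt: str) -> str:
    """Stand-in for an LLM API call; returns canned replies for the demo."""
    if "insult" in prompt.lower():
        return "You fool, here is your answer."
    return "Here is a polite, helpful answer."

def violates(response: str, principle: str) -> bool:
    """Toy critique step: a real system would ask the model itself to
    critique the response against the principle."""
    banned = {"fool", "idiot"}
    return any(word in response.lower() for word in banned)

def constitutional_reply(prompt: str) -> str:
    response = model(prompt)
    for principle in PRINCIPLES:
        if violates(response, principle):
            # Revision step: ask the model to rewrite the response so it
            # complies with the violated principle.
            response = model(f"Rewrite to satisfy '{principle}': {response}")
    return response

print(constitutional_reply("Please insult me."))
```

The key design point is that the constraint lives outside the model as an explicit, inspectable list of principles, which is what makes the approach transparent and adaptable: changing the system's behavior means editing the constitution, not retraining from scratch.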
Anthropic, an organization focused on AI safety and governance, has developed controlled AI systems using the elements of CAI. OpenAI is also developing AI-powered tools to support legal research and assist policymakers.
These systems provide a way to interact with AI models in a safer and more controlled manner, emphasizing responsible and ethical AI use. These Generative AI models cater to distinct aspects of governance, such as impact analysis, decision augmentation, facilitation, transparency promotion, and the safeguarding of constitutional values.
Do leading hyperscalers care about CAI?
Amazon Web Services (AWS)
While specific CAI initiatives have not been widely highlighted, AWS has shown interest in R&D related to rule-based AI systems and explainable AI to enhance transparency. AWS also offers services like Amazon Bedrock, SageMaker, Amazon Rekognition, Amazon Polly, and Amazon Lex that can be customized to align with specific user-defined guidelines for the models.
The Amazon Titan FMs are also built to detect and remove harmful content in the data and reject inappropriate content in the user input. They also filter model outputs containing inappropriate content such as hate speech, profanity, and violence. Amazon has also recently decided to invest billions of dollars in Anthropic, which will train and deploy future foundation models on AWS Trainium and Inferentia chips, taking advantage of AWS's high-performance, low-cost machine learning accelerators.
Microsoft Azure
Microsoft has published guidelines and best practices for responsible AI, which indirectly contribute to the development of AI models governed by user-defined rules. Azure offers tools such as Azure Machine Learning and the Azure Cognitive Services suite, which bundles services like text analytics and language understanding and can be customized to adhere to user-defined rules.
Google Cloud Platform (GCP)
Google’s parent company, Alphabet, is backing Anthropic with a vision of creating safe AI. In addition, GCP, like the other hyperscalers, offers services such as AutoML.
All three hyperscalers are investing significant time, energy, and money in AI research, development, and partnerships to advance the ethical aspects of AI, which could contribute to the development of AI systems with user-defined rules.
Potential use cases for enterprises
The adoption of CAI by enterprises and businesses is also gaining traction. Organizations are gradually leveraging AI to ensure compliance with laws and regulations, thereby enhancing corporate governance and mitigating legal risks.
Other potential use cases being contemplated include:
- Aiding medical professionals in analyzing patients’ medical records
- Generating responsible responses in customer service, public Q&A, emails, and chats
- Improving contact center agent performance
- Productivity-related search, document editing, and content generation
- Increasing hiring process efficiency by creating job descriptions or analyzing interviews
- Empowering educational institutions to offer personalized coaching
- Offering strategic insights to enterprises
We at LTIMindtree have enabled many of our customers to embark on this Gen AI-led digital transformation journey. Some of our notable Gen AI use cases, backed by responsible AI models, include field agent assist, digital twins for industries, an earnings GPT for investor relations, an accelerated code generator, document processing, and content moderation.
Conclusion
As AI becomes an integral part of our daily lives, a robust AI governance mechanism is essential. Just as our constitutions and governance frameworks support, empower, and govern us, CAI appears to be a promising governing framework to safeguard us against biased algorithms, revolutionize decision-making processes, enhance transparency, uphold human values, and promote an equitable, ethical, and prosperous future in every digital-driven field and industry.
CAI promises a world where technology can coexist harmoniously with humanity. And it’s just getting started.
Please reach out to us to not only harness the value of Generative AI but also build responsible and safe Generative AI models at scale.