The Need for Ethical and Mindful Use of Gen AI
“Trust is the currency of interactions,” says Rachel Botsman[i], author of Who Can You Trust? and creator of the first course on trust in the digital world at Oxford University’s Saïd Business School. When customers cannot trust their interactions and relationships with a business, it can lose as much as 30 percent of its value[ii]. The arrival of Generative AI (Gen AI) throws a curveball into the management of trust. Although Gen AI makes the tantalizing promise of revolutionizing business productivity, there are deep concerns around its ethical use, which can erode trust if not handled correctly. Understanding these ethical issues, and using that wisdom to keep the trust of customers, partners, and society, is an immediate responsibility that businesses must fulfill.
Gen AI uses multi-modal large language models (MLLMs), trained through self-supervised learning, to understand human language. Algorithms, neural networks, and deep learning techniques then build on these models to generate new text, summarize documents, carry out translations, write code, hold conversations, and produce music, video, and images that did not exist before. This process is not without gray areas, bringing common ethical principles into the crosshairs of business. These ethical issues concern data provenance and bias, copyright violations, transparency, accountability, and data privacy.
There are several instances where Gen AI has proven to be problematic. In a recent incident, a lawyer in the US reportedly used ChatGPT for case research. The judge found that six of the cases cited did not exist and carried bogus judicial citations. The lawyer was unaware that content created by ChatGPT could be inaccurate or even false[iii]. This is not an isolated incident. Parents, teachers, and administrators worry that children will use Gen AI applications to create their college assignments and pass them off as their own. Truthfulness and accuracy are at stake.
Another clear problem is associated with the data being used to train LLMs. The data could have originated anywhere (mostly the internet), and using it without permission can result in copyright violations. This September, 17 authors and the Authors Guild in the US sued OpenAI for copyright infringement, claiming that OpenAI used the authors’ work to train its AI tools without permission[iv]. Designer and educator Steven Zapata, who is fighting to protect artists’ rights, says, “The performance of the model would not be possible without all of the data fed into it – much of it copyrighted.”[v]
For now, the most widely felt ethical problems revolve around three practices: using biased or false information as input; data laundering, that is, using someone else’s data to manufacture the content that runs your systems and applications; and passing off Gen AI content as your own.
To unravel how businesses were approaching Gen AI and the challenges around its use, LTIMindtree surveyed 450 early adopters of the technology across the US, Europe, and the Nordics. Called The State of Generative AI Adoption, the study found that leaders were focusing on developing “mindful” AI. As many as 60 percent of the organizations that had extensively adopted Gen AI, whether across multiple functions or the entire organization, said they regularly monitored and evaluated AI systems for potential biases and took corrective action where needed. Those with moderate adoption of the technology (67 percent) were doing likewise. Across the surveyed group, 79 percent regularly audited their usage of Gen AI. These leaders and early adopters tell us a story: if standards of safety, reliability, security, and ethics are not maintained, there will be trouble ahead and a loss of trust. Beyond brand erosion, legal penalties can be costly.
Advancing the business with generative AI and investing in ethical policies go hand in hand. Organizations that solve ethical challenges will move ahead with confidence. They will engage AI practitioners and researchers to help build models that filter misinformation; they will use ethically sourced, unbiased data to train their models; they will control how those models are used; they will welcome regulatory bodies to examine their systems; and they will remain transparent with their customers, clearly indicating where Gen AI is used in their processes.
Businesses must create strategies – backed by talent – to build trust models when using Gen AI. They must have safeguards for the use of data and their self-learning algorithms. They must create processes that identify and stop the use of misinformation. They must proactively inform customers and users of flaws and breaches that endanger their privacy or safety.
Organizations will do well to consider the early creation of a body such as a Department of Digital Trust, headed by a full-time Digital Ethics Officer. Deploying Gen AI cannot be considered successful until a structured approach to ethics is in place.
Our study distills the strategies of 450 leading decision-makers around Gen AI. It looks at who is adopting the technology, why it is being adopted, and the best ways to guarantee successful adoption.
[i] Rachel Botsman: An Economy of Trust, Mike Sturm, NORDIC Business Report, February 4, 2018: https://www.nbforum.com/nbreport/rachel-botsman-economy-trust/
[ii] Good News for Disgraced Companies: You Can Regain Trust, Lane Lambert, Working Knowledge, Harvard Business School, July 7, 2021: https://hbswk.hbs.edu/item/good-news-for-disgraced-companies-you-can-regain-trust
[iii] ChatGPT: US lawyer admits using AI for case research, Kathryn Armstrong, BBC News, May 27, 2023: https://www.bbc.com/news/world-us-canada-65735769
[iv] George R.R. Martin Among 17 Authors Suing OpenAI for Copyright Infringement, Charlie Wacholz, IGN India, September 21, 2023: https://in.ign.com/news/194577/george-rr-martin-among-17-authors-suing-openai-for-copyright-infringement
[v] The End of Art: An Argument Against Image AIs, Steven Zapata Art, October 18, 2022: https://www.youtube.com/watch?v=tjSxFAGP9Ss&t=1831s
Nachiket Deshpande, Chief Operating Officer and Executive Board Member, LTI