Shaping the Future of AI with Line of Business: Copenhagen Roundtable Highlights
At our recent roundtable event in Copenhagen, we hosted engaging discussions on accelerating the adoption and impact of responsible AI. The gathering brought together diverse perspectives on the future of AI in business and how organizations can responsibly integrate advanced technologies. This summary captures the essence of the evening’s conversations, highlighting key insights and takeaways on the journey toward building trustworthy and responsible AI systems.
Starting the evening
The evening kicked off with two presentations emphasizing the importance of a use-case-driven approach in AI projects, and a curated list of use cases framed the conversation. Did you know that, according to a recent global IBM study, 75% of surveyed CEOs believe that the future of AI in business hinges on access to the most advanced generative AI technologies?1 Meanwhile, 43% of respondents revealed that their organizations are already leveraging generative AI for strategic decisions.
This data highlighted the urgency for businesses to align with AI-driven innovation, setting the stage for deeper discussions on its transformative potential.
Views on responsible AI
Responsible AI is the practice of developing and deploying AI in a way that is ethical, transparent, and trustworthy. It emphasizes creating AI systems that align with core ethical principles, ensuring they are fair, unbiased, and respectful of individual privacy. This approach acknowledges the rapid adoption of AI technologies and the unique ethical challenges they bring, such as addressing algorithmic bias, enabling transparency in decision-making, and safeguarding personal data.
A key element of responsible AI is building a governance framework to guide its development and use. This framework involves processes and tools to proactively manage risks, monitor unfair outcomes, and ensure compliance with ethical and legal standards. Such measures are crucial to fostering an ecosystem of accountability and responsibility in the field of AI.
Human-centered design is another pillar of responsible AI. This method focuses on designing AI systems that prioritize the needs and concerns of people. It calls for ensuring these technologies build trust and confidence among users, meeting both functional expectations and ethical responsibilities. By putting people at the heart of the design process, responsible AI aims to create solutions that are not only effective but also socially beneficial.
Organizations also play a vital role in advancing responsible AI. So, do you think trust in AI should be a priority? At the roundtable, a recurring theme was the idea that transparency and ethical integrity are non-negotiable for long-term success. Organizations must inspire confidence by being upfront about how their AI systems operate and adhering to established ethical standards.
Ultimately, responsible AI seeks to ensure that AI technologies serve humanity in a fair and equitable manner, mitigating risks and empowering society with tools that are thoughtful, inclusive, and beneficial.
The EU AI Act: A new era of regulation
One of the evening’s focal points was the EU AI Act, which emphasizes embedding ethical considerations into AI development. How do organizations begin aligning with such regulations? The need for robust governance frameworks was discussed extensively, including tools to automate risk management, monitor biases, and ensure compliance.
Additionally, we discussed the need to develop a framework and guidelines for navigating the complexities of responsible AI, ensuring that technological advancements align with organizational values and regulatory standards. The audience acknowledged this as a global trend, with over 40 U.S. states actively working on or enforcing AI regulations.2
How do you begin the process?
After a detailed discussion on the EU AI Act, participants shared actionable steps for businesses operating in the European Union to achieve compliance:
- Conduct an AI gap analysis: Assess your current and planned use of AI against the act’s requirements, then run a risk assessment and prioritize the high-risk areas.
- Identify your AI systems: Create an overview of your own AI systems and of where AI is embedded in systems provided by external suppliers (a minimal inventory sketch follows these steps).
- Engage with legal experts: Consulting legal professionals who specialize in EU regulations can clarify the act’s implications for your specific business context.
- Invest in ethical AI practices: Strengthen your ethical AI guidelines to align with the act’s standards, including transparent data usage, bias mitigation, and privacy assurance.
- Implement robust data governance: The act emphasizes data quality and security, so evaluate your data governance policies to ensure they meet or exceed its stipulations.
- Foster an AI-literate workforce: Education and training programs can prepare your team for the shift toward regulated AI use, emphasizing ethical considerations and compliance.
These steps serve as a practical roadmap for organizations embarking on their responsible AI journey.
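To make the first two steps more tangible, here is a minimal, hypothetical sketch of an AI system inventory with risk tiers, written in Python. The tiers loosely mirror the categories used in the EU AI Act; every system name, owner, and note below is invented for illustration rather than drawn from the discussion.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Risk tiers loosely following the EU AI Act's categories."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"    # transparency obligations apply
    MINIMAL = "minimal"


@dataclass
class AISystem:
    name: str
    owner: str           # accountable business unit
    supplier: str        # "internal" or the external vendor
    purpose: str
    risk_tier: RiskTier
    gap_notes: str = ""  # open items from the gap analysis


# Hypothetical inventory entries, for illustration only.
inventory = [
    AISystem("CV screening assistant", "HR", "external vendor",
             "Ranks incoming job applications", RiskTier.HIGH,
             "Document human oversight and bias monitoring"),
    AISystem("Support chatbot", "Customer Service", "internal",
             "Answers first-line support questions", RiskTier.LIMITED,
             "Inform users they are interacting with AI"),
]

# Surface high-risk systems first, as the gap analysis recommends.
for system in sorted(inventory, key=lambda s: s.risk_tier is RiskTier.HIGH, reverse=True):
    print(f"{system.risk_tier.value:>12}  {system.name} ({system.supplier}): {system.gap_notes}")
```

Even a lightweight inventory like this makes the gap analysis easier to prioritize, because high-risk systems and their open compliance items surface first.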
Audience’s AI maturity level
A show of hands revealed an impressive statistic that left many in the room pleasantly surprised. Out of 37 delegates, a remarkable 34 were actively engaged in the AI adoption phase in some capacity. This overwhelming majority highlighted how quickly organizations are recognizing the value and potential of AI-driven innovation, or perhaps they are motivated by the “fear of missing out,” as stated by many during the evening. Only three delegates were still exploring AI in the awareness phase, and only two had not yet made any moves toward AI engagement.
AI-powered support function
This prompted a question from a delegate: Is anyone actively scaling these pilot projects to a production phase? A delegate shared insights into a successful AI support project designed to automate the company’s support function. He explained that the first line of support had been enhanced, enabling faster and more accurate responses for end-users. As a side benefit, the AI co-worker also enabled faster onboarding of new support staff with minimal up-front training.
From a technical perspective, they used an LLM trained on a narrow dataset, focused specifically on internal information related to the support function. Why was this significant? The approach not only reduced costs but also helped ensure outcomes were aligned with ethical principles like accountability and transparency.
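The delegate did not go into implementation details, but the pattern he described, an LLM scoped to a narrow set of internal support content, is often approximated by retrieving only vetted internal articles and passing them to the model as context. The sketch below is a hypothetical Python illustration of that idea; the retrieval is deliberately naive, and call_support_llm is a placeholder for whichever internally approved model endpoint an organization actually uses.

```python
from dataclasses import dataclass


@dataclass
class SupportArticle:
    """One vetted internal knowledge-base article."""
    title: str
    body: str


def retrieve(query: str, articles: list[SupportArticle], top_k: int = 2) -> list[SupportArticle]:
    """Naive keyword-overlap retrieval; a production system would use embeddings."""
    terms = set(query.lower().split())
    ranked = sorted(
        articles,
        key=lambda a: len(terms & set(a.body.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def call_support_llm(prompt: str) -> str:
    """Placeholder for the organization's approved, internally hosted model endpoint."""
    raise NotImplementedError("Connect this to your own model endpoint.")


def answer(query: str, articles: list[SupportArticle]) -> str:
    """Build a prompt that restricts the model to internal support content only."""
    context = "\n\n".join(f"{a.title}:\n{a.body}" for a in retrieve(query, articles))
    prompt = (
        "Answer ONLY using the internal support articles below. "
        "If they do not cover the question, say so and escalate to a human agent.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return call_support_llm(prompt)
```

Restricting the model’s context to vetted internal material, and escalating to a human when that material does not cover a question, is one concrete way to support the accountability and transparency goals mentioned above.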
Expanding AI to blue-collar workers
When asked which areas businesses should focus on, a C-level executive highlighted current AI initiatives aimed at optimizing white-collar tasks at headquarters. Efforts are also underway to optimize blue-collar jobs, aiming to enhance the delivery of healthcare, assistance, and safety services for patients. The focus is on improving service quality to better support and save patients in need.
The examples highlighted use cases that go beyond cutting costs: they show how the funds unlocked can drive future growth. The immediate savings help businesses operate leaner by reducing costs and streamlining processes, and the freed-up resources create the opportunity to invest in scaling operations and reaching new markets. Surplus funds can also fuel innovative ventures, paving the way for new revenue streams.
Will our roles survive the shift?
Leaders openly shared their employees’ concerns about the potential loss of jobs as AI projects became more prevalent. This fear has naturally led to resistance when attempting to implement such initiatives. However, a senior figure offered an optimistic perspective, reminding everyone that our nation has a history of resilience and transformation.
Over the past 100 years, we have successfully transitioned from a manufacturing-based economy to a thriving knowledge society by continuously adapting to change. He emphasized that while fear of the unknown is understandable, history shows us that adaptation often leads to progress.
Will AI bring new dynamics to your job?
While AI will undoubtedly transform the job market, it will not necessarily lead to widespread unemployment. Instead, it will change the nature of work, creating new opportunities and requiring workers to adapt and upskill. Employees who harness the power of AI will replace those who do not, leading to a more efficient and innovative workforce. As we move forward, it’s crucial to focus on how AI can complement human skills and enhance job roles rather than fearing its potential to replace us. Doing so can ensure a future where AI and humans work together to achieve greater heights. Curious how your job tasks are being affected? Get the full scoop in this blog.
Master the mindset for success
A delegate pointed out that various analysts’ reports highlight the importance of engaging both employees and leadership to achieve success. His concern was that most of his company’s leadership team consists of older white men, which raised the question: how do we ensure they understand the opportunities presented by AI? One suggestion was to bring in more young talent to fuel the AI journey and drive its implementation within the organization. However, others argued that senior employees are adapting to AI at a rapid pace. All in all, the conclusion seems to be that a balanced, bottom-up approach is needed, combined with more training to enable the entire workforce to adopt AI effectively.
Conclusion
The evening highlighted the pressing need for cultural change to adopt AI responsibly. Discussions around ethics, privacy, and regulatory compliance emphasized the importance of robust governance frameworks and proactive strategies.
If your organization is exploring the future of AI in business, start by identifying use cases that align with your goals. Connect with us to access a catalog of AI use cases or participate in our workshops designed to help identify the best starting point for your company.
References
1 CEO decision-making in the age of AI, IBM: https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/2023-ceo
2 US state-by-state AI legislation snapshot, BCLP: https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html
3 LTIMindtree Canvas.ai is an enterprise-ready GenAI platform that accelerates the concept-to-value: https://www.ltimindtree.com/ltimindtree-canvas-ai/