Art and Science of AI Implementation
AI in Digital Transformation Has Become Necessary Worldwide: This Is Today’s Rhetoric!
The adoption of AI must transition from merely implementing pilots and proofs of concept to fully integrated, end-to-end solutions at scale. This transformation resembles a finely tuned mechanical clock: all the moving parts, that is, people, processes, and technology, must be in sync for productive AI utilization. Only then can we successfully bridge the recurring gaps, commonly referred to as ‘AI winters,’ that have punctuated the field since Turing’s work in the 1950s.
According to the latest IBM data, the proportion of large enterprises actively deploying AI has plateaued at 44 percent, and the share of organizations contemplating AI deployment or experimenting with the technology remains stable at around 40 percent. However, in 2023, 59 percent of the companies already exploring or deploying AI reported accelerating their rollouts and investment in the technology, indicating growing commitment to and confidence in AI.[i]
As the integration of AI in digital transformation progresses, it is essential to recognize that not everything that incorporates a large language model (LLM) in its architecture constitutes AI on its own.
The term AI, coined by Prof. John McCarthy in 1955, was redefined on November 8, 2023, when the Organisation for Economic Co-operation and Development (OECD) defined AI as a machine-based system that infers how to generate outputs for explicit or implicit objectives, potentially influencing physical or virtual environments. AI systems exhibit varying levels of autonomy and adaptiveness after deployment.
Further, not all AI solutions are identical in scope and purpose, particularly concerning input/output data. We have transitioned from Generic AI to Generative AI, with artificial general intelligence (AGI) on the horizon, highlighting the distinction between Conversational AI vs. Generative AI.
Figure 1: Conversational AI vs. Generative AI: Core Differences, Varun Saharawat, PW Skills, December 14, 2023: https://pwskills.com/blog/conversational-ai-vs-generative-ai-whats-the-difference/
LLMs can be broadly classified into two types: generative AI (like OpenAI’s GPT models), which creates text, and interpretive AI (like BERT), which understands text. Now, let’s delve into the science behind AI in digital transformation.
Science operates causally, so AI readiness must be assured from the outset. Two fundamental readiness checks are essential before introducing the ‘hows’ of this scientific process: one assesses data maturity and readiness, while the other articulates the problem statement along with the expected outcomes, that is, the readiness of the solution to answer customers’ issues.
Data VnV (Validation & Verification)
The approach explained here is the culmination of insights gained by my team since January 2023, focused on ensuring ‘AI implementation at scale’ for a product aimed at enhancing productivity across the software development life cycle (SDLC). I have simplified the explanation by employing validation and verification, starting with the process visualized below. Both paths lead to the implementation process, with various stage gates (entry criteria) cascading into the subsequent qualification criteria.
Figure 2: The Implementation Validation and Verification (V-n-V) process
Data is often referred to as the new fuel; it is crucial for implementation and requires checks for both quantity and quality. The customer’s transaction start and stop times are necessary for proper synchronization between databases, ensuring correct data flow during transit and storage.
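These quantity and quality checks can be automated. The sketch below is a minimal, illustrative example, not the actual product’s logic: it assumes each record carries a transaction ID plus start/stop timestamps, and verifies that row counts match and that each transaction’s start/stop window survives transit and storage intact.

```python
from datetime import datetime, timedelta

def check_sync(source_rows, target_rows):
    """Quantity check: row counts must match after transit.
    Quality check: each transaction's start/stop window must survive
    transit and storage intact, with stop never preceding start."""
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"count mismatch: {len(source_rows)} source vs {len(target_rows)} target")
    for src, tgt in zip(source_rows, target_rows):
        if src["txn_id"] != tgt["txn_id"]:
            issues.append(f"ordering mismatch at txn {src['txn_id']}")
            continue
        if tgt["stop"] < tgt["start"]:
            issues.append(f"txn {tgt['txn_id']}: stop precedes start")
        if (tgt["start"], tgt["stop"]) != (src["start"], src["stop"]):
            issues.append(f"txn {tgt['txn_id']}: start/stop drifted in transit")
    return issues

# Two in-memory stand-ins for the source and target databases
t0 = datetime(2024, 1, 10, 9, 0, 0)
source = [{"txn_id": 1, "start": t0, "stop": t0 + timedelta(seconds=30)}]
target = [{"txn_id": 1, "start": t0, "stop": t0 + timedelta(seconds=30)}]
print(check_sync(source, target))  # an empty list means the databases agree
```

In practice, such a check runs as a stage gate: a non-empty issue list blocks the pipeline before the data reaches the ML model.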
Verifying artifacts and creating a fitment sheet serve as the entry point to the functional specifications document (FSD), followed by the scope of work (SOW). All stakeholders sign off on the FSD and SOW as success criteria before the kickoff is announced.
The entire implementation approach resembles DevOps but with a nuance: machine learning operations (MLOps)[ii] also includes validation at every point, from trace collection (session tracing) to mapping databases and finally syncing to the database ready for ML model (LLM) interaction. This process is iterative and customized to the specific use case rather than a one-size-fits-all solution. Therefore, both arms of the “V” are actioned in parallel, with overlapping timelines, to determine the overall implementation timeline.
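The cascade of stage gates described above can be sketched as a simple pipeline in which each gate is an entry criterion that must pass before the next stage can start. The gate names mirror the stages mentioned above, but the criteria themselves are hypothetical placeholders, not the product’s real checks.

```python
# Hypothetical stage gates for the V-n-V flow; the criteria are illustrative.
def gate_trace_collection(ctx):
    # entry criterion: session traces have been collected
    return bool(ctx.get("sessions_traced"))

def gate_db_mapping(ctx):
    # entry criterion: every source table has been mapped
    return ctx.get("tables_mapped", 0) == ctx.get("tables_total", -1)

def gate_llm_sync(ctx):
    # entry criterion: the synced database is ready for LLM interaction
    return ctx.get("db_ready_for_llm", False)

GATES = [
    ("trace collection", gate_trace_collection),
    ("database mapping", gate_db_mapping),
    ("LLM sync", gate_llm_sync),
]

def run_stage_gates(ctx):
    """Cascade through the gates; stop at the first failed entry criterion."""
    for name, gate in GATES:
        if not gate(ctx):
            return f"blocked at: {name}"
    return "all gates passed"

ctx = {"sessions_traced": 1200, "tables_mapped": 8, "tables_total": 8,
       "db_ready_for_llm": True}
print(run_stage_gates(ctx))  # prints "all gates passed"
```

Because each gate only reads a shared context, the same skeleton can be rerun on every iteration of the V, which is what makes the parallel, overlapping timelines workable.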
Even before we enter the client’s environment, the system integration testing (SIT) environment is migrated to a sandbox created by the team, closer to the end customer’s integrated development environment (IDE). This ensures that any changes are assessed in advance, so the build can be transferred to the end customer with minimal additional effort. This involves significant co-innovation, akin to a thin-rim model of innovation: we meet customer expectations for UX/UI through quick fixes at the edges, aligning closely with their business flow and the voice of the customer, without altering core product functionalities essential to the technology stack.
These efforts necessitate automation at all touchpoints: servers are not available 24/7, and ML deployment is continuous, which requires setting up an IDE with its dependencies daily. We must therefore eliminate operational debt. A ready-reckoner handbook containing scripts and deployment strategies is crucial for smooth implementation.
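A ready-reckoner entry for the daily setup might look like the sketch below: an idempotent script that verifies the required tools are present and (re)creates the sandbox workspace, so the environment comes up each day without manual steps. The tool names and paths are illustrative assumptions, not the team’s actual script.

```shell
#!/bin/sh
# Hypothetical daily setup script from the ready-reckoner handbook.
set -e

WORKDIR="${WORKDIR:-./ml_sandbox}"

# fail fast if a required tool is missing
for tool in git python3; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool" >&2; exit 1; }
done

# safe to rerun every day: mkdir -p is a no-op when the dirs already exist
mkdir -p "$WORKDIR/logs" "$WORKDIR/artifacts"
echo "sandbox ready at $WORKDIR"
```

Making every step of the script rerunnable is what keeps operational debt from accumulating: the same command works on day one and on day one hundred.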
Moving forward, ensuring that the implementation approach and solution build comply with global safety standards and meet the end customer’s expectations is essential. Therefore, before transferring the final build, we must clear a combination of collaborative automated system testing/dynamic application security testing (CAST/DAST) and customer-specific local policies.
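A pre-transfer gate of this kind can be modeled as a function that combines automated scan findings (e.g., from a DAST run) with the customer’s local policy. The severity labels, finding IDs, and denylist format below are assumptions for illustration, not any particular scanner’s output.

```python
# Hypothetical pre-transfer release gate combining scan findings with
# a customer-specific policy denylist.
BLOCKING_SEVERITIES = {"critical", "high"}

def release_gate(findings, local_policy_denylist):
    """findings: list of {'id': str, 'severity': str};
    local_policy_denylist: finding ids the customer forbids outright,
    regardless of severity."""
    blockers = sorted(
        f["id"] for f in findings
        if f["severity"] in BLOCKING_SEVERITIES or f["id"] in local_policy_denylist
    )
    return ("blocked", blockers) if blockers else ("release", [])

findings = [
    {"id": "XSS-12", "severity": "high"},
    {"id": "INFO-3", "severity": "low"},
]
print(release_gate(findings, local_policy_denylist={"INFO-3"}))
# ('blocked', ['INFO-3', 'XSS-12'])
```

Note that the low-severity finding still blocks the release because the customer’s local policy forbids it, which is exactly the interplay between global standards and local policies described above.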
Additionally, adopting a zero-trust[iii] approach is crucial in the current landscape of growing AI regulation. In 2024, the groundwork is being laid for the EU AI Act to take effect within the next two years, leading to the establishment of risk management frameworks. Meanwhile, in the United States, regulatory bodies and case law will target companies involved in algorithmic discrimination or using insufficient data and dark patterns.
There may also be contention between regulations focusing solely on AI (such as the EU AI Act) and broader legislation, including existing laws and recent enactments like the Digital Services Act and the Digital Markets Act.
While the EU once looked to China for inspiration to ban social scoring by algorithmic systems in the EU AI Act, China is now quietly setting the AI regulatory standard, possibly influencing others in 2024.[iv]
Figure 3: From MFA to Achieving Zero Trust
Summary
This visualization warns against putting the cart before the horse: don’t put what you want to do before how you need to do it. In essence, a data-conscious approach to implementation is crucial. The solution should progress as the horse pulls the cart, not vice versa.
Figure 4: What comes first, data or AI?
Therefore, both the art of possibility (the use cases) and the science of implementation must align with the end outcomes. These outcomes should be mutually agreed upon with the customer and delivered incrementally. As explained above, the stage-gate approach is critical for successful AI implementation, with a singular metric: increased productivity.
AI is now essential for digital transformation worldwide. To succeed, we must embrace a balanced approach, combining innovation, collaboration, and automation. By doing so, we can boost productivity, enhance quality, and stay compliant with regulations. Let’s not forget the transformative power of AI as we move forward, shaping the future of industries around the globe.
Canvas Insights Success Story for Wealth Management SaaS Provider in North America
Long regression test cycles slow down speed to market for complex, business-critical legacy applications, leading to limited visibility into code increments and reduced sprint velocity.
We implemented the Canvas Insights Test Optimization and Defect Advisor features to address this issue. The solution enabled faster and more frequent regression test cycles for critical applications on a SaaS-based transfer agency platform and correlated over 3,000 business test cases with source code files, yielding effort savings of 45 to 70 percent in regression cycles.
As a result, clients can now deliver both speed to market and improved quality deliverables as part of their agile transformation journey.
[i] IBM Report Suggests Early Adopters Driving Enterprise AI Adoption But Barriers to Adoption Remain, Ali Azhar, Datanami, January 10, 2024: https://www.datanami.com/2024/01/10/ibm-report-suggests-early-adopters-driving-enterprise-ai-adoption-but-barriers-to-adoption-remain/
[ii] Guest Post — Beyond Generative AI: The Indispensable Role of BERT in Scholarly Publishing, Dustin Smith, The Scholarly Kitchen, January 11, 2024: https://scholarlykitchen.sspnet.org/2024/01/11/guest-post-beyond-generative-ai-the-indispensable-role-of-bert-in-scholarly-publishing/
[iii] From MFA to Zero Trust: A Five-Phase Journey to Securing the Workforce, Cisco Public, December 2020: https://www.cisco.com/c/dam/global/en_uk/products/collateral/security/zero-trust/mfa-zero-trust-five-phase-journey-securing-workforce.pdf
[iv] The State of AI Regulations in 2024, Holistic AI, January 01, 2024: https://www.holisticai.com/papers/the-state-of-ai-regulations-in-2024