Synthetic Data For Accelerated Product Release
With competition intensifying and a constant push to onboard new customers, insurance players are always looking to bring tailored products to market quickly. Speed, however, cannot come at the expense of the factors that shape a successful launch. Some challenges stem from product features or attributes that fail to align with customer needs, usually because the provider has not kept pace with a fast-changing market. A more common and critical challenge is technical: a lack of modern tooling or inadequate test data can leave a lasting dent in a product launch.
The provider's brand image is at stake; with every new product launch, gaps anywhere in the life cycle can cause serious damage. A product is ready for launch only when it has been tested end to end with ample data covering the widest possible range of scenarios, and this is non-negotiable. Every relevant permutation and combination of customer profiles, scenarios, and financial goals has to be exercised to make sure the product runs smoothly in the real world. While this sounds straightforward, in practice there are a number of challenges which, if not addressed in time, can delay product releases and erode the edge over competitors.
Some of these challenges include:
Data availability in large chunks – A large volume of policy data is needed to test a new product before launch. Producing it takes considerable effort, because each data set spans many attributes. To create a single life insurance policy, for instance, attributes ranging from customer profile, billing, premiums, agents, and riders to fund details must all be populated; covering these features can mean filling in 200 to 300 attributes, a tremendous effort if done manually.
Data collisions in the downstream applications – Data generated from different sources often collides, leaving corrupted or unusable data for testing. For instance, a personal identification number collision can occur in a downstream system fed from multiple data sources, rendering the data useless for the applications that depend on that system. A minimal sketch of collision-free bulk generation appears after this list.
Data quality – Correctness of data is a significant factor and is often compromised by human error or product configuration defects. Such defects add to the cycle time and delay the product release.
Data conditioning – Unless the generated data is in the required shape or status, it goes stale and becomes unfit for product testing. Requirements span varied scenarios: policies in active, lapsed, or terminated status, surrendered policies, policies with loans availed, policies with death claims, reinstated policies, and more. Without a well-defined conditioning process, product releases only slip further.
Backdated, current, and future-dated policies/data – Test scenarios for life insurance policies are mostly time-bound; many call for backdated policies, while others demand current or future-dated ones. Without a way to set these dates, teams end up with conditional sign-offs that can surface as real issues after the product launch. A short sketch of such status and date conditioning follows this list.
Integration with multiple systems – Creating new policy data requires associating multiple source systems with the destination system. Fund and agent details, for instance, may originate in different sources, and integrating them is often challenging and time consuming if attempted manually.
Varied test scenarios – A product needs to be tested against all possible scenarios, covering both positive and negative flows, so that it does not fail in the customer's hands after launch. This is possible only when the testing and data generation teams fully understand the product specifications; beyond business analysts and product owners, other teams may not, and scenarios can be missed.
Inadequate data for regression testing needs – Regression testing matters because existing applications and data lakes are affected whenever a new product enters the system. Confirming that all existing products and functionalities remain intact is crucial for business continuity, and it demands data in very large quantities; without proper tooling, few tasks are more challenging.
Data masking on sensitive data/fields – One way to provision data for regression testing is to bring production data down to the test environment. Masking customer-specific sensitive data, however, is a task that calls for automated tooling. This approach also supports only the testing of existing products that already have a customer base; it does not help test new products before launch. A minimal masking sketch appears after this list.
Regulatory standards – Generated data must adhere to the standards of the regulatory bodies in each target country. Overlooking this can lead to major issues once the product is live.
Knowledge repository – Any flaw identified in policy data needs to be captured along with the applied solution, so that it serves as a knowledge base for similar issues in the future. This also speeds up regression testing, since the likelihood of recurring defects drops considerably. A mechanism or methodology that supports this is crucial.
Product/scenario comparison – Different versions of a product, or sets of similar scenarios, must be tested to identify the version that best meets customer needs and expectations. Similarly, a new product may need to be exposed to limited traffic or users to surface real-world issues before full launch. A/B testing is a methodology that addresses both situations, and it in turn demands that appropriate data sets be generated; a small cohort-split sketch appears after this list.
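To make the data availability and collision challenges above concrete, here is a minimal Python sketch of bulk synthetic policy generation with collision-free identifiers. The attribute groups, field names, and source-system prefixes are illustrative assumptions, not the schema of any real policy administration system.

```python
import random
import uuid

# Hypothetical attribute groups for a synthetic life insurance policy record.
# A real product may carry 200-300 attributes across groups like these.
RIDERS = ["waiver_of_premium", "accidental_death", "critical_illness"]
BILLING_MODES = ["monthly", "quarterly", "annual"]

def generate_policy(source_system: str) -> dict:
    """Build one synthetic policy record with a collision-free identifier."""
    return {
        # Prefixing the source system and using a UUID avoids identifier
        # collisions when records from several generators meet downstream.
        "policy_id": f"{source_system}-{uuid.uuid4()}",
        "customer_profile": {
            "customer_id": str(uuid.uuid4()),
            "age": random.randint(18, 70),
            "smoker": random.choice([True, False]),
        },
        "billing": {
            "mode": random.choice(BILLING_MODES),
            "premium": round(random.uniform(50.0, 500.0), 2),
        },
        "agent_id": f"AGT-{random.randint(10000, 99999)}",
        "riders": random.sample(RIDERS, k=random.randint(0, len(RIDERS))),
        "fund_allocation": {"equity": 0.6, "bond": 0.4},
    }

if __name__ == "__main__":
    # Generate a small batch from two hypothetical source systems.
    batch = [generate_policy("CORE") for _ in range(100)] + [
        generate_policy("PARTNER") for _ in range(100)
    ]
    assert len({p["policy_id"] for p in batch}) == len(batch)  # no collisions
    print(batch[0])
```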
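Under the same assumptions, a sketch of data conditioning: each generated policy is fixed to a status and to a backdated, current, or future-dated effective date, matching the kinds of scenarios a test plan typically calls for.

```python
from datetime import date, timedelta

# Statuses named in typical life insurance test plans (illustrative).
STATUSES = ["active", "lapsed", "terminated", "surrendered", "reinstated"]

def condition_policy(policy: dict, status: str, day_offset: int) -> dict:
    """Return a copy of the policy fixed to a status and effective date.

    A negative day_offset backdates the policy, zero keeps it current-dated,
    and a positive offset future-dates it.
    """
    assert status in STATUSES, f"unknown status: {status}"
    conditioned = dict(policy)
    conditioned["status"] = status
    conditioned["effective_date"] = (
        date.today() + timedelta(days=day_offset)
    ).isoformat()
    return conditioned

if __name__ == "__main__":
    base = {"policy_id": "CORE-0001", "premium": 120.0}  # minimal stand-in record
    # One backdated lapsed policy, one current active, one future-dated active.
    for status, offset in [("lapsed", -400), ("active", 0), ("active", 90)]:
        print(condition_policy(base, status, offset))
```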
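For the masking challenge, a minimal sketch of deterministic masking of sensitive fields. The list of sensitive fields and the salting scheme are assumptions for illustration; a production-grade solution would also handle format preservation, key management, and referential integrity across systems.

```python
import hashlib

# Fields assumed to carry personally identifiable information (illustrative).
SENSITIVE_FIELDS = {"ssn", "first_name", "last_name", "email"}

def mask_record(record: dict, salt: str = "test-env-salt") -> dict:
    """Replace sensitive values with salted, deterministic tokens.

    Deterministic hashing keeps links between systems intact (the same SSN
    always masks to the same token) while hiding the original value.
    """
    masked = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS and value is not None:
            digest = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
            masked[field] = f"MASKED-{digest[:12]}"
        else:
            masked[field] = value
    return masked

if __name__ == "__main__":
    prod_row = {"policy_id": "P-1001", "ssn": "123-45-6789",
                "first_name": "Jane", "premium": 210.5}
    print(mask_record(prod_row))
```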
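And for product/scenario comparison, a small sketch of deterministically splitting a synthetic policy set into A and B cohorts, so that each product version is exercised by a stable subset of the data across test runs. The split ratio and identifier format are assumptions.

```python
import hashlib

def assign_variant(policy_id: str, split: float = 0.5) -> str:
    """Deterministically route a policy to variant 'A' or 'B'.

    Hashing the identifier keeps the assignment stable across test runs,
    so the same synthetic policy always exercises the same product version.
    """
    bucket = int(hashlib.md5(policy_id.encode()).hexdigest(), 16) % 100
    return "A" if bucket < split * 100 else "B"

if __name__ == "__main__":
    ids = [f"P-{i:04d}" for i in range(1000)]
    cohort_a = [p for p in ids if assign_variant(p) == "A"]
    print(f"{len(cohort_a)} policies routed to version A, "
          f"{len(ids) - len(cohort_a)} to version B")
```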
There are insurance industry-specific tools that can address all these challenges and drive product releases in an automated manner with minimal effort. These tools not only simulate scenarios and generate data, but also provide data conditioning to make the data fit for testing requirements. Synthetic data generated from such tools serves regression needs as well as new products that do not yet have a customer base. Capturing product specifications and converting them into configurable rules in these tools is usually a one-time activity; it ensures data quality and accuracy while significantly reducing the effort needed to generate data.
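As a rough illustration of what configurable rules can look like in practice, here is a minimal Python sketch in which a product specification is captured once as a rule table and then reused to validate synthetic policy records. The field names, limits, and rule structure are assumptions for the example, not the configuration format of any specific tool.

```python
# Hypothetical rule set capturing a product specification once, then reused
# for validating (or steering the generation of) synthetic policies.
PRODUCT_RULES = {
    "issue_age": {"min": 18, "max": 65},
    "premium_mode": {"allowed": ["monthly", "quarterly", "annual"]},
    "face_amount": {"min": 25_000, "max": 2_000_000},
}

def validate(policy: dict, rules: dict = PRODUCT_RULES) -> list[str]:
    """Return a list of rule violations; an empty list means the record is clean."""
    errors = []
    for field, rule in rules.items():
        value = policy.get(field)
        if value is None:
            errors.append(f"{field}: missing")
        elif "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"{field}: {value!r} not in {rule['allowed']}")
        elif "min" in rule and not (rule["min"] <= value <= rule["max"]):
            errors.append(f"{field}: {value} outside [{rule['min']}, {rule['max']}]")
    return errors

if __name__ == "__main__":
    # A record violating the age rule but satisfying the others.
    print(validate({"issue_age": 72, "premium_mode": "annual", "face_amount": 50_000}))
```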
LTIMindtree’s offering for accelerating product releases to the market, MindPronto, brings differentiation to the product ideation, design, configuration, and testing process through a ‘Lego’-like building block approach. The solution works on the principle of decomposing products and coverages/benefits into features. These features are brought to life through the business rules associated with them and are rolled into templatized definitions of products and coverages, referred to as templates within the tool. The templates are then used to instantiate the products (plans) and coverages that are sold in the market.
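The following is a simplified, hypothetical Python sketch of that building block idea: features carrying business rules are assembled into coverage templates, which are then instantiated as a sellable plan. It illustrates the composition principle only; it is not MindPronto's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A reusable building block, e.g. a premium waiver or surrender benefit."""
    name: str
    business_rule: str  # expressed as plain text for illustration only

@dataclass
class CoverageTemplate:
    """A templatized coverage definition assembled from features."""
    name: str
    features: list[Feature] = field(default_factory=list)

@dataclass
class ProductPlan:
    """A market-ready plan instantiated from templates with concrete values."""
    plan_code: str
    templates: list[CoverageTemplate]
    parameters: dict

if __name__ == "__main__":
    # Compose a hypothetical term-life plan from reusable blocks.
    waiver = Feature("premium_waiver", "waive premiums on total disability")
    term_cover = CoverageTemplate("term_life_base", [waiver])
    plan = ProductPlan("TL-2024-01", [term_cover],
                       {"term_years": 20, "face_amount": 500_000})
    print(plan.plan_code, [t.name for t in plan.templates])
```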
Prominent features of the tool include standardization, reusability, synthetic data generation for end-to-end test scenarios, and effective test data management. Agile development and BDD are the way forward in product development, and their key enablers are reusability and regression testing across varied scenarios, along with accumulating and transmitting the knowledge acquired over time. The tool associates business processes and test scenarios with the product and process hierarchy, defines test scenarios, and generates test data, thereby ensuring comprehensive scenario coverage, rapid scaling of the business knowledge required for testing, and a reusable approach to test strategy definition for every product launch or update. It also simulates different test scenarios, supporting the decision-making needed to arrive at the scenario or product version that best aligns with customer needs.