Insurance providers are rich with data far beyond what they once had at their disposal for traditional historical analysis. The quantity, variety, and complexity of that data enhance the ability of insurers to gain greater insights into consumers, market trends, and strategies to improve their bottom line. But which projects offer you the best return on your investment? Here’s a glimpse at some of the most common insurance analytics project use cases that can transform the capabilities of your business.
Acquiring New Customers
Use your historical data to predict when a customer is most likely to buy a new policy.
Both traditional insurance providers and digital newcomers are competing for the same customer base. As a result, acquiring new customers requires targeted outreach with the right message at the moment a buyer is ready to purchase a specific type of insurance.
Predictive analytics allows insurance companies to evaluate the demographics of the target audience, their buying signals, preferences, buying patterns, pricing sensitivity, and a variety of other data points that forecast buyer readiness. This real-time data empowers insurers to reach prospective buyers with customized messaging that makes them more likely to convert.
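To make this concrete, here is a minimal sketch of a propensity-to-buy model. The file name, feature columns, and label are hypothetical stand-ins for whatever your CRM and quoting systems actually capture.

```python
# Minimal propensity-to-buy sketch; column names and file are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Historical prospect data: demographics, engagement signals, and a label
# for whether the prospect ultimately purchased a policy.
df = pd.read_csv("prospects.csv")
features = ["age", "household_size", "quote_requests_90d", "site_visits_30d"]
X, y = df[features], df["purchased"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Score current prospects; outreach starts with the highest-propensity names.
df["buy_propensity"] = model.predict_proba(X)[:, 1]
print(df.nlargest(10, "buy_propensity")[["prospect_id", "buy_propensity"]])
```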
Quoting Accurate Premiums
Provide instant access to correct quotes and speed up the time to purchase.
Consumers want the best value when shopping for insurance coverage, but if their quote fails to match their premium, they’ll take their business elsewhere. Insurers hoping to acquire and retain policyholders need to ensure their quotes are precise – no matter how complex the policy.
For example, one of our clients wanted to provide ride-share drivers with four-hour customized micro policies on-demand. Using real-time analytical functionality, we enabled them to quickly and accurately underwrite policies on the spot.
Improving Customer Experience
Better understand your customer’s preferences and optimize future interactions.
A positive customer experience means strong customer retention, a better brand reputation, and a reduced likelihood that a customer will leave you for the competition. In an interview with CMSWire, the CEO of John Hancock Insurance said many customers see the whole process as “cumbersome, invasive, and long.” A key solution is reaching out to customers in a way that balances automation and human interaction.
For example, the right analytics platform can help your agents engage policyholders at a deeper level. It can combine the customer story and their preferences from across customer channels to provide more personalized interactions that make customers feel valued.
Detecting Fraud
Stop fraud before it happens.
You want to provide all of your customers with the most economical coverage, but unnecessary costs inflate your overall expenses. Enterprise analytics platforms enable claims analysis to evaluate petabytes of data to detect trends that indicate fraud, waste, and abuse.
See for yourself how a tool like Tableau can help you quickly spot suspicious behavior with visual insurance fraud analysis.
Improving Operations and Financials
Access and analyze financial data in real time.
In 2019, ongoing economic growth, rising interest rates, and higher investment income were creating ideal conditions for insurers. However, those conditions only pay off if a company is maximizing its operations and ledgers.
Now, high-powered analytics has the potential to provide insurers with a real-time understanding of loss ratios (incurred losses as a share of earned premiums), using a wide range of data points to evaluate which of your customers are underpaying or overpaying.
Are you interested in learning how a modern analytics platform like Tableau, Power BI, Looker, or other BI technologies can help you drive ROI for your insurance organization? Schedule a no-cost insurance whiteboarding strategy session to explore the full potential of your insurance data.
With your experience in the insurance industry, you understand more than most how the actions of a smattering of people can cause disproportionate damage. An estimated $80 billion in fraudulent claims is paid out across all lines of insurance each year, whether through soft or hard fraud, perpetrated by lone individuals, sketchy auto mechanic shops, or the occasional organized crime ring. The challenge for most insurers is that detecting, investigating, and mitigating these deceitful claims is a time-consuming and expensive process.
Rather than accepting loss to fraud as part of the cost of doing business, some organizations are enhancing their detection capabilities with insurance analytics solutions. Here is how your organization can use insurance fraud analytics to enhance fraud detection, uncover emerging criminal strategies, and still remain compliant with data privacy regulations.
Recognizing Patterns Faster
When you look at exceptional claims adjusters or special investigation units, one of the major traits they share is an uncanny ability to recognize fraudulent patterns. Their experience allows them to notice the telltale signs of fraud, whether it’s frequent suspicious estimates from a body shop or complex billing codes intended to hide frivolous medical tests. But even trustworthy adjusters often rely on heuristic judgments (e.g., trial and error, intuition) rather than hard statistical analysis. And when they do have statistical findings to back them up, they struggle to keep up with the sheer volume of claims.
This is where machine learning techniques can help to accelerate pattern recognition and optimize the productivity of adjusters and special investigation units. An organization starts by feeding a machine learning model a large data set that includes verified legitimate and fraudulent claims. Under supervision, the machine learning algorithm reviews and evaluates the patterns across all claims in the data set until it has mastered the ability to spot fraud indicators.
Let’s say this model was given a training set of legitimate and fraudulent auto insurance claims. While reviewing the data for fraud, the algorithm might spot links in deceptive claims between extensive damage in a claim and a lack of towing charges from the scene of the accident. Or it might notice instances where claims involve rental cars rented the day of the accident that are all brought to the same body repair shop. Once the algorithm begins to piece together these common threads, your organization can test the model’s ability to apply its criteria for detecting deception, without supervision, and flag likely instances of fraud.
What’s important in this process is finding a balance between fraud identification and instances of false positives. If your program is overzealous, it might create more work for your agents, forcing them to prove that legitimate claims received an incorrect label. Yet when the machine learning model is optimized, it can review a multitude of dimensions to identify the likelihood of fraudulent claims. That way, if an insurance claim is called into question, adjusters can comb through the data to determine if the claim should truly be rejected or if the red flags have a valid explanation.
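For illustration, below is a minimal sketch of this kind of supervised fraud model, including the threshold tuning described above. The claim features, file name, and the 90% precision target are assumptions for the example, not a prescription.

```python
# Sketch: supervised fraud scoring on labeled claims (hypothetical schema).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

claims = pd.read_csv("labeled_claims.csv")  # verified legitimate/fraud labels
features = ["damage_amount", "towing_charge", "days_to_report",
            "rental_same_day", "repeat_body_shop"]
X, y = claims[features], claims["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Tune the alert threshold to balance fraud caught against the false
# positives that create rework for adjusters.
probs = model.predict_proba(X_test)[:, 1]
precision, recall, thresholds = precision_recall_curve(y_test, probs)
ok = precision[:-1] >= 0.90  # keep precision at or above 90%
threshold = thresholds[ok][0] if ok.any() else 0.5
flagged = probs >= threshold  # claims routed to the investigation queue
```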
Detecting New Strategies
The ability of analytics tools to detect known instances of fraud is only the beginning of their full potential. As with any type of crime, insurance fraud evolves with technology, regulations, and innovation. With that transformation comes new strategies to outwit or deceive insurance companies.
One recent example has emerged through automation. When insurance organizations began to implement straight through processing (STP) in their claim approvals, the goal was to issue remittances more quickly, easily, and cheaply than manual processes. For a time, this approach provided a net positive, but once organized fraudsters caught wind of this practice, they pounced on a new opportunity to deceive insurers.
Criminals learned to game the system, identifying amounts that were below the threshold for investigation and flying their fraudulent claims under the radar. In many cases, instances of fraud could potentially double without the proper tools to detect these new deception strategies. Though most organizations plan to enhance their anti-fraud technology, there’s still the potential for them to lose millions in errant claims – if their insurance fraud analytics are not programmed to detect new patterns.
In addition to spotting red flags for common fraud occurrences, analytics programs need to be attuned to any abnormal similarities or unlikely statistical trends. Using cluster analysis, an organization can detect statistical outliers and meaningful patterns that reveal potential instances of fraud (such as suspiciously identical fraud claims).
Even beyond the above automation example, your organization can use data discovery to find hidden indicators of fraud and predict future incidents. Splitting claims data into various groups through a few parameters (such as region, physician, billing code, etc., in healthcare) can help in detecting unexpected correlations or warning signs for your automation process or even human adjusters to flag as fraud.
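As one way to picture the cluster analysis described above, the sketch below uses density-based clustering, which flags statistical outliers and can also surface tight groups of near-identical claims. The columns and DBSCAN parameters are illustrative and would need tuning against real data.

```python
# Sketch: cluster analysis over claim features to surface outliers and
# suspiciously similar claim groups (columns and parameters illustrative).
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

claims = pd.read_csv("claims.csv")
features = ["billed_amount", "procedure_count", "days_to_report"]
X = StandardScaler().fit_transform(claims[features])

claims["cluster"] = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)

# Label -1 marks statistical outliers worth a manual look; unusually tight,
# heavily populated clusters can indicate near-identical (templated) claims.
outliers = claims[claims["cluster"] == -1]
print(claims["cluster"].value_counts().head())
print(outliers[["claim_id", "billed_amount"]].head())
```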
Safeguarding Personally Identifiable Information
As you work to improve your fraud detection, there’s one challenge all insurers face: protecting the personally identifiable information (PII) of policyholders while you analyze your data. The fines related to HIPAA violations can amount to $50,000 per violation, and other data privacy regulations can result in similarly steep fines. The good news is that insurance organizations can balance their fraud prediction and data discovery with security protocols if their data ecosystem is appropriately designed.
Maintaining data privacy compliance and effective insurance fraud analytics requires some maneuvering. Organizations that derive meaningful and accurate insight from their data must first bring all of their disparate data into a single source of truth. Yet, unless they also implement access control through a compliance-focused data governance strategy, there’s a risk of regulatory violations while conducting fraud analysis.
One way to limit your exposure is to create a data access layer that tokenizes the data, replacing any sensitive PII with unique identification tokens while keeping the token-to-identity mapping stored separately. Paired with clear data visualization capabilities, your adjusters and special investigation units can see clear-cut trends and evolving strategies without revealing individual claimants. From there, they can take their newfound insights into any red flag situation, saving your organization millions while reducing the threat of noncompliance.
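A minimal sketch of such a tokenization step follows, using keyed hashing so analysts can still join and count by claimant without seeing who the claimant is. The column names and file paths are hypothetical, and a production design would also cover key rotation and a secured token-to-identity store.

```python
# Sketch: tokenizing PII columns in the data access layer. Names and paths
# are illustrative; the token-to-identity map would live in a secured store.
import hashlib
import hmac
import os

import pandas as pd

SECRET_KEY = os.environ["TOKENIZATION_KEY"].encode()  # never hard-code keys

def tokenize(value: str) -> str:
    # Keyed hashing yields a stable, non-reversible token, so analysts can
    # still join and count by claimant without seeing who the claimant is.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

claims = pd.read_csv("claims_raw.csv")
for col in ["full_name", "ssn", "phone"]:
    claims[col] = claims[col].astype(str).map(tokenize)

claims.to_parquet("claims_tokenized.parquet")  # what analysts actually query
```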
Are you ready to take your data science initiatives to the next level? Partner with 2nd Watch, the industry leader in data management, analytics, and data science consulting. Our team of experts will guide you through the entire process, from building the business case to data preparation and model building. Schedule a data science readiness whiteboard session with us today and unlock the full potential of data science for your business. Don’t miss out on the opportunity to enhance fraud detection, uncover emerging criminal strategies, and remain compliant with data privacy regulations. Get started now and experience the transformative power of insurance fraud analytics with 2nd Watch by your side.
Data is one of the insurance industry’s greatest assets, which is why data analytics is so important. Before digital transformations swept the business world, underwriters and claims adjusters were the original data-driven decision makers, gathering information to assess a customer’s risk score or evaluate potential fraud. Algorithms have accelerated the speed and complexity of analytics in insurance, but some insurers have struggled to implement the framework necessary to keep their underwriting, fraud detection, and operations competitive.
The good news is that we have a clear road map for how to implement data analytics in insurance that garners the best ROI for your organization. Here are the four steps you need to unlock even more potential from your data.
Step 1: Let your business goals, not your data, define your strategy
As masters of data gathering, insurers have no shortage of valuable and illuminating data to analyze. Yet the abundance of complex data flowing into their organizations creates an equally vexing problem: conducting meaningful analysis rather than spur-of-the-moment reporting.
It’s all too easy for agents working on the front lines to allow the data flowing into their department to govern the direction of their reporting. Though ad hoc reporting can generate some insight, it rarely offers the deep, game-changing perspective businesses need to remain competitive.
Instead, your analytics strategy should align with your business goals if you want to yield the greatest ROI. Consider this scenario: a P&C insurer wants to increase the accuracy of their policy pricing in a way that retains customers without incurring additional expenses from undervalued risk. Once this goal defines the data strategy, the task becomes identifying the data necessary to meet that objective.
If, for example, they lack complex assessments of the potential risks in the immediate radius of a commercial property (e.g., a history of flood damage, tornado warnings, etc.), the insurer can seek out that data from an external source to complete the analysis, rather than restricting the scope of their analysis to what they have.
Step 2: Get a handle on all of your data
The insurance industry is rife with data silos. Numerous verticals, lines of business (LoBs), and M&A activity have created a disorganized collection of platforms and data stores, often with their own incompatible source systems. In some cases, each unit or function has its own specialized data warehouse or activities that are neither consistent nor coordinated. This not only creates a barrier to cohesive data analysis but can also result in a hidden stockpile of information as LoBs make rogue implementations off the radar of key decision-makers.
Before you can extract meaningful insights, your organization needs to establish a single source of truth, creating a unified view of your disparate data sources. One of our industry-leading insurance clients provides a perfect example of the benefits of data integration. The organization had grown over the years through numerous acquisitions, and each LoB brought their own unique policy and claims applications into the fold. This piecemeal growth created inconsistency in their enterprise-wide reporting.
For example, the operational reports conducted by each LoB reported a different amount of paid losses on claims for the current year, calling into question their enterprise-wide decision-making process. As one of their established partners, 2nd Watch provided a solution. Our team conducted a current state assessment, interviewing a number of stakeholders to determine the questions each group wanted answered and the full spectrum of data sources that were essential to reporting.
We then built data pipelines (using SSIS for ETL and SQL Server) to integrate the 25 disparate sources we identified as crucial to our client’s business. We unified the metadata, security, and governance practices across their organization to provide a holistic view that also remained compliant with federal regulations. Now, their monthly P&L and operational reporting are simplified in a way that creates agreement across LoBs – and helps them make informed decisions.
Step 3: Create the perfect dashboard(s)
You’ve consolidated and standardized your data. You’ve aligned your analytics strategy with your goals. But can your business users quickly obtain meaning from your efforts? The large data sets analyzed by insurance organizations can be difficult to parse without a way to visualize trends and takeaways. For that very reason, building a customized dashboard is an essential part of the data analytics process.
Your insurance analytics dashboard is not a one-size-fits-all interface. Similar to how business goals should drive your strategy, they should also drive your dashboards. If you want people to derive quick insights from your data, the dashboard they’re using should surface KPIs and trends that are relevant to their specific roles and LoBs.
Claims adjusters might need a dashboard that compares policy type by frequency of utilization and cost, regional hotspots for claims submissions, or fraud priority scores for insurance fraud analytics. C-suite executives might be more concerned with revenue comparisons across LoBs, loss ratios per policy, and customer retention by vertical. All of those needs are valid. Each insurance dashboard should be designed and customized to satisfy the most common challenges of the target users in an interactive and low-effort way.
Much like the data integration process, you’ll find ideal use cases by conducting thorough stakeholder interviews. Before developers begin to build the interface, you should know the current analysis process of your end users, their pain points, and their KPIs. That way, you can encourage them to adopt the dashboards you create, running regular reports that maximize the ROI of your efforts.
Step 4: Prepare for ongoing change
A refined data strategy, consolidated data architecture, and intuitive dashboards are the foundation for robust data analytics in insurance. Yet the benchmark is always moving. There’s an unending stream of new data entering insurance organizations. Business goals are adjusting to better align with new regulations, global trends, and consumer needs. Insurers need their data analytics to remain as fluid and dynamic as their own organizations. That requires your business to have the answers to a number of questions.
How often should your dashboard update? Do you need real-time analytics to make up-to-the-minute assessments on premiums and policies? How can you translate the best practices from profitable use cases into different LoBs or roles? Though these questions (and many others) are not always intuitive, insurers can make the right preparations by working with a partner that understands their industry.
Here’s an example: One of our clients had a vision to implement a mobile application that enabled rideshare drivers to obtain commercial micro-policies based on the distance traveled and prevailing conditions. After we consolidated and standardized disparate data systems into a star schema data warehouse, we automated the ETL processes to simplify ongoing processes.
From there, we provided our client with guidance on how to build upon their existing real-time analytics to deepen the understanding of their data and explore cutting-edge analytical solutions. Creating this essential groundwork has enabled our team to direct them as we expand big data analytics capabilities throughout the organization, implementing a roadmap that yields greater effectiveness across their analytics.
P&C insurance is an incredibly data-driven industry. Your company’s core assets are data, your business revolves around collecting data, and your staff is focused on using data in their day-to-day workstreams. Although data is collected and used in normal operations, oftentimes the downstream analytics process is painful (think of those month-end reports). This is for any number of reasons:
Large, slow data flows
Unmodeled data that takes manual intervention to integrate
Legacy software that has a confusing backend and user interface
And more
Creating an analytics ecosystem that is fast and accessible is not a simple task, but today we’ll take you through the four key steps 2nd Watch follows to solve business problems with an insurance analytics solution. We’ll also provide recommendations for how best to implement each step to make these steps as actionable as possible.
Step 1: Determine your scope.
What are your company’s priorities?
Improving profit margin on your products?
Improving your loss ratio?
Planning for next year?
Increasing customer satisfaction?
To realize your strategic goals, you need to determine where you want to focus your resources. Work with your team to find out which initiative has the best ROI and the best chance of success.
First, identify your business problems.
There are so many ways to improve your KPIs that trying to identify the best approach can very quickly become overwhelming. To give yourself the best chance, be deliberate about how you go about solving this challenge.
What isn’t going right? Answer this question by talking to people, looking at existing operational and financial reporting, performing critical thinking exercises, and using other qualitative or quantitative data (or both).
Then, prioritize a problem to address.
Once you identify the problems that are impacting metrics, choose one to address, taking these questions into account:
What is the potential reward (opportunity)?
What are the risks associated with trying to address this problem?
How hard is it to get all the inputs you need?
RECOMMENDATION
Taking on a scope that is too large, too complex, or unclear will make it very difficult to achieve success. Clearly set boundaries and decide what is relevant to determine which pain point you’re trying to solve. A defined critical path makes it harder to go off course and helps you keep your goal achievable.
Step 2: Identify and prioritize your KPIs.
Next, it’s time to get more technical. You’ve determined your pain points, but now you must identify the numeric KPIs that can act as the proxies for these business problems.
Maybe your business goal is to improve policyholder satisfaction. That’s great! But what does that mean in terms of metrics? What inputs do you actually need to calculate the KPI? Do you have the data to perform the calculations?
Back to the example: suppose you shortlist your top three candidate KPIs and compare them on required inputs and data availability. Even though a time-to-close (TTC) metric may be your third-favorite KPI for measuring customer satisfaction, its required inputs are already identified and the data is available. That makes it the best option for the data engineering effort at this point in time. The comparison also helps you identify a roadmap for the future if you want to start collecting richer information.
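Calculating TTC itself is straightforward once claim open and close timestamps are available. Here is a small sketch with hypothetical column names:

```python
# Sketch: computing time to close (TTC) from claim timestamps
# (hypothetical column names).
import pandas as pd

claims = pd.read_csv("claims.csv", parse_dates=["opened_at", "closed_at"])
claims["ttc_days"] = (claims["closed_at"] - claims["opened_at"]).dt.days

# The KPI itself, plus a first slice by product to see where closing drags.
print("mean TTC:", claims["ttc_days"].mean())
print(claims.groupby("product")["ttc_days"].mean().sort_values())
```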
RECOMMENDATION
As you identify the processes you’re trying to optimize, create a data dictionary of all the measures you want to use in your reporting. Appreciate that a single KPI might:
Have more, higher-quality data available
Be easier to calculate
Be used to solve multiple problems
Be a higher priority to the executive team
Use this list to prioritize your data engineering effort and create the most high-value reports first. Don’t engineer in a vacuum (i.e., generate KPIs because they “seem right”). Always have an end business question in mind.
Step 3: Design your solution.
Now that you have your list of prioritized KPIs, it’s time to build the data warehouse. This will allow your business analysts to slice your metrics by any number of dimensions (e.g., TTC by product, TTC by policy, TTC by region, etc.).
2nd Watch’s approach usually involves a star schema reporting layer and a customer-facing presentation layer for analysis. A star schema has two main components: facts and dimensions. Fact tables contain the measurable metrics that can be summarized. In the TTC example, the claim fact table might contain a numeric measure for the number of days it took to close each claim. A dimension table then provides the context for pivoting the measure. For example, you might have a policyholder dimension table containing attributes to “slice” the KPI value (e.g., policyholder age, gender, tenure, etc.).
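As a small illustration of slicing a measure by a dimension, assume the fact and dimension tables above have already been loaded (table and column names are hypothetical):

```python
# Sketch: slicing the TTC measure in a star schema. The claim fact table
# references the policyholder dimension by surrogate key (names illustrative).
import pandas as pd

fact_claim = pd.read_parquet("fact_claim.parquet")              # ttc_days, policyholder_key
dim_policyholder = pd.read_parquet("dim_policyholder.parquet")  # policyholder_key, age_band, tenure

# Join the fact to the dimension on the surrogate key, then pivot the measure.
joined = fact_claim.merge(dim_policyholder, on="policyholder_key")
print(joined.groupby("age_band")["ttc_days"].mean())
print(joined.groupby("tenure")["ttc_days"].mean())
```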
Once you’ve designed the structure of your database, you can build it. This involves transforming the data from your source systems into the target database. You’ll want to consider the ETL (extract-transform-load) tool that will automate this transformation, as well as the type of database that will store your data. 2nd Watch can help with all of these technology decisions.
You may also want to take a particular set of data standards into account, such as the ACORD Standards, to ensure more efficient and effective flow of data across lines of business, for example. 2nd Watch can take these standards into account when implementing an insurance analytics solution, giving you confidence that your organization can use enterprise-wide data for a competitive advantage.
Finally, when your data warehouse is up and running, you want to make sure your investment pays off by managing the data quality of your data sources. This can all be part of a data governance plan, which includes data consistency, data security, and data accountability.
RECOMMENDATION
Don’t feel like you need to implement the entire data warehouse at once. Prioritize your data sources, and realize you can gain many benefits by implementing just a few of them first.
Step 4: Put your insurance analytics solution into practice.
After spending the time to integrate your disparate data sources and model an efficient data warehouse, what do you actually get out of it? As an end business user, this effort can bubble up as flat file exports, dashboards, reports, or even data science models.
I’ve outlined three levels of data maturity below:
Level 1
The most basic product would be a flat file. Often, mid-to-large-sized organizations working with multiple source systems work in analytical silos. They connect directly to the back end of a source system to build analytics. As a result, intersystem analysis becomes complex with non-standard data definitions, metrics, and KPIs.
With all of that source data integrated in the data warehouse, the simplest way to begin analyzing it is from a single flat extract. The tabular nature of a flat file will also help business users answer basic questions about their data at an organizational level.
Level 2
Organizations farther along the data maturity curve will begin to build dashboards and reports off of the data warehouse. Regardless of your analytical capabilities, dashboards allow your users to glean information at a glance. More advanced users can apply slicers and filters to better understand what drives their KPIs.
By compiling and aggregating your data into a visual format, you make the breadth of information at your organization much more accessible to your business users and decision-makers.
Level 3
The most mature product of data integration is the data science model. Machine learning algorithms can detect trends and patterns in your data that traditional analytics would take a long time to uncover, if it ever did. Such models can help insurers more efficiently screen cases and predict costs with greater precision. When writing policies, a model can identify and manage risk based on demographic or historical factors.
RECOMMENDATION
Start simple. As flashy and appealing as data science can be to stakeholders and executives, the bulk of the value of a data integration platform lies in making the data accessible to your entire organization. Synthesize your data across your source systems to produce file extracts and KPI scorecards for your business users to analyze. As users begin to adopt and understand the data, think about slowly scaling up the complexity of analysis.
Conclusion
This was a lot of information to absorb, so let’s summarize the roadmap to solving your business problems with insurance analytics:
Step 1: Determine your scope.
Step 2: Identify and prioritize your KPIs.
Step 3: Design your solution.
Step 4: Put your insurance analytics solution into practice.
2nd Watch’s data and analytics consultants have extensive experience with roadmaps like this one, from outlining data strategy to implementing advanced analytics. If you think your organization could benefit from an insurance analytics solution, feel free to get in touch to discuss how we can help.
Data governance is a broad-ranging discipline that affects everyone in an organization, whether directly or indirectly. It is most often employed to improve and consistently manage data through deduplication and standardization, among other activities, and can have a significant and sustained effect on reducing operational costs, increasing sales, or both.
Data governance can also be part of a more extensive master data management (MDM) program. The MDM program an organization chooses and how they implement it depends on the issues they face and both their short- and long-term visions.
For example, in the insurance industry, many companies sell various types of policies that renew annually over a number of years, such as industrial property coverages and workers’ compensation casualty coverages. Two sets of underwriters will more than likely underwrite the business. Having two sets of underwriters using data systems specific to their lines of business is an advantage when meeting the coverage needs of their customers, but it often becomes a disadvantage when considering all of the data. It doesn’t have to be, though.
The disadvantage arises when an agent or account executive needs to know the overall status of a client, including long-term profitability during all the years of coverage. This involves pulling data from policy systems, claims systems, and customer support systems. An analyst may be tasked with producing a client report for the agent or account executive to truly understand their client and make better decisions on both the client and company’s behalf. But the analyst may not know where the data is stored, who owns the data, or how to link clients across disparate systems.
Fifteen years ago, this task was very time-consuming and even five years ago was still quite cumbersome. Today, however, this issue can be mitigated with the correct data governance plan. We will go deeper into data governance and MDM in upcoming posts; but for this one, we want to show you how innovators like Snowflake are helping the cause.
What is data governance?
Data governance ensures that data is consistent, accurate, and reliable, which allows for informed and effective decision-making. This is often achieved by centralizing data from siloed locations into a single place. Making data accessible in one location enables data users to understand and analyze it to make effective decisions. One way to accomplish this centralization is to implement the Snowflake Data Cloud.
Snowflake not only enables a company to store its data inexpensively and query that data for analytics, but it can also foster data governance. Dynamic data masking and object tagging are two Snowflake features that can supplement a company’s data governance initiative.
What is dynamic data masking?
Dynamic data masking is a Snowflake security feature that selectively masks plain-text data in table or view columns at query time, based on predefined masking policies. The purpose of masking data in specific columns is to ensure it is accessed on a need-to-know basis. Such data is typically sensitive and doesn’t need to be visible to every user.
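For illustration, here is a minimal sketch of defining and applying a masking policy from Python. The connection parameters, role name, and table are placeholders; the DDL itself follows Snowflake’s masking policy syntax.

```python
# Sketch: creating and applying a Snowflake masking policy from Python.
# Connection parameters, role, and object names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
    warehouse="GOVERNANCE_WH", database="INSURANCE", schema="PII")
cur = conn.cursor()

# Only the fraud-analyst role sees the real value; everyone else sees a mask.
cur.execute("""
    CREATE MASKING POLICY IF NOT EXISTS ssn_mask AS (val STRING)
    RETURNS STRING ->
      CASE
        WHEN CURRENT_ROLE() IN ('FRAUD_ANALYST') THEN val
        ELSE '***MASKED***'
      END
""")

# Attach the policy to the sensitive column.
cur.execute("ALTER TABLE policyholders MODIFY COLUMN ssn "
            "SET MASKING POLICY ssn_mask")
```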
When is dynamic data masking used?
Data masking is usually implemented to protect personally identifiable information (PII), such as a person’s social security number, phone number, home address, or date of birth. An insurance company would likely want to reduce risk by masking sensitive data wherever access isn’t necessary for conducting analysis.
However, data masking can also be used for non-production environments where testing needs to be conducted on an application. The users testing the environment wouldn’t need to know specific data if their role is just to test the environment and application. Additionally, data masking may be used to adhere to compliance requirements like HIPAA.
What is object tagging?
Another resource for data governance within Snowflake is object tagging. Object tagging enables data stewards to track sensitive data for compliance and discovery, as well as to group desired objects such as warehouses, databases, tables or views, and columns.
When a tag is created for a table, view, or column, data stewards can determine if the data should be fully masked, partially masked, or unmasked. When tags are associated with a warehouse, a user with the tag role can view the resource usage of the warehouse to determine what, when, and how this object is being utilized.
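Here is a short sketch of creating tags and attaching them to a sensitive column and a department warehouse, again with placeholder object names. The final query shows how data stewards can later locate tagged objects through Snowflake’s account usage views.

```python
# Sketch: creating tags and attaching them to a PII column and a department
# warehouse (object names are illustrative).
import snowflake.connector

conn = snowflake.connector.connect(account="your_account", user="your_user",
                                   password="...", database="INSURANCE")
cur = conn.cursor()

cur.execute("CREATE TAG IF NOT EXISTS pii_type COMMENT = 'Kind of PII held'")
cur.execute("CREATE TAG IF NOT EXISTS cost_center")

# Tag a sensitive column and a department warehouse.
cur.execute("ALTER TABLE pii.policyholders MODIFY COLUMN ssn "
            "SET TAG pii_type = 'social security number'")
cur.execute("ALTER WAREHOUSE sales_wh SET TAG cost_center = 'sales'")

# Data stewards can later locate every tagged object via account usage views.
cur.execute("SELECT object_name, column_name, tag_value "
            "FROM snowflake.account_usage.tag_references "
            "WHERE tag_name = 'PII_TYPE'")
print(cur.fetchall())
```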
When is object tagging used?
There are several instances where object tagging can be useful. One is tagging a column as “PII” and adding extra text to describe the type of PII data located there. Another is creating a tag for a warehouse dedicated to the sales department, enabling you to track usage and deduce why a specific warehouse is being used.
Where can data governance be applied?
Data governance applies to many industries that maintain a vast amount of data from their systems, including healthcare, supply chain and logistics, and insurance; and an effective data governance strategy may use data masking and object tagging in conjunction with each other.
As previously mentioned, one common use case for data masking is for insurance customers’ PII. Normally, analysts wouldn’t need to analyze the personal information of a customer to uncover useful information leading to key business decisions. Therefore, the administrator would be able to mask columns for the customer’s name, phone number, address, social security number, and account number without interfering with analysis.
Object tagging is also valuable within the insurance industry as there is such a vast amount of data collected and consumed. A strong percentage of that data is sensitive information. Because there is so much data and it can be difficult to track those individual pieces of information, Snowflake’s object tagging feature can help with identifying and tracking the usage of those sensitive values for the business user.
Using dynamic data masking and object tagging together, you can gain insight into where your sensitive data lives and how heavily specific warehouses, tables, or columns are being used.
Think back to the situation we mentioned earlier, where the property coverage sales department is on legacy system X while the workers’ compensation sales department is on legacy system Y. How are you supposed to create a report to understand the profitability of these two departments?
One option is to use Snowflake to store all of the data from both legacy systems. Once the information is in the Snowflake environment, object tagging would allow you to tag the databases or tables that involve data about their respective departments. One tag can be specified for property coverage and another tag can be set for workers’ compensation data. When you’re tasked with creating a report of profitability involving these two departments, you can easily identify which information can be used. Because the tag was applied to the database, it will also be applied to all of the tables and their respective columns. You would be able to understand what columns are being used. After the data from both departments is accessible within Snowflake, data masking can then be used to ensure that the new data is only truly accessible to those who need it.
This was just a small introduction to data governance and the new features that Snowflake has available to enable this effort. Don’t forget that this data governance effort can be a part of a larger, more intricate MDM initiative. In other blog posts, we touch more on MDM and other data governance capabilities to maintain and standardize your data, helping you make the most accurate and beneficial business decisions. If you have any questions in the meantime, feel free to get in touch.
Insurers are privy to large amounts of data, including personally identifying information. Your business requires you to store information about your policyholders and your employees, putting lots of people at risk if your data isn’t well-secured.
However, data governance in insurance goes beyond insurance data security. An enterprise-wide data governance strategy ensures data is consistent, accurate, and reliable, allowing for informed and effective decision-making.
If you aren’t convinced that your insurance data standards need a second look, read on to learn about the impact data governance has on insurance, the challenges you may face, and how to develop and implement a data governance strategy for your organization.
Why Data Governance Is Critical in the Insurance Industry
As previously mentioned, insurance organizations handle a lot of data; and the amount of data you’re storing likely grows day by day. Data is often siloed as it comes in, making it difficult to use at an enterprise level. With growing regulatory compliance concerns – such as the impact of the EU’s General Data Protection Regulation (GDPR) in insurance and other regulations stateside – as well as customer demands and competitive pressure, data governance can’t be ignored.
Having quality, actionable data is a crucial competitive advantage in today’s insurance industry. If your company lacks a “single source of the truth” in your data, you’ll have trouble accurately defining key performance indicators, efficiently and confidently making business decisions, and using your data to increase profitability and lower your business risks.
Data Governance Challenges in Insurance
Data governance is critical in insurance, but it isn’t without its challenges. While these data governance challenges aren’t insurmountable, they’re important to keep in mind:
Many insurers lack the people, processes, and technology to properly manage their data in-house.
As the amount of data you collect grows and new technologies emerge, insurance data governance becomes increasingly complicated – but also increasingly critical.
New regulatory challenges require new data governance strategies or at least a fresh look at your existing plan. Data governance isn’t a “one-and-done” pursuit.
Insurance data governance efforts require cross-company collaboration. Data governance isn’t effective when data is siloed within your product lines or internal departments.
Proper data governance may require investments you didn’t budget for, and red tape can be difficult to overcome; but embarking on a data governance project sooner rather than later will only benefit you.
How to Create and Implement a Data Governance Plan
Creating a data governance plan can be overwhelming, especially when you take regulatory and auditing concerns into account. Working with a company like 2nd Watch can take some of the pressure off as our expert team members have experience crafting and implementing data management strategies customized to our clients’ situations.
Regardless of whether you work with a data consulting firm or go it alone, the process should start with a review of the current state of data governance in your organization and a determination of your needs. 2nd Watch’s data consultants can help with a variety of data governance needs, including data governance strategy; master data management; data profiling, cleansing, and standardization; and data security.
The next step is to decide who will have ultimate responsibility for your data governance program. 2nd Watch can help you establish a data governance council and program, working with you to define roles and responsibilities and then create and document policies, processes, and standards.
Finally, through the use of technologies chosen for your particular situation, 2nd Watch can help automate your chosen processes to improve your data governance maturity level and facilitate the ongoing effectiveness of your data governance program.
If you’re interested in discussing how insurance data governance could benefit your organization, get in touch with a 2nd Watch data consultant for a no-cost, no-risk dialogue.
In 2020, the U.S. insurance industry was worth a whopping $1.28 trillion. High premium volumes show no signs of slowing down and make the American insurance industry one of the largest markets in the world. The massive amount of premiums means there is an astronomical amount of data involved. Without artificial intelligence (AI) technology like machine learning (ML), insurance companies will have a near-impossible time processing all that data, creating greater opportunities for insurance fraud to occur.
Insurance data is vast and complex, covering many individuals, each with many records and many factors used in determining claims. Moreover, the type of insurance increases the complexity of data ingestion and processing. Life insurance is different from automobile insurance, health insurance is different from property insurance, and so forth. While some of the processes are similar, the data and multitude of flows can vary greatly.
As a result, insurance enterprises must prioritize digital initiatives to handle huge volumes of data and support vital business objectives. In the insurance industry, advanced technologies are critical for improving operational efficiency, providing excellent customer service, and, ultimately, increasing the bottom line.
ML can handle the size and complexity of insurance data. It can be implemented in multiple aspects of the insurance practice and facilitates improvements in customer experience, claims processing, risk management, and other general operational efficiencies. Most importantly, ML can mitigate the risk of insurance fraud, which plagues the entire industry. It is a major development in fraud detection, and insurance organizations should add it to their fraud prevention toolkits.
In this article, we lay out how insurance companies are using ML to improve their insurance processes and flag insurance fraud before it affects their bottom lines. Read on to see how ML can fit within your insurance organization.
What is machine learning?
ML is a technology under the AI umbrella. It is designed to analyze data so computers can make predictions and decisions based on identified patterns and historical data, all without being explicitly programmed and with minimal human intervention. As more data is produced, ML solutions grow smarter, adapting autonomously and learning continuously. Ultimately, AI/ML will handle menial tasks and free human agents to perform more complex requests and analyses.
What are the benefits of ML in the insurance industry?
There are several use cases for ML within an insurance organization regardless of insurance type. Below are some top areas for ML application in the insurance industry:
Lead Management
For insurers and salespeople, ML can identify leads using valuable insights from data. ML can even personalize recommendations according to the buyer’s previous actions and history, which enables salespeople to have more effective conversations with buyers.
Customer Service and Retention
For a majority of customers, insurance can seem daunting, complex, and unclear. It’s important for insurance companies to assist their customers at every stage of the process in order to increase customer acquisition and retention. ML-powered chatbots on messaging apps can be very helpful in guiding users through claims processing and answering basic frequently asked questions. These chatbots use neural networks, which can be developed to comprehend and answer most customer inquiries via chat, email, or even phone calls. Additionally, ML can use data to assess each customer’s risk, and that information can drive the offer with the highest likelihood of retaining the customer.
Risk Management
ML utilizes data and algorithms to instantly detect potentially abnormal or unexpected activity, making ML a crucial tool in loss prediction and risk management. This is vital for usage-based insurance devices, which determine auto insurance rates based on specific driving behaviors and patterns.
Fraud Detection
Unfortunately, fraud is rampant in the insurance industry. Property and casualty (P&C) insurance alone loses about $30 billion to fraud every year, and fraud occurs in nearly 10% of all P&C losses. Overall, insurance fraud steals at least $80 billion every year from American consumers. ML can mitigate this issue by identifying potential claim situations early in the claims process. Flagging early allows insurers to investigate and correctly identify a fraudulent claim.
Claims Processing
Claims processing is notoriously arduous and time-consuming. ML technology is the perfect tool to reduce processing costs and time, from the initial claim submission to reviewing coverages. Moreover, ML supports a great customer experience because it allows the insured to check the status of their claim without having to reach out to their broker/adjuster.
Why is ML so important for fraud detection in the insurance industry?
Fraud is the biggest problem for the insurance industry, so let’s return to the fraud detection stage in the insurance lifecycle and detail the benefits of ML for this common issue. Considering the insurance industry consists of more than 7,000 companies that collect more than $1 trillion in premiums each year, there are huge opportunities and incentives for insurance fraud to occur.
Insurance fraud is an issue that has worsened since the COVID-19 pandemic began. Some industry professionals believe that the number of claims with some element of fraud has almost doubled since the pandemic.
Below are the various stages in which insurance fraud can occur during the insurance lifecycle:
Application Fraud: This fraud occurs when false information is intentionally provided in an insurance application. It is the most common form of insurance fraud.
False Claims Fraud: This fraud occurs when insurance claims are filed under false pretenses (e.g., faking a death in order to collect life insurance benefits).
Forgery and Identity Theft Fraud: This fraud occurs when an individual tries to file a claim under someone else’s insurance.
Inflation Fraud: This fraud occurs when an additional amount is tacked onto the total bill when the insurance claim is filed.
Based on the amount of fraud and the different types of fraud, insurance companies should consider adding ML to their fraud detection toolkits. Without ML, insurance agents can be overwhelmed with the time-consuming process of investigating each case. The ML approaches and algorithms that facilitate fraud detection are the following:
Deep Anomaly Detection: During claims processing, this approach will analyze real claims and identify false ones.
Supervised Learning: Using predictive data analysis, this ML algorithm is the most commonly used for fraud detection. The algorithm will label all input information as “good” or “bad.”
Semi-supervised Learning: This algorithm is used for cases where labeling information is impossible or highly complex. It stores data about critical category parameters even when the group membership of the unlabeled data is unknown.
Unsupervised Learning: This model can flag unusual actions with transactions and learns specific patterns in data to continuously update its model.
Reinforcement Learning: Collecting information about the environment, this algorithm automatically verifies and contextualizes behaviors in order to find ways to reduce risk.
Predictive Analytics: This algorithm accounts for historical data and existing external data to detect patterns and behaviors.
ML is instrumental in fraud prevention and detection. It allows companies to identify claims suspected of fraud quickly and accurately, process data efficiently, and avoid wasting valuable human resources.
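As one illustration of the unsupervised approach listed above, the sketch below scores claims with an isolation forest and routes the most anomalous to review. The column names, contamination rate, and queue size are assumptions for the example.

```python
# Sketch: unsupervised anomaly scoring with an isolation forest, useful when
# labeled fraud examples are scarce (columns and parameters are illustrative).
import pandas as pd
from sklearn.ensemble import IsolationForest

claims = pd.read_csv("claims.csv")
features = ["claim_amount", "days_policy_active", "prior_claims_count"]

iso = IsolationForest(contamination=0.01, random_state=0)
iso.fit(claims[features])
claims["anomaly_score"] = iso.decision_function(claims[features])

# The lowest-scoring claims are the most anomalous; route them to reviewers.
review_queue = claims.nsmallest(50, "anomaly_score")
print(review_queue[["claim_id", "claim_amount", "anomaly_score"]])
```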
Conclusion
Implementing digital technologies, like ML, is vital for insurance businesses to handle their data and analytics. It allows insurance companies to increase operational efficiency and mitigate the top-of-mind risk of insurance fraud.
Working with a data consulting firm can help onboard these hugely beneficial technologies. By partnering with 2nd Watch for data analytics solutions, insurance organizations have experienced improved customer acquisition, underwriting, risk management, claims analysis, and other vital parts of their operations.
The 2nd Watch team attended the Reuters Insurance AI and Innovative Tech conference this past month, and we took away a lot of insightful perspectives from the speakers and leaders at the event. The insurance industry has a noble purpose in the world: insurance organizations strive to provide fast service to customers suffering from injury and loss, all while allowing their agents to be efficient and profitable. For this reason, insurance companies need to constantly innovate to satisfy all parties involved in the value chain.
But this is no easy business model. Ensuring the satisfaction and success of all parties is becoming increasingly difficult for the following reasons:
The expectations and standards for a good customer experience are very high.
Insurers have a monumental amount of data to ingest and process.
The skills required to build useful analyses are at a premium.
It is easy to fail or get poor ROI on a technical initiative.
To keep up with the revolution, traditional insurance companies must undergo a massive digital transformation that supports a data-driven decision-making model. However, this sort of shift is daunting and riddled with challenges throughout the process. In presenting you with our takeaways from this eye-opening conference, we hope to address the challenges associated with redefining your insurance company and highlight new solutions that can help you tackle these issues head-on.
What are the pitfalls of an insurer trying to innovate?
The paradigm in the insurance industry has changed. As a result, your insurance business must adapt and improve digital capabilities to keep up with the market standards. While transformation is vital, it isn’t easy. Below are some pitfalls we’ve seen in our experience and that were also common themes at the Reuters event.
Your Corporate Culture Is Afraid of Failure
If your corporate culture avoids failure at all costs, then the business will be paralyzed in making necessary changes and decisions toward digital innovation. A lack of delivery can be just as damaging as bad delivery.
Your organization should prioritize incentivizing innovation and celebrating calculated risks. A culture that embraces quick failures will lead to more innovation because teams have the psychological safety net of trying out new things. Innovation cannot happen without disruption and pushing boundaries.
You Ignore the Details and Only Focus on the Aggregate
Insurtech 1.0 of the 2000s failed (Metromile, Lemonade, etc.), but from that failure we gained valuable lessons. Ultimately, those companies taught us that anyone can grow while unintentionally losing money, and that this pitfall can be avoided by understanding the detailed events that have the greatest effect on key performance indicators.
Insurtech 1.0 leaders wanted to grow fast at all costs, but when these companies IPO’d, they flopped. Why? The short answer is that they focused only on growth and ignored the critical importance of high-quality underwriting. The growth-focused mindset led these insurtech companies to write bad business to very risky customers (without realizing it!) because they were ignoring the “black swan” events that can have a major effect on your loss ratio.
Your insurance company should take note of the painful lessons Insurtech 1.0 had to go through. Be mindful of how you are growing by using technology to understand the primary drivers of cost.
You Don’t Pursue an Initiative Because It Doesn’t Have a Quick ROI
Innovation initiatives don’t always have an instant ROI, but that shouldn’t scare you off them. The results of new technologies often aren’t immediately clear and can take time to come to fruition. Auto insurers’ use of telematics is an example of a trend worth pursuing, even though the ROI initially feels ambiguous.
To increase your confidence in documenting ROI, utilize historical data sources to establish your baseline. You can’t measure the impact of a new solution without comparing the before and after! From there, you can select which metrics to track to determine ROI. By leveraging your historical data, you can gather new data, leverage all data sets, and create new value.
How can you avoid these pitfalls?
The conference showed us that there are plenty of promising new technologies, solutions, and frameworks to help insurers resolve these commonly seen pain points. Below are key ways newly developed products can contribute to a successful digital transformation of your insurance offerings:
Create a Collaborative and Cross-Functional Corporate Culture
In order to drive an innovation-centric strategy, your insurance company must promote the right culture to support it. Innovation shouldn’t be centralized; take a strong interest in the new technologies and ideas deployed by individuals across the business. Additionally, you should develop a technical plan that ties back to the business strategy. A common goal, and alignment toward that goal, will foster teamwork and shared responsibility around innovation initiatives.
Ultimately, you want to land in a place where you have created a culture of innovation. This should be a grassroots approach where every member of the organization feels capable and empowered to develop the ideas of today into the innovations and insurance products of tomorrow. Prioritize diversity of perspectives, access to leadership, employee empowerment, and alignment on results.
Become More Customer-Centric and Less Operations-Focused
Your insurance company should make a genuine effort to understand your customers fully. This allows you to create tailored customer experiences for greater customer satisfaction. Empower your agents to use data to personalize and customize their touchpoints to the customer, and they can provide memorable customer experiences for your policyholders.
Fraud indicators, quote modifiers, and transaction-centric features are operations-focused ways to use your data warehouse. These tools are helpful, but they can distract you from building a customer-oriented data warehouse. Your insurance business should make customers the central pillar of your technologies and frameworks.
Pilot Technologies Based on Your Company’s Strategic Business Goals
Every insurance business has a different starting point, and you have to deal with the cards that you are dealt. Start by understanding what your technology gap is and where you can reduce the pain points. From there, you can build a strong case for change and begin to implement the tools, frameworks, and processes needed to do so.
Once you have established your business initiatives, there are powerful technologies for insurance companies that can help you transform and achieve your goals. For example, using data integration and data warehousing on cloud platforms, such as Snowflake, can enable KPI discovery and self-service. Another example is artificial intelligence and machine learning, which can help your business with underwriting transformation and provide you with “Next Best Action” by combining customer interests with the objectives of your business.
Conclusion
Any tool or model you have in production today is already “legacy.” Digital insurance innovation doesn’t just mean upgrading your technologies and tools. It means creating an entire ecosystem and culture to form hypotheses, take measured risks, and implement the results! A corporate shift to embrace change in the insurance industry can seem overwhelming, but partnering with 2nd Watch, which has experts in both the technology and the insurance industry, will set your innovation projects up for success. Contact us today to learn how we can help you revolutionize your business!
Analytics and machine learning technologies are revolutionizing the insurance industry. Rapid fraud detection, improved self-service, better claims handling, and precise customer targeting are just some of the possibilities. Before you jump headfirst into an insurance analytics project, however, you need to take a step back and develop an enterprise data strategy for insurance that will ensure long-term success across the entire organization.
Here are the basics to help get you started – and some pitfalls to avoid.
The Foundation of Data Strategy for Insurance
Identify Your Current State
What are your existing analytics capabilities? In our experience, data infrastructure and analysis are rarely implemented in a tidy, centralized way. Departments and individuals choose to implement their own storage and analytical programs, creating entire systems that exist off the radar. Evaluating the current state and creating a roadmap empowers you to conduct accurate gap analysis and arrange for all data sources to funnel into your final analytics tool.
Define Your Future State
A strong ROI depends on a clear and defined goal from the start. For insurance analytics, that means understanding the type of analytics capabilities you need (e.g., real-time analytics, predictive analytics) and the progress you want to make (e.g., more accurate premiums, reduced waste, more personalized policies). Through stakeholder interviews and business requirements, you can determine the exact fixes needed and reduce waste during the implementation process.
Pitfalls to Avoid
Even with a solid roadmap, some common mistakes can hinder the end result of your insurance analytics project. Keep these in mind during the planning and implementation phases.
Don’t Try to Eat the Elephant in One Bite
Investing $5 million in an all-encompassing enterprise-wide platform is good in theory. However, that’s a hefty price tag for an untested concept. We recommend our clients start on a more strategic proof of concept that can provide ROI in months rather than years.
Maximize Your Data Quality
Your insights are only as good as your data. Even the best-constructed data hub cannot turn low-quality data into gems. Data quality management within your business provides a framework for better outcomes by identifying old or unreliable data. But your team needs to take it to the next level, taking care to input accurate and timely data that your internal systems can use for analysis.
Align Analytics with Your Strategic Goals
Alignment with your strategic goals is a must for any insurance analytics project. There needs to be consensus among all necessary stakeholders – business divisions, IT, and top business executives – or each group will pull the project in different directions. This challenge is avoidable if the right stakeholders and users are included in planning the future state of your analytics program.
Integrate Analytics with Your Whole Business
Incompatible systems result in significant waste in any organization. If an analytics system cannot access the range of data sources it needs to evaluate, then your findings will fall short. During one project, our client wanted to launch a claims system and assumed it would be a simple integration of a few systems. When we conducted our audit, we found that 25 disparate source systems existed. Taking the time up front to run these types of audits prevents headaches down the road when you can’t analyze a key component of a given problem.
If you have any questions or are looking for additional guidance on analytics, machine learning, or data strategy for insurance, 2nd Watch’s insurance data and analytics team is happy to help. Feel free to contact us here.
Insurance is a data-heavy industry with a huge upside to leveraging business intelligence. Today, we will discuss the approach we use at 2nd Watch to build out a data warehouse for insurance clients.
Understand the Value Chain and Create a Design
At its most basic, the insurance industry can be described by its cash inflows and outflows (e.g., the business will collect premiums based on effective policies and payout claims resulting from accidents). From here, we can describe the measures that are relevant to these activities:
Policy Transactions: Quote, Written Premium, Fees, Commission
Billing Transactions: Invoice, Taxes
Claim Transactions: Payment, Reserve
Payment Transactions: Received Amount
From these four core facts, we can collaborate with subject matter experts to identify the primary “describers” of these measures. For example, a policy transaction will need to include information on the policyholder, coverage, covered items, dates, and connected parties. By working with the business users and analyzing the company’s front-end software like Guidewire or Dovetail, we can design a structure to optimize reporting performance and scalability.
Develop a Data Flow
Here is a quick overview:
Isolate your source data in a “common landing area” (CLA): We have been working with an insurance client with 20+ data sources (many from acquisitions). The first step of our process is to identify the source tables we need to build out the warehouse and load the information into a staging database. (We create a schema per source and automate most of the development work.)
Denormalize and combine data into a data hub: After staging the data in the CLA, our team creates “Get” Stored Procedures to combine the data into common tables. For example, at one client, we have 13 sources with policy information (policy number, holder, effective date, etc.) that we combined into a single [Business].[Policy] table in our database. We also created tables for tracking other dimensions and facts such as claims, billing, and payment.
Create a star schema warehouse: Finally, the team loads the business layer into the data warehouse by assigning surrogate keys to the dimensions, creating references in the facts, and structuring the tables in a star schema, as sketched below. If designed correctly, any modern reporting tool, from Tableau to SSRS, will be able to connect to the data warehouse and generate high-performance reporting.
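Here is a compact sketch of that surrogate-key step, assuming the combined business-layer tables already exist (file and column names are hypothetical):

```python
# Sketch: assigning surrogate keys during the warehouse load, so facts
# reference dimensions by warehouse-owned keys (schema is illustrative).
import pandas as pd

policy = pd.read_parquet("business_policy.parquet")  # combined policy table
claims = pd.read_parquet("business_claim.parquet")   # combined claim table

# Surrogate key: an integer owned by the warehouse, independent of any
# source system's natural IDs.
dim_policy = policy.drop_duplicates("policy_number").reset_index(drop=True)
dim_policy["policy_key"] = dim_policy.index + 1

# The fact row carries the surrogate key rather than the natural key.
fact_claim = claims.merge(dim_policy[["policy_number", "policy_key"]],
                          on="policy_number", how="left")
fact_claim = fact_claim.drop(columns=["policy_number"])
fact_claim.to_parquet("fact_claim.parquet")
```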
Produce Reports, Visualizations, and Analysis
By combining your sources into a centralized data warehouse for insurance, the business has created a single source of the truth. From here, users have a well of data to extract operational metrics, build predictive models, and generate executive dashboards. The potential for insurance analytics is endless: premium forecasting, geographic views, fraud detection, marketing, operational efficiency, call-center tracking, resource optimization, cost comparisons, profit maximization, and so much more!