Data Clean Rooms: Share Your Corporate Data Fearlessly

Data sharing has become more complex, both in its application and our relationship to it. There is a tension between the need for personalization and the need for privacy. Businesses must share data to be effective and ultimately provide tailored customer experiences. However, legislation and practices regarding data privacy have tightened, and data sharing is tougher and fraught with greater compliance constraints than ever before. The challenge for enterprises is reconciling the increased demand for data with increased data protection.

Data Clean Rooms

The modern world runs on data. Companies share data to facilitate their daily operations. Data distribution occurs between business departments and external third parties. Even something as innocuous as exchanging Microsoft Excel and Google Sheets spreadsheets is data sharing!

Data collaboration is entrenched in our business processes. Therefore, rather than avoiding it, we must find the tools and frameworks to support secure and privacy-compliant data sharing. So how do we govern the flow of sensitive information from our data platforms to other parties?

The answer: data clean rooms. Data clean rooms are the modern vehicle for various data sharing and data governance workflows. Across industries – including media and entertainment, advertising, insurance, private equity, and more – a data clean room can be the difference-maker in your data insights.

Ready to get started with a data clean room solution? Schedule time to talk with a 2nd Watch data expert.

What is a data clean room?

There is a classic thought experiment wherein two millionaires want to find out who is richer without actually sharing how much money they are individually worth. The data clean room solves this issue by allowing parties to ask approved questions, which require external data to answer, without actually sharing the sensitive information itself!

In other words, a data clean room is a framework that allows two parties to securely share and analyze data by granting both parties control over when, where, and how said data is used. The parties involved can pool together data in a secure environment that protects private details. With data clean rooms, brands can access crucial and much-needed information while maintaining compliance with data privacy policies.
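To make the matching idea concrete, here is a minimal sketch in Python: both parties hash their customer identifiers with a shared salt and compare only the tokens, so raw PII never changes hands. This is purely illustrative (production clean rooms use much stronger protocols, such as private set intersection), and all names and the salt below are invented:

```python
import hashlib

def tokenize(emails, salt):
    # Each party hashes its own identifiers locally, so raw
    # emails never leave that party's environment.
    return {hashlib.sha256((salt + e.lower()).encode()).hexdigest()
            for e in emails}

SALT = "shared-secret-salt"  # agreed upon inside the clean room

brand_tokens = tokenize(["ada@example.com", "bob@example.com"], SALT)
publisher_tokens = tokenize(["bob@example.com", "eve@example.com"], SALT)

# Only the aggregate overlap count is released, never the raw lists.
overlap = brand_tokens & publisher_tokens
print(len(overlap))  # → 1
```

Only the aggregate result leaves the environment; neither party sees the other’s full customer list.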

Data clean rooms have been around for about five years, with Google the first company to launch a data clean room solution (Google Ads Data Hub) in 2017. The era of user privacy kicked off in 2018, when data protection and privacy became law, most notably with the General Data Protection Regulation (GDPR).

This was a huge shake-up for most brands. Businesses had to adapt their data collection and sharing models to operate within the scope of the new legislation and the walled gardens that became popular amongst all tech giants. With user privacy becoming a priority, data sharing has become stricter and more scrutinized, which makes marketing campaign measurements and optimizations in the customer journey more difficult than ever before.

Data clean rooms are crucial for brands navigating the era of consumer protection and privacy. Brands can still gain meaningful marketing insights and operate within data privacy laws in a data clean room.

Data clean rooms work because the parties involved have full control over their data. Each party agrees upon access, availability, and data usage, while a trusted data clean room offering oversees data governance. This yields the secure framework needed to ensure that one party cannot access the other’s data, and it upholds the foundational rule that individual, user-level data cannot be shared between different parties without consent.

Personally identifiable information (PII) remains anonymized and is processed and stored in a way that is not exposed to any of the parties involved. Thus, data sharing within a data clean room complies with privacy regulations such as the GDPR and the California Consumer Privacy Act (CCPA).

How does a data clean room work?

Let’s take a deeper dive into the functionality of a data clean room. Four components are involved with a data clean room:

#1 – Data ingestion
Data is funneled into the data clean room. This can be first-party data (generated from websites, applications, CRMs, etc.) or second-party data from collaborating parties (such as ad networks, partners, and publishers).

#2 – Connection and enrichment
The ingested data sets are matched at the user level. Tools like third-party data enrichment complement the data sets.

#3 – Analytics
The data is analyzed for intersections and overlaps, measurement and attribution, and propensity scoring. Data is only shared where the data points intersect between the two parties.

#4 – Application
Once the data has finished its data clean room journey, each party receives aggregated data outputs. These outputs provide the business insights needed to accomplish crucial tasks such as optimizing the customer experience, measuring reach and frequency, building effective cross-platform journeys, and conducting deep marketing campaign analyses.
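The four steps above can be sketched end-to-end in a toy example. The records, segment names, and the minimum-aggregation threshold below are fabricated for illustration; no vendor API is implied:

```python
from collections import Counter

# 1 – Ingestion: each party contributes its own records (user_id, value)
brand_data = [("u1", "home"), ("u2", "auto"), ("u3", "auto")]
partner_data = [("u2", "clicked_ad"), ("u3", "clicked_ad"), ("u4", "clicked_ad")]

# 2 – Connection: match the data sets at the user level
brand_users = {u for u, _ in brand_data}
matched = [(u, s) for u, s in partner_data if u in brand_users]

# 3 – Analytics: only the intersection of the two parties is analyzed
segments = dict(brand_data)
counts = Counter(segments[u] for u, _ in matched)

# 4 – Application: release only aggregates above a privacy threshold
MIN_AGGREGATE = 2
report = {seg: n for seg, n in counts.items() if n >= MIN_AGGREGATE}
print(report)  # → {'auto': 2}
```

The minimum-aggregation threshold mimics a common clean room rule: results covering too few individuals are suppressed so no single user can be re-identified.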

What are the benefits of a data clean room?

Data clean rooms can benefit businesses in any industry, including media, retail, and advertising. In summary, data clean rooms are beneficial for the following reasons:

You can enrich your partner’s data set.
With data clean rooms, you can collaborate with your partners to produce and consume data regarding overlapping customers. You can pool common customer data with your partners, find the intersection between your business and your partners, and share the data upstream without sharing sensitive information with competitors. An example would be sharing demand and sales information with an advertising partner for better-targeted marketing campaigns.

You can create governance within your enterprise.
Data clean rooms provide the framework to achieve the elusive “single source of truth.” You can create a golden record encompassing all the data in every system of record within your organization. This includes sensitive PII such as social security numbers, passport numbers, financial account numbers, transactional data, etc.

You can remain policy compliant.
In a data clean room environment, you can monitor where the data lives, who has access to it, and how it is used. Think of the data clean room as an automated middleman that validates requests for data. This allows you to share data while remaining compliant with all the important acronyms: GDPR, HIPAA, CCPA, FCRA, ECPA, etc.

But you have to do it right…

As with every data security and analytics initiative, there are risks if the implementation is not done correctly. A truly “clean” data clean room will allow you to unlock data for your users while remaining privacy compliant. You can maintain role-based access, tokenized columns, and row-level security – which typically lock down particular data objects – and share these sensitive data sets quickly and in a governed way. Data clean rooms satisfy both the need for efficient access and the data producer’s need to limit the consumer to information relevant to their use case.

Of course, there are consequences if your data clean room is actually “dirty.” Your data must be federated, and you need clarity on how your data is stored. If your room is dirty, you risk:

  • Loss of customer trust
  • Fines from government agencies
  • Inadvertently oversharing proprietary information
  • Locking out valuable data requests due to a lack of process

Despite the potential risks of utilizing a data clean room, it is the most promising solution to the challenges of data-sharing in a privacy-compliant way.


To get the most out of your data, your business needs to create secure processes to share data and decentralize your analytics. This means pooling together common data with your partners and distributing the work to create value for all parties involved.

However, you must govern your data. It is imperative to treat your data like an asset, especially in the era of user privacy and data protection. With data clean rooms, you can reconcile the need for data collaboration with the need for data ownership and privacy.

2nd Watch can be your data clean room guide, helping you to establish a data mesh that enables sharing and analyzing distributed pools of data, all while maintaining centralized governance. Schedule time to get started with a data clean room.

Fred Bliss – CTO Data Insights 2nd Watch 


Snowflake’s Role in Data Governance for Insurance: Data Masking and Object Tagging Features

Data governance is a broad-ranging discipline that affects everyone in an organization, whether directly or indirectly. It is most often employed to improve and consistently manage data through deduplication and standardization, among other activities, and can have a significant and sustained effect on reducing operational costs, increasing sales, or both.

Snowflake’s Role in Data Governance for Insurance

Data governance can also be part of a more extensive master data management (MDM) program. The MDM program an organization chooses and how they implement it depends on the issues they face and both their short- and long-term visions.

For example, in the insurance industry, many companies sell various types of insurance policies that renew annually over a number of years, such as industrial property coverage and workers’ compensation casualty coverage. Two sets of underwriters will more than likely underwrite the business. Having two sets of underwriters using data systems specific to their lines of business is an advantage when meeting customers’ coverage needs. It often becomes a disadvantage when considering all of the data together, but it doesn’t have to be.

The disadvantage arises when an agent or account executive needs to know the overall status of a client, including long-term profitability during all the years of coverage. This involves pulling data from policy systems, claims systems, and customer support systems. An analyst may be tasked with producing a client report for the agent or account executive to truly understand their client and make better decisions on both the client and company’s behalf. But the analyst may not know where the data is stored, who owns the data, or how to link clients across disparate systems.

Fifteen years ago, this task was very time-consuming and even five years ago was still quite cumbersome. Today, however, this issue can be mitigated with the correct data governance plan. We will go deeper into data governance and MDM in upcoming posts; but for this one, we want to show you how innovators like Snowflake are helping the cause.

What is data governance?

Data governance ensures that data is consistent, accurate, and reliable, which allows for informed and effective decision-making. This can be achieved by centralizing data from a few (or many) siloed locations into one place. Making data accessible in one location enables data users to understand and analyze it and make effective decisions. One way to accomplish this centralization is to implement the Snowflake Data Cloud.

Snowflake not only enables a company to store and query its data inexpensively for analytics, but it can also foster data governance. Dynamic data masking and object tagging are two newer Snowflake features that can supplement a company’s data governance initiative.

What is dynamic data masking?

Dynamic data masking is a Snowflake security feature that selectively masks plain-text data in table and view columns at query time, based on predefined masking policies. The purpose of masking, or hiding data in specific columns, is to ensure that data is accessed on a need-to-know basis. Such data is most likely sensitive and doesn’t need to be accessed by every user.

When is dynamic data masking used?

Data masking is usually implemented to protect personally identifiable information (PII), such as a person’s social security number, phone number, home address, or date of birth. An insurance company would likely want to reduce risk by hiding data pertaining to sensitive information if they don’t believe access to the data is necessary for conducting analysis.

However, data masking can also be used in non-production environments where testing needs to be conducted on an application. Users testing the environment wouldn’t need to see real customer data if their role is just to test the environment and application. Additionally, data masking may be used to adhere to compliance requirements like HIPAA.
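Snowflake masking policies are defined in SQL, but the underlying idea is easy to sketch: a policy is a function of the querying role and the column value. The Python below is a conceptual illustration only; the role names and masking rules are invented:

```python
def mask_ssn(role: str, value: str) -> str:
    # Privileged roles see plain text; a support role sees only the
    # last four digits; everyone else sees a fully masked value.
    if role == "PII_ADMIN":
        return value
    if role == "SUPPORT":
        return "***-**-" + value[-4:]
    return "*********"

print(mask_ssn("ANALYST", "123-45-6789"))    # → *********
print(mask_ssn("SUPPORT", "123-45-6789"))    # → ***-**-6789
print(mask_ssn("PII_ADMIN", "123-45-6789"))  # → 123-45-6789
```

Because the policy is evaluated at query time, the stored data is never changed; each user simply sees the version of the column their role permits.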

What is object tagging?

Another data governance resource within Snowflake is object tagging. Object tagging enables data stewards to track sensitive data for compliance and discovery, and to group objects such as warehouses, databases, tables, views, and columns.

When a tag is created for a table, view, or column, data stewards can determine if the data should be fully masked, partially masked, or unmasked. When tags are associated with a warehouse, a user with the tag role can view the resource usage of the warehouse to determine what, when, and how this object is being utilized.

When is object tagging used?

There are several instances where object tagging can be useful. One is applying a “PII” tag to a column, with additional text describing the type of PII stored there. Another is creating a tag for a warehouse dedicated to the sales department, enabling you to track usage and deduce why a specific warehouse is being used.
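Conceptually, a tag is just metadata attached to an object. The toy Python registry below illustrates the discovery use case described above; the object and tag names are invented, and this is not Snowflake’s actual API:

```python
# A toy tag registry mapping objects (or columns) to tags,
# mirroring the idea behind Snowflake object tags.
tags = {}

def set_tag(obj, tag, value):
    tags.setdefault(obj, {})[tag] = value

set_tag("CUSTOMERS.SSN", "PII", "social security number")
set_tag("CUSTOMERS.EMAIL", "PII", "email address")
set_tag("WH_SALES", "COST_CENTER", "sales")

# Discovery: find every column tagged as PII
pii_columns = [obj for obj, t in tags.items() if "PII" in t]
print(pii_columns)  # → ['CUSTOMERS.SSN', 'CUSTOMERS.EMAIL']
```

In practice, a query like this is how a data steward answers “where does sensitive data live?” across thousands of objects without inspecting each one.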

Where can data governance be applied?

Data governance applies to any industry that maintains vast amounts of data across its systems, including healthcare, supply chain and logistics, and insurance. An effective data governance strategy may use data masking and object tagging in conjunction with each other.

As previously mentioned, one common use case for data masking is for insurance customers’ PII. Normally, analysts wouldn’t need to analyze the personal information of a customer to uncover useful information leading to key business decisions. Therefore, the administrator would be able to mask columns for the customer’s name, phone number, address, social security number, and account number without interfering with analysis.

Object tagging is also valuable within the insurance industry as there is such a vast amount of data collected and consumed. A strong percentage of that data is sensitive information. Because there is so much data and it can be difficult to track those individual pieces of information, Snowflake’s object tagging feature can help with identifying and tracking the usage of those sensitive values for the business user.

Using dynamic data masking and object tagging together, you can gain insight into where your sensitive data lives and how heavily specific warehouses, tables, or columns are being used.

Think back to the situation mentioned earlier, where the property coverage sales department is on legacy system X while the workers’ compensation sales department is on legacy system Y. How are you supposed to create a report on the profitability of these two departments?

One option is to use Snowflake to store all of the data from both legacy systems. Once the information is in the Snowflake environment, object tagging would allow you to tag the databases or tables that involve data about their respective departments. One tag can be specified for property coverage and another tag can be set for workers’ compensation data. When you’re tasked with creating a report of profitability involving these two departments, you can easily identify which information can be used. Because the tag was applied to the database, it will also be applied to all of the tables and their respective columns. You would be able to understand what columns are being used. After the data from both departments is accessible within Snowflake, data masking can then be used to ensure that the new data is only truly accessible to those who need it.

This was just a small introduction to data governance and the new features that Snowflake has available to enable this effort. Don’t forget that this data governance effort can be a part of a larger, more intricate MDM initiative. In other blog posts, we touch more on MDM and other data governance capabilities to maintain and standardize your data, helping you make the most accurate and beneficial business decisions. If you have any questions in the meantime, feel free to get in touch.


The Critical Role of Data Governance in the Insurance Industry

Data Governance in the Insurance Industry

Insurers are privy to large amounts of data, including personally identifiable information. Your business requires you to store information about your policyholders and your employees, putting many people at risk if your data isn’t well-secured.

However, data governance in insurance goes beyond insurance data security. An enterprise-wide data governance strategy ensures data is consistent, accurate, and reliable, allowing for informed and effective decision-making.

If you aren’t convinced that your insurance data standards need a second look, read on to learn about the impact data governance has on insurance, the challenges you may face, and how to develop and implement a data governance strategy for your organization.

Why Data Governance Is Critical in the Insurance Industry

As previously mentioned, insurance organizations handle a lot of data; and the amount of data you’re storing likely grows day by day. Data is often siloed as it comes in, making it difficult to use at an enterprise level. With growing regulatory compliance concerns – such as the impact of the EU’s General Data Protection Regulation (GDPR) in insurance and other regulations stateside – as well as customer demands and competitive pressure, data governance can’t be ignored.

Having quality, actionable data is a crucial competitive advantage in today’s insurance industry. If your company lacks a “single source of truth” in your data, you’ll have trouble accurately defining key performance indicators, efficiently and confidently making business decisions, and using your data to increase profitability and lower your business risks.

Data Governance Challenges in Insurance

Data governance is critical in insurance, but it isn’t without its challenges. While these data governance challenges aren’t insurmountable, they’re important to keep in mind:

  • Many insurers lack the people, processes, and technology to properly manage their data in-house.
  • As the amount of data you collect grows and new technologies emerge, insurance data governance becomes increasingly complicated – but also increasingly critical.
  • New regulatory challenges require new data governance strategies or at least a fresh look at your existing plan. Data governance isn’t a “one-and-done” pursuit.
  • Insurance data governance efforts require cross-company collaboration. Data governance isn’t effective when data is siloed within your product lines or internal departments.
  • Proper data governance may require investments you didn’t budget for and red tape can be difficult to overcome, but embarking on a data governance project sooner rather than later will only benefit you.

How to Create and Implement a Data Governance Plan

Creating a data governance plan can be overwhelming, especially when you take regulatory and auditing concerns into account. Working with a company like 2nd Watch can take some of the pressure off as our expert team members have experience crafting and implementing data management strategies customized to our clients’ situations.

Regardless of if you work with a data consulting firm or go it on your own, the process should start with a review of the current state of data governance in your organization and a determination of your needs. 2nd Watch’s data consultants can help with a variety of data governance needs, including data governance strategy; master data management; data profiling, cleansing, and standardization; and data security.

The next step is to decide who will have ultimate responsibility for your data governance program. 2nd Watch can help you establish a data governance council and program, working with you to define roles and responsibilities and then create and document policies, processes, and standards.

Finally, through the use of technologies chosen for your particular situation, 2nd Watch can help automate your chosen processes to improve your data governance maturity level and facilitate the ongoing effectiveness of your data governance program.

If you’re interested in discussing how insurance data governance could benefit your organization, get in touch with a 2nd Watch data consultant for a no-cost, no-risk dialogue.


How Machine Learning Can Benefit the Insurance Industry

In 2020, the U.S. insurance industry was worth a whopping $1.28 trillion. High premium volumes show no signs of slowing down and make the American insurance industry one of the largest markets in the world. The massive volume of premiums means there is an astronomical amount of data involved. Without artificial intelligence (AI) technologies like machine learning (ML), insurance companies will have a near-impossible time processing all that data, creating greater opportunities for insurance fraud.

How Machine Learning Can Benefit the Insurance Industry

Insurance data is vast and complex, comprising many individuals, many instances, and many factors used in determining claims. Moreover, the type of insurance increases the complexity of data ingestion and processing. Life insurance is different from automobile insurance, health insurance is different from property insurance, and so forth. While some of the processes are similar, the data and the multitude of flows can vary greatly.

As a result, insurance enterprises must prioritize digital initiatives to handle huge volumes of data and support vital business objectives. In the insurance industry, advanced technologies are critical for improving operational efficiency, providing excellent customer service, and, ultimately, increasing the bottom line.

ML can handle the size and complexity of insurance data. It can be implemented in multiple aspects of the insurance practice and facilitates improvements in customer experience, claims processing, risk management, and other general operational efficiencies. Most importantly, ML can mitigate the risk of insurance fraud, which plagues the entire industry. It is a big development in fraud detection, and insurance organizations must add it to their fraud prevention toolkits.

In this article, we lay out how insurance companies are using ML to improve their insurance processes and flag insurance fraud before it affects their bottom lines. Read on to see how ML can fit within your insurance organization. 

What is machine learning?

ML is a technology under the AI umbrella. It is designed to analyze data so computers can make predictions and decisions based on patterns identified in historical data, all without being explicitly programmed and with minimal human intervention. As more data is produced, ML solutions grow smarter, adapting autonomously and learning continuously. Ultimately, AI/ML will handle menial tasks and free human agents to perform more complex requests and analyses.
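As a toy illustration of the pattern-learning idea, the sketch below “learns” from labeled historical examples and labels new data by its nearest past example. The data and labels are fabricated, and a real model would use far richer features than a single amount:

```python
def nearest_neighbor_predict(history, amount):
    # "Learn" by memorizing labeled historical examples, then label
    # a new data point using the closest past example (1-NN).
    return min(history, key=lambda ex: abs(ex[0] - amount))[1]

# Historical data: (claim_amount, outcome)
history = [(500, "approved"), (700, "approved"),
           (9000, "review"), (12000, "review")]

print(nearest_neighbor_predict(history, 650))    # → approved
print(nearest_neighbor_predict(history, 10000))  # → review
```

The point is the workflow, not the algorithm: historical data in, a learned decision rule out, no hand-written if/else logic for each case.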

What are the benefits of ML in the insurance industry?

There are several use cases for ML within an insurance organization regardless of insurance type. Below are some top areas for ML application in the insurance industry:

Lead Management

For insurers and salespeople, ML can identify leads using valuable insights from data. ML can even personalize recommendations according to the buyer’s previous actions and history, which enables salespeople to have more effective conversations with buyers. 

Customer Service and Retention

For a majority of customers, insurance can seem daunting, complex, and unclear. It’s important for insurance companies to assist their customers at every stage of the process in order to increase customer acquisition and retention. ML-powered chatbots on messaging apps can be very helpful in guiding users through claims processing and answering basic frequently asked questions. These chatbots use neural networks that can be developed to comprehend and answer most customer inquiries via chat, email, or even phone. Additionally, ML can analyze data to determine a customer’s risk, which can be used to recommend the offer with the highest likelihood of retaining that customer.

Risk Management

ML utilizes data and algorithms to instantly detect potentially abnormal or unexpected activity, making ML a crucial tool in loss prediction and risk management. This is vital for usage-based insurance devices, which determine auto insurance rates based on specific driving behaviors and patterns. 

Fraud Detection

Unfortunately, fraud is rampant in the insurance industry. Property and casualty (P&C) insurance alone loses about $30 billion to fraud every year, and fraud occurs in nearly 10% of all P&C losses. Overall, insurance fraud steals at least $80 billion every year from American consumers. ML can mitigate this issue by identifying potential claim situations early in the claims process. Flagging early allows insurers to investigate and correctly identify a fraudulent claim. 

Claims Processing

Claims processing is notoriously arduous and time-consuming. ML technology is the perfect tool to reduce processing costs and time, from the initial claim submission to reviewing coverages. Moreover, ML supports a great customer experience because it allows the insured to check the status of their claim without having to reach out to their broker/adjuster.

Why is ML so important for fraud detection in the insurance industry?

Fraud is the biggest problem for the insurance industry, so let’s return to the fraud detection stage in the insurance lifecycle and detail the benefits of ML for this common issue. Considering the insurance industry consists of more than 7,000 companies that collect more than $1 trillion in premiums each year, there are huge opportunities and incentives for insurance fraud to occur.  

Insurance fraud is an issue that has worsened since the COVID-19 pandemic began. Some industry professionals believe that the number of claims with some element of fraud has almost doubled since the pandemic. 

Below are the various stages in which insurance fraud can occur during the insurance lifecycle:

  • Application Fraud: This fraud occurs when false information is intentionally provided in an insurance application. It is the most common form of insurance fraud.
  • False Claims Fraud: This fraud occurs when insurance claims are filed under false pretenses (i.e., faking death in order to collect life insurance benefits).
  • Forgery and Identity Theft Fraud: This fraud occurs when an individual tries to file a claim under someone else’s insurance.
  • Inflation Fraud: This fraud occurs when an additional amount is tacked onto the total bill when the insurance claim is filed. 

Based on the amount of fraud and its many forms, insurance companies should consider adding ML to their fraud detection toolkits. Without ML, insurance agents can be overwhelmed by the time-consuming process of investigating each case. The following ML approaches and algorithms facilitate fraud detection:

  • Deep Anomaly Detection: During claims processing, this approach will analyze real claims and identify false ones. 
  • Supervised Learning: Using predictive data analysis, this is the ML approach most commonly used for fraud detection. The algorithm is trained on input data already labeled “good” or “bad” and learns to classify new claims accordingly.
  • Semi-supervised Learning: This algorithm is used for cases where labeling information is impossible or highly complex. It stores data about critical category parameters even when the group membership of the unlabeled data is unknown.
  • Unsupervised Learning: This model can flag unusual actions with transactions and learns specific patterns in data to continuously update its model. 
  • Reinforcement Learning: Collecting information about the environment, this algorithm automatically verifies and contextualizes behaviors in order to find ways to reduce risk.
  • Predictive Analytics: This algorithm accounts for historical data and existing external data to detect patterns and behaviors.
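As a minimal, dependency-free sketch of the anomaly-detection idea, the function below flags claim amounts that deviate sharply from the median, using the median absolute deviation (MAD). A production fraud model would use far richer features and one of the approaches listed above; the figures here are fabricated:

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    # Robust outlier test: score each claim by its deviation from the
    # median, scaled by the median absolute deviation (MAD). A toy
    # stand-in for a real anomaly-detection model.
    med = statistics.median(amounts)
    mad = statistics.median([abs(a - med) for a in amounts])
    return [i for i, a in enumerate(amounts)
            if mad and 0.6745 * abs(a - med) / mad > threshold]

claims = [1800, 2100, 1950, 2200, 2050, 1900, 25000]  # last claim inflated
print(flag_anomalies(claims))  # → [6]
```

Flagged claims aren’t automatically denied; they’re routed to a human investigator, which is exactly the early-flagging workflow described above.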

ML is instrumental in fraud prevention and detection. It allows companies to identify claims suspected of fraud quickly and accurately, process data efficiently, and avoid wasting valuable human resources.


Implementing digital technologies, like ML, is vital for insurance businesses to handle their data and analytics. It allows insurance companies to increase operational efficiency and mitigate the top-of-mind risk of insurance fraud.

Working with a data consulting firm can help onboard these hugely beneficial technologies. By partnering with 2nd Watch for data analytics solutions, insurance organizations have experienced improved customer acquisition, underwriting, risk management, claims analysis, and other vital parts of their operations.

2nd Watch is here to work collaboratively with you and your team to design your future-state data and analytics environment. Request a complimentary insurance data strategy session today!


Our Takeaways from Insurance AI and Innovative Tech 2022

The 2nd Watch team attended the Reuters Insurance AI and Innovative Tech conference this past month, and we took away a lot of insightful perspectives from the speakers and leaders at the event. The insurance industry has a noble purpose in the world: insurance organizations strive to provide fast service to customers suffering from injury and loss, all while allowing their agents to be efficient and profitable. For this reason, insurance companies need to constantly innovate to satisfy all parties involved in the value chain.

AI and Innovation in the Insurance Industry

But this is no easy business model. Ensuring the satisfaction and success of all parties is becoming increasingly difficult for the following reasons:

  • The expectations and standards for a good customer experience are very high.
  • Insurers have a monumental amount of data to ingest and process.
  • The skills required to build useful analyses are at a premium.
  • It is easy to fail or get poor ROI on a technical initiative.

To keep up with the revolution, traditional insurance companies must undergo a massive digital transformation that supports a data-driven decision-making model. However, this sort of shift is daunting and riddled with challenges throughout the process. In presenting you with our takeaways from this eye-opening conference, we hope to address the challenges associated with redefining your insurance company and highlight new solutions that can help you tackle these issues head-on.

What are the pitfalls of an insurer trying to innovate?

The paradigm in the insurance industry has changed. As a result, your insurance business must adapt and improve digital capabilities to keep up with the market standards. While transformation is vital, it isn’t easy. Below are some pitfalls we’ve seen in our experience and that were also common themes at the Reuters event.

Your Corporate Culture Is Afraid of Failure

If your corporate culture avoids failure at all costs, then the business will be paralyzed in making necessary changes and decisions toward digital innovation. A lack of delivery can be just as damaging as bad delivery.

Your organization should prioritize incentivizing innovation and celebrating calculated risks. A culture that embraces quick failures will lead to more innovation because teams have the psychological safety net of trying out new things. Innovation cannot happen without disruption and pushing boundaries. 

You Ignore the Details and Only Focus on the Aggregate

Insurtech 1.0 of the 2000s failed (Metromile, Lemonade, etc.), but from failure, we gained valuable lessons. Ultimately, these failures taught us that a company can grow rapidly while unintentionally losing money, and that the way to avoid this pitfall is to understand the detailed events that have the greatest effect on key performance indicators.

Insurtech 1.0 leaders wanted to grow fast at all costs, but when these companies IPO’d, they flopped. Why? The short answer is that they focused only on growth and ignored the critical importance of high-quality underwriting. The growth-focused mindset led these Insurtech companies to write bad business to very risky customers (without realizing it!) because they ignored the “black swan” events that can have a major effect on a loss ratio.

Your insurance company should take note of the painful lessons Insurtech 1.0 had to go through. Be mindful of how you are growing by using technology to understand the primary drivers of cost. 
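To make the loss-ratio point above concrete, here is a minimal sketch of the standard formula (incurred losses divided by earned premium). The claim amounts and premium figures are illustrative assumptions, not data from any real book of business; the point is how a single severe “black swan” claim can swamp an otherwise healthy-looking portfolio.

```python
# Minimal sketch: loss ratio = incurred losses / earned premium.
# All numbers below are illustrative only.

def loss_ratio(incurred_losses: float, earned_premium: float) -> float:
    return incurred_losses / earned_premium

routine_claims = [1_200, 800, 950, 2_500]  # small, expected losses
severe_claim = 310_000                     # a single "black swan" event
earned_premium = 250_000

without_severe = loss_ratio(sum(routine_claims), earned_premium)
with_severe = loss_ratio(sum(routine_claims) + severe_claim, earned_premium)

print(f"loss ratio without severe claim: {without_severe:.2%}")  # 2.18%
print(f"loss ratio with severe claim:    {with_severe:.2%}")     # 126.18%
```

A loss ratio above 100% means claims alone exceed premium collected, which is exactly the kind of detail a purely growth-focused view of the aggregate hides.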

You Don’t Pursue an Initiative Because It Doesn’t Have a Quick ROI

Innovation initiatives don’t always have an instant ROI, but that shouldn’t scare you away from them. The results of new technologies often aren’t clearly defined at the outset and can take time to come to fruition. Auto insurers adopting telematics is one example of a trend worth pursuing, even though the ROI initially feels ambiguous.

To increase your confidence in documenting ROI, utilize historical data sources to establish your baseline. You can’t measure the impact of a new solution without comparing the before and after! From there, you can select which metrics to track to determine ROI. With that historical foundation in place, you can gather new data, combine all of your data sets, and create new value.
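The baseline-then-compare approach described above can be sketched in a few lines. The metric here (average claims cycle time in days, before and after a telematics rollout) and all of the numbers are hypothetical, chosen only to show the mechanics of measuring improvement against a historical baseline.

```python
# Hypothetical sketch: establish a baseline from historical data, then
# compare a post-rollout metric against it to estimate impact.
from statistics import mean

baseline_cycle_days = [14, 12, 15, 13, 16]  # before telematics rollout
post_rollout_days = [9, 11, 10, 8, 12]      # after rollout (illustrative)

baseline = mean(baseline_cycle_days)  # 14.0
post = mean(post_rollout_days)        # 10.0
improvement = (baseline - post) / baseline

print(f"Average cycle time improved by {improvement:.0%}")  # 29%
```

The same pattern applies to any metric you choose to track: fix the baseline from historical data first, so the post-rollout comparison is meaningful.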

How can you avoid these pitfalls?

The conference showed us that there are plenty of promising new technologies, solutions, and frameworks to help insurers resolve these commonly seen pain points. Below are key ways that newly developed products and practices can contribute to a successful digital transformation of your insurance offerings:

Create a Collaborative and Cross-Functional Corporate Culture

In order to drive an innovation-centric strategy, your insurance company must promote the right culture to support it. Innovation shouldn’t be centralized; encourage individuals throughout the organization to propose and deploy new technologies and ideas. Additionally, develop a technical plan that ties back to the business strategy. A common goal, and alignment toward it, will foster teamwork and shared responsibility around innovation initiatives.

Ultimately, you want to land in a place where you have created a culture of innovation. This should be a grassroots approach where every member of the organization feels capable and empowered to develop the ideas of today into the innovations and insurance products of tomorrow. Prioritize diversity of perspectives, access to leadership, employee empowerment, and alignment on results.  

Become More Customer-Centric and Less Operations-Focused

Your insurance company should make a genuine effort to understand your customers fully. This allows you to create tailored customer experiences for greater customer satisfaction. Empower your agents to use data to personalize their touchpoints, and they can provide memorable experiences for your policyholders.

Fraud indicators, quote modifiers, and transaction-centric features are operations-focused ways to use your data warehouse. These tools are helpful, but they can distract you from building a customer-oriented data warehouse. Your insurance business should make customers the central pillar of your technologies and frameworks.

Pilot Technologies Based on Your Company’s Strategic Business Goals

Every insurance business has a different starting point, and you have to deal with the cards that you are dealt. Start by understanding what your technology gap is and where you can reduce the pain points. From there, you can build a strong case for change and begin to implement the tools, frameworks, and processes needed to do so. 

Once you have established your business initiatives, there are powerful technologies for insurance companies that can help you transform and achieve your goals. For example, using data integration and data warehousing on cloud platforms, such as Snowflake, can enable KPI discovery and self-service. Another example is artificial intelligence and machine learning, which can help your business with underwriting transformation and provide you with “Next Best Action” by combining customer interests with the objectives of your business. 


Any tool or model you have in production today is already “legacy.” Digital insurance innovation doesn’t just mean upgrading your technologies and tools. It means creating an entire ecosystem and culture to form hypotheses, take measured risks, and implement the results! A corporate shift to embrace change in the insurance industry can seem overwhelming, but partnering with 2nd Watch, which has experts in both the technology and the insurance industry, will set your innovation projects up for success. Contact us today to learn how we can help you revolutionize your business!


Data Strategy for Insurance: How to Build the Right Foundation for Analytics and Machine Learning

Analytics and machine learning technologies are revolutionizing the insurance industry. Rapid fraud detection, improved self-service, better claims handling, and precise customer targeting are just some of the possibilities. Before you jump headfirst into an insurance analytics project, however, you need to take a step back and develop an enterprise data strategy for insurance that will ensure long-term success across the entire organization.

Want better dashboards? Our data and analytics insurance team is here to help. Learn more about our data visualization starter pack.


Here are the basics to help get you started – and some pitfalls to avoid.

The Foundation of Data Strategy for Insurance

Identify Your Current State

What are your existing analytics capabilities? In our experience, data infrastructure and analysis are rarely implemented in a tidy, centralized way. Departments and individuals choose to implement their own storage and analytical programs, creating entire systems that exist off the radar. Evaluating the current state and creating a roadmap empowers you to conduct accurate gap analysis and arrange for all data sources to funnel into your final analytics tool.

Define Your Future State

A strong ROI depends on a clear and defined goal from the start. For insurance analytics, that means understanding the type of analytics capabilities you need (e.g., real-time analytics, predictive analytics) and the progress you want to make (e.g., more accurate premiums, reduced waste, more personalized policies). Through stakeholder interviews and business requirements gathering, you can determine exactly what to build and reduce waste during the implementation process.

Pitfalls to Avoid

Even with a solid roadmap, some common mistakes can hinder the end result of your insurance analytics project. Keep these in mind during the planning and implementation phases.

Don’t Try to Eat the Elephant in One Bite

Investing $5 million in an all-encompassing enterprise-wide platform is good in theory. However, that’s a hefty price tag for an untested concept. We recommend our clients start with a more targeted proof of concept that can provide ROI in months rather than years.

Maximize Your Data Quality

Your insights are only as good as your data. Even a well-constructed data hub cannot turn low-quality data into gems. Data quality management within your business provides a framework for better outcomes by identifying old or unreliable data. But your team needs to take it a step further, taking care to input accurate and timely data that your internal systems can use for analysis.
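A data quality framework like the one described above often starts as a small set of explicit rules. The sketch below flags records that are stale or missing required fields; the field names (`policy_id`, `last_updated`) and the one-year staleness threshold are assumptions for illustration, not part of any particular system.

```python
# Sketch of simple data-quality rules: flag records that are stale or
# missing required fields. Field names and threshold are illustrative.
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # records older than this are considered stale

def quality_issues(record: dict, today: date) -> list[str]:
    issues = []
    if not record.get("policy_id"):
        issues.append("missing policy_id")
    if today - record["last_updated"] > MAX_AGE:
        issues.append("stale record")
    return issues

record = {"policy_id": None, "last_updated": date(2020, 1, 15)}
print(quality_issues(record, today=date(2023, 1, 15)))
# ['missing policy_id', 'stale record']
```

Running rules like these on ingest, and routing flagged records back to their owners, is one lightweight way to keep unreliable data out of downstream analysis.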

Align Analytics with Your Strategic Goals

Alignment with your strategic goals is a must for any insurance analytics project. There needs to be consensus among all necessary stakeholders – business divisions, IT, and top business executives – or each group will pull the project in different directions. This challenge is avoidable if the right stakeholders and users are included in planning the future state of your analytics program.

Integrate Analytics with Your Whole Business

Incompatible systems result in significant waste in any organization. If an analytics system cannot access the range of data sources it needs to evaluate, then your findings will fall short. During one project, our client wanted to launch a claims system and assumed it would be a simple integration of a few systems. When we conducted our audit, we found that 25 disparate source systems existed. Taking the time up front to run these types of audits prevents headaches down the road when you can’t analyze a key component of a given problem.

If you have any questions or are looking for additional guidance on analytics, machine learning, or data strategy for insurance, 2nd Watch’s insurance data and analytics team is happy to help. Feel free to contact us here.
