3 Things to Consider Before Hiring a Data Scientist

Data science can bring tremendous value to an organization, but companies often fall into a common pitfall when pursuing data science initiatives: hiring data scientists without a clear vision of their goals, business impact, and expected results.

Before embarking on the lengthy (and expensive) journey of hiring a data scientist, take a step back and make sure your organization is data science ready. This includes developing a concrete, results-focused data science strategy and auditing your underlying data to ensure your data is accurate, consistent, and complete enough to support reliable analysis.

Step 1: Develop your data science strategy.

The process of hiring a data scientist requires an immense amount of time, money, and effort. It could cost up to $30,000 just to find a candidate with the skill set and personality to fit your company. Add in a steadily increasing salary, which currently averages around $113,000 (not including benefits), and it becomes a huge investment. If you hire a data scientist without a clearly defined business goal for data science, you run the risk of burning through that investment and burning out the talent.

Showing a candidate that you have a strategy will inspire confidence in your organization and help them determine if they are up for the challenge. If you plan to hire and onboard a data scientist, you should not leave it up to them to determine their mission and where they fit. To get started on developing a strategy, have your IT team and business leaders join forces to find answers to some of the questions below:

  • What are our business problems and opportunities? Do the goals of our data science initiatives match the goals of our organization?
  • What data do we have to support analytics?
  • Which business or metric definitions vary across departments in our organization? Why do these knowledge silos exist, and how can they be overcome?
  • Can our current infrastructure support data science needs?
  • Are we prepared to change as an organization based on data science initiatives?
  • How can we effectively communicate data science results?

Step 2: Evaluate your company’s data science readiness.

Accurate and readily available data is essential for any data science project. The quality of the data you use for analysis directly impacts your outcomes and, just as important, whether anyone trusts them: if nobody trusts your results, they will not use those insights to inform their decision-making, and your entire data science strategy will flop. Set your data science team up for success by providing clean, centralized data so they can hit the ground running.

While your data does not need to be perfect, you should at least ensure that your data is centralized and does not contain duplicated records or large amounts of missing information. Centralizing key information in a data warehouse eliminates time wasted on searching for the data or finding ways to work around data silos. Creating a system that cleans, organizes, and standardizes your data guarantees reliable information for everyone. It will not only help your new data scientist produce results faster, but it will also increase trust in their results around your organization and save hours of menial data cleansing done by your IT team. While the steps to achieve data science readiness are different for every company, they should all consider the same objectives.
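For example, a quick audit of one source extract can flag duplicate records and missing values before your new hire ever sees the data. Below is a minimal sketch using pandas; the file name and the customer_id column are placeholders, not a reference to any particular system:

```python
import pandas as pd

# A minimal data-quality spot check; "customers.csv" and its column names
# are hypothetical stand-ins for one of your source extracts.
df = pd.read_csv("customers.csv")

# How many rows are exact duplicates, or share what should be a unique key?
exact_dupes = df.duplicated().sum()
key_dupes = df.duplicated(subset=["customer_id"]).sum()

# What share of each column is missing?
missing_pct = df.isna().mean().sort_values(ascending=False) * 100

print(f"Exact duplicate rows: {exact_dupes}")
print(f"Rows sharing a customer_id: {key_dupes}")
print("Percent missing by column:")
print(missing_pct.head(10).round(1))
```

Even a rough report like this tells you whether your data is closer to "good enough" or to "months of cleanup."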

Step 3: Define clear and actionable business cases for data science.

A massive part of a successful data science strategy is understanding the insights data science can provide and how your business can act on that information. Start by brainstorming a variety of use cases, then determine which are the most actionable and relevant and which provide the best competitive edge. If any of your ideas could save money, that is another great place to start. During this process there are no wrong answers. Identifying use cases can seem intimidating at first, but there are some very easy ways to get started:

  • Ask employees which common business questions go unanswered.
  • Look into what industry leaders (and your competitors) are doing. Whether it’s personalizing marketing messaging to customers or using models to identify insurance fraud, data science has use cases in any industry.
  • Find out what executives wish they could predict about your organization.
  • Reach out to experts. Many organizations (consulting companies and vendors) have implemented data science solutions at a variety of clients.
  • Identify time-consuming and complicated manual processes. Data scientists can likely automate these and make them more reliable.

At 2nd Watch, we have experience with everything from implementing strategic data science solutions to developing data warehouses to support advanced analytics. We know what to look for in a data scientist and how to assess an organization’s data science readiness. Reach out to 2nd Watch if you have any doubts about preparing your data or deciding where to start with use cases. With a data science readiness assessment, we will help you get started and ensure you’re prepared for your new data scientist.


5 Important Principles for Dashboard Development

So, you’ve been tasked with building an analytics dashboard. It’s tempting to jump into development straight away, but hold on a minute! There are numerous pitfalls that are easy to fall into and can ruin your plans for an attractive, useful dashboard. Here are five important principles for dashboard development to keep in mind every time you open up Power BI, Tableau, Looker, or any other BI tool.

1. Keep it focused and defined.

Before you start answering questions, you need to know exactly what you’re trying to find out. The starting point of almost any dashboarding project should be a whiteboarding session with the end users; the dashboard then becomes a collection of visuals that answer their questions.

For every single visual you create, make sure you’re answering a specific question. Each graph needs to be intentional and purposeful, and it’s very important to have your KPIs clearly defined well before you start building. If you don’t include your stakeholders from the very beginning, you’ll almost certainly have a lot more reworking to do after initial production is complete.

Image: example Looker dashboard (courtesy of discourse.looker.com)

2. A good data foundation is key.

Generating meaningful visualizations is nearly impossible without a good data foundation. Unclean data means holes and problems will need to be patched and fixed further down the pipeline. Many BI tools have functions that can format/prepare your data and generate some level of relational modeling for building your visualizations. However, too much modeling and logic in the tool itself will lead to large performance issues, and most BI tools aren’t specifically built with data wrangling in mind. A well-modeled semantic layer in a separate tool that handles all the necessary business logic is often essential for performance and governance.

Don’t undervalue the semantic layer!

The semantic layer is the step in preparation where the business logic is performed, joins are defined, and data is formatted from its raw form so it’s understandable and logical for users going forward. For Power BI users, for example, you would likely generate tabular models within SSAS. With a strong semantic layer in place before you even get to the BI tool, there will be little to no data management to be done in the tool itself. This means there is less processing the BI tool needs to handle and a much cleaner governance system.
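For example, here is a toy sketch of the kind of logic that lives in a semantic layer: cryptic source columns renamed, a join defined once, and a business measure calculated before any BI tool touches the data. All table and column names are invented for illustration, and in practice this layer would live in SSAS, dbt, LookML, or a similar modeling tool rather than SQLite:

```python
import sqlite3

# Toy semantic layer: join raw "orders" and "customers" tables into one
# business-friendly view. All names here are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_customers (cust_id INTEGER, cust_nm TEXT, rgn_cd TEXT);
    CREATE TABLE raw_orders    (ord_id INTEGER, cust_id INTEGER,
                                ord_dt TEXT, amt_usd REAL);
    INSERT INTO raw_customers VALUES (1, 'Acme Co', 'NW'), (2, 'Globex', 'SE');
    INSERT INTO raw_orders    VALUES (10, 1, '2023-01-05', 1200.0),
                                     (11, 2, '2023-01-07',  450.0);

    -- The "semantic layer": joins defined once, cryptic columns renamed,
    -- business logic applied before any BI tool sees the data.
    CREATE VIEW sales_by_customer AS
    SELECT c.cust_nm                AS customer_name,
           c.rgn_cd                 AS region,
           COUNT(o.ord_id)          AS order_count,
           ROUND(SUM(o.amt_usd), 2) AS total_revenue_usd
    FROM raw_orders o
    JOIN raw_customers c ON c.cust_id = o.cust_id
    GROUP BY c.cust_nm, c.rgn_cd;
""")

for row in conn.execute("SELECT * FROM sales_by_customer"):
    print(row)
```

Because the view is defined once, every report that selects from it inherits the same joins and definitions, which is exactly the governance benefit described above.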

In many BI tools, you can load in a raw dataset and have a functional dashboard in 10 minutes. However, building a semantic layer forces you to slow down and put some time in upfront for definition, development, and reflection about what the data is and what insights you’re trying to get for your business. This ensures you’re actually answering the right questions.

This is one of the many strengths of Looker, which is built specifically to handle the semantic layer as well as create visualizations. It forces you to define the logic in the tool itself before you start creating visuals.

It’s often tempting to skip the data prep steps in favor of putting out a finished product quickly, but remember: Your dashboard is only as good as the data underneath it.

3. PLEASE de-clutter.

A cluttered dashboard has numerous, obvious problems, but there is one lesson to learn that many developers forget: embrace white space! White space wants to be your friend. As in web development, trying to pack too many visuals into the same dashboard is a recipe for disaster. Edward Tufte calls this the “data-ink ratio” in his book The Visual Display of Quantitative Information, one of the first and most impactful resources on data visualization.

Basically, remove anything that isn’t essential, or move information that is important but not immediately relevant to a different page of the dashboard or report.

4. Think before using that overly complicated visual.

About to use a tree-map to demonstrate relationships among three variables at once? What about a 3-D, three-axis representation of sales? Most of the time: don’t. Visualizing data isn’t about making something flashy  –  it’s about creating something simple that someone can gain insight from at a glance. For almost any complex visualization, there is a simpler solution available, like splitting up the graph into multiple, more focused graphs.

5. Keep your interface clean, understandable, and consistent.

In addition to keeping your data clean and your logic well-defined, it’s important to make sure everything is understandable from start to finish and is easy to interpret by the end users. This starts with simply defining dimensions and measures logically and uniformly, as well as hiding excess and unused columns in the end product. A selection panel with 10 well-named column options is much easier than one with 30, especially if end-users will be doing alterations and exploration themselves.

You may notice a theme with most of these principles for dashboard development: Slow down and plan. It’s tempting to jump right into creating visuals, but never underestimate the value of planning and defining your steps first. Doing that will help ensure your dashboard is clean, consistent, and most important, valuable.

If you need help planning, implementing, or finding insights in your dashboards, the 2nd Watch team can help. Our certified consultants have the knowledge, training, and experience to help you drive the most value from your dashboard tool. Contact us today to learn about our data visualization starter pack.


Modern Data Management: On-Premise Data Warehouse vs Modern Data Warehouse

Regardless of your industry or function, the ability to access, analyze, and make use of your data is essential. For many organizations, however, data is scattered throughout the organization in various applications (data silos), often in a format that’s unique to that system. The result is inconsistent access to data and unreliable insights. Some organizations may have a data management solution in place, such as a legacy or on-premise data warehouse, that is not able to keep up with the volume of data and processing speeds required for modern analytics tools or data science initiatives. For organizations striving to become data-driven, these limitations are a major roadblock.

On-Premise vs. The Modern Data Warehouse

The solution for many leading companies is a modern data warehouse.

Over the course of several blogs, we tap into our extensive data warehouse experience across industry, function, and company sizes to guide you through this powerful data management solution.

Download Now: Modern Data Warehouse Comparison Guide [Amazon Redshift, Google BigQuery, Azure Synapse, and Snowflake]

In this series of blogs, we:

  1. Define the modern data warehouse.
  2. Outline the different types of modern data warehouses.
  3. Illustrate how the modern data warehouse fits in the big picture.
  4. Share options on how to get started.

A modern data warehouse, implemented correctly, will allow your organization to unlock data-driven benefits, from improving operations through data insights to optimizing sales pipelines with machine learning. It will not only improve the way you access your data but will also be instrumental in fueling innovation and driving business decisions in all facets of your organization.

Part 1: What Is a Data Warehouse?

At its most basic level, a data warehouse stores data from various applications and combines it for analytical insights. The integrated data is then evaluated for quality issues, cleansed, organized, and modeled to represent the way the business uses the information, not the source system definition. With each business subject area integrated into the system, this data can be used for upstream applications, reporting, advanced analytics, and, most importantly, for providing the insights necessary to make better, faster decisions.

Mini Case Study:

A great example of this is JD Edwards data integration. 2nd Watch worked with a client that had multiple source systems, including several JDE instances (both Xe and 9.1), Salesforce, TM1, custom data flows, and a variety of flat files they wanted to visualize in a dashboard report. The challenge was the source system definitions from JDE: with table names like “F1111”, Julian-style dates, and complex column mapping, it was nearly impossible to create the desired reports and visualizations.

2nd Watch solved this by creating a custom data architecture to reorganize the transactional data; centralize it; and structure the data for high-performance reporting, visualizations, and advanced analytics.
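To make the “Julian-style dates” challenge concrete: JD Edwards stores dates as CYYDDD integers (a century flag, a two-digit year, and a day of the year), so even reading a date requires a small conversion like the sketch below (the function name is our own, not a JDE or 2nd Watch artifact):

```python
from datetime import date, timedelta

def from_jde_julian(jde_value: int) -> date:
    """Convert a JDE 'Julian' CYYDDD integer (e.g., 123045) to a date.

    C   = century offset from 1900 (0 -> 19xx, 1 -> 20xx)
    YY  = two-digit year within that century
    DDD = day of the year
    """
    century = jde_value // 100000            # 0 or 1
    year = 1900 + century * 100 + (jde_value // 1000) % 100
    day_of_year = jde_value % 1000
    return date(year, 1, 1) + timedelta(days=day_of_year - 1)

# 123045 -> 2023-02-14, 99365 -> 1999-12-31
print(from_jde_julian(123045), from_jde_julian(99365))
```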

Image 1: The image above illustrates a retailer with multiple locations, each with a different point-of-sale system. When they try to run a report on the number of units sold by state directly from data housed in these systems, the result is inaccurate due to data formatting inconsistencies. While this is a very simple example, imagine this on an enterprise scale.

Image 2: The image above shows the same data being run through an ETL process into a data warehouse. The result is a clear and accurate chart that meets the business users’ needs.

Data warehouses then . . . and now

There was a time when a data warehouse architecture consisted of a few source systems, a bunch of ELT/ETL (extract, transform, load) processes, and several databases, all running on one or two machines in an organization’s own data center. Companies would spend years building out this architecture with custom data processes that were used to copy and transform data from one database to another.

Times have changed, and traditional on-premise data warehousing has hit its limits for most organizations. Enterprises built data warehouse solutions in an era of limited data sources, infrequent changes, fewer transactions, and little competition. Now, the same systems that have been the backbone of an organization’s analytical environment are being rendered obsolete and ineffective.

Today’s organizations have to analyze data from many data sources to remain competitive, and they must handle an increased volume of data coming from those sources. Beyond this, in today’s fast-changing landscape, access to near real-time or instantaneous insights from data is a necessity. Simply put, the legacy warehouse was not designed for the volume, velocity, and variety of data and analytics demanded by modern organizations.

If you depend on your data to better serve your customers, streamline your operations, and lead (or disrupt) your industry, a modern data warehouse built on the cloud is a must-have for your organization. In our next blog, we’ll dive deeper into the modern data warehouse and explore some of the options for deployment.

Contact us to learn what a modern data warehouse would look like for your organization.

Read Part 2: Modern Data Management: Comparing Modern Data Warehouse Options


A High-Level Overview of Snowflake

Using a modern data warehouse, like Snowflake, can give your organization improved access to your data and dramatically improved analytics. When paired with a BI tool, like Tableau, or a data science platform, like Dataiku, you can gain even faster access to impactful insights that help your organization fuel innovation and drive business decisions.

In this post, we’ll provide a high-level overview of Snowflake, including a description of the tool, why you should use it, pros and cons, and complementary tools and technologies.

Overview of Snowflake

Snowflake was built from the ground up for the cloud, initially starting on AWS and scaling to Azure and GCP. With no servers to manage and near-unlimited scale in compute, Snowflake separates compute from storage and charges based on the size and length of time that compute clusters (known as “virtual warehouses”) are running queries.

Value Prop:

  • Cross-cloud support lets organizations choose which cloud provider to use
  • Dynamic compute scaling saves on cost
  • Micro-partitioned storage with automatic maintenance

Scalability:

  • Rapid auto-scaling of compute nodes allows for increased cost savings and high concurrency on demand, and compute and storage are separated

Performance:

  • Built for MPP (massively parallel processing)
  • Optimized for read via a columnar backend
  • Dedicated compute means no concurrency issues

Features:

  • Ability to assign dedicated compute
  • High visibility into spend
  • Native support for JSON, XML, Avro, Parquet, and ORC semi-structured data formats
  • SnowSQL has slight syntax differences
  • Introduction of Snowpark for Snowflake native development

Security:

  • Full visibility into queries executed, by whom, and how long they ran
  • Precision point-in-time restore available via “time-travel” feature
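As a hedged illustration of that pay-for-what-runs model, the sketch below uses the snowflake-connector-python package to create a small virtual warehouse that suspends itself after 60 idle seconds and resumes on demand; the account and credential values are placeholders:

```python
import snowflake.connector

# Placeholder credentials; in practice these come from your secrets manager.
conn = snowflake.connector.connect(
    account="your_account_identifier",
    user="your_user",
    password="your_password",
)

cur = conn.cursor()

# A dedicated XS warehouse that suspends itself after 60 idle seconds,
# so you only pay while queries are actually running.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS analytics_wh
    WITH WAREHOUSE_SIZE = 'XSMALL'
         AUTO_SUSPEND = 60
         AUTO_RESUME = TRUE
         INITIALLY_SUSPENDED = TRUE
""")

# Point the session at it and run a query; the warehouse resumes on demand.
cur.execute("USE WAREHOUSE analytics_wh")
cur.execute("SELECT CURRENT_VERSION()")
print(cur.fetchone())

cur.close()
conn.close()
```

Because each team or workload can run on its own warehouse, per-warehouse credit usage also makes the spend-visibility point above easy to act on.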

Why Use Snowflake

Decoupled from cloud vendors, it allows a true multi-cloud experience. You can deploy on Azure, AWS, GCP, or any combination of those cloud services. With near-unlimited scale and minimal management, it offers a best-in-class data platform but with a pay-for-what-you-use consumption model.

Pros of Snowflake

  • Allows for a multi-cloud experience built on top of existing AWS, Azure, or GCP resources, depending on your preferred platform
  • Highly-performant queries utilizing uniquely provisioned pay-as-you-go compute and automatically derived partitioning
  • Easy implementation of security and role definitions for less frustrating user experience and easier delineation of cost while keeping data secure
  • Integrated ability to share data to partners or other consumers outside of an organization and supplement data with publicly available datasets within Snowflake

Cons of Snowflake

  • Ecosystem of tooling continues to grow as adoption expands, but some features are not readily available
  • Due to the paradigm shift in a cloud-born architecture, taking full advantage of Snowflake’s advanced features requires a good understanding of cloud data architecture

Select Complementary Tools and Technologies for Snowflake

  • Apache Kafka
  • AWS Lambda
  • Azure Data Factory
  • Dataiku
  • Power BI
  • Tableau

We hope you found this high-level overview of Snowflake helpful. If you’re interested in learning more about Snowflake or other modern data warehouse tools like Amazon Redshift, Azure Synapse, and Google BigQuery, contact us to learn more.

The content of this blog is an excerpt of our Modern Data Warehouse Comparison Guide. Click here to download a copy of that guide.


Evolving Operations to Maximize AWS Cloud Native Services

As a Practice Director of Managed Cloud Services, my team and I see well-intentioned organizations fall victim to this very common scenario… Despite the business migrating from its data center to Amazon Web Services (AWS), its system operations team doesn’t make adjustments for the new environment. The team attempts to continue performing the same activities they did when their physical hardware resided in a data center or at another hosting provider.

The truth is that modernizing your monolithic applications and infrastructure requires new skill sets, knowledge, expertise, and understanding to get the desired results. Unless you’re a sophisticated, well-funded start-up, you likely don’t know where to begin after the migration is complete. The transition from deploying legacy software in your own data center to utilizing Elastic Kubernetes Service (EKS) and microservices, while deploying code through an automated Continuous Integration and Continuous Delivery (CI/CD) pipeline, is a whole new ballgame. Not to mention how to keep it all functioning after it is deployed.

In this article, I’m providing some insight on how to overcome the stagnation that hits post-migration. With forethought, AWS understanding, and a reality check on your internal capabilities, organizations can thrive with cloud-native services. At the same time, kicking issues downstream, maintaining inefficiencies, and failing to address new system requirements will compromise the ROI and assumed payoffs of modernization.

Is Your Team Prepared?

Sure, going serverless with Lambda might be all the buzz right now, but it’s not something you can effectively accomplish overnight. Running workloads on cloud-native services and platforms requires a different way of operating, and these new operational demands require that your internal teams be equipped with new skill sets. Unfortunately, a team that mastered the old data center or dedicated hosting provider environment may not be able to jump right in on AWS.

The appeal of AWS is the great flexibility to drive your business and solve unique challenges. However, the ability to provision and decommission on demand also introduces new complexities. If these new challenges are not addressed early on, friction between teams can damage collaboration and adoption, the potential for system sprawl increases, and cost overruns can compromise the legitimacy and longevity of modernization.

Due to the high cost and small talent pool of technically proficient cloud professionals, many organizations struggle to nab the attention of these highly desired employees. Luckily, modern managed cloud service providers can help you wade through the multitude of services AWS introduces. With a trusted and experienced partner by your side, your business can gain the knowledge necessary to drive efficiencies and solve unique challenges. Depending on the level of interaction, existing team members may be able to level up to better manage AWS growth going forward. In the meantime, involving a third-party cloud expert is a quick and efficient way to make sure post-migration change management evolves with your goals, design, timeline, and promised outcomes.

Are You Implementing DevOps?

Modern cloud operations and optimizations address the day two necessities that go into the long-term management of AWS. DevOps principles and automation need to be heavily incorporated into how the AWS environment operates. With hundreds of thousands of distinct prices and technical combinations, even the most experienced IT organizations can get overwhelmed.

Consider traditional operations management versus cloud-based DevOps. One is a physical hardware deployment that requires logging into the system to perform configurations and then deploying software on top. It’s slow, tedious, and causes a lag for developers as they wait for feature delivery, which negatively impacts productivity. Instead of system administrators performing monthly security patching and logging into each instance separately, a modern cloud operation can efficiently utilize a pipeline with infrastructure as code. Now, you can update your configuration files to use a new image and then use infrastructure automation to redeploy. This treats each server as an ephemeral instance, minimizing any friction or delay for the developer teams.
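As a rough sketch of that “new image, then redeploy” pattern with boto3 (the launch template name, Auto Scaling group name, and AMI ID are placeholders): publish a new launch template version that points at the freshly patched image, then let an instance refresh roll it out instead of patching servers in place.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

NEW_AMI_ID = "ami-0123456789abcdef0"  # placeholder: your freshly patched image

# Publish a new launch template version that points at the patched AMI.
response = ec2.create_launch_template_version(
    LaunchTemplateName="web-app",               # placeholder template name
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": NEW_AMI_ID},
    VersionDescription="Monthly patching via replacement AMI",
)
new_version = str(response["LaunchTemplateVersion"]["VersionNumber"])

# Make the new version the default so every future launch uses it.
ec2.modify_launch_template(
    LaunchTemplateName="web-app",
    DefaultVersion=new_version,
)

# Roll the fleet gradually instead of logging in and patching servers in place.
autoscaling.start_instance_refresh(
    AutoScalingGroupName="web-app-asg",         # placeholder ASG name
    Strategy="Rolling",
    Preferences={"MinHealthyPercentage": 90},
)
```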

This is just one example of how DevOps can and should be used to achieve strong availability, agility, and profitability. Measuring DevOps with the CALMS model provides a guideline for addressing the five fundamental elements of DevOps: Culture, Automation, Lean, Measurement, and Sharing. Learn more about DevOps in our eBook, 7 Major Roadblocks in DevOps Adoption and How to Address Them.

Do You Continue With The Same Behavior?

Monitoring CPU, memory, and disk at the traditional thresholds used on legacy hardware is not necessarily appropriate when utilizing AWS EC2. To achieve the financial and performance benefits of the cloud, you purposely design systems and applications to use, and pay for, only the resources required. Newer cloud-native technologies, such as Kubernetes and serverless, require that you monitor in different ways to reduce the abundance of unactionable alerts that eventually become noise.

For example, when running a Kubernetes cluster, you should implement monitoring that alerts on desired pods. If there’s a big difference between the number of desired pods and currently running pods, this might point to resource problems where your nodes lack the capacity to launch new pods. With a modern managed cloud service provider, cloud operations engineers receive the alert and begin investigating the cause to ensure uptime and continuity for application users. With fewer unnecessary alerts and an escalation protocol for the appropriate parties, triage of the issue can be done more quickly. In many cases remediation efforts can be automated, allowing for more efficient resource allocation.
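A bare-bones version of that desired-versus-running check, using the official Kubernetes Python client to compare each Deployment’s desired replicas with its ready replicas (in production this would more likely be a Prometheus alert rule):

```python
from kubernetes import client, config

# Load credentials from your local kubeconfig (use load_incluster_config()
# when running inside the cluster).
config.load_kube_config()
apps = client.AppsV1Api()

# Flag any Deployment whose ready pod count lags its desired replica count,
# which often points to nodes lacking capacity to schedule new pods.
for dep in apps.list_deployment_for_all_namespaces().items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    if ready < desired:
        print(
            f"ALERT {dep.metadata.namespace}/{dep.metadata.name}: "
            f"{ready}/{desired} pods ready"
        )
```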

How Are You Cutting Costs?

Many organizations initiate cloud migration and modernization to gain cost-efficiency. Of course, these financial benefits are only accessible when modern cloud operations are fully in place.

Anyone can create an AWS account, but not everyone has visibility into, or concern for, budgetary costs, so spending can quickly exceed expectations. This is where establishing a strong governance model and expanding automation can help you achieve your cost-cutting goals. You can limit instance sizes using IAM policies to ensure larger, more expensive instances are not unnecessarily utilized. Another cost that can quickly grow without the proper controls is S3 storage. Enabling policies that expire and automatically delete objects can help curb an explosion in storage costs. Enacting policies like these to control costs requires that your organization take the time to think through its governance approach and implement it.
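Here is a hedged sketch of both controls: an S3 lifecycle rule that expires objects under a staging prefix after 30 days, and an example IAM policy document that denies EC2 launches for anything outside an approved list of instance types. The bucket, prefix, and instance types are all placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

# Expire objects under a "staging/" prefix after 30 days so forgotten data
# doesn't quietly accumulate storage costs. Bucket and prefix are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-staging-after-30-days",
                "Filter": {"Prefix": "staging/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)

# Example IAM policy statement: deny EC2 launches unless the instance type
# is on an approved (smaller, cheaper) list. Attach via your IAM tooling.
instance_size_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:InstanceType": ["t3.micro", "t3.small", "m5.large"]
                }
            },
        }
    ],
}
print(json.dumps(instance_size_guardrail, indent=2))
```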

Evolving in the cloud can reduce computing costs by 40-60% while increasing efficiency and performance. However, those results are not guaranteed. Download our eBook, A Holistic Approach to Cloud Cost Optimization, to ensure a cost-effective cloud experience.

How Will You Start Evolving Now?

Time is of the essence when it comes to post-migration outcomes – and the board and business leaders around you will be expecting results. As your organization looks to leverage AWS cloud-native services, your development practices will become more agile and require a more modern approach to managing the environment. To keep up with these business drivers, you need a team to serve as your foundation for evolution.

2nd Watch works alongside organizations to help start or accelerate your cloud journey to become fully cloud native on AWS. With more than 10 years of migrating, operating, and effectively managing workloads on AWS, 2nd Watch can help your operations staff evolve to operate in a modern way and achieve meaningful results. Are you ready for the next step in your cloud journey? Contact us and let’s get started.


Healthcare Dashboard Examples to Improve Decision-Making through Design

Healthcare executives often must quickly make informed decisions that affect the trajectory of their business. How can they best access and analyze the key performance indicators (KPIs) needed to make those decisions? By referring to well-designed healthcare dashboards.

Using tools such as Looker, Power BI, and Tableau, healthcare organizations can integrate customizable, interactive dashboards into their reporting to improve decision-making across a range of areas – from healthcare facility operations and surgeon performance to pharmaceutical sales and more. We’ve compiled several healthcare dashboard examples to illustrate a variety of use cases and a sampling of dashboard design best practices to help your organization make the most of your data.

A Healthcare Dashboard Using Data Science

A healthcare dashboard like this one demonstrates how a company could use data science to determine sales projections – in this case, pharmaceutical sales – and guide their field reps’ sales activities. To make the information easy to digest, this dashboard uses the rule of thirds (a design principle that draws a viewer’s eye to points of interest – you’ll notice variations on the rule of thirds throughout these healthcare dashboard examples):

  1. New prescription (NRx) and total prescription (TRx) values at the top make goals and expectations easily accessible.
  2. A visual in the middle illustrates the TRx projection over time, also broken down into specific drugs for competitive analysis.
  3. Customer segmentation comparisons at the bottom provide broader context and more actionable information about doctors prescribing the company’s drugs. This section indicates the company’s most valuable customers and the areas where their competitors are seeing higher growth, allowing them to target accordingly.

Combining projections and customer segmentation information from a dashboard like this one, a company can adjust their sales strategy as necessary or determine they don’t need to change their sales plan to meet their goals.

A Customized Healthcare Dashboard in Tableau

This dashboard provides quick and easy insight into the financials and patient visits specific to this organization’s California healthcare facilities. It shows the most valuable metrics at the top of the dashboard and enables executives to drill into those that warrant further investigation below.

Using a custom map feature in Tableau, company executives can quickly see which zip codes generate the most revenue and, even more specifically, which individual facilities contribute the most patient encounters. Users are able to uncover areas for improvement across the business, such as facilities with lower-than-expected visit levels or a revenue-to-encounter ratio that doesn’t meet profitability expectations. Conversely, this dashboard also highlights successful facilities that less-successful facilities can model themselves after.

A Regional Healthcare Finance Dashboard

This dashboard showing clinical practice financial data and patient visit information would be referenced by two audiences: the clinical practice’s financial team and the practice owner. In this case, the owner is a private equity firm. Including overall financial performance and that of individual practices in one healthcare finance dashboard allows the private equity firm to understand practice financial data in context.

The heat map provides an easy-to-digest visual to communicate location-specific patient encounter information. Each clinical practice can reference the map to see how the number of patient encounters correlates to their individual financial information, and the private equity firm can understand how location impacts operating costs and revenue across multiple nearby facilities. They’re able to see which clinics and which zip codes bring in the greatest net revenue, allowing them to strategically approach potential new healthcare practice acquisitions.

An Interactive Surgeon Performance Dashboard

Hospital administrators and department heads would use a dashboard like this to evaluate surgeons’ performance, whether that’s the average time spent in the operating room for a specific surgery, patients’ average time in the hospital recovering after surgery, or how patients fare in emergency surgeries vs. planned surgeries. Alternatively, surgeons could evaluate themselves and learn about areas for improvement.

Using multiple spreadsheets to dig into the factors relevant to a surgical department evaluation can be time-consuming and confusing. Instead, this healthcare dashboard example provides a high-level view of a neurology department and also details regarding each surgeon. The dashboard is interactive, so users can filter in many ways to evaluate one surgeon or multiple surgeons against each other.

A user can easily identify potential issues, such as a surgeon spending a significant amount of time in the operating room for very few surgeries. Department heads can then drill down further into the average age of patients, the surgeries performed, and how many surgeries were planned vs. emergency to see if those factors explain the amount of time spent, all without having to reference multiple spreadsheets or dashboards.

A High-Level Sales KPI Dashboard

Similar to the dashboard using data science shared at the beginning of this post, this highly visual dashboard gives the user a quick overview of KPIs. The user would likely be high-level sales employees at a pharmaceutical company who don’t want to be bogged down with details upfront. However, they can click through this “nine-box stoplight” to get more information as needed.

A user could click on the box showing the change in NBRx (prescriptions for patients who are new to the brand) to see related trends or details about specific sales reps that contributed to this “green light,” a positive indicator. They could then investigate the negative change in TRx by clicking on that box. Pulling in only the details the user needs, they are better able to explain these numbers without the distraction of unrelated details. Perhaps the brand saw great growth in the number of distinct HCPs (healthcare providers) prescribing NBRx but not prescribing high enough volume of RRx (returning patients) to positively affect TRx. Or maybe a high-performing sales rep was on maternity leave and therefore not selling this month, causing a temporary drop in TRx.

A Dialysis Clinic Dashboard in Looker

Dialysis facility owners, administrators, and directors can use a dashboard like this one, created using Looker, to track dialysis machine utilization and allocate machine staffing needs based on location. This dashboard tells a full story with data, moving from a broad picture of total patient encounters, through dialysis-specific encounters, and drilling into details at the facility level.

Like the previous healthcare dashboard examples, this dashboard is driven by the rule of thirds. It first provides KPIs, then highly visual representations of location-specific information (both individual facilities and another zip code heat map), and finally a broader data point broken down based on time. Within one dashboard, a user can understand their data from multiple viewpoints to draw varied, in-depth insights.

These healthcare dashboard examples demonstrate how a well-designed dashboard can uniquely meet the needs of a range of healthcare-related organizations. Healthcare executives can make the most of their data and empower their business users with customizable, interactive dashboards. To learn how 2nd Watch could help your healthcare organization develop dashboards that best suit your needs, set up a complimentary healthcare analytics whiteboard session.


3 Data Visualization Best Practices That Uncover Hidden Patterns

At first glance, raw data doesn’t offer anyone but data experts many actionable conclusions. The high volume and complexity of consolidated data sources in most organizations create a visual overload, hiding essential truths about your business and your customers in plain sight. Yet with the right data visualizations, hidden patterns and trends can spring forth from the screen and inform decisions. The question is: how do you ensure the data visualizations within your dashboard will tell a story that guides your next steps? If your organization adheres to the following data visualization best practices, you’ll be equipped to uncover hidden patterns that enhance your awareness and refine your decision-making.


Want better dashboards? Our data and analytics experts are here to help. Learn more about our Data Visualization Starter Pack.

1. Follow design best practices.

The value of your data lies in the clarity of your visuals. Good design fosters the right conclusions, allowing the cream to rise to the top of its own accord. And bad design? It baffles your users and confuses your findings – even if the patterns and trends from your reports would otherwise be straightforward.

The importance of good visualization design only really becomes apparent when you take time to review the worst of the worst. This Tumblr provides plenty of mind-boggling examples of data visualizations gone wrong and helps communicate the importance of following design best practices:

Appreciate White Space

Image source: Tumblr

The readability of your data visualization is a key consideration. Not every inch of a visualization needs to be crammed with information (we aren’t designing for Where’s Waldo). The image on the left illustrates just how important it is to give your graphical visualization some breathing room.

One way is to break up the information. Your dashboard should have no more than 15 objects or tiles at a time. Any more and the content can make it difficult to focus on any single takeaway. If the visuals are intertwined, following the rule of thirds (dividing your dashboard vertically or horizontally into thirds) can prevent clutter and draw the eye to key observations.

Adopt a Color Palette

Image source: Tumblr

The colors within your dashboard visuals matter. By choosing colors that reflect your brand or fall within a clear color palette, you allow the visuals to stand out on their own.

When you pick colors that are not complementary, you fail to draw the eye to any insight, and key findings are easily overlooked at first glance. The image on the right is a perfect example of how your takeaways can all too easily fade into the background.

Respect the Message

Image source: Tumblr

Visualizations should never prioritize style over substance. You want pleasant graphics that do not distract from the actual message. Clarity is always key.

In the image on the left, the vertical axis representing height makes it seem as if women from Latvia tower over those from India. Most users can still extract value but it calls the dashboard visualization’s credibility into question.

Provide Clear Answers

Image source: Tumblr

Complexity is another area to avoid. Your business users shouldn’t need to perform mental gymnastics to figure out what you’re trying to communicate. Any extra steps they need to take can muddle the actual findings of a report.

In the image on the right, a variety of factors go into calculating the number of avocado toasts it would take to afford a deposit on a house (a bad idea already), and none of them are clear at first glance.

Each piece of avocado toast on the graphic represents 100 toasts, making it next to impossible to gauge the amount represented by the incomplete pieces for Mexico City or Johannesburg. Plus, people would need to calculate the cost based on the average price in each city, adding an additional step to verify the data. Regardless, nothing is clear and the “insight” doesn’t merit the additional work.

At the end of the day, you don’t have to have an exceptional eye for visual design, but the person implementing visualizations into your dashboard does. This can be an external resource (if you have one) or a partner experienced in following data visualization best practices.

2. Cater to your target users.

Great data visualizations are tailored to their target end users. The data sources, the KPIs, and the visualizations themselves all need to align with their goals and challenges. Before implementing your data visualization, ask these questions to determine the perfect parameters:

  • What is the background and experience of your end users? Your data visualizations need to rise to the level of your users. Executives need visualizations that offer strategic observations about revenue streams, operational efficiencies, and market trends. Employees on the front lines need answers to allow them to evaluate performance, KPIs, and other tactical needs.
  • What experience do they have with reporting dashboards or data visualizations? Frame of reference matters. If your users have primarily used Excel, you need to strike a balance between enhancing their reporting and creating a sense of familiarity. That might mean visualizing KPIs that they’ve previously created in Excel or building an interface that mirrors the experience of any previous tools.
  • What are their most common usages? There’s finite space for data visualizations within your dashboards. Every graphic or chart you provide should apply to frequent use cases that provide the greatest value for your team. Otherwise, there’s a risk that your investment will fail to earn the fullest ROI.
  • Which pain points do they struggle with most? Visualizations are meant to solve problems. As you are determining what to illuminate with your dashboard visualizations, you need to reflect on the greatest needs. Are there challenges pinpointing customer motivation and behaviors? Have revenues stagnated? Are old processes inefficient? The squeakiest wheels should be oiled first.

Answering all of these questions creates a foundation that your partner can implement as they create meaningful dashboards. Data visualizations that cater to these audiences are better at satisfying their needs.

For example, if you are creating a dashboard for a restaurant, your visualizations should cater to the pressing needs and concerns of the owner or manager. Expenses by supply category, total sales vs monthly expenses, special orders by count on receipt line items, and other KPIs can supply owners with quick and essential insight that can enhance their business strategy.

Regardless of industry, the data visualizations within your dashboards should balance immediate needs with actionable insight.

3. Show overlooked relationships between KPIs.

Reviewing one benchmark alone gives very narrow insights. Good data visualization can help organizations to connect the dots between KPIs. In fact, there are plenty of instances where the connection between one KPI and another is not apparent until that data is visualized. That’s when it helps to have an experienced partner guiding your work.

Let’s use our home health staffing firm partner as an example. We helped them implement a Snowflake data warehouse, and one of the key questions they wanted to answer was how they should respond to COVID-19. They were eager to review cases across the United States, but that alone would only provide limited insights.

We suggested they visualize the data against a few other parameters. A timeline functionality could help to create an interactive experience that showed the growth of outbreaks over a period of time. In other scenarios, organizations could do a side-by-side comparison of KPIs like CO2 emissions, automotive traffic, or other conditions to measure the impact or trends related to the virus.

What’s equally important is that you not overlook outliers. There are organizations that will get in the habit of burying statistical anomalies, not realizing when those outliers become the norm. Working with the right partner can give a fresh perspective, preventing essential findings from falling outside of your awareness.

We can help you unlock the power of your visual analytics. Learn more here.


4 Unexpected Customer Insights Uncovered Through Analytics

Analytics can uncover valuable information about your customers, allowing you to connect on a deeper, more personal level.

A customer insight is a piece of information or metric that helps a business better understand how their customers think or behave. Unlike just a few years ago, businesses don’t need to rely on a stodgy market research firm to gain these insights. Today’s most successful companies are digging deep into their own datasets — past superficial metrics like gender, age, and location — to uncover valuable knowledge that was unattainable until very recently.

Here are a few key examples.

Eloquii Discovers New E-Commerce Revenue Streams

Because e-commerce companies exist in such a data-rich world, it makes sense that they’d be ahead of the curve in terms of using analytics to gain new insights. That’s exactly how Eloquii, a fast-fashion house catering to plus-size women, has solved several of its marketing problems.

After noticing that customers were returning white dresses at a higher proportion than other products, Eloquii’s marketing department dug into its data and discovered that many of those customers had actually bought multiple dresses, with the intention of using one of them as a wedding dress. That unexpected insight enabled Eloquii to have a more effective conversation with its customers around those products and better serve their needs.

According to Eloquii VP of Marketing, Kelly Goldston, the company also relies on analytics to anticipate customer behavior and tailor their marketing efforts to proactively engage each of the brand’s customer profiles, such as customers who indicate a potential high lifetime value and those who have started to shop less frequently at the site.

DirecTV Uses Customer Insight to Create Double-Digit Conversion Boost

Satellite media provider DirecTV used data to uncover an underserved portion of its customer base: those who had recently moved. The company discovered that, statistically, people who have recently moved are more likely to try new products and services, especially within the first seven days after the move.

Armed with this information and change-of-address data from the U.S. Postal Service, DirecTV created a special version of its homepage that appeared only for people who had recently moved. Not only did the targeted campaign result in a double-digit improvement in homepage conversion, it did so with a reduced offer compared to the one on the standard website.

Whirlpool Uses Customer Insight to Drive Positive Social Change

While analyzing customer data, Whirlpool discovered that 1 in 5 children in the U.S. lack access to clean clothes and that not having clean laundry directly contributes to school absenteeism and increases the risk of dropping out. This further predisposes these children to a variety of negative outcomes as adults, including a 70% increased risk of unemployment.

To help stop this vicious cycle, Whirlpool created the Care Counts Laundry Program, which installs washers and dryers in schools with high numbers of low-income students. The machines are outfitted with data collection devices, enabling the Whirlpool team to record laundry usage data for each student and correlate their usage with their attendance and performance records.

The program has yielded dramatic results, including a 90% increase in student attendance, an 89% improvement in class participation, and a 95% increase in extracurricular activity participation among target students. As a result of its success, the program has attracted interest from over 1000 schools. It’s also drawn support from other organizations like Teach for America, which partnered with Whirlpool on the initiative for the 2017/2018 school year.

Prudential Better Serves its Customers with Data-Driven Insight

Financial services firms are leading adopters of data analytics technology, and Prudential has established itself as one of the forward-thinkers in the field. In August of this year, the company announced the launch of a completely new marketing model built on the customer insights gleaned from analytics and machine learning.

A central part of that initiative is the Prudential LINK platform, a direct-to-consumer investing service that allows customers to create a detailed profile, set and track personal financial goals, and get on-demand human assistance through a video chat. The LINK platform not only provides a more convenient customer experience, it also gives the Prudential team access to customer data they can use to make optimizations to other areas, such as the new PruFast Track system, which uses data to streamline the normally tedious insurance underwriting process.

Quality Customer Insights Have Become Vital to Business Success

As customers grow used to data-driven marketing, businesses will be forced to approach prospects with customized messages, or run the risk of losing competitive advantage. Research from Salesforce shows that 52% of customers are either extremely likely or likely to switch brands if a company doesn’t personalize communication with them.

2nd Watch helps organizations uncover high-value insights from their data. If you’re looking to get more insights from your data or just want to ask one of our analytics experts a question, send us a message. We’re happy to help.


A High-Level Overview of Amazon Redshift

Modern data warehouses, like Amazon Redshift, can improve the way you access your organization’s data and dramatically improve your analytics. Paired with a BI tool, like Tableau, or a data science platform, like Dataiku, your organization can increase speed-to-insight, fuel innovation, and drive business decisions throughout your organization.

In this post, we’ll provide a high-level overview of Amazon Redshift, including a description of the tool, why you should use it, pros and cons, and complementary tools and technologies.

Overview of Amazon Redshift

Amazon’s flagship data warehouse service, originally built on technology from ParAccel, is a columnar database forked from PostgreSQL. Similar to AWS RDS databases, Amazon Redshift is priced by instance size and by how long the cluster is up and running.

Value Prop:

  • Increased performance of queries and reports with automatic indexing and sort keys
  • Easy integration with other AWS products
  • Most established data warehouse

Scalability:

  • Flexibility to pay for compute independently of storage by specifying the number of instances needed
  • With Amazon Redshift Serverless, automatic and intelligent scaling of data warehouse capacity

Performance:

  • Instances maximize speed for performance-intensive workloads that require large amounts of compute capacity.
  • Distribution and sort keys are more intuitive than traditional RDBMS indexes, allowing for more user-friendly performance tuning of queries (see the sketch after these lists).

Features:

  • Easy to spin up and integrate with other AWS services for a seamless cloud experience
  • Native integration with the AWS analytics ecosystem makes it easier to handle end-to-end analytics workflows with minimal issues

Security:

  • Can be set up to use SSL to secure data in transit and hardware-accelerated AES-256 encryption for data at rest
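As referenced in the Performance notes above, here is a hedged sketch of those distribution and sort keys using the redshift_connector package; the connection details, table, and column names are placeholders:

```python
import redshift_connector

# Placeholder connection details; in practice use IAM auth or a secrets manager.
conn = redshift_connector.connect(
    host="example-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="your_password",
)

cur = conn.cursor()

# DISTKEY co-locates rows that join on customer_id across nodes; SORTKEY
# speeds up range-filtered queries on sale_date.
cur.execute("""
    CREATE TABLE IF NOT EXISTS sales (
        sale_id     BIGINT,
        customer_id BIGINT,
        sale_date   DATE,
        amount_usd  DECIMAL(12, 2)
    )
    DISTSTYLE KEY
    DISTKEY (customer_id)
    SORTKEY (sale_date)
""")
conn.commit()

cur.execute("SELECT COUNT(*) FROM sales")
print(cur.fetchone())

cur.close()
conn.close()
```

If you would rather not size or tune clusters at all, the serverless option mentioned above removes this step entirely.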

Why Use Amazon Redshift

It’s easy to spin up as an AWS customer, without needing to sign any additional contracts. This is ideal for more predictable pricing and starting out. Amazon Redshift Serverless automatically scales data warehouse capacity while only charging for what you use. This enables any user to run analytics without having to manage the data warehouse infrastructure.

Pros of Amazon Redshift

  • It easily spins up and integrates with other AWS services for a seamless cloud experience.
  • The distribution and sort keys are more intuitive than traditional RDBMS indexes, allowing for more user-friendly performance tuning of queries.
  • Materialized views support functionality and options not yet available in other cloud data warehouses, helping improve reporting performance.

Cons of Amazon Redshift

  • It lacks some of the modern features and data types available in other cloud-based data warehouses such as support for separation of compute and storage spending, and automatic partitioning and distribution of data.
  • It requires traditional database administration overhead tasks such as vacuuming and managing of distribution of sort keys to maintain performance and data storage.
  • As data needs grow, it can be difficult to manage costs and scale.

Select Complementary Tools and Technologies for Amazon Redshift

  • AWS Glue
  • AWS QuickSight
  • AWS SageMaker
  • Tableau
  • Dataiku

We hope you found this high-level overview of Amazon Redshift helpful. If you’re interested in learning more about Amazon Redshift or other modern data warehouse tools like Google BigQuery, Azure Synapse, and Snowflake, contact us to learn more.

The content of this blog is an excerpt of our Modern Data Warehouse Comparison Guide. Click here to download a copy of that guide.


How to Add Business Logic Unique to a Company and Host Analyzable JDE Data

In the first part of this series, A Step by Step Guide to Getting the Most from Your JD Edwards Data, we walked through the process of collecting JDE data and integrating it with other data sources. In this post, we will show you how to add business logic unique to a company and host analyzable JDE data.

Adding Business Logic Unique to a Company

When working with JD Edwards, you’ll likely spend the majority of your development time defining business logic and source-to-target mapping required to create an analyzable business layer. In other words, you’ll transform the confusing and cryptic JDE metadata into something usable. So, rather than working with columns like F03012.[AIAN8] or F0101.[ABALPH], the SQL code will transform the columns into business-friendly descriptions of the data. For example, here is a small subset of the customer pull from the unified JDE schema:

Image: a small subset of the customer subject area built from the unified JDE schema
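Since the screenshot itself isn’t reproduced here, below is a minimal sketch of that kind of source-to-target mapping. The column codes AIAN8 and ABALPH come from the post; the F0101 address-book key ABAN8, the AICO company column, the connection string, and the business-friendly names are assumptions for illustration:

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string for the database holding the replicated JDE
# tables; swap in the driver and credentials for your environment.
engine = create_engine("mssql+pyodbc://user:password@jde_dsn")

# F0101 = Address Book (AB* columns), F03012 = Customer Master (AI* columns).
# The business layer renames cryptic JDE codes into business-friendly names.
customer_sql = """
    SELECT
        ab.ABAN8  AS customer_number,  -- address book number (join key)
        ab.ABALPH AS customer_name,    -- address book "alpha name"
        cm.AICO   AS company           -- company code on the customer master
    FROM F0101 AS ab
    JOIN F03012 AS cm
        ON cm.AIAN8 = ab.ABAN8
"""

customers = pd.read_sql(customer_sql, engine)
print(customers.head())
```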
Furthermore, you can add information from other sources. For example, if a business wants to include new customer information stored only in Salesforce, that information can be built into the new [Customer] table, which exists as a subject area rather than a store of data from a specific source. Moreover, the new business layer can act as a “single source of the truth” or “operational data store” for each subject area of the organization’s structured data.

Looking for Pre-built Modules?

2nd Watch has built out data marts for several subject areas. All tables are easily joined on natural keys, provide easy-to-interpret column names, and are “load-ready” to any visualization tool (e.g., Tableau, Power BI, Looker) or data application (e.g., machine learning, data warehouse, reporting services). Modules already developed include the following:

Account Master, Accounts Receivable, Backlog, Balance Sheet, Booking History, Budget, Business Unit, Cost Center, Currency Rates, Customer, Date, Employee, General Ledger, Inventory, Organization, Product, Purchase Orders, Sales History, Tax, Territory, and Vendor.

Hosting Analyzable JDE Data

After creating the data hub, many companies prefer to warehouse their data in order to improve performance by time boxing tables, pre-aggregating important measures, and indexing based on frequently used queries. The data warehouse also provides dedicated resources to the reporting tool and splits the burden of the ETL and visualization workloads (both memory-intensive operations).

By design, because the business layer is load-ready, it’s relatively trivial to extract the dimensions and facts from the data hub and build a star-schema data warehouse. Using the case from above, the framework would simply capture the changed data from the previous run, generate any required keys, and update the corresponding dimension or fact table:

Image: a simple star schema populated from the data hub
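As a minimal sketch of that incremental step, assuming a staging table that holds only the rows changed since the previous run (all table and column names are illustrative, and the MERGE follows SQL Server-style syntax):

```python
# Illustrative incremental load for one dimension; the staging table is
# assumed to hold only rows changed since the previous run.
DIM_CUSTOMER_MERGE = """
    MERGE INTO dim_customer AS tgt
    USING stg_customer_changes AS src
        ON tgt.customer_number = src.customer_number
    WHEN MATCHED THEN UPDATE SET
        customer_name = src.customer_name,
        company       = src.company,
        updated_at    = CURRENT_TIMESTAMP
    WHEN NOT MATCHED THEN INSERT
        (customer_number, customer_name, company, updated_at)
        VALUES (src.customer_number, src.customer_name, src.company,
                CURRENT_TIMESTAMP);
"""

def load_dim_customer(conn) -> None:
    """Apply changed customer rows to the dimension using any DB-API connection."""
    cur = conn.cursor()
    cur.execute(DIM_CUSTOMER_MERGE)
    conn.commit()
    cur.close()
```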

Evolving Approaches to JDE Analytics

This approach to analyzing JD Edwards data allows businesses to vary the BI tools they use to answer their questions (not just tools specialized for JDE) and change their approach as technology advances. 2nd Watch has implemented the JDE Analytics Framework both on premise and in a public cloud (Azure and AWS), as well as connected with a variety of analysis tools, including Cognos, Power BI, Tableau, and ML Studio. We have even created API access to the different subject areas in the data hub for custom applications. In other words, this analytics platform enables your internal developers to build new business applications, reports, and visualizations with your company’s data without having to know RPG, the JDE backend, or even SQL!

Image: high-level JDE data flow

Looking for more data and analytics insights? Download our eBook, “Advanced Data Insights: An End-to-End Guide for Digital Analytics Transformation.”
