Professionals in the supply chain industry need uncanny reflexes. The moment they get a handle on raw materials, labor expenses, international legislation, and shipping conditions, the ground shifts beneath them and all the effort they put into pushing their boulder up the hill comes undone. With the global nature of today’s supply chain environment, the factors governing your bottom line are exceptionally unpredictable. Fortunately, there’s a solution for this problem: predictive analytics for supply chain management.
This particular branch of analytics offers an opportunity for organizations to anticipate challenges before they happen. Sounds like an indisputable advantage, yet only 30% of supply chain professionals are using their data to forecast their future.
Though most of the stragglers plan to implement predictive analytics in the next 10 years, they are missing incredible opportunities in the meantime. Here are some of the competitive advantages companies are missing when they choose to ignore predictive operational analytics.
Enhanced Demand Forecasting
How do you routinely hit a moving target? As part of an increasingly complex global system, supply chain leaders face a growing array of expected and unexpected sales drivers and are pressured to turn them into accurate predictions about future demand. Though traditional demand forecasting yields some insight from a single variable or a small dataset, real-world supply chain forecasting requires tools capable of anticipating demand from a messy, multifaceted assembly of key drivers. Otherwise, organizations risk regular profit losses from the bullwhip effect, buying far more product or raw material than is necessary.
For instance, one of our clients, an international manufacturer, struggled to make accurate predictions about future demand using traditional forecasting models. Their dependence on the historical sales data of individual SKUs, longer order lead times, and a lack of seasonal trends hindered their ability to derive useful insight and resulted in lost profits. By implementing machine learning models and statistical packages within their organization, we helped them evaluate the impact of various influencers on the demand for each product. As a result, our client achieved an 8% increase in weekly demand forecast accuracy and a 12% increase in monthly demand forecast accuracy.
This practice can be carried across the supply chain in any organization, whether your demand is relatively predictable with minor spikes or inordinately complex. The right predictive analytics platform can clarify the patterns and motivations behind complex systems to help you to create a steady supply of products without expensive surpluses.
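As a minimal sketch of combining several demand drivers into one forecast, the example below adjusts a recent base level by a seasonal index and an expected promotional lift. The multipliers and sales figures are invented for illustration; a production system would fit these effects from historical data rather than set them by hand.

```python
from statistics import mean

def forecast_demand(history, seasonal_index, promo_lift=1.0):
    """Next-period demand: a recent base level adjusted for seasonality
    and an expected promotional lift. All multipliers are illustrative."""
    base = mean(history[-4:])  # recent average as the base level
    return base * seasonal_index * promo_lift

weekly_units = [100, 120, 110, 130, 125, 135, 128, 140]
# A seasonal index above 1.0 means demand typically runs above baseline
next_week = forecast_demand(weekly_units, seasonal_index=1.25, promo_lift=1.1)
```

Even this toy version shows the difference from single-variable forecasting: the prediction moves with each driver instead of merely extrapolating past sales.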
Smarter Risk Management
The modern supply chain is a precise yet delicate machine. The procurement of raw materials and components from a decentralized and global network has the potential to cut costs and increase efficiencies – as long as the entire process is operating perfectly. Any type of disruption or bottleneck in the supply chain can create a massive liability, threatening both customer satisfaction and the bottom line. When organizations leave their fate up to reactive risk management practices, the costs of these disruptions are especially steep.
Predictive risk management allows organizations to audit each component or process within their supply chain for its potential to destabilize operations. For example, if your organization currently imports raw materials such as copper from Chile, predictive risk management would account for the threat of common Chilean natural disasters such as flooding or earthquakes. That same logic applies to any country or point of origin for your raw materials.
You can evaluate the costs and processes of normal operations and model how each potential disruption would impact your business. Though you can’t prepare for every possible black swan event, you can have contingencies in place to mitigate losses and maintain your supply chain flow.
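To make that audit concrete, the sketch below scores hypothetical disruption scenarios by expected annual cost (likelihood times impact). The probabilities, downtime, and daily cost are invented for the example; a real risk model would estimate them from historical and third-party data.

```python
def expected_disruption_cost(annual_probability, days_down, daily_cost):
    """Expected annual cost of a disruption: likelihood times impact."""
    return annual_probability * days_down * daily_cost

# Hypothetical scenarios for a copper supplier in Chile
daily_cost = 50_000  # assumed cost of one day of lost supply
scenarios = {
    "earthquake": expected_disruption_cost(0.10, 14, daily_cost),
    "flooding": expected_disruption_cost(0.25, 5, daily_cost),
    "port_strike": expected_disruption_cost(0.15, 7, daily_cost),
}
# The scenario with the highest expected cost gets the first contingency plan
top_risk = max(scenarios, key=scenarios.get)
```

Ranking scenarios this way tells you where a contingency plan buys the most protection per dollar, even before any machine learning is involved.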
Formalized Process Improvement
As with any industry facing internal and external pressures to pioneer new efficiencies, the supply chain industry cannot rely on happenstance to evolve. There needs to be a twofold solution in place. One, there needs to be a culture of continuous organizational improvement across the business. Two, there need to be apparatuses and tools in place to identify opportunities and take meaningful action.
For the second part, one of the most effective tools is predictive analytics for supply chain management. Machine learning algorithms are exceptional at unearthing inefficiencies or bottlenecks, giving stakeholders the fodder to make informed decisions. Because predictive analytics removes most of the grunt work and exploration associated with process improvement, it’s easier to create a standardized system of seeking out greater efficiencies. Finding new improvements is almost automatic.
Ordering is an area that offers plenty of opportunities for improvement. If there is an established relationship with an individual customer (be it retailer, wholesaler, distributor, or the direct consumer), your organization has stockpiles of information on individual and demographic customer behavior. This data can in turn be leveraged alongside other internal and third-party data sources to anticipate product orders before they’re made. This type of ordering can accelerate revenue generation, increase customer satisfaction, and streamline shipping and marketing costs.
You already know that data is a gateway for retailers to improve customer experiences and increase sales. Through traditional analysis, we’ve been able to combine a customer’s purchase history with their browser behavior and email open rates to help pinpoint their current preferences and meet their precise future needs. Yet the new wave of buzzwords such as “machine learning” and “AI” promise greater accuracy and personalization in your forecasts and the marketing actions they inform.
What distinguishes the latest predictive analytics technology from the traditional analytics approach? Here are three of the numerous examples of this technology’s impact on addressing retail challenges and achieving substantial ROI.
1. Retain your most valuable customers.
Repeat customers contribute 40% of a brand’s revenue. But how do you know where to invest your marketing dollars to increase your customer return rate? It comes down to predicting which customers are most likely to return and which factors drive the highest customer lifetime value (CLV) for those customers, both of which are great use cases for machine learning.
Consider this example: Your customer is purchasing a 4K HD TV and you want to predict future purchases. Will this customer want HD accessories, gaming systems, or an upgraded TV in the near future? If they are forecasted to buy more, which approach will work to increase their chances of making the purchase through you? Predictive analytics can provide the answer.
One of the primary opportunities is to create a more personalized sales process without mind-boggling manual effort. The sophistication of machine learning algorithms allows you to quickly and accurately review large volumes of data on purchase histories, internet and social media behavior, customer feedback, production costs, product specifications, market research, and other sources.
Historically, data science teams had to run one machine learning algorithm at a time. Now, modern solutions from providers like DataRobot allow a user to run hundreds of algorithms at once and even identify the most applicable ones. This vastly shortens time-to-market and focuses your expensive data science team’s hours on interpreting results rather than just laying groundwork for the real work to begin.
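The "run many models and keep the best" workflow can be sketched without any platform at all. Here is a toy backtest that scores three simple forecasters (stand-ins for real algorithms) against a small series and keeps the one with the lowest error; the series and candidate models are invented for illustration.

```python
from statistics import mean

def mae(actual, predicted):
    """Mean absolute error between two equal-length sequences."""
    return mean(abs(a - p) for a, p in zip(actual, predicted))

# Three stand-in "algorithms" that each predict the next value of a series
candidates = {
    "naive": lambda h: h[-1],
    "moving_avg": lambda h: mean(h[-3:]),
    "weighted": lambda h: 0.5 * h[-1] + 0.3 * h[-2] + 0.2 * h[-3],
}

series = [10, 12, 11, 13, 14, 13, 15, 16]
scores = {}
for name, model in candidates.items():
    # Backtest: predict each point using only the points before it
    preds = [model(series[:i]) for i in range(3, len(series))]
    scores[name] = mae(series[3:], preds)

best_model = min(scores, key=scores.get)
```

Auto-ML platforms do exactly this selection loop at scale, across hundreds of real algorithms and feature pipelines instead of three one-liners.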
2. Attract new customers.
Retailers cannot depend on customer loyalty alone. HubSpot finds that consumer loyalty is eroding, with 55% of customers no longer trusting the companies they buy from. With even long-standing customers susceptible to your competitors, it’s important to keep expanding your base. However, as new and established businesses vie for the same customer base, customer acquisition costs have risen 50% in five years.
Machine learning tools like programmatic advertising offer a significant advantage. For those unfamiliar with the term, programmatic advertising is the automated buying and selling of digital ad space using intricate analytics. For example, if your business is attempting to target new customers, the algorithms within this tool can analyze data from your current customer segments, page context, and optimal viewing time to push a targeted ad to a prospect at the right moment.
Additionally, businesses are testing out propensity modeling to target consumers with the highest likelihood of customer conversion. Machine learning tools can score consumers in real time using data from CRMs, social media, e-commerce platforms, and other sources to identify the most promising customers. From there, your business can personalize their experience to better shepherd them through the sales funnel – even going as far as reducing cart abandonment rates.
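A propensity model can be as simple as a logistic score over behavioral signals. The sketch below uses hand-picked weights purely for illustration; in practice the weights would be fitted to historical conversion data rather than set manually.

```python
import math

def propensity_score(signals, weights, bias=-3.0):
    """Logistic score: estimated probability that a prospect converts."""
    z = bias + sum(weights[k] * v for k, v in signals.items())
    return 1 / (1 + math.exp(-z))

# Illustrative weights; a real model would learn these from CRM and
# e-commerce data rather than have them set by hand
weights = {"site_visits": 0.4, "cart_adds": 1.2, "email_clicks": 0.8}

prospect = {"site_visits": 5, "cart_adds": 1, "email_clicks": 2}
score = propensity_score(prospect, weights)
high_priority = score > 0.5  # route the hottest prospects to personalization
```

Scoring every prospect this way lets marketing spend flow to the consumers most likely to convert instead of being spread evenly.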
3. Automate touch points.
Often, machine learning is depicted as a way to eliminate a human workforce. But that’s a mischaracterization. Its greatest potential lies in augmenting your top performers, helping them automate routine processes to free up their time for creative projects or in-depth problem-solving.
For example, you can predict customer churn based on irregularities in buying behavior. Let’s say that a customer who regularly makes purchases every six weeks lapses from their routine for 12 weeks. A machine learning model can identify if their behavior is indicative of churn and flag customers likely not to return. Retailers can then layer these predictions with automated touch points such as sending a reminder about the customer’s favorite product – maybe even with a coupon – straight to their email to incentivize them to return.
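The six-week example above can be expressed as a simple statistical check: flag a customer when their current silence is far outside their own purchase rhythm. A production churn model would use many more signals, but the core logic looks like this:

```python
from statistics import mean, stdev

def churn_risk(gap_history_weeks, weeks_since_last, threshold=2.0):
    """Flag a customer whose current silence is unusually long relative
    to their own historical purchase cadence (a z-score style check)."""
    mu = mean(gap_history_weeks)
    sigma = stdev(gap_history_weeks)
    if sigma == 0:
        return weeks_since_last > mu
    return (weeks_since_last - mu) / sigma > threshold

# A customer who buys roughly every six weeks, now silent for twelve
gaps = [6, 5, 7, 6, 6, 5, 7]
should_flag = churn_risk(gaps, weeks_since_last=12)
```

The same customer at seven weeks of silence would not be flagged, which is exactly the point: the model reacts to deviations from each customer's own rhythm, not a one-size-fits-all cutoff.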
How to Get Started
Though implementing machine learning can transform your business in many ways, your data needs to be in the right state before you can take action. That involves identifying a single customer across platforms, cleaning up the quality of your data, and identifying specific use cases for machine learning. With the right partner, you can not only make those preparations but rapidly reap the rewards of powering predictive analytics with machine learning.
Want to learn how the 2nd Watch team can apply machine learning to your business? Contact us now.
As many third-party logistics (3PL) companies transition to a data-driven approach, it’s essential to underscore the importance of your data management practices. The way you choose to organize and store data impacts everything from how fast you can access information to which metrics are available. Many data-forward 3PL companies have begun implementing a data vault model to address this strategic decision. The data vault model allows them to address industry-wide challenges such as disparate data, limited visibility into operations, reworking of analytics when acquisitions occur, and slow retrieval or transfer of information.
To assist you in determining the best possible way to organize your data, we will outline the benefits of a data vault model for 3PLs and highlight four use cases to illustrate the benefits for better decision-making.
What is data vault?
A data vault model is known for its practice of separating your data’s primary keys, relationships, and attributes from each other. Let’s say you want to analyze which customers are moving the most loads through you. The relationship between the customer and the load would be stored in one table, while the details about each load and customer would be stored in two separate, but related tables.
Structuring data in this manner helps you account for changing relationships within your data and seamlessly integrate new data sources when acquisitions occur or business rules inevitably change. Additionally, it enables quicker data loading through parallel streams and automatically stores historical data. For more details on what a data vault model is and the benefits it provides, check out this blog by 2nd Watch.
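To make the customer/load example concrete, here is a minimal sketch of that structure using SQLite. The table and column names are invented for illustration, and real data vault implementations also stamp every row with a load timestamp and record source.

```python
import sqlite3

# Minimal data vault sketch: hubs hold business keys, a link holds the
# customer-load relationship, and a satellite holds descriptive attributes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hub_customer (customer_hk TEXT PRIMARY KEY, customer_id TEXT);
CREATE TABLE hub_load     (load_hk TEXT PRIMARY KEY, load_id TEXT);
CREATE TABLE link_customer_load (
    link_hk TEXT PRIMARY KEY, customer_hk TEXT, load_hk TEXT, load_date TEXT);
CREATE TABLE sat_load_details (
    load_hk TEXT, weight_lbs INTEGER, origin TEXT, load_date TEXT);
""")
conn.execute("INSERT INTO hub_customer VALUES ('c1', 'CUST-001')")
conn.execute("INSERT INTO hub_load VALUES ('l1', 'LOAD-001')")
conn.execute("INSERT INTO link_customer_load VALUES ('cl1', 'c1', 'l1', '2023-01-05')")
conn.execute("INSERT INTO sat_load_details VALUES ('l1', 42000, 'Chicago', '2023-01-05')")

# Which customers move the most loads: count relationships in the link table
rows = conn.execute("""
    SELECT h.customer_id, COUNT(*) AS loads
    FROM link_customer_load l JOIN hub_customer h USING (customer_hk)
    GROUP BY h.customer_id
""").fetchall()
```

Because the relationship lives in its own link table, answering "who moves the most loads" never requires touching the attribute tables, and new attributes or sources can be added without reshaping the core model.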
Data vault makes it easier to build a data warehouse with accurate, centralized data
The built-in relationships between data vault entities (hubs, satellites, links) make it easier to build a data warehouse. Structuring your data model around flexible but integrated primary keys allows you to combine data from various source systems easily in your data warehouse. It helps you ensure the data loaded into your reporting is not duplicated or out of date.
The lack of a data governance strategy often means that reporting is inconsistent and inaccurate. It reduces executives’ visibility into departments throughout the organization and limits your ability to create effective reporting because data is disjointed. Implementing a data vault model inherently centralizes your source data and enforces primary keys. This will not only allow you to offer better reporting to customers; accurate data has also been found to be key to shipping accuracy. A strong data warehouse will further your internal analytics abilities by unlocking dashboards that highlight key metrics from revenue to cost-per-pound or on-time performance.
Data vault models make it easy to add new data sources and update business rules without interrupting access to data
A data vault model enables you to centralize data from various sources, while still addressing their differences such as load frequency and metadata. This is accomplished by storing the primary keys for an entity in one table, then creating attribute tables (satellites) specific to separate source systems.
Under a traditional model, most of this data would be held in one table, and adding a new source system would require changes to the table structures, and therefore interruptions to data in production. A scalable data model like data vault allows you to quickly adjust data delivery and reporting if your customers expand to new markets or merge with another company. Not only will this satisfy your current customers, it is also a quality many logistics companies seek when choosing a 3PL partner. Accommodating multiple source systems and implementing business rules flexibly is key for any 3PL company’s data solution.
Data vault models allow for parallel loading, which gets you and your customers access to data faster
Data vault separates its source systems and data components into different tables. In doing so, it eliminates dependencies within your data and allows for parallel loading, meaning that multiple tables can be loaded at once rather than in a sequence. Parallel loading dramatically reduces the time it takes to access refreshed data.
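The effect is easy to demonstrate with a sketch: three independent "table loads," simulated here with short sleeps, finish in roughly the time of the slowest one rather than the sum of all three. The table names and durations are invented for the example.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_table(name, seconds):
    """Stand-in for loading one data vault table (hub, link, or satellite)."""
    time.sleep(seconds)
    return name

# Independent tables with no load-order dependencies between them
tables = {"hub_customer": 0.2, "hub_load": 0.2, "link_customer_load": 0.2}

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    loaded = list(pool.map(load_table, tables.keys(), tables.values()))
elapsed = time.perf_counter() - start
# Sequential loading would take the sum of all durations; parallel loading
# takes roughly the duration of the slowest single table
```

In a traditional model with cross-table dependencies, these loads would be forced into a sequence; removing the dependencies is what makes the parallelism possible.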
Many 3PL companies offer customers access to high-quality reporting. Implementing a data vault model to load data quicker allows customers to gain insights in near-real-time. Furthermore, key metrics such as order accuracy, return rates, and on-time shipping percentage rely on timely data: they either require you to respond quickly to a problem or become stale if your data takes too long to load. The faster you access your data, the more time you have to act on your insights. This ultimately enables you to increase your accuracy and on-time shipments, leading to more satisfied customers.
Data vault models automatically save historic data required for advanced analytics
Whether you are looking for more advanced forecasting or planning to implement machine learning analytics, you will need to rely on historical data. Satellite tables, mentioned previously, store attribute information. Each time a feature of an order, a shipment, or an employee changes, it is recorded in a satellite table with a timestamp of when the change occurred. The model tracks the same information for changing relationships. This data allows you to automatically tie larger events to the specific attribute values in effect when those events occurred.
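Here is a hedged sketch of that timestamped history: each change is appended as a new satellite row rather than overwriting the old one, so the state at any past moment can be replayed. The field names and statuses are illustrative.

```python
from datetime import datetime

# Satellite-style history: attribute changes are appended, never overwritten
sat_shipment_status = []

def record_status(load_hk, status, load_ts):
    sat_shipment_status.append(
        {"load_hk": load_hk, "status": status, "load_ts": load_ts})

record_status("l1", "picked_up", datetime(2023, 1, 5, 8, 0))
record_status("l1", "in_transit", datetime(2023, 1, 5, 14, 0))
record_status("l1", "delivered", datetime(2023, 1, 6, 9, 30))

def status_as_of(load_hk, ts):
    """Replay history to recover a load's status at any point in time."""
    rows = [r for r in sat_shipment_status
            if r["load_hk"] == load_hk and r["load_ts"] <= ts]
    return max(rows, key=lambda r: r["load_ts"])["status"] if rows else None
```

This replay ability is exactly what forecasting and machine learning initiatives need: the attribute values as they were when an event happened, not as they are today.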
3PL companies without data vault models often lose this history of attributes and relationships. When they pursue initiatives to find nuanced trends within their data through advanced analytics, their implementation is roadblocked by the task of generating adequate data. Alternatively, 3PL companies with a data vault model are ready to hit the ground running. Having historical data at your fingertips makes you prepared for any advanced analytics strategy.
2nd Watch has vast experience integrating 3PL companies’ key financial and operational data into a centralized hub. This immediately enables quick, reliable, and holistic insights to internal stakeholders and customers. Furthermore, it lays the groundwork for advanced predictive analytics that allow your teams to proactively address key industry challenges, including late deliveries, volatile market rates, and equipment failure.
Reach out to 2nd Watch for assistance getting started with data vault or evaluating how it may fit in with your current data strategy.
Whether it’s an economic downturn, increased competition, or pressure from executives, third-party logistics (3PL) companies are continuously tasked with finding ways to reduce costs while ensuring operations run smoothly. Fortunately, 3PL companies have a wealth of data at their fingertips that they can use to find opportunities to reduce costs, increase efficiencies, and more. In this blog, we’ll review three tangible ways data and analytics can help 3PLs save money while improving operations.
1. Reduce transportation costs with a holistic view of your business processes.
As a 3PL company, moving your customers’ products on time and at the lowest possible cost keeps them satisfied, especially when they are likely looking to reduce costs on their end as well. By collecting and centralizing data from your various supply chain processes, your logistics team will gain a holistic understanding of the various touch points of your business process. This data can uncover KPIs such as the cheapest routes as well as cost-per-unit, cost-per-truck, and lead times.
Taking it a step further by layering in external information, such as weather and gas prices, provides even more insights for reducing transportation costs. Visualizing transportation data in an intuitive dashboard will tell your team the story of what is actually happening and what could happen through trends, maps, and metrics. This makes it abundantly clear where improvements are needed, and how costs can be cut.
Takeaway: A holistic view of your business processes can help your team make decisions to increase their productivity and shipment efficiency, which will directly translate into more cost-effective shipping rates.
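As an illustration of rolling raw shipment records up into those KPIs, the sketch below computes cost-per-unit by route from a few invented records; a transportation dashboard would sit on top of exactly this kind of aggregation.

```python
# Invented shipment records; in practice these come from your TMS or ERP
shipments = [
    {"route": "CHI-DAL", "cost": 1800.0, "units": 900},
    {"route": "CHI-DAL", "cost": 2000.0, "units": 1000},
    {"route": "CHI-ATL", "cost": 1500.0, "units": 500},
]

def cost_per_unit_by_route(records):
    """Aggregate total cost and units per route, then divide."""
    totals = {}
    for s in records:
        t = totals.setdefault(s["route"], {"cost": 0.0, "units": 0})
        t["cost"] += s["cost"]
        t["units"] += s["units"]
    return {route: t["cost"] / t["units"] for route, t in totals.items()}

kpis = cost_per_unit_by_route(shipments)
cheapest_route = min(kpis, key=kpis.get)
```

Layering in external data such as weather or fuel prices just means adding columns to the records and dimensions to the aggregation.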
2. Increase operational efficiency with strategic inventory dashboards.
There are multiple touchpoints a 3PL is required to oversee during the process from when a product is ordered to the moment it reaches its final destination. Your warehouse operations can impact most of these touchpoints in one way or another. Ensuring your warehouse runs smoothly is critical for a 3PL to garner trust and continued business from its customers. One of the best ways to achieve this is through a data-centric inventory management strategy. With centralized and organized inventory data, you can track inventory through each phase of the cycle. KPIs such as order time-to-fill, operation capacity, and storage type utilization can be tracked side by side on dashboards to highlight where there is room for improvement.
Takeaway: Ensuring that supply chain efforts are accurate and efficient will strengthen customer retention and reduce costs associated with under-utilized storage and compensation for inaccurate shipments.
3. Decrease your cost to serve by breaking down its components in a dashboard.
Focusing on your cost-to-serve metric will help you determine areas of inefficiencies and non-profitable customers. Start by centralizing data from various inputs (e.g., sales, employee expenses, business processes, etc.) in a data warehouse so you can access every component that goes into serving your customers. Then, develop a dashboard that aggregates the cost-to-serve metric and breaks it down into its individual components. Furthermore, you could add filters to analyze specific customers, employees, or even the types of materials your customers ship. This breakdown enables you to analyze what costs are required to make a load successful and which costs can be cut without impacting your customer satisfaction.
You could additionally compare customer revenues to the costs associated with serving them to determine how profitable each customer actually is. From there, you could further the analysis to identify future customers who may yield the most profits using machine learning.
Takeaway: Breaking down and analyzing your cost to serve in a logistics dashboard will help your organization reduce costs and identify more profitable customers, business units, or general operational improvements.
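A minimal sketch of that breakdown, with invented numbers: summing per-customer cost components and comparing against revenue immediately surfaces which relationships are unprofitable.

```python
def cost_to_serve(components):
    """Total cost to serve a customer: the sum of its cost components."""
    return sum(components.values())

# Hypothetical per-customer revenue and cost components
customers = {
    "acme": {"revenue": 50_000,
             "costs": {"transport": 22_000, "warehouse": 9_000,
                       "admin": 4_000, "returns": 3_000}},
    "globex": {"revenue": 30_000,
               "costs": {"transport": 18_000, "warehouse": 8_000,
                         "admin": 3_500, "returns": 2_500}},
}

margins = {name: c["revenue"] - cost_to_serve(c["costs"])
           for name, c in customers.items()}
```

In this invented example one customer is quietly losing money, which is precisely the insight a cost-to-serve dashboard exists to surface; filters by customer, employee, or material type are just additional keys in the same breakdown.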
As shown through these examples, the primary starting point is to centralize your data if you have not already done so. From there, it’s easy to craft strategic dashboards that unlock insights into cost-saving opportunities. And these examples are just the tip of the iceberg: 3PL data offers many other creative ways to reduce costs. 2nd Watch has vast experience helping companies identify effective, cost-saving data strategies. Feel free to contact 2nd Watch for help determining the data analytics solutions that may work best for reducing costs at your company, or get started with a modern data quickstart for 3PLs.
While most servers spend the majority of their time well below peak usage, companies often pay for max usage 24/7.
Cloud providers enable the ability to scale usage up and down, but determining the right schedule is highly prone to human error.
Machine learning models can be used to predict server usage throughout the day and scale the servers to that predicted usage.
Depending on the number of servers, savings can be in the millions of dollars.
How big of a server do you need? Do you know? Enough to handle peak load, plus a little headroom? How often does your server actually run at peak utilization? Two hours per day? Ten? If your server runs at peak load for only two hours per day, you are paying for 22 hours of peak performance that you aren’t using. Multiply that inefficiency across many servers, and that’s a lot of money spent on compute power sitting idle.
Cloud Providers Make Scaling Up and Down Possible (with a Caveat)
If you’ve moved off-premise and are using a cloud provider such as AWS or Azure, it’s easy to reconfigure server sizes if you find that you need a bigger server or if you’re not fully utilizing the compute, as in the example above. You can also schedule these servers to resize if there are certain times where the workload is heavier. For example, scheduling a server to scale up during nightly batch processes or during the day to handle customer transactions.
The ability to schedule is powerful, but it can be difficult to manage the specific needs of each server, especially when your enterprise uses many servers for a wide variety of purposes. The demands on a server can also change, perhaps without IT’s knowledge, requiring close monitoring of the system. Managing server schedules becomes yet another task to pile on top of all of IT’s other responsibilities. If only there were a solution that could recognize the needs of a server and create dynamic schedules accordingly, without any intervention from IT. This type of problem is a great fit for machine learning.
How Machine Learning Can Dynamically Scale Your Server Capacity (without the Guesswork)
Machine learning excels at taking data and creating rules. In this case, you could use a model to predict server utilization, and then use that information to dynamically create schedules for each server.
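As a hedged sketch of that idea, assume you have hourly CPU history per server and a set of discrete capacity tiers. The "prediction" below is just an average over recent days, standing in for a trained model; the tiers, headroom, and utilization figures are invented for the example.

```python
from statistics import mean

def predict_hourly_profile(history_by_day):
    """Predict utilization per hour as the average over recent days
    (a stand-in for a trained machine learning model)."""
    return [mean(day[h] for day in history_by_day) for h in range(24)]

def size_schedule(profile, tiers=(25, 50, 75, 100), headroom=10):
    """Map predicted utilization (plus headroom) to the smallest
    adequate capacity tier for each hour of the day."""
    return [min(t for t in tiers if t >= min(p + headroom, 100))
            for p in profile]

history = [[15] * 8 + [70] * 10 + [15] * 6,   # two days of hourly CPU %
           [20] * 8 + [80] * 10 + [20] * 6]
schedule = size_schedule(predict_hourly_profile(history))
```

The resulting schedule scales the server up only for the busy daytime window and back down overnight, which is the behavior you would otherwise have to hand-tune per server.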
Server Optimization In Action
We’ve previously done such an application for a client in the banking industry, leading to a 68% increase in efficiency and a cost savings of $10,000 per year for a single server. When applied to the client’s other 2,000 servers, this method could lead to savings of $20 million per year!
While the actual savings will depend on the number of servers employed and the efficiency at which they currently run, the cost benefits will be significant once the machine learning server optimization model is applied.
If you’re interested in learning more about using machine learning to save money on your server usage, click here to contact us about our risk-free server optimization whiteboard session.