4 Ways to Evaluate Your Enterprise Data Science Projects

For your models to be successful, you must first determine exactly what success looks like for your organization

Vish Vishwanath
Ravi Pasula

Today’s enterprises are racing to capitalize on the power of data science, but they may not always know exactly what a successful data science project looks like. Business leaders are primarily concerned with business KPIs such as revenue growth, efficiency and productivity. In contrast, data science teams deal in technical KPIs such as model accuracy and precision.

For their projects to be successful, data science teams must bridge this gap, helping executives understand how technical KPIs translate into business KPIs. When both business and technical stakeholders speak the same language, they can determine what they want to accomplish with their data science models, and then collaborate to achieve that goal.

In this blog, we’ll cover four steps data science teams should take to evaluate the success of their projects and demonstrate that success to business stakeholders:

  • Connect use cases to strategy
  • Schedule regular checkpoints and communicate results clearly
  • Add context for consistency and explainability
  • Evaluate success from an infrastructure standpoint

Connect use cases to strategy

Like any other business process, data science projects should contribute in some meaningful way to achieving the company’s strategic objectives. The problem data science teams often face is that company strategies can be too broad to be helpful. Let’s say that your company’s strategic objective is to grow revenue. That’s an objective for every business, and because so many factors influence revenue, it would be difficult to connect that objective directly to your AI use cases.

We’d all like to believe that our AI models directly contribute to revenue growth, but if we actually want to show quantifiable results, we should focus our attention on something more specific and granular. Let’s say that your model helps salespeople quickly find the information they need. You can safely assume that increasing salesforce efficiency would eventually result in increased revenue, but quantifying that direct correlation would be nearly impossible.

Instead, you should define success using the business KPIs that are most relevant to the project. You could determine how many hours the average salesperson saved using your models, and then multiply that number by the total number of salespeople to determine the overall benefit. This gives you a quantifiable way to determine how successful the project really was. These KPIs won’t be relevant to stakeholders outside the sales function, but that’s not a problem: different stakeholders have different strategic priorities, so it stands to reason you’d align different models to different stakeholders.
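
As a rough illustration, here is a minimal sketch of what that back-of-the-envelope calculation might look like in Python. Every figure (hours saved, team size, loaded hourly cost, working weeks) is a hypothetical placeholder, not a number from a real project.

```python
# Translate a technical result (time saved per rep) into a business KPI.
# All figures below are hypothetical placeholders for illustration only.

hours_saved_per_rep_per_week = 2.5   # estimated time saved by the model
number_of_reps = 400                 # size of the sales team
loaded_hourly_cost = 75              # fully loaded cost per sales hour, in dollars
weeks_per_year = 48                  # working weeks per year

annual_hours_saved = hours_saved_per_rep_per_week * number_of_reps * weeks_per_year
annual_value = annual_hours_saved * loaded_hourly_cost

print(f"Estimated hours saved per year: {annual_hours_saved:,.0f}")
print(f"Estimated annual value: ${annual_value:,.0f}")
```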

Schedule regular checkpoints and communicate results clearly

Data science teams must meet with business stakeholders regularly to assess how different models are contributing to strategic priorities. Preventing customer churn, for instance, is another strategic priority that would contribute to the overall goal of increasing revenue. One way you could evaluate your churn prediction models is to sample your data sets.

Suppose your models identified a list of 500 customers that were likely to churn. You could take a random sample of 10 of those customers and do a deep dive to determine whether the model provided helpful results. You could look at whether or not those customers churned, but also consider what behavior they exhibited that indicated they might churn.
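
A minimal sketch of that sampling step might look like the following, assuming a hypothetical pandas DataFrame of model output. The column names and probability values are illustrative only.

```python
import pandas as pd

# Hypothetical model output: 500 customers flagged as likely to churn
flagged = pd.DataFrame({
    "customer_id": range(1, 501),
    "churn_probability": [0.70 + (i % 30) / 100 for i in range(500)],
})

# Draw a reproducible random sample of 10 accounts for a manual deep dive
review_sample = flagged.sample(n=10, random_state=42)
print(review_sample.sort_values("churn_probability", ascending=False))
```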

One way to get the behavioral insights you need to evaluate your churn model is by performing a peer group analysis. This means you would compare the customers identified by the model with a database of customers that have churned in the past. For example, one reason the model may have flagged a particular customer is that they opened an abnormally high volume of support tickets recently. If many former customers opened a similar volume of tickets shortly before churning, it would indicate that this behavior is indeed an indicator of growing customer dissatisfaction, and that the model was right to consider the customer at risk.
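
Here is one hedged sketch of how such a peer group comparison might be set up, assuming hypothetical ticket counts for the flagged customers and for a cohort that churned in the past. The schema and the median threshold are illustrative choices, not a prescribed method.

```python
import pandas as pd

# Hypothetical recent support-ticket counts for customers flagged by the model
flagged_tickets = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "tickets_last_90_days": [14, 3, 11],
})

# Hypothetical ticket counts for customers who churned in the past
churned_history = pd.DataFrame({
    "customer_id": [901, 902, 903, 904],
    "tickets_last_90_days_before_churn": [12, 15, 9, 13],
})

# Compare each flagged customer against the churned peer group's typical volume
churned_median = churned_history["tickets_last_90_days_before_churn"].median()
flagged_tickets["matches_churn_pattern"] = (
    flagged_tickets["tickets_last_90_days"] >= churned_median
)
print(flagged_tickets)
```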

This is just one simple example of how you can translate AI metrics into business metrics, using plain English that non-technical leaders can easily understand. You could even go a step further by building a dashboard to communicate the results of your models, abstracting the AI metrics altogether. By giving your business users only the results that they care about, you can help them get comfortable using AI-driven insights in their everyday work. This will help non-technical stakeholders understand that the point of AI projects is not to replace their role, but to help them do it better.

Add context for consistency and explainability

Imagine you’re trying to decide where your business should expand next. You’ve recently been growing the business in Japan, and all your models indicate that there’s still high demand for your services there. Before you make a long-term investment in the country, how do you know if that demand will still be there five years from now?

To answer this, you need more context for your models. You can feed external data sets into your internal models to make sure you’re getting the full picture. For instance, if you include macroeconomic data for Japan—its GDP, growth rate and inflation rate—you might find that your business doesn’t have as much growth potential as your internal data sets might have indicated.
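
As an illustration, here is a minimal sketch of joining internal demand data with external macroeconomic indicators before modeling. The country-level figures are placeholders, not real statistics, and the column names are assumptions made for the example.

```python
import pandas as pd

# Hypothetical internal pipeline data by country and quarter
internal_demand = pd.DataFrame({
    "country": ["Japan", "Japan"],
    "quarter": ["2023Q1", "2023Q2"],
    "pipeline_value_musd": [12.4, 13.1],
})

# Hypothetical external macroeconomic context for the same periods
macro_context = pd.DataFrame({
    "country": ["Japan", "Japan"],
    "quarter": ["2023Q1", "2023Q2"],
    "gdp_growth_pct": [1.2, 0.9],
    "inflation_pct": [3.3, 3.1],
})

# Combine internal and external signals into one enriched modeling dataset
enriched = internal_demand.merge(macro_context, on=["country", "quarter"])
print(enriched)
```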

Models that only pull from internal data sets lack the context needed to establish consistency over time. They have no way of knowing that your business may already be approaching the point of market saturation in the country. These models may deliver results that are technically accurate based on current conditions, but ultimately misleading over the long term.

It’s also important for stakeholders to understand that AI models deliver guidance for business decisions based on probability. Adding more data from both internal and external sources can help provide the additional context needed for better results, but there is no such thing as a model that provides 100% accuracy. Business leaders can use these probabilistic outcomes to make more informed decisions, but they must be aware that there are external factors the model has no way of accounting for.
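
To make that concrete, here is a tiny sketch of how a probabilistic output might feed an expected-value calculation rather than a yes/no answer. The probability, deal value and pursuit cost are all hypothetical.

```python
# Use a model's probability as one input to a decision, not as a guarantee.
win_probability = 0.35        # model's predicted probability (hypothetical)
deal_value = 250_000          # hypothetical contract value in dollars
pursuit_cost = 40_000         # hypothetical cost of pursuing the deal

expected_value = win_probability * deal_value - pursuit_cost
print(f"Expected value of pursuing the deal: ${expected_value:,.0f}")
```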

Consider the pandemic: In the early days, it broke all the models due to the disruption it caused, but then it broke all the models again during the recovery period. This is a big reason we’ve seen so much chaos in the labor market lately. Many businesses hired based on rapid growth patterns that were directionally true, but ultimately short-lived. Once the post-pandemic growth started to cool, those same businesses realized that they had over-hired. Both the over-hiring and the resulting layoffs can likely be attributed to business leaders assuming that the results of their models were sustainable, when the context of those models was always much more complicated.

Evaluate success from an infrastructure standpoint  

Like any other digital use case, data science projects depend on having the right infrastructure in the right places. As you evaluate your results, make sure to consider your digital infrastructure investments. How much CAPEX and OPEX goes toward supporting your projects? Do the results justify the spending? How can you quantify the ROI of your projects in a way that will inspire business leaders to continue investing?
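
One simple way to frame those questions is a back-of-the-envelope ROI calculation like the sketch below. Every figure is a hypothetical placeholder; a real assessment would pull actual spend and attributed value from your own finance and KPI data.

```python
# Compare attributed business value against infrastructure spend.
# All figures are hypothetical placeholders for illustration only.
annual_capex = 300_000        # amortized hardware / deployment spend per year
annual_opex = 180_000         # cloud, power, support and staffing per year
annual_benefit = 750_000      # business value attributed to the project's KPIs

annual_cost = annual_capex + annual_opex
roi_pct = (annual_benefit - annual_cost) / annual_cost * 100
print(f"Estimated annual ROI: {roi_pct:.0f}%")
```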

Where you run your AI workloads will inevitably influence how successful they are. Many IT leaders went all-in on cloud services for their AI workloads, only to realize that it wasn’t always the most cost-effective option. Instead, a hybrid multicloud approach is often the best way to meet the diverse infrastructure needs of data science projects. It allows data science teams to distribute their inference and training workloads across different locations to meet the different needs those workloads have around performance, proximity/latency and cost-efficiency.

In addition, a cloud-adjacent approach allows enterprises to take advantage of cloud services on demand, without having to store their data in the cloud. This allows them to avoid data egress fees, which can seriously add up if they’re moving data into and out of the cloud frequently.

Platform Equinix® offers 240+ data centers spread across 70+ global metros, making it easy for enterprises to deploy their inference workloads at the digital edge. In addition, Equinix Metal®, our Bare Metal as a Service offering, can be an ideal solution for businesses that can’t justify the cost and complexity of deploying their own hardware for their inference workloads. Many Equinix IBX® data centers are also home to cloud on-ramps from top vendors, which makes it easy for businesses to move their data into the cloud for their model training workloads.

To learn more about how Equinix and its partner NVIDIA are helping enterprises get the infrastructure they need to fast-track their AI projects, read the ESG white paper Enable AI at Scale with NVIDIA and Equinix.

Vish Vishwanath Vice President, Global IT Enterprise Analytics & Data Science
Ravi Pasula Senior Director, Data Science