Here is a hack I use to evaluate an engineering team’s AI experience: teams that are new to AI focus too much on the machine learning (ML) model. Because they usually come from a traditional IT or product background, they think of an ML model as an isolated library that performs a specific function.
This perspective almost always leads to a suboptimal solution. In this issue of FeedForward, I’ll describe the difference between how pros and amateurs think about AI.
Your goal as an AI leader is to get your teams to think like pros. You want them to strategically look for ways in which AI can lift the entire business instead of just solving a narrowly defined problem. Your team should constantly seek ways to advance the bigger vision of becoming an AI-driven company.
The following table summarizes some differences between how amateurs and pros think about AI.

| Dimension | How amateurs think | How pros think |
| --- | --- | --- |
| Problem definition | Insert an ML model into one step of an existing process | Rethink the entire process for a tenfold improvement |
| Metrics | Discuss model metrics at the project's outset | Define performance within the context of the business problem |
| Production deployment | Deploy the model like any other software library | Plan for the full set of ML ops components from day one |
| Data | Treat data as a resource for optimizing existing workflows | Design the solution around data and decisions |
| Endpoint | Expect a "finished" model that enters maintenance mode | Expect an evolving solution with no endpoint |
Let’s explore each dimension in detail.
Problem definition
AI is still new, and every company struggles to identify the best use cases for the technology. The primary challenge is defining the business problem that AI can solve. Amateurs and pros approach this task differently.
How amateurs talk about the problem
Amateurs look for isolated points to introduce ML models. They look at an existing business process and ask, “Where could I insert an ML model to make this process more efficient?”
For example, amateurs look at a complex set of business rules and discuss options for replacing the rules with an ML model. Because machine learning models can simplify infrastructure and reduce maintenance costs, this approach is perfectly valid. But it limits the impact of the engineering work and rarely results in more than single-digit efficiency improvements.
How pros talk about the problem
Pros look at the entire business process and ask, “How could AI make the process ten times more efficient?” They talk about improving the entire system instead of optimizing a single step.
Pros start by looking at the output of a business process. The output of most workflows is a human-driven decision based on transformed data. Pros evaluate whether a novel AI methodology could run the process. Defining the problem in terms of the entire process is hard and takes creativity, but it ultimately results in an optimal solution.
Metrics
Metrics are a critical part of AI because models don’t lend themselves to easy analysis. Data scientists have developed metrics and best practices to monitor whether models are performing as needed. But metrics need to be introduced at the appropriate time in an AI project.
How amateurs talk about metrics
Because amateurs take a model-centric approach to AI, they start talking about metrics at the project’s outset. They think their job is to create an isolated model that performs at a desired threshold. For example, they might ask about F1 scores and worry about overfitting. This approach is understandable because online classes and textbook exercises emphasize metrics.
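To make this concrete, here is a minimal sketch of the kind of model-centric check amateurs reach for; the synthetic dataset and classifier choice are illustrative assumptions, not a recommendation:

```python
# A minimal sketch of a model-centric metric check; the synthetic
# dataset and classifier choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_f1 = f1_score(y_train, model.predict(X_train))
val_f1 = f1_score(y_val, model.predict(X_val))

# A large gap between training and validation F1 is the classic overfitting signal.
print(f"train F1: {train_f1:.3f}  validation F1: {val_f1:.3f}  gap: {train_f1 - val_f1:.3f}")
```

The check itself is useful, but it says nothing about whether the solution moves the business.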
How pros talk about metrics
Pros realize that metrics can be misleading when they are naively applied. Not all errors carry the same cost, and not all predictions are equally relevant. For example, measuring a solution’s ability to generalize to new data is a waste of time if you have insufficient data to address the business needs. Because data scarcity is usually an issue at the beginning of a project, pros often overfit the model to first prove the viability of the solution.
Metrics measure the performance of the model. But that performance must be holistically defined within the context of the business problem.
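As an illustration of how business-defined performance can diverge from a textbook metric, here is a minimal sketch that weighs errors by hypothetical business costs; the cost figures and prediction vectors are assumptions, not numbers from any real system:

```python
# A minimal sketch of cost-weighted evaluation; the cost figures and prediction
# vectors below are illustrative assumptions, not real business numbers.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

COST_FALSE_NEGATIVE = 500.0  # e.g., a missed fraud case (assumed figure)
COST_FALSE_POSITIVE = 5.0    # e.g., an unnecessary manual review (assumed figure)

def business_cost(y_true, y_pred):
    """Total cost of a batch of predictions under the assumed cost structure."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return fn * COST_FALSE_NEGATIVE + fp * COST_FALSE_POSITIVE

y_true  = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
model_a = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])  # misses one positive, no false alarms
model_b = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])  # catches every positive, three false alarms

for name, y_pred in [("model A", model_a), ("model B", model_b)]:
    print(f"{name}: F1 = {f1_score(y_true, y_pred):.2f}, "
          f"business cost = {business_cost(y_true, y_pred):.0f}")
```

In this sketch, the model with the better F1 score is also the one that costs the business the most, which is exactly the kind of trap naively applied metrics set.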
Production deployment
Creating models in Jupyter notebooks can be challenging. But putting an AI solution into production is a much bigger challenge for most companies.
How amateurs talk about production deployment
Amateurs don’t invest much time worrying about deployment. They often say, “We already know how to deploy software and have a process for doing it.” They usually think of the model as an isolated library that can be accessed through a clear interface. But the entire solution is much more complicated than the model.
How pros talk about production deployment
An entire AI solution involves many complex components, such as data pipelines, feature stores, monitoring, interfaces, training workflows, inference workflows, feedback loops, and the model itself. Collectively, these components are often called ML operations (ML ops). The tools and platforms are still immature, and best practices for production deployment are not yet well established.
Pros are aware of these challenges. They begin talking about them at the beginning of any project.
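As a rough illustration of how much surrounds the model, here is a sketch that treats the components listed above as a readiness checklist; the class and field names are hypothetical, not a prescribed ML ops stack:

```python
# A rough sketch, not a prescribed ML ops stack; the class and field names are
# illustrative and simply mirror the components listed in the text.
from dataclasses import dataclass, fields

@dataclass
class AISolution:
    data_pipeline: bool = False       # raw data ingested, validated, versioned
    feature_store: bool = False       # features computed consistently for training and serving
    training_workflow: bool = False   # repeatable retraining process
    inference_workflow: bool = False  # serving path with defined interfaces and latency
    monitoring: bool = False          # drift, data-quality, and performance alerts
    feedback_loop: bool = False       # outcomes flow back into the next iteration
    model: bool = False               # the part amateurs tend to focus on

    def production_ready(self) -> bool:
        """The solution ships only when every component is in place, not just the model."""
        return all(getattr(self, f.name) for f in fields(self))

solution = AISolution(model=True)   # a great model alone...
print(solution.production_ready())  # ...still prints False
```

A pro's project plan covers every item on that checklist; an amateur's plan covers only the last one.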
Data
Most companies are organized around specific business processes and work functions. They create scale by optimizing people-driven processes. In these companies, software’s primary function is helping people work with data more efficiently.
This sort of legacy structure isn’t ideal for AI. An optimal AI solution is designed around data and decisions, without regard to existing workflows and processes.
How amateurs talk about data
Your product teams have been trained to take a customer-centric approach to building applications. One of their biggest fears is building something nobody will use.
Amateurs rigidly bring this same mindset to their first AI projects. They look for opportunities to optimize the work of individual business partners or departments. In their people-driven approach to AI, they view data as a resource for achieving their goals rather than a driver for determining their goals.
How pros talk about data
Pros take a data-driven approach to AI. They try to ignore existing structures and instead look for ways to use data to make better decisions. Data is the primary resource, and data is the output. Departments and job functions are all legacies of the pre-AI era. They’re subject to wholesale change as AI takes over.
Endpoint
AI is evolving faster than any of us can track. Today’s state-of-the-art approaches could be obsolete in a matter of months. But taking advantage of the technological advances will be critical for survival in the business world. Planning is particularly challenging in such a chaotic environment.
How amateurs talk about the endpoint
Amateurs talk about “finishing” a model. They imagine a hypothetical endpoint in which the model has been successfully trained to perform a function. They might even ask the data scientist, “When will you be finished?” They expect the model to transition into a maintenance mode that requires only periodic improvements, like software libraries do.
How pros talk about the endpoint
Pros don’t talk about an endpoint at all. They expect the pace of AI research to keep accelerating and new techniques to keep arriving. They trust that AI will play a growing role in the solution. To an AI pro, there’s no “finished”; there’s just a next step in the process.
Shift the mindset to shift the outcome
Creating a high-performing AI team requires changing perspectives. Your teams will begin their AI journey as we all do: by trying to map AI onto their existing worldview. They’ll view the model as an isolated component that neatly fits into existing workflows. Teams naturally start with a model-centric approach.
As an AI leader, you can begin coaching your team to think like pros:
- Define the problem in terms of the desired outcome and how AI could deliver it. Consider the entire process.
- Before worrying about metrics, develop a viable solution based on the broad vision.
- Ask about production deployment challenges from the outset.
- Take a data-centric approach. Try to ignore the existing organizational structure and jobs.
- Shift the discussion away from “done” to the next step in an evolving solution.
Your company (and your competitors) likely evolved in the pre-AI era. The organizations that survive the AI revolution will be those that can reinvent themselves with AI at the core. Encourage your teams to move away from model-centric thinking and toward solution-centric thinking, and you’ll be poised to be one of the winners.