Meeting Data-Intensive Challenges With Narrow AI

By Ben Graf, P.E., M.SAME

For targeted, data-intensive problems, such as a recent challenge undertaken by the Air Force Civil Engineer Center to predict construction project priorities, traditional narrow artificial intelligence still outperforms emerging generative tools.
Narrow AI models, such as a recent effort developed by the Air Force Civil Engineer Center to predict construction project priorities, excel at specific data-intensive tasks but require certain drivers to find success. U.S. Air Force photo.

With an avalanche of breakthroughs over the last two years, artificial intelligence (AI) has dominated headlines, leaving organizations working to capitalize on its vast potential while also evaluating risks and security implications. Much of the hype centers on generative AI, which produces text, images, video, audio, and other content. While this innovation has augmented areas previously the exclusive domain of humans, the tools struggle with data-intensive tasks. In these applications, traditional narrow AI remains the preferred solution.

While generative AI generates, narrow AI performs prediction, optimization, or classification tasks for targeted problems. “Targeted” is the operative word; narrow AI is so called because it is designed to solve a highly specific narrow problem. Narrow AI also can be referred to as discriminative AI. In this context, “discriminative” denotes the ability of the models to discriminate between two or more potential outcomes (success or failure) or categories (legitimate or fraudulent), then predict which outcome or category is applicable to a new observation.

Validating Decisions

Whereas traditional analysis is typically descriptive (it answers the question “What happened?”), narrow AI allows a user to transition to a different set of questions: either the diagnostic (“Why did it happen?”) or the predictive (“What will happen?”). These predictions can be categorical, as in predicting success or failure. Or they can be numerical, such as predicting temperature or cost growth.

Narrow AI also can work very well in optimization problems, such as a model implemented recently by the Air Force Civil Engineer Center (AFCEC) to predict which military construction projects are most likely to receive funding authority. The solution enabled the team to focus its limited resources on the projects most likely to make the budget cut.

Drivers for Success

When launching a narrow AI project like the AFCEC model, organizations should consider three key success drivers and set realistic expectations for success rates and development times. These factors directly impact the outcomes of narrow AI projects.

Problem Statement Clarity. First among the keys to success is having a clear, precise problem statement. “Apply AI to our execution process” might be a typical problem statement—but it is nowhere near precise enough. A vague problem statement means different things to different people and is likely to lead to numerous false starts, confusion, and frustration.

To effectively leverage narrow AI, a better version would be: “Predict which construction projects are most likely to be funded next year.” With the objective clearly defined, the team can proceed efficiently to data collection and model training.

Quality, Labeled Data. Quality data constitutes the second key driver of narrow AI success. Employing AI involves training models, and this training cannot happen without data. The data should be as high quality as can be obtained, since the model will only be as good as its training data. More data is generally better, since it allows AI models to tease out more hidden dependencies. It is also important to provide data that covers diverse scenarios in order to develop a model that generalizes well. For instance, a cost growth model trained only on private sector hotel construction would not be expected to provide accurate predictions for military runways.

Additionally, narrow AI typically relies on labeled, or annotated, training data. The model digests training data in much the same way that a human learns from flashcards. If a bank wants to train a model to recognize fraudulent transactions, the training process must involve showing the model lots of flashcards with transactions labeled “fraudulent” and many others labeled “legitimate.”
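The flashcard analogy can be made concrete with a toy sketch in Python. The transactions, features, and labels below are invented for illustration, and a simple nearest-neighbor rule stands in for whatever model a bank would actually train; the point is only that the model learns the labels from examples and then applies them to a new observation.

```python
# Toy illustration of learning from labeled "flashcards".
# Each card pairs features with a label. Features here (hypothetical):
# (transaction amount in dollars, hour of day).
training_cards = [
    ((12.50, 14), "legitimate"),
    ((8.99, 10), "legitimate"),
    ((43.10, 18), "legitimate"),
    ((950.00, 3), "fraudulent"),
    ((1200.00, 2), "fraudulent"),
    ((880.00, 4), "fraudulent"),
]

def predict(features):
    """Label a new transaction like its closest flashcard (1-nearest-neighbor)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(training_cards, key=lambda card: distance(card[0], features))
    return nearest[1]

print(predict((1000.00, 3)))  # near the large, late-night examples -> "fraudulent"
print(predict((15.00, 13)))   # near the small, daytime examples -> "legitimate"
```

In practice the labeled examples number in the thousands or millions, but the learning pattern is the same: labeled history in, predicted label out.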

Sometimes, a data set will have built-in labels. In the AFCEC model, the development team had past years of data showing which projects were funded and which were not. In other cases, organizations will not have prelabeled data and will need to go through a data labeling process before model training begins. AFCEC recently built AI models to “read” narrative project descriptions and discriminate between those that involved roofs, HVAC, runways, and other infrastructure components. But the process began by recruiting domain experts to carry out a rigorous, manual data-labeling process, where they answered a total of 37 yes-or-no questions for each of hundreds of project descriptions. While this process is undeniably tedious, it is often a prerequisite to training a quality model.

While many narrow AI products must be custom-built, some can be acquired off the shelf for more broad-based projects, such as a solution implemented to identify invasive weeds in public parks across Jefferson County, Colo. Photo courtesy Matrix Design Group.

Customer Buy-In. Perhaps the most overlooked driver of narrow AI success is early customer buy-in. As AI solutions can require a substantial investment of time and money to develop, it is imperative to have a customer excited to incorporate the resultant model into their workflow.

The AFCEC model to optimize construction project priority demonstrated it would reduce total validation effort by one-third. This represented a substantial time savings, since validation involved more than a dozen individuals over a two-month period. However, the model’s implementation was delayed by a full year because the AI team unwittingly delivered the results a week later than the validation team needed them. With more upfront coordination by both parties to clarify the timeline, the savings might have been realized sooner. To avoid similar delays, get specific with the customer before starting development. What is the success rate and time involved in their current process? How much would an AI model need to boost those metrics for them to adopt it? Are there any additional benefits or potential limitations to adoption?

Setting Expectations

Lack of experience with narrow AI often leads organizations, leadership, and customers to set unrealistic expectations. Expecting a model to achieve 100 percent classification or prediction accuracy is a recipe for disappointment.

First, a model is only as accurate as its training data is comprehensive. Second, predicting the future always carries some amount of uncertainty. Understanding the customer’s current process and their benchmarks for success is crucial. When predicting the stock market, an investor may be thrilled with a model that achieves 56 percent accuracy. The same performance would be disastrous for a target recognition AI. By understanding how the current process performs, a team can establish the improvement needed from an AI model to be considered an upgrade.

When AFCEC turned to an AI model to develop the draft priority list for centralized construction, the reduction in effort for the governance team compared to the prior scoring model was minimal. The real benefits have been to installations, which had previously been required to fill out complex project scoring worksheets to feed the old approach. The new model used readily available variables, saving 25,000 person-hours per year.

Organizations should realize that developing a custom narrow AI solution will take longer than performing analysis or building a dashboard. Training data often requires substantially more cleaning and preparation to make it ready to feed into an AI model than to prepare it for other uses. And the model building process typically involves testing hundreds of variations. An iterative approach works best—and that takes time.

Customized Solutions

Whereas for many organizations generative AI will come in the form of off-the-shelf models, narrow AI solutions are so targeted that most must be custom-built. Opportunities to acquire narrow AI products off the shelf typically arise when a customer has the same problem, and the same or similar data, as others. For example, Matrix Design Group was able to adapt deep learning models built into Esri’s ArcGIS suite of products to identify invasive weed species in aerial drone photography for the Jefferson County parks system in Colorado, dramatically reducing the boots-on-the-ground effort and saving the county money on eradication. When a problem is unique or data is atypical or classified, a custom solution may be the only option.

Narrow AI continues to offer tangible value, even in this new dawn of generative AI—delivering targeted prediction, optimization, and classification solutions that tools like ChatGPT cannot yet provide. Through a clear problem statement, quality data, and upfront customer buy-in, teams can avoid the common pitfalls of narrow AI projects and drive a successful solution.


Ben Graf, P.E., M.SAME, is Chief Data Scientist, Matrix Design Group; ben.graf@matrixdesigngroup.com


Article published in The Military Engineer, September-October 2024
