
Introduction to Narrow AI

March 02, 2022

In 1997, IBM’s chess-playing computer, Deep Blue, defeated Garry Kasparov, the reigning world champion. Deep Blue’s memory contained 700,000 games between chess grandmasters, which it searched at a rate of 200 million positions per second to find the best move to play. Deep Blue is an example of Narrow AI: a computer capable of superhuman results on one very specific pattern-matching task, in this case matching a chess board to the move that is most likely to lead to a win.




Narrow AI was used in a chess game: Deep Blue vs. Garry Kasparov

General AI, on the other hand, is the form of AI often seen in popular culture and blockbuster movies like Terminator, The Matrix, I, Robot, and Iron Man: digital entities with the ability to apply learned skills across a wide variety of domains and applications. As of today, twenty-four years after Deep Blue, all AI systems in production are examples of Narrow AI. Recommender systems, routing algorithms, document processors, facial recognition systems, text generators, and even voice assistants are all Narrow AI.

Voice assistants may seem to understand what we say to them at first, but all it takes is a question that is not in their database to get the traditional response, “Sorry, I cannot answer that question.” Voice assistants are made up of different Narrow AI systems working together to recognize voice commands and respond appropriately. The first system is tasked with recognizing the activation keyword, the next turns your voice command into text, and that text is used to search for an appropriate response in the assistant’s database. Memory and pattern matching, just like Deep Blue.
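The chain described above can be sketched in a few lines of Python. Every stage below is a toy stand-in for a real model, and the tiny response “database” is an illustrative assumption, not any real assistant’s API:

```python
# Hypothetical sketch of a voice assistant as a chain of Narrow AI systems.
# For simplicity, "audio" is represented as a plain string.

RESPONSES = {
    "what time is it": "It is 3 o'clock.",
    "weather today": "Sunny with a high of 20 degrees.",
}

def detect_wake_word(audio: str) -> bool:
    # Stage 1: a keyword-spotting model; here, a trivial substring check.
    return "hey assistant" in audio.lower()

def speech_to_text(audio: str) -> str:
    # Stage 2: a speech-recognition model; here, the audio is already text.
    return audio.lower().replace("hey assistant", "").strip()

def respond(command: str) -> str:
    # Stage 3: pattern-match the command against the response database.
    return RESPONSES.get(command, "Sorry, I cannot answer that question.")

def assistant(audio: str):
    # Ignore audio that does not contain the wake word.
    if not detect_wake_word(audio):
        return None
    return respond(speech_to_text(audio))

print(assistant("Hey assistant what time is it"))
print(assistant("Hey assistant sing me a song"))
```

A command outside the database falls straight through to the canned apology, which is exactly the behavior the paragraph above describes.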

What can Narrow AI do?

Any problem which can be formulated as a pattern matching task with sufficient training data can be solved by a Narrow AI faster and more consistently than even the best human experts.

Narrow AI can be broken down into two main categories:

  • Supervised learning
    Requires labeled data, data that a human has attached a category to, such as a picture of a cat along with the category “Siamese cat”. Given enough data, this type of Narrow AI can then differentiate between the types of patterns it has been trained on.
  • Unsupervised learning
    Does not require labels and can identify patterns in the data by itself. For example: grouping photos of animals by species using common traits or detecting fraudulent transactions by their uncommon traits.
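The two categories above can be illustrated on toy one-dimensional data (the single feature below, body weight in kilograms, is a made-up assumption; real systems use many features and far more data):

```python
# Supervised: labeled examples -> classify a new point by its nearest class mean.
labeled = {"cat": [4.0, 5.0, 3.5], "dog": [25.0, 30.0, 22.0]}

def classify(weight: float) -> str:
    means = {label: sum(xs) / len(xs) for label, xs in labeled.items()}
    return min(means, key=lambda label: abs(weight - means[label]))

# Unsupervised: no labels -> flag points far from the overall mean as outliers,
# a crude stand-in for fraud/anomaly detection.
def outliers(xs, k=2.0):
    mean = sum(xs) / len(xs)
    std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [x for x in xs if abs(x - mean) > k * std]

print(classify(4.2))                      # nearest class mean is "cat"
print(outliers([10, 11, 9, 10, 12, 95]))  # the 95 stands out from the rest
```

The supervised function can only tell apart the categories it was given labels for, while the unsupervised one finds the unusual point without ever being told what “unusual” means.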





Figure 1. Supervised and Unsupervised Learning. Source researchgate.net.

If we focus on supervised computer vision — problems where labeled images and videos are the source of raw data — applications of Narrow AI can be put into three categories in order of increasing complexity:

  • Classification: There is a cat in this image.
  • Localization: There is a cat in this area of the image.
  • Segmentation: There is a cat in these specific pixels of this image.
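The three tasks differ mainly in the shape of their output. A hypothetical sketch of what each might return for one image of a cat (field names are illustrative, not a specific library’s format):

```python
# Classification: one label for the whole image.
classification = {"label": "cat", "confidence": 0.97}

# Localization adds a bounding box: (x, y, width, height) in pixels.
localization = {"label": "cat", "confidence": 0.95, "box": (40, 60, 120, 90)}

# Segmentation assigns a class to individual pixels; here a tiny 4x4 mask
# where 1 = "cat" pixel and 0 = background.
segmentation_mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

# Per-pixel output lets you measure area, not just presence or position.
cat_pixels = sum(sum(row) for row in segmentation_mask)
print(cat_pixels)
```

Each step up in complexity carries more information, and correspondingly requires more detailed (and more expensive) labels to train on.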




Figure 2. Classification, Localization, Segmentation


Examples of problems that Supervised learning has solved include:

  • Automating grocery store checkout.
  • Detecting unripe tomatoes on a high-speed conveyor belt.
  • Detecting anger in phone calls to escalate customer service claims.
  • Extracting information from receipts for expense reports.

What could Narrow AI do for you?

When considering the complexity of using Narrow AI to solve a problem, here are three key questions to ask:

  • Can we get enough data for each pattern we want to recognize to get to our target accuracy?
  • What is the cost of failure in production?
  • Does the end result of achieving that target accuracy offset the costs of acquiring the data, labeling it, training the model, deploying it into production and maintaining it?
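The third question is ultimately arithmetic. A back-of-the-envelope sketch, where every number is a made-up assumption to show the calculation, not a quote from any real project:

```python
# One-time and recurring costs of putting a Narrow AI model into production.
labeling_cost = 50_000       # collecting and labeling the dataset
training_cost = 20_000       # compute and engineering to train the model
deployment_cost = 10_000     # putting the model into production
yearly_maintenance = 15_000  # monitoring and retraining

# Value and risk once the model is live.
yearly_savings = 120_000     # value of automating the task for one year
failure_rate = 0.02          # expected error rate in production
cases_per_year = 100         # how often the model faces a decision
cost_per_failure = 500       # cost of one production mistake

yearly_failure_cost = failure_rate * cases_per_year * cost_per_failure
first_year_net = yearly_savings - yearly_failure_cost - (
    labeling_cost + training_cost + deployment_cost + yearly_maintenance
)
print(first_year_net)  # positive means the project pays for itself in year one
```

Notice how sensitive the result is to `cost_per_failure`: at $500 per mistake the project clears its costs, but the same model guarding a safety-critical process, where one failure costs orders of magnitude more, flips the answer.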

Let us use this method to explore different vision problems of increasing complexity:

  • Recognizing handwritten addresses on mail
  • Detecting defects for predictive maintenance in a factory
  • Making a car that can drive on the highway by itself

In the case of handwriting recognition, we have an enormous amount of data, acquiring new labeled data is very easy, and the outcomes fall into clear categories: the digits 0-9 and the letters A-Z. The cost of a failure is that a package or letter gets sent to the wrong address. The benefit of routing packages and letters automatically far outweighs that cost, given enough scale.

In the case of detecting defects for predictive maintenance, you could use cameras and visual inspection to gather labeled training data. That data can then be used to train a Narrow AI that detects when machines need maintenance before they start to fail. Because the output drives predictive maintenance, a failure in production simply means an equipment failure goes unprevented. The savings in this type of use case can be considerable and far outweigh the initial investment. The key is that the cost of failure depends on the solutions currently in place: if equipment failure is already dealt with after the fact, then a failure of the predictive maintenance system costs nothing extra.

In the case of driving on the highway, we have a vast amount of data: nearly every car spends time on the highway, and millions of cars are outfitted with cameras. However, the target accuracy is extremely high; unassisted humans drive an average of 100 million miles of highway per lethal crash. To reach that level of accuracy, an AI system would need tens of thousands of labeled examples of thousands of different situations. One leading autonomous vehicle company tackles the data collection problem by treating each disengagement of its autopilot as a label that the autopilot has failed, which helps it discover new situations the autopilot cannot yet handle. A failure in production could mean a disengagement of autopilot, but it could also mean a lethal crash. The massive economic opportunity of cars that can drive themselves has not yet overcome the difficulty of acquiring enough labeled data and the immense cost of failure in production.

Traditional tools and the Lodestar benefit

At Lodestar, our mission is to address these challenges in creating Narrow AI models. Our platform reduces the effort of data collection and curation, makes object detection tasks easy, and streamlines the labeling process, resulting in better time to market and lower cost per AI project. The table below shows how we achieve those benefits.

| Feature | Traditional | Lodestar | Benefit |
| --- | --- | --- | --- |
| Scale of data | Minutes of video, thousands of images | Hours of video, millions of images | Collect the data in raw video format; never lose a corner case. |
| Data curation | Manual | Automatic | Load big datasets as video or images; let active learning choose which frames are most valuable to label. |
| AI-powered labeling | Bring your own model or use a general-purpose object detector | Custom model continuously trained on your data | Get automatically improving predictions based on your dataset, reducing labeling time by up to 5x. |
| Model training and testing | Every few weeks, requires expertise | Continuous, automated | The helper model improves along with your dataset automatically, providing insight into dataset quality and the performance you can expect from a model. |

Conclusion

Narrow AI is a proven technology that can solve a near-limitless range of problems. However, a few key insights can help formulate problems efficiently and identify which problems are worth solving.

Lodestar dramatically reduces the cost of putting Narrow AI in production through our no-code AI platform which turns raw video into trained models. With Lodestar you can start training Narrow AI systems to solve your specific problems for free today.

Learn more about our video annotation tools.

Stay connected!