Press "Enter" to skip to content

What is artificial intelligence?


Artificial intelligence enables computers and machines to mimic the perception, learning, problem-solving, and decision-making capabilities of the human mind.

In computer science, the term artificial intelligence (AI) refers to any human-like intelligence exhibited by a computer, robot, or other machine. In popular usage, artificial intelligence refers to the ability of a computer or machine to mimic the capabilities of the human mind—learning from examples and experience, recognizing objects, understanding and responding to language, making decisions, solving problems—and combining these and other capabilities to perform functions a human might perform, such as greeting a hotel guest or driving a car.

After decades of being relegated to science fiction, today, AI is part of our everyday lives. The surge in AI development is made possible by the sudden availability of large amounts of data and the corresponding development and wide availability of computer systems that can process all that data faster and more accurately than humans can. AI is completing our words as we type them, providing driving directions when we ask, vacuuming our floors, and recommending what we should buy or binge-watch next. And it’s driving applications—such as medical image analysis—that help skilled professionals do important work faster and with greater success.

As common as artificial intelligence is today, understanding AI and AI terminology can be difficult because many of the terms are used interchangeably; some of them really are interchangeable in certain contexts, but others are not. What’s the difference between artificial intelligence and machine learning? Between machine learning and deep learning? Between speech recognition and natural language processing? Between weak AI and strong AI? This article will help you sort through these and other terms and understand the basics of how AI works.

Artificial intelligence, machine learning, and deep learning

The easiest way to understand the relationship between artificial intelligence (AI), machine learning, and deep learning is as follows:

  • Think of artificial intelligence as the entire universe of computing technology that exhibits anything remotely resembling human intelligence. AI systems can include anything from an expert system—a problem-solving application that makes decisions based on complex rules or if/then logic—to something like the equivalent of the fictional Pixar character WALL-E, a computer that develops the intelligence, free will, and emotions of a human being.
  • Machine learning is a subset of AI in which an application learns by itself. It effectively reprograms itself as it digests more data, performing the specific task it’s designed to perform with increasingly greater accuracy.
  • Deep learning is a subset of machine learning in which an application teaches itself to perform a specific task with increasingly greater accuracy, with little or no human intervention.
Diagram of the relationship between artificial intelligence, machine learning, and deep learning

Let’s take a closer look at machine learning and deep learning, and how they differ.

Machine learning

Machine learning applications (also called machine learning models) are often based on a neural network, which is a network of algorithmic calculations that attempts to mimic the perception and thought process of the human brain. At its most basic, a neural network consists of the following (a minimal code sketch follows the diagram below):

  • An input layer, where data enters the network.
  • At least one hidden layer, where machine learning algorithms process the inputs, applying weights, biases, and thresholds.
  • An output layer, where various conclusions—in which the network has various degrees of confidence—emerge.
Diagram of a basic neural network.
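
To make those three layers concrete, here is a minimal NumPy sketch of a single forward pass through a one-hidden-layer network. The inputs, weights, and layer sizes here are made up purely for illustration, not drawn from any real model:

```python
import numpy as np

def sigmoid(x):
    # Squash a value into (0, 1): a common activation (threshold) function.
    return 1.0 / (1.0 + np.exp(-x))

# Input layer: one example with 3 features enters the network.
x = np.array([0.5, -1.2, 0.3])

# Hidden layer: weights, biases, and an activation transform the inputs.
W_hidden = np.random.randn(4, 3)           # 4 hidden units, 3 inputs each
b_hidden = np.zeros(4)
hidden = sigmoid(W_hidden @ x + b_hidden)

# Output layer: conclusions, each with its own degree of confidence.
W_out = np.random.randn(2, 4)              # 2 possible conclusions
b_out = np.zeros(2)
scores = W_out @ hidden + b_out
confidence = np.exp(scores) / np.exp(scores).sum()  # softmax -> probabilities

print(confidence)  # e.g. [0.63 0.37]: the network's confidence in each conclusion
```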

Machine learning models that aren’t deep learning models are based on artificial neural networks with just one hidden layer. These models are fed labeled data—data enhanced with tags that identify its features in a way that helps the model identify and understand the data. They learn through supervised learning, that is, learning guided by labeled examples and human supervision, such as periodic adjustment of the algorithms in the model—as in the sketch below.
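
As a deliberately simple illustration, here is what training a one-hidden-layer model on labeled data might look like with scikit-learn. This assumes scikit-learn is installed; the dataset and the hidden-layer size are arbitrary choices for the sketch:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Labeled data: each flower measurement comes tagged with its species.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A neural network with a single hidden layer of 10 units.
model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)         # supervised learning: the labels guide training

print(model.score(X_test, y_test))  # accuracy on held-out labeled examples
```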

Deep learning

Deep learning models are based on deep neural networks—neural networks with multiple hidden layers, each of which further refines the conclusions of the previous layer. The movement of calculations from the input through the hidden layers to the output layer is called forward propagation. A complementary process, called backpropagation, measures the error in the network’s output and pushes it back through the previous layers, adjusting the weights to refine, or train, the model.

Diagram of a deep neural network.
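
Here is a minimal sketch of both processes in PyTorch (assuming PyTorch is installed; the network shape, data, and learning rate are illustrative only):

```python
import torch
from torch import nn

# A deep network: multiple hidden layers, each refining the previous layer's output.
model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),    # hidden layer 1
    nn.Linear(16, 16), nn.ReLU(),   # hidden layer 2
    nn.Linear(16, 2),               # output layer
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 8)              # a batch of 32 made-up examples
y = torch.randint(0, 2, (32,))      # made-up labels

logits = model(x)                   # forward propagation: input -> hidden layers -> output
loss = loss_fn(logits, y)           # how wrong were the conclusions?

optimizer.zero_grad()
loss.backward()                     # backpropagation: the error flows back through the layers
optimizer.step()                    # weights are adjusted to reduce the error
```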

While some deep learning models work with labeled data, many can work with unlabeled data—and lots of it. Deep learning models are also capable of unsupervised learning—detecting features and patterns in data with the barest minimum of human supervision.
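
One common way a deep learning model finds patterns in unlabeled data is an autoencoder: a network that teaches itself useful features by trying to reconstruct its own input, so no tags or labels are needed. A minimal sketch, again in PyTorch with made-up data:

```python
import torch
from torch import nn

# Encoder compresses 20 features to 4; decoder reconstructs the original 20.
model = nn.Sequential(
    nn.Linear(20, 4), nn.ReLU(),    # bottleneck forces the model to find patterns
    nn.Linear(4, 20),
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

data = torch.randn(256, 20)         # unlabeled data: no tags, just raw features

for _ in range(100):
    reconstruction = model(data)
    loss = loss_fn(reconstruction, data)  # error is measured against the input itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The 4-unit bottleneck now encodes whatever structure the model found on its own.
```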

A simple illustration of the difference between deep learning and other machine learning is the difference between Apple’s Siri or Amazon’s Alexa (which recognize your voice commands without per-user training) and the voice-to-type applications of a decade ago, which required users to “train” the program (and label the data) by speaking scores of words to the system before use. But deep learning models power far more sophisticated applications, including image recognition systems that can identify everyday objects more quickly and accurately than humans.

For a deeper dive into the nuanced differences between these technologies, read “AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?”

Types of artificial intelligence—weak AI vs. strong AI

Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ is a more accurate descriptor for this AI, because it is anything but weak; it enables some very impressive applications, including Apple’s Siri and Amazon’s Alexa, the IBM Watson computer that vanquished human competitors on Jeopardy!, and self-driving cars.

Strong AI, also called Artificial General Intelligence (AGI), is AI that more fully replicates the autonomy of the human brain—AI that can solve many types or classes of problems and even choose the problems it wants to solve without human intervention. Strong AI is still entirely theoretical, with no practical examples in use today. But that doesn’t mean AI researchers aren’t also exploring (warily) artificial superintelligence (ASI), which is artificial intelligence superior to human intelligence or ability. An example of ASI might be HAL, the superhuman (and eventually rogue) computer assistant in 2001: A Space Odyssey.

Artificial intelligence applications

As noted earlier, artificial intelligence is everywhere today, but some of it has been around for longer than you think. Here are just a few of the most common examples:

  • Speech recognition: Also called speech to text (STT), speech recognition is AI technology that recognizes spoken words and converts them to digitized text. Speech recognition is the capability that drives computer dictation software, TV voice remotes, voice-enabled text messaging and GPS, and voice-driven phone answering menus.
  • Natural language processing (NLP): NLP enables a software application, computer, or machine to understand, interpret, and generate human language. NLP is the AI behind digital assistants (such as the aforementioned Siri and Alexa), chatbots, and other text-based virtual assistants. Some NLP uses sentiment analysis to detect the mood, attitude, or other subjective qualities in language (see the sketch after this list).
  • Image recognition (computer vision or machine vision): AI technology that can identify and classify objects, people, writing, and even actions within still or moving images. Typically driven by deep neural networks, image recognition is used for fingerprint ID systems, mobile check deposit apps, video and medical image analysis, self-driving cars, and much more.
  • Real-time recommendations: Retail and entertainment websites use neural networks to recommend additional purchases or media likely to appeal to a customer based on the customer’s past activity, the past activity of other customers, and myriad other factors, including time of day and the weather. Research has found that online recommendations can increase sales anywhere from 5% to 30%.
  • Virus and spam prevention: Once driven by rule-based expert systems, today’s virus and spam detection software employs deep neural networks that can learn to detect new types of virus and spam as quickly as cybercriminals can dream them up.
  • Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.
  • Ride-share services: Uber, Lyft, and other ride-share services use artificial intelligence to match up passengers with drivers to minimize wait times and detours, provide reliable ETAs, and even eliminate the need for surge pricing during high-traffic periods.
  • Household robots: iRobot’s Roomba vacuum uses artificial intelligence to determine the size of a room, identify and avoid obstacles, and learn the most efficient route for vacuuming a floor. Similar technology drives robotic lawn mowers and pool cleaners.
  • Autopilot technology: This has been flying commercial and military aircraft for decades. Today, autopilot uses a combination of sensors, GPS technology, image recognition, collision avoidance technology, robotics, and natural language processing to guide an aircraft safely through the skies and update the human pilots as needed. Depending on who you ask, today’s commercial pilots spend as little as three and a half minutes manually piloting a flight.
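
To ground the NLP item above, here is a short sentiment-analysis sketch using the Hugging Face Transformers library (assuming it is installed; it downloads a small pretrained model on first use):

```python
from transformers import pipeline

# A pretrained model detects the mood or attitude expressed in text.
classifier = pipeline("sentiment-analysis")

reviews = [
    "The room was spotless and the staff were wonderful.",
    "My flight was delayed for six hours with no explanation.",
]
for review in reviews:
    result = classifier(review)[0]
    print(f"{result['label']} ({result['score']:.2f}): {review}")
```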

History of artificial intelligence: Key dates and names

The idea of ‘a machine that thinks’ dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:

  • 1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing—famous for breaking the Nazis’ ENIGMA code during WWII—proposes to answer the question ‘Can machines think?’ and introduces the Turing Test to determine whether a computer can demonstrate the same intelligence (or the results of the same intelligence) as a human. The value of the Turing Test has been debated ever since.
  • 1956: John McCarthy coins the term ‘artificial intelligence’ at the first-ever AI conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever running AI software program.
  • 1958: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that ‘learned’ through trial and error. A decade later, in 1969, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research projects.
  • 1980s: Neural networks featuring backpropagation—algorithms for training the network—become widely used in AI applications.
  • 1997: IBM’s Deep Blue beats then-world chess champion Garry Kasparov in a chess rematch, after losing their first match in 1996.
  • 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!
  • 2015: Baidu’s Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
  • 2016: DeepMind’s AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves!). Google had acquired DeepMind in 2014 for a reported $400 million.

Artificial intelligence and IBM Cloud

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and lessons from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

  • Collect: Simplifying data collection and accessibility.
  • Analyze: Building scalable and trustworthy AI-driven systems.
  • Infuse: Integrating and optimizing systems across an entire business framework.
  • Modernize: Bringing your AI applications and systems to the cloud.

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions.

Sign up for an IBMid and create your IBM Cloud account.
