Machine learning, deep learning, and artificial intelligence - interchangeable? Codependent? Let's clarify.
Let's start with Artificial Intelligence (AI): the general demonstration of intelligent behavior by machines. Although debate remains over what actually defines intelligence, we can reason by analogy. For example, most would agree that becoming a great chess player requires intelligence. Yet machines have been beating human champions at the game for over 20 years, and that same incredible chess-playing machine has absolutely no clue whether New York is a city or an ice cream flavor. So we can narrow AI down to machines (or "agents") that act in a certain environment with clear goals, often with the aid of predefined rules. Artificial General Intelligence (AGI), by contrast, would be a machine indistinguishable from a human being. What we see in fiction is usually AGI - examples include HAL 9000 from the movie 2001: A Space Odyssey, or Cortana from the game series Halo (a name later adopted by Microsoft for their Windows assistant).
But what actually makes AGI different? The answer is algorithms. With algorithms we enter Machine Learning, a subfield of AI. In a few words, machine learning is about getting a machine, program, algorithm, or agent (used interchangeably here) to "learn", or "recognize patterns", from data. How does this so-called learning occur? It depends on whom you ask. A few decades ago, the leading approach to cracking AI was Expert Systems: collections of thousands of lines of code that defined precise rules for the machine to follow, depending on certain conditions. Here's an easy way to think about it: 'if it rains, take an umbrella'; 'if it is sunny, take sunglasses'. The programmer would try to write as many rules as possible, since it was thought that AGI would only be possible if we could define every possible situation.
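The umbrella-and-sunglasses rules above can be sketched in a few lines. This is a toy illustration of the expert-system idea (the function name and conditions are invented for this example): every behavior is hand-written, nothing is learned from data, and any situation the programmer didn't anticipate simply falls through.

```python
# A toy expert system: behavior is fixed entirely by hand-written rules.
def advise(weather):
    rules = {
        "rain": "take an umbrella",
        "sun": "take sunglasses",
    }
    # The programmer must anticipate every situation in advance;
    # anything outside the rule book gets no useful answer.
    return rules.get(weather, "no rule defined for this situation")

print(advise("rain"))  # -> take an umbrella
print(advise("fog"))   # -> no rule defined for this situation
```

The "fog" case is exactly why this approach hit a wall: the system has no way to generalize beyond its rules.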
Ironically, Deep Blue, often cited as a much earlier form of Machine Learning, relied on search algorithms. The main idea of a search algorithm is this: define a set of basic rules, define a set of possible actions, and let the computer search through those actions for the best one given certain parameters. In theory, such a system evaluates every possible move and eventually decides which is best. Nowadays, Machine Learning is actually closer to Statistical Learning: it uses data to model a problem and find solutions. Advances in computer hardware and data collection (see Big Data) have made these methods more popular and economically feasible.
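The "search through possible actions" idea can be shown with minimax, a classic game-tree search. This is only a minimal sketch on a made-up two-move tree, not Deep Blue's actual engine (which added alpha-beta pruning, handcrafted evaluation functions, and custom hardware), but the core principle is the same: enumerate the legal moves and pick the line with the best guaranteed outcome.

```python
# Minimal minimax search over a toy game tree.
# The tree is nested lists; leaves are scores for the maximizing player.
def minimax(node, maximizing=True):
    if isinstance(node, int):
        return node  # leaf: the score of a terminal position
    # Recurse into each possible move; the opponent plays next, so
    # the roles of maximizer and minimizer alternate at each level.
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves for us, each answered by two opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # -> 3 (the best outcome we can guarantee)
```

Note that the tempting branch `[2, 9]` is rejected: a rational opponent would steer it to 2, so the search settles for the safer 3.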
Enter Neural Networks (NN), a subfield of Machine Learning and thus a grand-subfield of AI. Although the original inspiration for NNs was the human brain, in reality Neural Networks are just a collection of extremely complicated math formulas. As a result of this increase in complexity, Neural Networks demand a much higher level of computational power. They're called "networks" because all of their parts, or more precisely their "nodes", are connected to each other. There's an input layer (for example, the pixel values of an image of Serena Williams), one or more hidden layers (mathematical transformations 'learned' from the data in order to decipher whether she's hitting a backhand or a forehand), and an output layer (e.g., which shot it actually is). The network's parameters are tuned by mathematical optimization (most commonly Gradient Descent). Finally, we close with Deep Learning, which is nothing more than a Neural Network with many hidden layers (some models include a few hundred).
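To make gradient descent concrete, here is the smallest possible "network": one input, one weight, no hidden layers, trained on invented toy data where the target is simply twice the input. Real networks stack many weighted sums with nonlinear activations between them, but the learning loop - forward pass, measure error, nudge the weights downhill - is the same.

```python
# Toy data: inputs x paired with targets y = 2x (made up for illustration).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0    # the single learnable parameter, starting from scratch
lr = 0.05  # learning rate: how big a step each update takes

for epoch in range(200):
    for x, y in data:
        pred = w * x               # forward pass: the network's guess
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # gradient descent: step against the gradient

print(round(w, 2))  # converges near 2.0, the rule hidden in the data
```

The point is that nobody wrote the rule "multiply by 2" anywhere: the weight was pushed toward it, step by step, purely by the data and the error signal.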
The undeniable truth is that these machine learning techniques have solved, and continue to solve, real-world problems. We're only at the tip of the iceberg.