This is the first question I had when I started exploring these subjects a month ago. I have a better grasp on the terms now, so let's dive in.
I think when people hear Artificial Intelligence (AI), they generally think of Artificial General Intelligence (AGI): the ability of a non-biological entity to successfully perform a wide variety of tasks at or above human-level proficiency. This contrasts with narrow AI, which can perform specific tasks at human proficiency. We already have examples of narrow AI in everyday life, like our phone and home assistants, self-driving cars, or AlphaGo. We don't, however, have any examples of AGI yet, and there is no consensus as to when that might occur. Still, the holy grail of AI research is the development of human-level AGI, with the possibility that superintelligent AI wouldn't be far behind.
Machine learning is a field in which machines are trained to develop algorithms to perform a task. This flies in the face of traditional programming, in which humans write the algorithms and program machines to compute them. In machine learning, humans provide training data (inputs that lead to certain outcomes) and program machines to determine how those inputs lead to those outcomes. This is a powerful paradigm because it frees humans from needing to know how to reach the outcomes themselves; they just need to provide enough training data for a machine to figure out how a certain combination of inputs leads to an outcome. For instance, if the task at hand is to catch a fish, traditional programming would teach a machine how to use a fishing rod: how to hold the rod, how to release the spool, how to cast a line. Machine learning would instead give the machine a library of videos showing people succeeding or failing to catch fish, leaving it up to the machine to go through each video and learn how to catch a fish. Perhaps certain casting techniques are better than others, or maybe the weather plays a big role; the machine might pick up on patterns a human wouldn't even think to consider.
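To make the paradigm concrete, here's a minimal sketch in Python of "learning a rule from examples" rather than hand-coding it. The fishing data and the `learn_threshold` function are entirely made up for illustration; real machine learning uses far richer models, but the shape is the same: feed in labeled examples, get a decision rule out.

```python
def learn_threshold(examples):
    """Find the wind-speed cutoff that best separates catches from misses.

    examples: list of (wind_speed, caught_fish) pairs -- the training data.
    Returns the candidate threshold that classifies the most examples correctly.
    """
    candidates = sorted(wind for wind, _ in examples)
    best_threshold, best_correct = None, -1
    for threshold in candidates:
        # Predict "caught a fish" whenever the wind is at or below the threshold.
        correct = sum((wind <= threshold) == caught for wind, caught in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

# Hypothetical training data: (wind speed in mph, did we catch a fish?)
training_data = [(3, True), (5, True), (8, True),
                 (12, False), (15, False), (20, False)]

threshold = learn_threshold(training_data)
print(threshold)  # → 8: the machine discovered the cutoff itself
```

Nobody told the program that calm days are good fishing days; it inferred the cutoff from the data, which is the whole point.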
Deep learning is a subset of machine learning in which the machine's learning style is loosely based on the neural wiring of a biological brain. That is, the machine is programmed to mimic how brains learn, via an artificial neural network. I'll talk about that further in a subsequent post, but these neural networks have many layers of "neurons" that explore many possible pathways from a set of inputs to a set of outcomes. What's fascinating is that these neural nets are very hard to analyze, since they are essentially a jumble of weights that pick out patterns in input data. Take an artificial neural network (ANN) that can recognize whether a cat is in a photo. We humans identify cats via features we deem catlike (pointed ears, furry bodies, whiskers). An ANN might notice that some group of pixels relates to some other group of pixels in a way that suggests a cat is there, but to a human this just looks like a bunch of math and data. For all we know, the ANN may come up with a more precise way to recognize cats than any human has ever thought of, some subtle pattern that renders simple feature recognition (it has a cheezburger?) obsolete.
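Here's a tiny sketch of what those layers of "neurons" actually compute. Every number below is arbitrary and just for illustration; in real deep learning the weights are learned from training data, not typed in by hand, and real networks have far more neurons and layers.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs, squashed by a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons all reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Hypothetical network: 2 inputs -> 3 hidden neurons -> 1 output neuron.
hidden_weights = [[0.5, -1.0], [0.3, 0.8], [-0.7, 0.2]]
hidden_biases = [0.1, -0.2, 0.05]
output_weights = [[1.2, -0.4, 0.9]]
output_biases = [0.0]

pixels = [0.6, 0.9]  # a toy "image" with just two pixel values
hidden = layer(pixels, hidden_weights, hidden_biases)
output = layer(hidden, output_weights, output_biases)
print(output[0])  # a number between 0 and 1: "how cat-like is this input?"
```

Notice why these nets are hard to interpret: the "knowledge" is nothing but those weight numbers, and no single one of them means "whisker" or "pointed ear."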
Anyways, it's all pretty exciting stuff, and I'm looking forward to breaking past my surface-level knowledge of deep learning, machine learning, and AI. One question answered, many more to go.
Spelling Bee TL;DR:
Deep learning is a technique in the field of machine learning that advances us closer to creating artificial general intelligence.