There are two classes of artificial intelligence (AI): general (aka strong or full) and applied (aka narrow or weak). Applied AI is designed to handle specific tasks and is far more common than general AI, which attempts to be intelligent in the way one would think of an intelligent person, and may always be the stuff of science fiction. This article confines itself to applied AI.
Machine learning (ML) does not exist on its own; it is a subset of AI. It has been said that without machine learning, artificial intelligence cannot progress.
I had a physics professor who said there are two types of problems: impossible and trivial. Any problem you haven’t solved yet is impossible. Once solved, it becomes trivial. Something similar can be said of AI: tasks that once were considered so difficult that they could be handled (if they could be handled at all) only by artificial intelligence are now done routinely, so one could say that artificial intelligence is a moving target.
ML vs. data mining
One common distinction puts it this way: “in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge.” Data mining can also be called unsupervised learning.
ML vs. rule-based systems
A rule-based system, as the name implies, uses a defined set of rules — generally expressed as a set of “if-then” statements — to tell the machine what to do in every situation. Each decision can, of course, lead to other decisions. An expert system is an example of a rule-based system.
There is also rule-based machine learning, in which the system uses incoming data to create new rules.
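The if-then structure of a rule-based system can be sketched in a few lines. This is a minimal illustration, not any particular product's design; the thermostat-style rules and the `decide` function are invented for the example. Note that the rules here are fixed by a human in advance; in rule-based machine learning, the system would derive such rules from incoming data instead.

```python
# A minimal sketch of a rule-based system: each rule pairs a condition
# ("if") with an action ("then"), checked in order. The rules below are
# illustrative thermostat logic, not from any real system.
rules = [
    (lambda t: t > 30, "turn on cooling"),
    (lambda t: t < 15, "turn on heating"),
    (lambda t: True,   "do nothing"),        # default rule, always matches
]

def decide(temperature):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(temperature):
            return action

print(decide(35))  # -> turn on cooling
print(decide(20))  # -> do nothing
```

Because the first matching rule wins, rule order encodes priority; each decision could also dispatch to a further set of rules, as the article notes.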
Machine learning applications
Current applications of machine learning include image recognition, speech recognition, medical diagnosis, statistical arbitrage (used in securities trading), learning associations (used in marketing), prediction (also used in marketing), and information extraction, “the process of extracting structured information from unstructured data.”
Machine learning and neural networks
A machine can learn by simply observing its environment, which sounds easy but turns out to be difficult to implement; by trying things at random and being rewarded for right answers; or by being taught (supervised learning): “When inputs are [A] the correct answer is [X]; when inputs are [B] the correct answer is [Y].”

A special case of machine learning is a neural network. Essentially a group of electronic neurons (“nodes,” in neural network parlance) whose connections to each other can be initiated, terminated and modified (weighted) as a result of experience, a neural network is fed a set of inputs (the larger, the better, in general) and creates one or more outputs. In the purest case, the system has no prior “knowledge” of the problem to be solved, so the first set of outputs will be completely wrong when compared to the desired output. The system then tries again, comparing the new output to the desired one. The process iterates thousands of times, with those connections between neurons associated with closer-to-correct results being strengthened, and others weakened. After enough repetitions, the system will, in theory, come up with the right answer every (or almost every) time.
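The training loop just described can be sketched with NumPy. Everything specific here is an illustrative assumption: the task (the classic XOR problem), the layer sizes, the learning rate, and the iteration count; only the overall pattern (feed inputs, compare outputs with the desired answers, nudge the connection weights, repeat thousands of times) comes from the description above.

```python
import numpy as np

# Illustrative setup: learn XOR with one hidden layer of 4 nodes.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # desired outputs

W1 = rng.normal(size=(2, 4))   # input -> hidden connection weights (random start)
W2 = rng.normal(size=(4, 1))   # hidden -> output connection weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):                        # iterate thousands of times
    hidden = sigmoid(X @ W1)                 # forward pass through the network
    output = sigmoid(hidden @ W2)
    error = output - y                       # compare with the desired output
    losses.append(float((error ** 2).mean()))
    # Backpropagation: adjust each connection in proportion to its
    # contribution to the error, strengthening helpful connections
    # and weakening unhelpful ones.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")   # error shrinks over time
```

The initial outputs are effectively random, exactly as the article says; what makes this “learning” is that the average error shrinks as the weights are repeatedly adjusted.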
While a neural network can come up with highly effective ways to solve a problem, the learning process can take a long time, which is why neural networks are typically trained in advance rather than while in use. A neural network also suffers from the drawback of being essentially a black box: the system cannot explain how it arrives at its answers. One could argue that it is, therefore, an example of Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”
A simple neural network consists of three logical layers: an input layer that accepts incoming data, an output layer that presents results, and a hidden layer between them. More complex systems, with multiple hidden layers, are used for deep learning, currently applied to such complex pattern-recognition tasks as recognizing faces, reading their expressions, and so on. Despite some well-publicized failures (facial recognition across multiple races, for example), the field is developing very rapidly, driven by such deep-pocketed companies as Google, and represents the future of artificial intelligence. – enterpriseiotinsights
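The layer structure can be made concrete with a short sketch. All layer sizes below are arbitrary illustrative choices; the point is only the shape of the data flow: input layer, one or more hidden layers, output layer, with “deep” meaning more hidden layers in the list.

```python
import numpy as np

# Illustrative layer sizes: 8 inputs, two hidden layers of 16 nodes, 3 outputs.
# A shallow network would have one hidden layer; a deep one simply has more.
rng = np.random.default_rng(1)
layer_sizes = [8, 16, 16, 3]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

x = rng.normal(size=(1, layer_sizes[0]))    # input layer accepts incoming data
activation = x
for W in weights[:-1]:
    activation = np.tanh(activation @ W)    # each hidden layer transforms its input
output = activation @ weights[-1]           # output layer presents results

print(output.shape)   # -> (1, 3)
```

Adding depth costs nothing structurally (just another entry in `layer_sizes`), which is part of why deep learning scaled so readily once the training compute became available.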