Pattern recognition is "the act of taking in raw data and taking an action based on the category of the pattern". Most research in pattern recognition concerns methods for supervised and unsupervised learning.
Pattern recognition aims to classify data (patterns) based either on a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space. This is in contrast to pattern matching, where the pattern is rigidly specified.
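The contrast can be made concrete with a small sketch in Python (a hypothetical illustration, not from the original text): a regular expression is a rigidly specified pattern that either matches or does not, while a recognizer scores similarity and tolerates noise. The `classify` helper and the prototype words here are made-up names for the example.

```python
import re
from difflib import SequenceMatcher

# Pattern matching: the pattern is rigidly specified; input must satisfy it exactly.
print(bool(re.fullmatch(r"cat", "cat")))   # True
print(bool(re.fullmatch(r"cat", "caat")))  # False - one stray letter breaks the match

# Pattern recognition: measure similarity to each category and pick the best fit.
def classify(word, prototypes):
    return max(prototypes, key=lambda p: SequenceMatcher(None, word, p).ratio())

print(classify("caat", ["cat", "dog"]))  # "caat" is still closest to "cat"
```

The noisy input defeats the rigid pattern but is still assigned to the right category by the similarity-based classifier.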
A wide range of algorithms can be applied to pattern recognition, from simple naive Bayes classifiers or the k-nearest neighbor algorithm to powerful neural networks.
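Of these, k-nearest neighbor is the simplest to write down. A minimal pure-Python sketch (the function name and toy data are illustrative assumptions, not a reference implementation):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (point, label) pairs; points are tuples of floats.
    """
    nearest = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D data: two loose clusters labeled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_classify(train, (0.1, 0.0)))  # a query near the "a" cluster
```

No training phase is needed; all the work happens at query time, which is why k-NN is often the first baseline tried.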
Facial Recognition is a well-known application of Pattern Recognition
An image and further information can be found at: http://www.docentes.unal.edu.co/morozcoa/docs/pr.php
Several of the underlying algorithm families date back decades:
· genetic algorithms (1950s)
· decision trees (1960s)
· support vector machines (1990s)
Data mining commonly involves four classes of tasks:
· Clustering - the task of discovering groups and structures in the data that are in some way "similar", without using known structures in the data.
· Classification - the task of generalizing known structure to apply to new data. For example, an email program might attempt to classify an email as legitimate or as spam. Common algorithms include decision tree learning, nearest neighbor, naive Bayes classification, neural networks, and support vector machines.
· Regression - the task of finding a function that models the data with the least error.
· Association rule learning - the task of searching for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis.
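The clustering task above can be sketched with Lloyd's k-means algorithm in pure Python (a minimal illustration; the function name, data, and fixed seed are assumptions for the example):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from k random data points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of 2-D points; no labels are given.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
centroids, clusters = kmeans(points, 2)
```

Note that the algorithm is handed only the points, never the group memberships; the "similar" structures are discovered, which is what makes this unsupervised.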
Term Rewriting or Graph Rewriting may be useful here as well
Machine Learning and Pattern Recognition largely address the same problems, approached from different angles
Machine learning algorithms are organized into a taxonomy, based on the desired outcome of the algorithm.
· Supervised learning generates a function that maps inputs to desired outputs. For example, in a classification problem, the learner approximates a function mapping a vector into classes by looking at input-output examples of the function.
· Unsupervised learning models a set of inputs, like clustering.
· Semi-supervised learning combines both labeled and unlabeled examples to generate an appropriate function or classifier.
· Reinforcement learning learns how to act given an observation of the world. Every action has some impact on the environment, and the environment provides feedback in the form of rewards that guides the learning algorithm.
· Transduction tries to predict new outputs based on training inputs, training outputs, and test inputs.
· Learning to learn learns its own inductive bias based on previous experience.
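The supervised case at the top of the taxonomy can be illustrated with a perceptron learning the AND function from input-output examples (a minimal sketch; the function names, learning rate, and epoch count are assumptions for the example):

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Fit weights w and bias b so that step(w.x + b) matches the labels.

    `examples` is a list of (x, y) pairs with y in {0, 1}.
    """
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # move the boundary only when the prediction is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the perceptron can learn it exactly.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The learner sees only input-output pairs and generates a function (here, the weights and bias) that maps inputs to the desired outputs, which is exactly the supervised-learning setting described above.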