Machine Learning Algorithm


Key points to consider when choosing an algorithm

Machine Learning can be separated into several processes according to the learning task. For each process, there are many algorithms that can be chosen:

Data cleaning process | Feature Selection process | Learning process | Evaluation process
Cluster               | Decision Tree             | Polynomial       | Accuracy
Anomaly Detection     | Kernel PCA                | Linearity        | Training Effort
                      |                           |                  | Learning Curve

Data cleaning process:

Clustering and Anomaly Detection algorithms

Feature Selection process:

Decision Tree, Kernel PCA

Learning process:

First, make sure the learning algorithm runs in polynomial time, so that training remains tractable as the data grows.

Linearity - Algorithms that use linearity assume classes can be separated by a straight line. This assumption is appropriate for some problems but reduces accuracy when the problem is non-linear. Examples of algorithms are logistic regression and support vector machines. [1]
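
As a minimal sketch of this limitation (assuming NumPy and scikit-learn are available): a linear model such as logistic regression fails on XOR-style data, which no straight line can separate, while a non-linear model such as a decision tree handles it.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(400, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)  # XOR-like labels: not linearly separable

    linear = LogisticRegression().fit(X, y)
    tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
    print("logistic regression accuracy:", linear.score(X, y))  # near 0.5 (chance level)
    print("decision tree accuracy:", tree.score(X, y))          # near 1.0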

Evaluation process:

Accuracy - Getting the most accurate answer possible is not always necessary. In some cases an approximation is adequate, and accepting it can permit simpler and faster algorithms. [2]

Training effort - The time and computation needed to train a model varies greatly between algorithms and often trades off against accuracy, so the two should be weighed together when choosing an algorithm. [3]

Learning Curve - Plotting model performance against the amount of training data (or the number of training iterations) is an effective way to check whether an algorithm is working correctly and to diagnose underfitting and overfitting.
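
As a sketch (assuming scikit-learn), a learning curve can be produced by training on increasing amounts of data and comparing training and validation scores; a persistent gap between the two suggests overfitting, while two low curves suggest underfitting.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import learning_curve

    X, y = load_digits(return_X_y=True)
    sizes, train_scores, val_scores = learning_curve(
        LogisticRegression(max_iter=1000), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5)
    for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
        print(f"n={n}: train={tr:.2f}, validation={va:.2f}")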

Classification of Machine Learning Algorithms

Machine learning algorithms can be separated into discriminative models and generative models. Discriminative models, usually used in supervised learning tasks, learn the conditional probability p(y|x) directly, typically by maximising the data likelihood. Generative models, commonly used in unsupervised learning tasks, model the joint distribution of the data and often maximise a posterior, which means that a generative model takes the distribution of model parameters into consideration. Commonly used discriminative models include linear regression, logistic regression, LDA (linear discriminant analysis), SVMs and neural networks. Commonly used generative models include HMM (Hidden Markov Model), Naive Bayes, GMM (Gaussian Mixture Model) and LDA (Latent Dirichlet Allocation).
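
A minimal sketch of the contrast, assuming scikit-learn: logistic regression (discriminative) models p(y|x) directly, while Gaussian Naive Bayes (generative) models p(x|y) and p(y) and applies Bayes' rule.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression   # discriminative: models p(y|x)
    from sklearn.naive_bayes import GaussianNB            # generative: models p(x|y) p(y)
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    for model in (LogisticRegression(max_iter=1000), GaussianNB()):
        scores = cross_val_score(model, X, y, cv=5)
        print(type(model).__name__, round(scores.mean(), 3))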

Algorithms Grouped by Learning Style

  • Problems

Classification, Clustering, Regression, Anomaly detection, Association rules, Reinforcement learning, Structured prediction, Feature engineering, Feature learning, Online learning, Semi-supervised learning, Unsupervised learning, Learning to rank and Grammar induction.

  • Supervised learning algorithms

(classification • regression) Decision trees, Ensembles (Bagging, Boosting, Random Forest), k-NN, Linear regression, Naive Bayes, Neural networks, Logistic regression, Perceptron, Relevance vector machine (RVM) and Support vector machine (SVM).

  • Clustering and other unsupervised algorithms

Clustering: BIRCH, Hierarchical clustering, k-means, Expectation-maximization (EM), DBSCAN, OPTICS and Mean-shift. Families often listed alongside clustering in the unsupervised setting: Dimensionality reduction (Factor analysis, CCA, ICA, LDA, NMF, PCA, t-SNE); Structured prediction and graphical models (Bayes net, CRF, HMM); Anomaly detection (k-NN, Local outlier factor); and Neural nets (Autoencoder, Deep learning, Multilayer perceptron, RNN, Restricted Boltzmann machine, SOM, Convolutional neural network).

  • Online learning algorithms

HMM (Hidden Markov Model), Kalman Filter and Particle Filter.
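
These methods update their estimates one observation at a time. As an illustration, here is a minimal one-dimensional Kalman filter sketch in NumPy (the noise parameters q and r are assumed values for the example); each new observation refines the state estimate without reprocessing past data.

    import numpy as np

    def kalman_1d(observations, q=1e-4, r=0.1):
        """Online estimate of a random-walk state from noisy observations."""
        x, p = 0.0, 1.0            # state estimate and its variance
        estimates = []
        for z in observations:
            p = p + q              # predict: variance grows by process noise q
            k = p / (p + r)        # Kalman gain
            x = x + k * (z - x)    # update with the new observation z
            p = (1 - k) * p
            estimates.append(x)
        return np.array(estimates)

    rng = np.random.default_rng(0)
    zs = 1.5 + rng.normal(0, 0.3, size=50)   # noisy readings of a true value 1.5
    print(kalman_1d(zs)[-5:])                # converges toward 1.5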

  • Theory

Bias-variance dilemma, Computational learning theory, Empirical risk minimization, Occam learning, PAC learning, Statistical learning and VC theory. [4]

Algorithms Grouped By Similarity [5]

Algorithms are often grouped by similarity in terms of their function (how they work): for example, tree-based methods and neural-network-inspired methods. This is arguably the most useful way to group algorithms, but it is not perfect. Some algorithms could just as easily fit into multiple categories, like Learning Vector Quantization, which is both a neural-network-inspired method and an instance-based method. There are also category names, such as Regression and Clustering, that describe both the problem and the class of algorithm.

Regression Algorithms

Regression is concerned with modelling the relationship between variables, and the model is iteratively refined using a measure of error in its predictions.

Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because "regression" can refer both to the class of problem and to the class of algorithm. Really, regression is a process.

The most popular regression algorithms are:

  • Ordinary Least Squares Regression (OLSR)
  • Linear Regression
  • Logistic Regression
  • Stepwise Regression
  • Multivariate Adaptive Regression Splines (MARS)
  • Locally Estimated Scatterplot Smoothing (LOESS)
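
A minimal ordinary least squares sketch in NumPy (the synthetic data is assumed for illustration): the coefficients are chosen to minimise the squared prediction error.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 + rng.normal(0, 0.1, size=100)

    A = np.column_stack([X, np.ones(len(X))])     # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # minimises ||A w - y||^2
    print(coef)                                   # approx [3.0, -2.0, 0.5]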

Instance-based Algorithms

Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model.

Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on the representation of the stored instances and similarity measures used between instances.

The most popular instance-based algorithms are:

  • k-Nearest Neighbour (kNN)
  • Learning Vector Quantization (LVQ)
  • Self-Organizing Map (SOM)
  • Locally Weighted Learning (LWL)
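
A minimal k-nearest neighbour sketch in NumPy, showing the memory-based idea: keep the training data, and predict by majority vote among the closest stored instances (Euclidean distance is assumed as the similarity measure).

    import numpy as np

    def knn_predict(X_train, y_train, x, k=3):
        # Compare the query against every stored instance (memory-based learning).
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        return np.bincount(nearest).argmax()   # majority vote among the k nearest

    X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
    y_train = np.array([0, 0, 0, 1, 1, 1])
    print(knn_predict(X_train, y_train, np.array([4.5, 5.0])))  # -> 1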

Regularization Algorithms

Regularization methods are extensions made to another method (typically regression methods) that penalize models based on their complexity, favoring simpler models that are also better at generalizing.

Regularization algorithms are listed separately here because they are popular, powerful and generally simple modifications made to other methods.

The most popular regularization algorithms are:

  • Ridge Regression
  • Least Absolute Shrinkage and Selection Operator (LASSO)
  • Elastic Net
  • Least-Angle Regression (LARS)
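
A minimal ridge regression sketch in NumPy: the penalty term lam * I shrinks the coefficients, trading a little bias for better generalisation (the data and lam values are assumed for illustration).

    import numpy as np

    def ridge_fit(X, y, lam=1.0):
        n_features = X.shape[1]
        # Closed form: w = (X^T X + lam * I)^(-1) X^T y
        return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))
    y = X @ rng.normal(size=10) + rng.normal(0, 0.1, size=50)
    print(ridge_fit(X, y, lam=0.0)[:3])   # lam = 0 reduces to ordinary least squares
    print(ridge_fit(X, y, lam=10.0)[:3])  # larger lam shrinks coefficients toward 0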

Decision Tree Algorithms

Decision tree methods construct a model of decisions made based on actual values of attributes in the data.

Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning.

The most popular decision tree algorithms are:

  • Classification and Regression Tree (CART)
  • Iterative Dichotomiser 3 (ID3)
  • C4.5 and C5.0 (different versions of a powerful approach)
  • Chi-squared Automatic Interaction Detection (CHAID)
  • Decision Stump
  • M5
  • Conditional Decision Trees
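
A minimal CART-style sketch, assuming scikit-learn (whose DecisionTreeClassifier implements an optimised variant of CART):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree))   # prints the decision forks learned from the data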

Bayesian Algorithms

Bayesian methods are those that explicitly apply Bayes' Theorem to problems such as classification and regression.

The most popular Bayesian algorithms are:

  • Naive Bayes
  • Gaussian Naive Bayes
  • Multinomial Naive Bayes
  • Averaged One-Dependence Estimators (AODE)
  • Bayesian Belief Network (BBN)
  • Bayesian Network (BN)
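
A minimal Gaussian Naive Bayes sketch in NumPy, showing the explicit use of Bayes' theorem: class-conditional densities p(x|c) are combined with priors p(c), and the class with the highest posterior wins (the toy data is assumed for illustration).

    import numpy as np

    def gaussian_nb_predict(X_train, y_train, x):
        best_class, best_log_post = None, -np.inf
        for c in np.unique(y_train):
            Xc = X_train[y_train == c]
            prior = len(Xc) / len(X_train)                  # p(c)
            mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9
            # log p(x|c) under independent Gaussians per feature
            log_like = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
            log_post = log_like + np.log(prior)             # Bayes' theorem (unnormalised)
            if log_post > best_log_post:
                best_class, best_log_post = c, log_post
        return best_class

    X = np.array([[1.0, 2.0], [1.2, 1.9], [4.0, 5.0], [4.2, 5.1]])
    y = np.array([0, 0, 1, 1])
    print(gaussian_nb_predict(X, y, np.array([4.1, 5.0])))  # -> 1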

Clustering Algorithms

Clustering, like regression, describes both a class of problem and a class of methods.

Clustering methods are typically organized by modelling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize it into groups of maximum commonality.

The most popular clustering algorithms are:

  • k-Means
  • k-Medians
  • Expectation Maximisation (EM)
  • Hierarchical Clustering
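
A minimal k-means sketch in NumPy showing the centroid-based approach: alternately assign points to the nearest centroid and recompute each centroid as the mean of its group (the initialisation and iteration count are assumed for the example).

    import numpy as np

    def kmeans(X, k=2, n_iter=20, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), k, replace=False)]
        for _ in range(n_iter):
            # Assign each point to its nearest centroid.
            labels = np.argmin(
                np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2), axis=1)
            # Move each centroid to the mean of its assigned points.
            centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        return labels, centroids

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
    labels, centroids = kmeans(X, k=2)
    print(centroids)   # near (0, 0) and (5, 5)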

Association Rule Learning Algorithms

Association rule learning methods extract the rules that best explain observed relationships between variables in data.

These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organisation.

The most popular association rule learning algorithms are:

  • Apriori algorithm
  • Eclat algorithm
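
A minimal sketch of the support-counting step at the heart of Apriori (the full algorithm also prunes candidates whose subsets are infrequent): find itemsets that appear in at least min_support of the transactions. The toy baskets are assumed for illustration.

    from itertools import combinations

    transactions = [{"milk", "bread"}, {"milk", "diapers", "beer"},
                    {"bread", "diapers", "beer"}, {"milk", "bread", "diapers"}]

    def frequent_itemsets(transactions, size, min_support=0.5):
        items = sorted(set().union(*transactions))
        result = {}
        for candidate in combinations(items, size):
            # Support: fraction of baskets containing every item in the candidate.
            support = sum(set(candidate) <= t for t in transactions) / len(transactions)
            if support >= min_support:
                result[candidate] = support
        return result

    print(frequent_itemsets(transactions, size=1))
    print(frequent_itemsets(transactions, size=2))  # e.g. ('bread', 'milk') in half the baskets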

Artificial Neural Network Algorithms

Artificial Neural Networks are models that are inspired by the structure and/or function of biological neural networks.

They are a class of pattern-matching methods commonly used for regression and classification problems, but they are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types.

Note that Deep Learning is separated out from neural networks here because of the massive growth and popularity of the field. This section is concerned with the more classical methods.

The most popular artificial neural network algorithms are:

  • Perceptron
  • Back-Propagation
  • Hopfield Network
  • Radial Basis Function Network (RBFN)
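
A minimal perceptron sketch in NumPy: the weights are nudged toward each misclassified example, which converges when the data is linearly separable (the toy data is assumed).

    import numpy as np

    def train_perceptron(X, y, epochs=20, lr=0.1):
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):          # labels y are in {-1, +1}
                if yi * (xi @ w + b) <= 0:    # misclassified example
                    w += lr * yi * xi         # nudge the boundary toward xi
                    b += lr * yi
        return w, b

    X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    w, b = train_perceptron(X, y)
    print(np.sign(X @ w + b))  # matches y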

Deep Learning Algorithms

Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation.

They are concerned with building much larger and more complex neural networks, and many methods are concerned with semi-supervised learning problems where large datasets contain very little labelled data.

The most popular deep learning algorithms are:

  • Deep Boltzmann Machine (DBM)
  • Deep Belief Networks (DBN)
  • Convolutional Neural Network (CNN)
  • Stacked Auto-Encoders
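
A minimal convolutional network sketch, assuming PyTorch is available (the layer sizes are arbitrary choices for the example):

    import torch
    from torch import nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learn local image filters
                nn.ReLU(),
                nn.MaxPool2d(2),                            # downsample 28x28 -> 14x14
            )
            self.classifier = nn.Linear(8 * 14 * 14, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(start_dim=1))

    logits = SmallCNN()(torch.randn(1, 1, 28, 28))  # one fake 28x28 grayscale image
    print(logits.shape)                             # torch.Size([1, 10])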

Dimensionality Reduction Algorithms

Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarise or describe data using less information.

This can be useful for visualizing high-dimensional data or for simplifying data which can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression.

  • Principal Component Analysis (PCA)
  • Principal Component Regression (PCR)
  • Partial Least Squares Regression (PLSR)
  • Sammon Mapping
  • Multidimensional Scaling (MDS)
  • Projection Pursuit
  • Linear Discriminant Analysis (LDA)
  • Mixture Discriminant Analysis (MDA)
  • Quadratic Discriminant Analysis (QDA)
  • Flexible Discriminant Analysis (FDA)
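
A minimal PCA sketch in NumPy: centre the data and use the singular value decomposition to find the directions of greatest variance (the synthetic data is assumed for illustration).

    import numpy as np

    def pca(X, n_components=2):
        Xc = X - X.mean(axis=0)                 # centre the data
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:n_components]          # directions of greatest variance
        return Xc @ components.T, components

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    X[:, 1] = 2 * X[:, 0] + rng.normal(0, 0.01, 200)  # a redundant, correlated feature
    Z, components = pca(X, n_components=2)
    print(Z.shape)   # (200, 2): the data summarised with fewer dimensions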

Ensemble Algorithms

Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction.

Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular.

  • Boosting
  • Bootstrapped Aggregation (Bagging)
  • AdaBoost
  • Stacked Generalization (blending)
  • Gradient Boosting Machines (GBM)
  • Gradient Boosted Regression Trees (GBRT)
  • Random Forest
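
A minimal ensemble sketch, assuming scikit-learn: many independently trained trees vote, and the combined prediction usually beats a single tree.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    single = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
    forest = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                             X, y, cv=5).mean()
    print(f"single tree: {single:.3f}, random forest: {forest:.3f}")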

Other Algorithms

Many algorithms are not covered above.

For example, what group would Support Vector Machines go into? Its own?

Also not covered are algorithms for speciality tasks in the process of machine learning, such as:

  • Feature selection algorithms
  • Algorithm accuracy evaluation
  • Performance measures

Nor are speciality sub-fields of machine learning covered, such as:

  • Computational intelligence (evolutionary algorithms, etc.)
  • Computer Vision (CV)
  • Natural Language Processing (NLP)
  • Recommender Systems
  • Reinforcement Learning
  • Graphical Models

And more…

Comparison of machine learning algorithms

Machine Learning vs Statistics

The simple answer to this question is: not much. Both are concerned with the same question: how do we learn from data? But there are differences in emphasis between them.

Statistics is about drawing valid conclusions

It cares deeply about how the data was collected, methodology, and statistical properties of the estimator. Much of Statistics is motivated by problems where you need to know precisely what you're doing (clinical trials, other experiments).

Statistics insists on proper and rigorous methodology and is comfortable with making and noting assumptions. It cares about how the data was collected, the resulting properties of the estimator or experiment (e.g. p-value, unbiased estimators), and the kinds of properties you would expect if you did a procedure many times.

Machine Learning is about prediction

It cares deeply about scalability and about using the predictions to make decisions. Much of Machine Learning is motivated by problems that need answers (e.g. image recognition, text inference, ranking, computer vision, medical and healthcare applications, search engines). ML is happy to treat the algorithm as a black box as long as it works. Prediction and decision-making are king, and the algorithm is only a means to an end. It is very important in ML to make sure that performance improves (and does not take an absurd amount of time) with more data.

If you combine the two, you get Statistical Machine Learning, which is concerned with prediction made using explicit assumptions and valid statistical techniques. [6]

Machine Learning vs Artificial Intelligence

At their core, both Machine Learning and Artificial Intelligence are concerned with allowing systems to analyze the data sets at hand, extrapolate from them, and draw conclusions that allow them to take (or suggest) appropriate courses of action. The applicability of each term in the context of big data, analytics and business intelligence is reasonably similar. In the wider context, though, where does one end and the other take over?

Most agree that Machine Learning is a subset of Artificial Intelligence. The focus of Machine Learning is algorithms that allow a system to use smaller data sets and extrapolate to address new and previously unknown situations. Since the focus of AI is to create a system that can behave like a human, much more is needed. To behave like a human, a system has to pass the Turing Test, i.e. provide a response to a situation that an interrogator would find indistinguishable from one a human would provide. The expectation of an AI system is that it would not only arrive at the right course of action but also be able to communicate it, put in place an execution plan (even if only to deliver the communication) and, in the case of a physical action, act on it. The rationale is thus that for a computer system to become artificially intelligent it would have to draw on machine learning in addition to several other capabilities like Natural Language Processing, Knowledge Representation, Planning, Scheduling and most likely Robotics too. [7]


Algorithm              | Accuracy  | Training time | Linearity
Boosted decision tree  | Excellent | Moderate      |
Neural network         | Excellent |               |
Support vector machine |           | Moderate      | Uses linearity
k-means                |           | Moderate      | Uses linearity
Logistic regression    |           | Fast          | Uses linearity

Details of Machine Learning Algorithms

1. Mathematics

  • Linear Algebra

Most machine learning problems need to be represented mathematically; after that, one of many algorithms can be chosen to solve the problem. Both steps rely on linear algebra.

  • Probability

A key concept in the field of machine learning is that of uncertainty. It arises both through noise on measurements and through the finite size of data sets. Probability theory provides a consistent framework for the quantification and manipulation of uncertainty and forms one of the central foundations of machine learning. Bayesian inference, exponential family distributions, and basic information theory are very useful for building learning algorithms.

  • Optimization

Optimization methods in machine learning include Lagrange multipliers, gradient descent and coordinate ascent. Lagrange multipliers, also sometimes called undetermined multipliers, are used to find the stationary points of a function of several variables subject to one or more constraints. Gradient descent and coordinate ascent are iterative methods.
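
A minimal gradient descent sketch in NumPy, minimising a least-squares objective (the step size and iteration count are assumed for the example):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 100)

    w = np.zeros(3)
    lr = 0.05
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
        w -= lr * grad                          # step downhill along the gradient
    print(w)   # approx [1.0, -2.0, 0.5]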

2. Model Selection

Model selection is the task of selecting a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. Model selection covers several topics; given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice.

First, the mathematical approach commonly taken decides among a set of candidate models, and this set must be chosen by the researcher. Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial.

Model selection techniques can be considered as estimators of some physical quantity, such as the probability of the model producing the given data. The bias and variance are both important measures of the quality of this estimator; efficiency is also often considered.
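
A minimal model selection sketch, assuming scikit-learn: candidate models of increasing complexity (chosen by the researcher) are scored by cross-validation, which estimates how well each would generalise.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(100, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 100)

    # Candidate set chosen by the researcher: polynomials of degree 1..6.
    for degree in range(1, 7):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"degree {degree}: CV R^2 = {score:.3f}")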

3. Dimension Reduction

Dimension reduction is the process of reducing the number of random variables under consideration by obtaining a set of "uncorrelated" principal variables. It can be divided into feature selection and feature extraction.

4. Graphical Model

A graphical model or probabilistic graphical model (PGM) is a probabilistic model for which a graph expresses the conditional dependence structure between random variables. They are commonly used in probability theory, statistics—particularly Bayesian statistics—and machine learning.
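
A minimal sketch in plain Python of the idea behind a PGM: for a chain A -> B -> C, the graph says the joint distribution factorises as p(A) p(B|A) p(C|B) (all probability values here are assumed toy numbers).

    # Chain-structured graphical model A -> B -> C over binary variables.
    p_a = {True: 0.3, False: 0.7}
    p_b_given_a = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
    p_c_given_b = {True: {True: 0.5, False: 0.5}, False: {True: 0.1, False: 0.9}}

    def joint(a, b, c):
        # The graph licenses this factorisation: C depends on A only through B.
        return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

    total = sum(joint(a, b, c) for a in (True, False)
                for b in (True, False) for c in (True, False))
    print(total)  # 1.0: a valid joint distribution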

Reference

  1. https://azure.microsoft.com/en-us/documentation/articles/machine-learning-algorithm-choice/
  2. https://azure.microsoft.com/en-us/documentation/articles/machine-learning-algorithm-choice/
  3. https://azure.microsoft.com/en-us/documentation/articles/machine-learning-algorithm-choice/
  4. https://azure.microsoft.com/en-us/documentation/articles/machine-learning-algorithm-choice/
  5. http://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/
  6. https://normaldeviate.wordpress.com/2012/06/12/statistics-versus-machine-learning-5-2/
  7. http://www.lemoxo.com/machine-learning-artificial-intelligence-same-but-different/
