While machine learning has continued to evolve since its inception nearly seven decades ago, deep learning has triggered a new phase in that evolution.
Researchers and institutions have used machine learning algorithms to develop models that improve statistical analysis, recognize speech, predict risk and support many other applications. Many machine learning algorithms developed in the last 20 to 30 years are still used today. Deep learning, a form of machine learning based on artificial neural networks, has renewed interest in artificial intelligence and inspired the development of better tools, processes and infrastructure.
History of Machine Learning
The story of machine learning began in 1943, when neurophysiologist Warren McCulloch and mathematician Walter Pitts introduced a mathematical model of a neural network. Neural networks were among the topics discussed at a summer conference on the Dartmouth College campus in 1956, where 10 researchers met for six weeks to lay the groundwork for a new field involving automata theory and symbolic reasoning.
This distinguished group, many of whom would go on to make seminal contributions to the new field, named the concept artificial intelligence so that it would not be confused with cybernetics, a competing field of research focused on control systems. Perceptrons, the single-layer neural networks used at the time, could only learn linearly separable patterns. Interest in them waned after Marvin Minsky and Seymour Papert published the book Perceptrons in 1969, which highlighted the limitations of the neural network algorithms of the day and caused a shift in emphasis in AI research.
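To make the linear-separability limit concrete, below is a minimal single-layer perceptron sketch in NumPy (an illustration, not the historical implementation): the same learning rule that masters the linearly separable AND function never reaches full accuracy on XOR.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Train a single-layer perceptron and return its training accuracy."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Perceptron rule: adjust weights only when the prediction is wrong.
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    preds = (X @ w + b > 0).astype(int)
    return (preds == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print("AND accuracy:", train_perceptron(X, np.array([0, 0, 0, 1])))  # linearly separable -> 1.0
print("XOR accuracy:", train_perceptron(X, np.array([0, 1, 1, 0])))  # not separable -> stuck below 1.0
```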
The 1973 publication Pattern Classification and Scene Analysis by Richard Duda and Peter Hart introduced other machine learning algorithms, further distancing the field from neural networks. A decade later, Machine Learning: An Artificial Intelligence Approach by Ryszard S. Michalski, Jaime G. Carbonell and Tom M. Mitchell defined machine learning as a field driven largely by the symbolic approach.
In the 1990s, driven by the rise of the internet and the growing availability of data, the field began to shift from a knowledge-driven approach to a data-driven approach, paving the way for the machine learning models we see today.
The history of Deep Learning
When data-driven machine learning re-emerged in the 1990s, the work built on research done by Geoffrey Hinton at the University of Toronto in the mid-1980s. Hinton and his team used backpropagation to create deeper neural networks, adding new layers to the networks. They also helped popularize the term deep learning, finding ways to strengthen or weaken connections across the many layers in a network.
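As a rough illustration of the idea (a sketch, not Hinton's original code), the snippet below trains a two-layer network with backpropagation: the error computed at the output is propagated backward to strengthen or weaken the hidden layer's connections, which lets the network learn the XOR pattern that a single-layer perceptron cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR, not learnable by a single layer

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out)        # backward pass: error at the output...
    grad_h = grad_out @ W2.T * h * (1 - h)        # ...propagated back to the hidden layer
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out.ravel(), 2))                   # typically approaches [0, 1, 1, 0]
```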
The democratization of computing power and the rise of Deep Learning
Machine learning requires substantial computing power. In the early days, researchers had to keep their projects small or gain access to expensive supercomputers. The democratization of distributed computing in the early 2000s enabled researchers to spread computations across clusters of relatively low-cost commodity computers.
It is now far cheaper and easier to try hundreds of models to find the best combination of data properties, parameters and algorithms. The industry is taking this democratization further with applications and tools that bring DevOps principles to machine learning development and deployment.
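As a hypothetical example of the kind of search this cheap compute makes routine, the sketch below uses scikit-learn to sweep a small grid of hyperparameters with cross-validation across all available CPU cores; the dataset, algorithm and parameter grid are illustrative choices, not a prescribed workflow.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_digits(return_X_y=True)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [5, 10, None]},
    cv=5,        # 5-fold cross-validation for every combination
    n_jobs=-1,   # spread the candidate fits across all available cores
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```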
Machine learning also performs only as well as the data the system was trained on, and small datasets make it difficult for models to infer patterns. As the data generated by mobile devices, social media, IoT and digital customer interactions has grown, so has the training material available, driving the proliferation of deep learning techniques.
Deep learning took off around 2012, after Hinton's team won ImageNet, a popular data science challenge, with their work classifying images using neural networks. Google's subsequent approach to scaling deep learning across distributed clusters of computers accelerated progress further.
How will Deep Learning change the field of Machine Learning?
Even where deep neural networks are not used directly, they continue to drive fundamental changes in the field of machine learning, including the following:
Problems will be framed differently
The predictive power of deep learning has prompted data scientists to consider different ways of framing problems that arise in other types of machine learning.
In natural language processing, for example, many problems are now framed as predicting what comes next in a text. Many computer vision problems have likewise been reformulated so that, instead of trying to understand a scene's geometry, algorithms predict the labels of different parts of an image.
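As a toy sketch of the "predict what comes next" framing, the snippet below builds a bigram model from word counts; modern language models learn the same kind of mapping with deep networks and far more context.

```python
from collections import Counter, defaultdict

corpus = "deep learning has renewed interest in machine learning and deep learning tools".split()

# Count which word follows which in the text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation, if any."""
    return following[word].most_common(1)[0][0] if following[word] else None

print(predict_next("deep"))     # -> "learning"
print(predict_next("machine"))  # -> "learning"
```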
Human insight will be involved in automation
The power of big data and deep learning is changing the way models are built. Human analytics and insights are being replaced by raw computing power.
Much of the work of posing a machine learning problem has been taken over by advanced algorithms, and millions of hours of CPU time have been distilled into pre-trained models, so data scientists can spend more time customizing models or focusing on other projects.
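As a hedged illustration of starting from a pre-trained model rather than posing the problem from scratch, the sketch below freezes a Keras MobileNetV2 trained on ImageNet and trains only a small task-specific head; the 10-class output, input size and training data are illustrative assumptions.

```python
import tensorflow as tf

# Pre-trained feature extractor; its ImageNet weights stay frozen.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(160, 160, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # new head for a hypothetical 10-class task
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=3)  # train_images/train_labels are placeholders
```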
Data can be used more efficiently
Deep learning also helps data scientists solve problems with smaller datasets and work with data that is not labeled.
These techniques reduce the need for manually labeled and processed data, enabling researchers to build large models that capture complex relationships reflecting the nature of the data itself, not just the task at hand.
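One common pattern, sketched below under illustrative settings, is self-supervised learning: an autoencoder is trained to reconstruct unlabeled images, so the network learns a useful representation of the data without any manual labels.

```python
import tensorflow as tf

# Only the images are used; the labels are deliberately ignored.
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation="relu"),      # encoder: compress to a 64-dim code
    tf.keras.layers.Dense(784, activation="sigmoid"),  # decoder: reconstruct the input
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=2, batch_size=256)  # the target is the input itself
```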