In 2008, Max Jaderberg and Shai Halevi published «Deep Speech», a paper presenting a technology of the same name that allowed a system to identify the phonemes of spoken language. Given four input sentences, the system produced output that was almost grammatically correct but mispronounced several consonants. Deep Speech was one of the first programs to learn to speak and had a great impact on research in natural language processing.
In 2010, Geoffrey Hinton described the relationship between human-centered design and the field of natural language processing. The book in which he did so was widely cited because it introduced the field of human-centered AI research.
Around the same time, Clifford Nass and Herbert A. Simon emphasized the importance of human-centered design in building artificial intelligence systems and laid out a number of design principles.
In 2014, Hinton and Thomas Kluver described neural networks and used them to build a system that could transcribe the speech of a person with a cleft lip. The transcription system showed significant improvements in speech recognition accuracy.
In 2015, Neil Jacobstein and Arun Ross described the TensorFlow framework, which is now one of the most popular data-driven machine learning frameworks.
In 2017, Fei-Fei Li highlighted the importance of deep learning in data science and described some of the research that has been done in this area.
Artificial neural networks and genetic algorithms
Artificial neural networks (ANNs), the models underlying what are commonly called deep learning algorithms, represent a paradigm shift in artificial intelligence. They can discover concepts and relationships in data without those relationships being programmed in advance, and they can learn from unstructured information that does not fit neatly into predefined rules. The earliest ANN models were built in the 1960s, but research has intensified in the last decade.
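As an illustration of this kind of learning from examples rather than rules, here is a minimal sketch of a tiny feedforward network learning the XOR relation with plain NumPy. The network size, learning rate, and number of steps are assumptions chosen for illustration and are not taken from the text.

```python
# A tiny two-layer ANN that learns XOR from four examples,
# with no rule for XOR ever written into the program.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# one hidden layer with 4 sigmoid units
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

After a few thousand gradient-descent steps the outputs approach the XOR targets, even though the relationship was never specified as a rule.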
The rise in computing power opened up a new world of computing through the development of convolutional neural networks (CNNs) in the 1980s. In the early 1980s, Stanislav Ulam developed the symbolic distance function, which became the basis for later network learning algorithms.
In the early 2000s, programmable floating-point GPUs began to deliver large gains in throughput and power efficiency for data processing, and after the ImageNet dataset was introduced in 2009, CNNs trained on it set new benchmarks in image recognition. The emergence of deep learning algorithms followed from these more general computational architectures combined with new methods for training neural networks.
With recent advances in multi-core and GPU technology, neural networks can be trained across multiple GPUs at a fraction of the cost of conventional CPU-only training. GPU-accelerated deep learning is one of the most popular examples. Training deep neural networks on GPUs is fast and scalable, although implementing modern deep learning architectures efficiently can still require low-level programming skill.
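As a hedged sketch of what GPU training looks like in practice, the snippet below uses PyTorch to move a small model onto the available GPUs and run one training step. The framework, model, and data are assumptions chosen for illustration; the text names no specific library.

```python
# Move a small model to the GPU(s) and run a single training step.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
if torch.cuda.device_count() > 1:
    # replicate the model across all visible GPUs; each batch is split among them
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# dummy batch; real code would stream batches from a DataLoader
inputs = torch.randn(256, 128, device=device)
targets = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(loss.item())
```

nn.DataParallel splits each input batch across the visible GPUs and gathers the results, which is the simplest way to use several devices from a single process.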
Optimization with genetic algorithms can be an effective method for finding promising solutions to problems in computer science.
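To make the idea concrete, here is a minimal, self-contained genetic algorithm for the classic OneMax toy problem: a population of bit strings evolves, through selection, crossover, and mutation, toward the string with the most ones. All names and parameters below are illustrative assumptions, not taken from the text.

```python
# Minimal genetic algorithm for OneMax: maximize the number of 1-bits.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 50, 100, 0.02

def fitness(genome):
    return sum(genome)                      # count of 1-bits

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

def pick(population):
    # tournament selection: keep the fitter of two random individuals
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(crossover(pick(population), pick(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best), "out of", GENOME_LEN)
```

Real applications swap in a problem-specific genome encoding and fitness function, but the selection, crossover, and mutation loop stays the same.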
Genetic algorithm techniques are usually implemented in a simulation environment, and many common optimization problems can be solved with off-the-shelf library software rather than hand-written solvers.
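As an example of such an off-the-shelf library, the sketch below uses DEAP, an evolutionary-computation library for Python. DEAP is my illustrative choice, since the original text does not tie the claim to a specific package; the fitness function and parameters are likewise assumptions.

```python
# Using the DEAP library to evolve a value of x that maximizes -(x - 2)^2.
import random
from deap import base, creator, tools, algorithms

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("attr_float", random.uniform, -5.0, 5.0)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_float, n=1)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

def fitness(individual):
    x = individual[0]
    return (-(x - 2.0) ** 2,)   # single-objective fitness, peak at x = 2

toolbox.register("evaluate", fitness)
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("mutate", tools.mutGaussian, mu=0.0, sigma=0.5, indpb=1.0)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=50)
algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=30, verbose=False)
best = tools.selBest(pop, k=1)[0]
print("best x:", best[0])
```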
Traditional software applications based on genetic algorithms require a trained expert to program and customize the agent. To enable automatic scripting, genetic algorithm software can instead be distributed as source code that ordinary users can then compile and run themselves.