Compared with other machine learning technologies available today, deep learning can deliver more sophisticated and complex behaviors, which is why it has attracted so much attention and is being applied across a wide range of fields.
These capabilities are especially important for safety-critical systems such as assisted driving and self-driving cars, as well as for natural language processing, where a machine must identify words from the context of a conversation.
Like artificial intelligence and machine learning, deep learning has been studied for decades. What is different now is that deep learning is being added to all types of chips, from data-center processors to simple microcontrollers. And as algorithms for training and inference become more effective, this branch of machine learning/artificial intelligence is emerging with a variety of application models, some aimed at very narrow tasks and others involving far broader environmental decisions.
Chris Rowen, CEO of Babblabs, said: "Some of this is expected to be needed for automated-driving chips. The technology using neural networks is effective and its power cost is quite low, so it makes sense to add deep learning subsystems. We have seen many startups flood into this market; there are some 25 in the deep learning space. Some are aimed at the cloud and hope to displace Nvidia there, while others target the low end of the market. Both take highly specialized approaches to highly specialized matrix multiplication, in areas where low- or medium-precision calculations need to run as fast as possible. The appetite for compute in these calculations is endless."
That bodes well for the tools, chips, software, and expertise needed to reach these goals.
Wally Rhines, president and CEO of Mentor, a Siemens company, said: "There is a lot of pattern recognition in the automotive market. You can also see this in the growth of data demand in data centers. Dozens of companies are building their own dedicated processors for this kind of learning."
In all of these markets and many others, deep learning is a growth area. Mike Gianfagna, vice president of marketing at eSilicon, which is developing deep learning chips, said: "Deep learning has been used for face recognition on the iPhone X and for natural language processing, and it is also being designed into automated cars to identify whether an object is a dog or a cat. On the surface, a deep learning chip looks much like a data-center chip. It might use an HBM memory stack to hold training data so the data can be fed into the chip quickly, and it may use custom memories. But it is easier to implement than a networking chip. A networking chip may require 50 to 90 blocks, all of them different. A deep learning chip has a very large number of blocks, but they are mostly copies of the same block, so its layout is easier and its blocks can be made more compact and perform better."
From a business perspective, deep learning, machine learning, and artificial intelligence can achieve the kind of economies of scale that continue to drive sales of PCs, smartphones and tablets, and the cloud.
Rowen said: "In the past, billions of multiplies per second was considered a lot. Now even edge devices can perform hundreds of millions of multiplies, and if you can do trillions they deliver a lot of value. You can see this in the latest generation of Nvidia chips. They are no longer doing 10 trillion floating-point operations per second, but 100 trillion. These designs, however, are becoming more and more specialized and less and less general-purpose. There are some very lean ways to provide computation at this scale."
Figure 1: Comparison of deep learning and machine learning. Source: XenonStack
What is deep learning?
Although deep learning is a hot topic, its definition remains somewhat fuzzy. It sits under artificial intelligence, either alongside machine learning or as a subset of it. The difference is that machine learning uses algorithms developed for specific tasks, while deep learning builds representations across a multi-layer structure in which each layer uses the output of the previous layer as its input. This approach more closely mimics the activity of the human brain: it can not only recognize that a baseball is moving, but also roughly predict where it will land.
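To make that layered structure concrete, here is a minimal sketch in Python/NumPy of a toy feed-forward network in which each layer's output becomes the next layer's input. The layer sizes, weights, and input are arbitrary placeholders for illustration only, not a reference to any chip or framework discussed in this article.

import numpy as np

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(0.0, x)

def dense_layer(x, weights, bias):
    """One fully connected layer: affine transform followed by ReLU."""
    return relu(x @ weights + bias)

# Toy network: three stacked layers, each consuming the previous layer's output.
rng = np.random.default_rng(0)
layer_shapes = [(8, 16), (16, 16), (16, 4)]          # (inputs, outputs) per layer
params = [(rng.standard_normal(s) * 0.1, np.zeros(s[1])) for s in layer_shapes]

x = rng.standard_normal(8)                            # example input vector
for w, b in params:                                   # output of layer i feeds layer i+1
    x = dense_layer(x, w, b)

print(x)                                              # final 4-dimensional representation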
But beyond that, there is no consensus on exactly how deep learning should work, particularly as it moves from training to inference. Deep learning is better understood as a mathematical representation of complex behaviors, and many architectural forms can be used to capture and shape that representation. Deep neural networks (DNNs) and convolutional neural networks (CNNs) are the most common. Recurrent neural networks (RNNs), which add a time dimension, are also used. The downside of RNNs is that their processing, memory, and storage requirements are enormous, which limits their use to large data centers.
British computer scientist and cognitive psychologist Geoffrey Hinton also has been pushing the concept of capsule networks, which stack layers within the layers of a neural network, essentially increasing the density of those layers. According to Hinton, this produces much better results because it can recognize highly overlapping digits. And that points to one of the main themes of research in this field today: how to speed up the whole process.
The problem is so complex that the human brain cannot grasp all of it, so everything must be modeled and theorized. For chipmakers this is nothing new; since chips reached the 1-micron process node it has been difficult to visualize all the different parts of a design. In computer science, however, much of the progress has been largely two-dimensional. Mathematically representing objects as they rotate and tilt is much more difficult and requires a lot of computing resources. To improve speed and efficiency, researchers are looking for ways to reduce these calculations. Either way, this remains a hard problem, and deep learning experts are well aware of it.
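As a rough illustration of why manipulating objects in three dimensions is compute-hungry, the sketch below (Python/NumPy, using an arbitrary toy point cloud that is not drawn from any system mentioned here) expresses a single rotation as one matrix multiply per point. The cost then scales with every additional pose, tilt, and training sample a network must see.

import numpy as np

def rotation_z(theta):
    """3x3 matrix rotating points about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.],
                     [s,  c, 0.],
                     [0., 0., 1.]])

# A toy "object": 100,000 random 3-D points.
points = np.random.default_rng(0).standard_normal((100_000, 3))

# Rotating the whole object is one matrix multiply per point. Cheap for a single
# pose, but the cost multiplies across poses, tilts, and the millions of samples
# a network needs in order to learn that kind of invariance.
rotated = points @ rotation_z(np.pi / 6).T
print(rotated.shape)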
Michael Schuldenfrei, CTO of Optimal+, said: "Parts of deep learning are a black box. Understanding the actual decision-making process is very difficult. With a machine learning algorithm you can explain where the model comes from, although you have to do a lot of work comparing different algorithms. Deep learning algorithms are more complex than machine learning algorithms, but we find they all have one thing in common: the answers these algorithms give can differ from product to product. On product A a random forest may work well, while on product B a different algorithm or combination of algorithms works better."
Deep learning algorithms can be traced back to the late 1980s. David White, a distinguished engineer at Cadence, said: "A lot of the work started with the U.S. Post Office, which needed to recognize handwritten digits. They realized they needed a way to reduce the size of the input space, so they used an extra layer to extract features. Deep learning algorithms have come a long way since then."
Deep learning algorithms typically require far more computing power than machine learning. White said: "The architectures used for deep learning are specialized. They use convolution, pooling, and particular activation functions. Some of the techniques are similar to machine learning, and some are different."
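To show what those building blocks look like in practice, here is a minimal sketch in Python/NumPy of the three operations White names: a convolution, an activation function, and a pooling step. The image, kernel, and sizes are toy values chosen for illustration, not taken from any product discussed in this article.

import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most DL frameworks)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Activation function: keep positive responses, zero out the rest."""
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling: downsample by taking the max of each size x size block."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# Toy 8x8 "image" and a 3x3 vertical-edge kernel.
image = np.random.default_rng(0).standard_normal((8, 8))
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])

feature_map = max_pool(relu(conv2d(image, kernel)))   # conv -> activation -> pooling
print(feature_map.shape)                              # (3, 3)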
However, this approach does not benefit every problem.
Babblabs' Rowen said: "More parameters essentially let you model very complex behaviors, but you must train them with a corresponding data set. If your problem is simple, a deep neural network will not be more accurate; statistical modeling can only do so much. Humans can learn a great deal about objects and how they move as they are rotated, and the human brain needs far fewer samples to build that knowledge than today's machine learning or deep learning methods. A machine currently has no concept that you are rotating an object. It does not understand perspective. It can only learn that from millions of samples."
Deep learning in more markets
Although the boundary between deep learning and machine learning is not always clear, applications of these different approaches are gaining attention.
Gordon Cooper, product manager for Synopsys' embedded vision processor division, said: "Deep learning for embedded vision is well defined. We are also seeing it used for radar and audio processing, where you can apply CNN algorithms. We also see a lot of requests to apply deep learning and artificial intelligence to IT processes in these areas."
The reason this technology is becoming easier to use is that many of the building blocks are available off the shelf, including ready-made algorithms and a variety of standard, custom, or semi-custom processors, accelerators, and inexpensive memory. Cooper said: "When using RNNs, you can choose long short-term memory (LSTM). If power consumption is not an issue, you can use GPUs. There are also embedded chips with lower performance that focus more on power consumption and size. Memory is still a big problem, especially when you access data from DRAM, so there are memory-management techniques and multiply/accumulate units inside the chip."
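For readers unfamiliar with the long short-term memory structure Cooper mentions, below is a minimal sketch of a single LSTM time step in Python/NumPy using the standard gate equations. The weight shapes and the toy input sequence are illustrative placeholders only.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gates are stacked in the order input, forget, output, candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])           # input gate
    f = sigmoid(z[H:2*H])         # forget gate: how much old cell state to keep
    o = sigmoid(z[2*H:3*H])       # output gate
    g = np.tanh(z[3*H:4*H])       # candidate cell update
    c = f * c_prev + i * g        # new cell state (the "long-term" memory)
    h = o * np.tanh(c)            # new hidden state (the "short-term" output)
    return h, c

# Run a toy sequence through the cell, carrying state across time steps.
rng = np.random.default_rng(0)
D, H = 6, 4                                   # input and hidden sizes
W = rng.standard_normal((4*H, D)) * 0.1
U = rng.standard_normal((4*H, H)) * 0.1
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(10):                           # the time dimension an RNN adds
    h, c = lstm_step(rng.standard_normal(D), h, c, W, U, b)
print(h)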
Figure 2: Growth in deep learning applications. Source: Semiconductor Engineering
Chip companies also have gained first-hand experience by using this technology internally.
eSilicon's Gianfagna said: "We now use machine learning to manage our compute farm for I/O throughput, which is a very difficult task. We track all of the CPUs, then tune them to the jobs and create predictive loads. It is basically predictive load balancing, and most of that work is done in software. Cloud companies such as Google and AWS (Amazon Web Services) use deep learning to handle their workflows and load balancing, and they do it in hardware. Deep learning may be the most intensive part of the chips used in those operations."
A newer application of this technology involves robots. Cadence's White said: "The key here is that these devices need to learn continuously, because the tasks robots perform change and so do their environments. If you are manufacturing in the Philippines rather than in Europe, the software may have to adapt. The same is true for many devices in the IoT space. Sensors can end up in systems with very different environmental conditions. That calls for adaptive systems, which may be the next big wave of machine learning, deep learning, and artificial intelligence. For gas sensors that detect different wavelengths, performance degrades as the signal changes. So the question is whether a system can adapt to that change and still get the job done. You don't want to shut the system down every time water condenses into beads on a lens."
Deep learning is also appearing in mobile phones. Apple's iPhone X uses deep learning for face recognition. Synopsys' Cooper said: "You can also use it on your phone to optimize images and apply filters based on deep neural network technology. But each market has its own needs. For cancer detection, the difficulty is getting enough data points; tens of thousands are not enough. In the automotive industry, by contrast, you have plenty of video, but what can you do with that data? The key is how to use the technology to find the important parts of those videos."
Conclusion
The semiconductor industry has only scratched the surface of where deep learning can be applied effectively and where it adds the most value, and the question of how to complete all the necessary training and inference quickly and with minimal power consumption will remain unsettled for some time.
In the past, deep learning techniques were tied to large computers and supercomputers. But as devices shrink and deep learning moves into more constrained applications, the technology will have a growing impact across more and more markets. The take-off has only just begun.