Making friends with robots: the road to innovation in AI technology

People are never satisfied. In today's era of rapid technological development, we have come to depend heavily on computers, and now we want them to befriend us like real human beings, talking with us and playing with us. Implementing this kind of artificial intelligence, however, requires serious technical muscle, and the computation it demands can exceed the limits of even the most advanced machines.

Faced with escalating demand, large technology companies are looking to biology for inspiration. They are rethinking the very nature of computers and trying to build machines that work more like the human brain.

This new direction in computing could weaken Intel's grip on the chip business and fundamentally reshape the semiconductor industry, which generates $335 billion a year and underpins today's high-tech products.


Figure: Microsoft's Xuedong Huang (left) and Doug Burger (right) believe that the company needs to focus on developing specialized chips

For half a century, computer makers have relied on a single general-purpose chip suited to a wide variety of tasks. Intel, the world's largest semiconductor maker, has long been the dominant producer of such chips.

Today, computer engineers are developing new chips. A machine's work is divided into small parts, each handled by its own specialized chip. These specialized chips also consume far less energy.

The changes in Google's data centers suggest that the rest of the industry will shift as well. Most of Google's servers still use CPUs, but the company also works with custom chip vendors, and it is developing algorithms for speech recognition and other artificial intelligence applications.

Google's transformation grew out of its own needs. For years, Google has operated the world's largest computer network, a data empire spanning much of the globe, yet even that is not enough for its research and development.

In 2011, a research team led by Jeff Dean, one of Google's most famous engineers, began studying neural networks, algorithms that allow computers to learn tasks on their own.

A few months later, Dean and his team built an upgraded speech recognition service that was far more accurate than the one Google offered at the time. But running it would have required more capacity than Google's existing data centers could provide.

So Dean suggested that Google build a computer chip just for this kind of artificial intelligence.

Changes in the data center are gradually spreading to the rest of the technology sector. In the next few years, companies such as Google, Apple and Samsung will ship artificial intelligence chips in smartphones. Microsoft is designing a chip for augmented reality, and companies such as Google and Toyota are developing chips for autonomous vehicles.

Microsoft: From Intel's CPU to its own FPGA

Some of the chips on the market today store information, some sit inside toys and televisions, and some serve as the processors in machines ranging from supercomputers used to model global warming to personal computers and smartphones.

Moore's Law, proposed by Intel co-founder Gordon Moore, observes that the number of transistors that fit on a chip roughly doubles every two years. As chips evolved, IBM's leading researcher Robert Dennard proposed a companion rule, Dennard scaling, which holds that as transistors shrink, their power density stays roughly constant.

By around 2010, the doubling was taking longer than Moore's Law predicted. Dennard scaling, too, had begun to break down, because chip designers were hitting the physical limits of the materials used to make processors. In other words, a company that wants more computing power can no longer simply count on faster processors; it must add more machines, more space and more electricity.
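The exponential growth Moore's Law describes can be sketched in a few lines. The baseline figures below are purely illustrative, not real historical transistor counts:

```python
# Rough illustration of Moore's Law: transistor counts doubling
# roughly every two years. Baseline numbers are hypothetical.

def transistors(start_count: int, start_year: int, year: int,
                doubling_period: float = 2.0) -> float:
    """Projected transistor count if doubling stays on schedule."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Ten years at a two-year doubling period means 2**5 = 32x growth:
print(transistors(1_000_000, 1990, 2000))  # prints 32000000.0
```

Stretching the doubling period, which is what happened after 2010, flattens this curve, which is why adding more machines became the only way to keep scaling.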

Researchers in industry and academia have worked hard to extend Moore's Law, exploring new chip materials and design techniques. But Microsoft researcher Doug Burger had a different idea: instead of relying on the steady evolution of the central processor, shift some of the load onto specialized chips.

Over the 2010 Christmas holiday, Burger and a few other Microsoft chip researchers began exploring new hardware to improve Bing, Microsoft's search engine.

At the time, Microsoft was just beginning to improve Bing with machine learning algorithms, which can improve search results by analyzing how people use the service. Although such algorithms are less demanding than full neural networks, existing chips still struggled to keep up with them.
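The idea of improving results by analyzing usage can be sketched very simply. The following is an illustrative toy, not Bing's actual algorithm: it re-ranks results by the click-through rate users reveal over time.

```python
# Toy sketch: re-rank search results by observed click-through rate,
# the kind of usage signal such learning algorithms consume.
from collections import defaultdict

clicks = defaultdict(int)   # times each result was clicked
shows = defaultdict(int)    # times each result was shown

def record(doc_id: str, clicked: bool) -> None:
    """Log one impression and whether the user clicked it."""
    shows[doc_id] += 1
    if clicked:
        clicks[doc_id] += 1

def rerank(doc_ids):
    """Order results by smoothed click-through rate (add-one style)."""
    return sorted(doc_ids,
                  key=lambda d: (clicks[d] + 1) / (shows[d] + 2),
                  reverse=True)

record("a", False); record("a", False)   # shown twice, never clicked
record("b", True); record("b", True)     # shown twice, clicked twice
print(rerank(["a", "b"]))  # prints ['b', 'a']
```

Even this trivial version must recompute statistics on every interaction, which hints at why usage-driven ranking at web scale strained conventional hardware.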

Burger and his team studied a variety of options and finally settled on the field-programmable gate array (FPGA). Software such as Windows has always run on Intel's central processing units (CPUs), chips the software itself cannot reprogram.

An FPGA, by contrast, can be reprogrammed by Microsoft's software for new tasks.

Microsoft began installing these chips at scale in 2015. Today, nearly every new server added to a Microsoft data center carries one of the programmable chips, which help power both Bing and Microsoft's Azure cloud computing service.

Developing neural networks that teach computers to "listen"

In the fall of 2016, a Microsoft research team, like Jeff Dean's team at Google, built a neural network that recognized speech more accurately than the average person.


Photo: Jeff Dean, one of Google's most famous engineers, who suggested that the company develop a chip dedicated to artificial intelligence. That chip now exists: Google's own Tensor Processing Unit (TPU).

Xuedong Huang leads Microsoft's speech recognition work. His team ran Microsoft's speech recognition service on specialized chips made by Nvidia, rather than relying heavily on Intel chips as before.

Huang said the specialized chips let the team close a gap that would otherwise have taken at least five years.

There is a catch, however: training a neural network this way takes a great deal of experimentation. Researchers must train repeatedly, constantly adjusting the algorithms and improving the training data. At any given time, hundreds of algorithms may be running at once, demanding computing power that standard chips alone cannot supply.

As a result, some leading internet companies now train their neural networks on a chip called a graphics processing unit (GPU). These chips, made mainly by Nvidia, were originally designed to render images for games and other visual software. For neural network computations, GPUs run much faster than CPUs.
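The reason GPUs help is that neural network training consists largely of big matrix multiplications, exactly the kind of parallel arithmetic GPUs accelerate. A minimal sketch of one such training loop for a tiny linear model (NumPy on a CPU, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-layer model: the dominant cost in each step is the matrix
# multiply, the operation GPUs parallelize far better than CPUs.
X = rng.standard_normal((64, 100))        # batch of 64 inputs
y = rng.standard_normal((64, 10))         # target outputs
W = rng.standard_normal((100, 10)) * 0.01 # weights to be learned

lr = 0.01
for step in range(100):
    pred = X @ W                      # forward pass: matrix multiply
    grad = X.T @ (pred - y) / len(X)  # backward pass: another multiply
    W -= lr * grad                    # adjust the weights

loss = float(np.mean((X @ W - y) ** 2))   # shrinks as training runs
```

Real networks repeat steps like these billions of times over far larger matrices, which is why the article's "hundreds of algorithms running at once" quickly exhausts standard chips.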

Nvidia's boom stems from the popularity of these chips. It now produces them for the American internet giants and some of the world's largest companies, whose demands are especially heavy. Over the past year, Nvidia's quarterly revenue from these chips has tripled, exceeding $409 million.

Specialized chips will become more and more popular

Many companies now use GPUs to develop their neural networks, but training is only part of the job. Once a neural network has been trained on a task, dedicated hardware is needed to actually perform that task.

For example, after training a speech recognition algorithm, Microsoft adds it to an online service that recognizes the voice commands people speak into their smartphones.
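This split between training and serving can be sketched as follows. The code is an illustrative toy, not Microsoft's actual pipeline: training iteratively adjusts weights, while the deployed service only runs a cheap fixed forward pass per request.

```python
import numpy as np

def train(X, y, steps=200, lr=0.1):
    """Training phase: expensive, iterative weight adjustment."""
    rng = np.random.default_rng(1)
    W = rng.standard_normal((X.shape[1], y.shape[1])) * 0.01
    for _ in range(steps):
        W -= lr * X.T @ (X @ W - y) / len(X)
    return W

def serve(W, x):
    """Deployment phase: one fixed forward pass per request,
    with the trained weights frozen."""
    return x @ W

# Learn the identity mapping on 2-D inputs, then serve a request.
X = np.eye(2)
W = train(X, X)
print(serve(W, np.array([[3.0, 4.0]])))  # ≈ [[3., 4.]]
```

Because the two phases have such different workloads, a chip tuned for the training loop is not necessarily the right chip for serving, which is why companies build separate inference hardware.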

Google has built its own specialized chip for this, the TPU mentioned above; Nvidia is building a similar chip; and Microsoft has enlisted Altera to help create one.

Other companies are close behind. Qualcomm, which focuses on smartphone chips, and a number of startups are also developing artificial intelligence chips. Market research firm IDC predicts that by 2021, revenue from servers equipped with such chips will reach $6.8 billion, about 10 percent of the overall server market.


Figure: Bart Sano, vice president of Google's platforms, believes specialized chips are still a relatively minor part of the company's operations.

Burger points out that within Microsoft's global network, such chips remain a relatively small part. Bart Sano, Google's vice president of platforms, likewise believes specialized chips play only a minor role in the company's operations.

Mike Mayberry, who heads Intel Labs, pays little attention to the rise of specialized chips, perhaps because Intel controls more than 90 percent of the data center market and remains the largest maker of traditional chips. In Mayberry's view, suitably improved central processors can meet current needs without help from other chips.

Two years ago, Intel spent $16.7 billion to acquire Altera, the maker of the programmable chips Microsoft uses, mentioned above. It remains Intel's largest acquisition to date. Last year, Intel bought Nervana for $408 million, and the two teams now work together on chips for training and running neural networks.

Bill Coughran, a partner at Silicon Valley venture capital firm Sequoia Capital, said Intel needs to think about how to enter new areas without affecting its traditional business.

Intel now competes not only with chip makers such as Nvidia and Qualcomm, but also with companies such as Google and Microsoft.

Google is designing a second-generation TPU chip. The company said the chip will be available later this year.

For now, these changes are happening only inside large data centers. Spreading to other industries is likely just a matter of time.
