I have a hunch that 2018 will be a year of dramatic change. The remarkable breakthroughs we saw in deep learning in 2017 will carry forward powerfully into 2018, and 2017's research results will begin to be applied in everyday software applications.
Here are my ten predictions for deep learning in 2018:
1
Most hardware startups in deep learning will fail
Many deep learning hardware startups will finally begin shipping their silicon in 2018. Most of these companies will go under because they neglect to deliver good software to support their new chips. Hardware is in these startups' DNA; unfortunately, in deep learning, software matters just as much as hardware. Most of these startups do not understand software and underestimate the cost of developing it. They may deliver their silicon, but nothing will run on it.
Researchers will begin to use these tensor compute cores not only for inference but also to accelerate training. Intel's solution will keep being delayed and is likely to disappoint. The track record shows that Intel failed to deliver on its mid-2017 target, and nobody knows when the company will announce a new date. Google will continue to surprise the world with its TPU machine learning chips. Perhaps Google will enter the hardware business by licensing its IP to other semiconductor vendors. That would make sense if it wants to remain the only real player other than NVIDIA.
2
Meta learning will become the new SGD
In 2017 there were many significant research results in meta-learning. As the research community gains a better understanding of meta-learning, the old stochastic gradient descent method (SGD) will be set aside in favor of more effective approaches that combine exploitative and exploratory search. Progress in unsupervised learning will accelerate, but it will be driven primarily by meta-learning algorithms.
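To make the baseline concrete, here is a minimal sketch in Python of the plain SGD update that this prediction says may be displaced. The toy quadratic objective and all names are illustrative, not drawn from any particular meta-learning paper.

import numpy as np

def sgd(grad_fn, w, lr=0.1, steps=100):
    # Vanilla stochastic gradient descent: repeatedly step against the gradient.
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

# Toy objective f(w) = ||w - 3||^2 with gradient 2 * (w - 3); purely illustrative.
grad = lambda w: 2.0 * (w - 3.0)
print(sgd(grad, np.zeros(2)))  # converges toward [3., 3.]

A meta-learning approach would, roughly speaking, learn the update rule itself rather than fixing it to the hand-written step above.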
3
Generative models will drive a new kind of modeling
Scientific research on generative models will continue to grow. Most current work focuses on image and speech generation, but we will see these methods integrated into the tools used to model complex systems, including applications of deep learning to economic modeling.
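As a rough illustration of the idea of a generative model acting as a simulator for a complex system, here is a minimal Python sketch. The simple Gaussian model and the monthly-scenario framing are illustrative assumptions, not a reference to any specific economic-modeling tool.

import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(loc=2.0, scale=0.5, size=1000)  # stand-in for historical data

# "Training" the generative model: here, just fitting a mean and standard deviation.
mu, sigma = observed.mean(), observed.std()

# Sampling from the learned model to generate synthetic scenarios,
# e.g. 100 simulated runs of 12 hypothetical monthly values each.
scenarios = rng.normal(mu, sigma, size=(100, 12))
print(scenarios.shape)

The generators discussed in the prediction are far richer than this, but the workflow is the same: fit a model of the data, then sample from it to explore possible behaviors of the system.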
4
Self-play learning is automated knowledge creation
AlphaGo Zero and AlphaZero, which learn from scratch through self-play, represent a huge leap. In my opinion, their impact is as significant as the emergence of deep learning itself. Deep learning discovered a universal function approximator; reinforcement learning through self-play has discovered a universal way of creating knowledge. I look forward to seeing much more progress related to self-play.
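A minimal sketch of what an AlphaZero-style self-play loop looks like in outline follows. The Policy class, play_one_game and update hooks are hypothetical placeholders for illustration, not DeepMind's actual implementation.

import random

class Policy:
    # Toy policy that picks moves uniformly at random; a stand-in for a neural network.
    def choose(self, legal_moves):
        return random.choice(legal_moves)

    def update(self, game_records):
        # A real system would run gradient steps on (state, move, outcome) data here.
        pass

def play_one_game(policy):
    # Both sides are driven by the same policy; returns the sequence of moves played.
    moves, legal = [], list(range(9))  # a crude stand-in for a small board
    while legal:
        m = policy.choose(legal)
        moves.append(m)
        legal.remove(m)
    return moves

def self_play(iterations=5, games_per_iteration=10):
    policy = Policy()
    for _ in range(iterations):
        records = [play_one_game(policy) for _ in range(games_per_iteration)]
        policy.update(records)  # knowledge comes only from the agent's own games
    return policy

self_play()

The point of the sketch is the loop itself: the agent generates its own training data by playing against itself, so knowledge is created without any human examples.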
5
Intuitive machines will narrow the semantic gap
This is one of the most ambitious predictions I am making. We will narrow the semantic gap between intuitive machines and rational machines. Dual process theory (the notion of two cognitive systems, one model-free and the other model-based) will become a more widespread way of thinking about how to build new artificial intelligence. In 2018, artificial intuition will no longer be a fringe concept but a generally accepted one.
6
Explainability is unachievable; we will have to fake it
There are two problems with explainability. The more common one is that explanations involve too many rules for a person to fully grasp. The second, less discussed, problem is that machines will create concepts that are completely alien and inexplicable to us. We have already seen this in the strategies of AlphaGo Zero and AlphaZero: humans judge some of their moves to be unconventional, perhaps simply because humans lack the capacity to understand the logic behind them.
In my opinion, this is an unsolvable problem. Instead, machines will become very good at "faking explanations." In short, the purpose of an explainable machine is to produce an explanation that a human finds comfortable, or that a human can understand at the level of intuition. In most cases, however, a complete explanation will remain out of reach.
We will need to make progress in deep learning by creating these "fake explanations."
7
Research results in the field of deep learning will multiply
In 2017 it was already difficult to keep up with all of the research results in deep learning. The number of papers submitted to ICLR 2018 was approximately 4,000; a researcher would have to read about ten papers a day just to keep pace with this one conference.
The problem is getting worse because the theoretical frameworks themselves are constantly changing. To make progress on theory, we need more advanced mathematical tools that give us better insight. This will be a struggle, because most deep learning researchers lack the mathematical background needed to understand the complexity of these systems. Deep learning needs researchers versed in complexity theory, and such researchers are few and far between.
With too many papers and too little theory, we find ourselves in an awkward situation. Also missing is a general roadmap toward artificial general intelligence. Because the theory is weak, the best we can do is build a roadmap of milestones tied to human cognition. We have only frameworks derived from speculative theories in cognitive psychology, which is an unfortunate position because the empirical evidence from those fields is uneven.
In 2018, the number of deep learning research papers may well increase three- to four-fold.
8
Industrialization will be achieved through teaching environments
The path toward deep learning systems that are more predictable and controllable runs through the development of specialized teaching environments. To see teaching in its most primitive form, one only needs to look at how deep learning networks are trained today. We will see much more progress in this area, as the sketch below suggests.
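One simple way to picture a teaching environment is curriculum learning, where training tasks are presented from easy to hard. The Python sketch below is an illustrative assumption of that idea, not any company's actual training pipeline.

def build_curriculum(tasks, difficulty):
    # Order the tasks so the easiest ones are presented first.
    return sorted(tasks, key=difficulty)

def train_with_curriculum(train_step, tasks, difficulty):
    for task in build_curriculum(tasks, difficulty):
        train_step(task)  # one training update per task, easy before hard

# Toy usage: "difficulty" here is simply the length of the example.
tasks = ["ab", "abcdef", "abcd"]
train_with_curriculum(lambda t: print("training on", t), tasks, difficulty=len)

The teaching environment, rather than the network architecture, becomes the place where predictability and control are engineered.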
More companies are expected to disclose their internal infrastructure and explain how they are deploying deep learning on a large scale.
9
The emergence of conversational cognition
The way we currently measure progress toward artificial general intelligence (AGI) is outdated. A new paradigm is needed, one that addresses the dynamic (i.e., non-stationary) complexity of the real world. We will see more coverage of this new area in 2018.
10
Demands for the ethical use of artificial intelligence will grow
Demands for the ethical use of artificial intelligence will increase. People are now more aware of the catastrophic effects of the unintended consequences of runaway automation. The simple automation we find today on Facebook, Twitter, Google, Amazon and other platforms can already have harmful side effects on society.
We need to understand the ethics of deploying machines that can predict human behavior. Facial recognition is one of the most dangerous capabilities we possess. As a society, we need to demand that artificial intelligence be used for the benefit of society as a whole, rather than as a weapon that increases inequality.
In the coming year we will see more discussion about ethics, but do not expect new regulations to appear. Policy makers typically lag years behind in understanding the impact of artificial intelligence on society, and I do not expect them to stop playing politics and start addressing real social problems. The American public has been the victim of numerous security breaches, yet we have seen no new legislation or government action to address this serious problem, so we should not hold our breath.