
How will artificial intelligence hardware and software develop after 2018?

Updated: 2018-01-26 13:50:52

How will deep neural networks and machine learning develop within the larger AI field in 2018 and beyond? How can we build increasingly capable machines that help people in their daily lives? These questions concern Eugenio Culurciello, a professor working on machine learning hardware at Purdue University. Note that the focus of this article is not AI predictions as such, but a detailed analysis of the field's trajectory, trends, and technical requirements for creating more useful AI. Of course, not all machine learning aims at AI; some of it targets easier problems, so let's take a closer look.

The goal of AI is to give machines human and superhuman capabilities so that they can help us in our daily lives. Self-driving vehicles, smart homes, intelligent assistants, and security cameras with embedded AI are the first targets; home cooking and cleaning robots, drones, and other robots are the second. Further goals include assistants on mobile devices and full-time assistants that can hear and see what we experience. The ultimate goal of the AI field is a fully autonomous synthetic entity that can match or exceed humans at everyday work.
Software

In this context, software means the neural network architectures and the optimization algorithms that train them to solve specific tasks. Today, neural networks are the standard tool for learning to solve problems that involve classifying large datasets. But this is not all of AI: a real-world system must also learn without supervision, absorb experiences it has never seen before, and often combine prior knowledge to solve the challenge at hand.

How can today's neural networks evolve into AI?

Neural network architectures: a few years ago, when these architectures were first being developed, we often argued that algorithms which automatically learn their parameters from data have a huge advantage over hand-written algorithms. But we tend to forget one small detail: the neural network architecture itself, the substrate that is trained to solve a particular task, is not learned from data. In fact, it is still designed by hand by developers, and this is now one of the main limitations of the AI field.

However, the neural network architecture is the core of the learning algorithm. Even if our learning algorithms can master new skills, they cannot produce the right results if the architecture is wrong. The problem with learning architectures from data is that it currently takes far too long to run experiments over many architectures on a large dataset: we must train several architectures from scratch and see which one works best. This is exactly the time-consuming trial-and-error process we use today. We should overcome this limitation and think harder about this very important question.
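As a minimal sketch of this trial-and-error loop, the snippet below randomly samples a few candidate architecture configurations, trains each from scratch for a short time, and keeps the best. The search space and the `build_model`, `train_briefly`, and `validate` helpers are illustrative assumptions, not part of any particular framework or of the author's method.

```python
import random

# Hypothetical search space: each candidate is a (depth, width) pair.
SEARCH_SPACE = {"depth": [2, 4, 8], "width": [64, 128, 256]}

def sample_architecture():
    # Randomly pick one value per hyperparameter.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def trial_and_error_search(n_trials, build_model, train_briefly, validate):
    """Train several randomly sampled architectures from scratch
    and return the configuration with the best validation score."""
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_architecture()
        model = build_model(cfg)   # assumed user-supplied constructor
        train_briefly(model)       # assumed short training run
        score = validate(model)    # assumed validation routine
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Even this toy version makes the cost visible: every candidate is trained from scratch, which is exactly why architecture search on large datasets is so slow today.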

Unsupervised learning: we cannot always accompany neural networks and guide each of their experiences. We cannot correct them on every instance and provide feedback on their performance; our lives go on! Yet that is exactly what we do today with supervised neural networks: we help them on every instance so that they perform correctly. Humans, in contrast, learn from only a few examples and can self-correct and learn ever more complex data continuously.
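A common stand-in for learning without labels is an autoencoder trained only to reconstruct its input. The small PyTorch sketch below assumes flattened 28x28 image vectors and random stand-in data; it is one illustrative form of unsupervised learning, not the approach the article prescribes.

```python
import torch
import torch.nn as nn

# Minimal autoencoder: learns a compressed representation with no labels,
# using reconstruction error as its only training signal.
class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)            # a batch of unlabeled inputs (random stand-in data)
optimizer.zero_grad()
reconstruction = model(x)
loss = loss_fn(reconstruction, x)  # no labels: the input is its own target
loss.backward()
optimizer.step()
```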

Predictive neural networks: a main limitation of current neural networks is that they lack one of the most important features of the human brain: predictive power. One of the main theories of how the human brain works is that it constantly makes predictions: it performs predictive coding. If you think about it, you will see that we use it every day. You pick up an object you expect to be light and it turns out to be heavy; this surprises you, because as you reached for it you had already predicted how it would affect you, your body, and your surroundings.

Prediction not only lets us understand the world, it also lets us know when we do not understand it and when we should learn. In fact, we retain information about the things we did not know and that surprised us, so that we do not make the same mistake next time. This links directly to the brain's attention mechanism: we have an innate ability to discard 99.9% of our sensory input and focus only on the data that matters for our survival, such as where a threat is and where to run to escape it; or, in the modern world, where we left the phone as we rush out the door. Building predictive neural networks is central to interacting with the real world and acting in complex environments, which is why such a network sits at the core of any reinforcement learning system.
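As a rough sketch of this predictive idea, the snippet below trains a small network to predict the next frame from the current one and uses the size of the prediction error as a "surprise" signal. The frame size, the network, and the surprise threshold are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Toy predictive model: given the current frame, predict the next one.
predictor = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 256), nn.ReLU(),
                          nn.Linear(256, 32 * 32))
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def step(current_frame, next_frame, surprise_threshold=0.05):
    """One predictive-coding style update: a large prediction error signals
    surprise, i.e. something worth paying attention to and learning from."""
    prediction = predictor(current_frame)
    error = nn.functional.mse_loss(prediction, next_frame.flatten(1))
    optimizer.zero_grad()
    error.backward()
    optimizer.step()
    surprised = error.item() > surprise_threshold   # assumed threshold
    return error.item(), surprised

# Example with random 32x32 "frames" standing in for real video.
cur, nxt = torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)
print(step(cur, nxt))
```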

Addressing the limitations of neural networks

The limitations of current neural networks: they are unpredictable, hard to explain, and temporally unstable, so we need a new kind of neural network. Neural Network Capsules are one way to address these limitations, but we think they must have some additional features:

1) Operating on video frames: this is straightforward, because all we need to do is let the capsule routing examine multiple data points from the recent past. This amounts to building an associative memory over the most important recent data points. Note that these are not raw copies of recent frames, but their most recent, distinct representations: you can obtain distinct representations of different content simply by keeping only representations that differ from previously stored ones by more than a predefined value (see the sketch after this list). This important detail allows us to keep only relevant information from recent history, rather than a series of useless data points.

2) Predictive capability: this is already part of dynamic routing, which forces each layer to predict the representation of the next layer. This is a very powerful self-learning technique that, in our view, beats every other form of unsupervised learning our community has developed so far. Capsules now need to be able to predict long-term spatio-temporal relationships, but this has not yet been implemented.
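The "store only what is different enough" idea from point 1 can be sketched as a small memory buffer gated by a distance threshold. The use of cosine distance, the threshold value, and the capacity are assumptions for illustration, not a published capsule mechanism.

```python
import numpy as np

class NoveltyMemory:
    """Keep only recent representations that differ from every stored one
    by more than a preset threshold (cosine distance is an assumption)."""
    def __init__(self, threshold=0.2, capacity=100):
        self.threshold = threshold
        self.capacity = capacity
        self.items = []

    def _distance(self, a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    def maybe_store(self, representation):
        # Store only if it is sufficiently different from everything kept so far.
        if all(self._distance(representation, m) > self.threshold for m in self.items):
            self.items.append(representation)
            self.items = self.items[-self.capacity:]   # drop the oldest if full
            return True
        return False

memory = NoveltyMemory()
for _ in range(10):
    memory.maybe_store(np.random.rand(64))   # random stand-in representations
print(len(memory.items))
```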

Continual learning: this is important because neural networks need to keep learning new data points throughout their lives. Current networks cannot learn new data without being retrained from scratch each time. Neural networks need to be able to self-assess whether retraining is needed and to be aware of what they already know. This is also needed in real-life and reinforcement learning tasks, where we want machines to take on new tasks without forgetting the old ones.
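One simple way to let a model "know what it knows" is to monitor its error on incoming data and flag retraining when the running average drifts well above the level seen during training. The window size and drift factor below are illustrative assumptions, not a standard recipe.

```python
from collections import deque

class RetrainMonitor:
    """Track a running window of per-sample errors and flag when the
    average drifts well above the error level seen during training."""
    def __init__(self, training_error, window=200, drift_factor=1.5):
        self.training_error = training_error   # baseline measured during training
        self.drift_factor = drift_factor       # assumed drift threshold
        self.recent = deque(maxlen=window)

    def observe(self, sample_error):
        self.recent.append(sample_error)
        avg = sum(self.recent) / len(self.recent)
        return avg > self.drift_factor * self.training_error  # True => retrain

monitor = RetrainMonitor(training_error=0.08)
for err in [0.07, 0.09, 0.25, 0.30]:          # imagined per-sample errors
    if monitor.observe(err):
        print("distribution has shifted: schedule retraining")
```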

Transfer learning: how do we make these algorithms learn on their own by watching videos, the way we learn how to cook a new dish by watching someone else? This ability requires all of the components listed above and is also important for reinforcement learning. It would let you train a machine to do what you want simply by giving it examples, just as we do with people.
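A common concrete form of transfer learning is to reuse a network trained on one task and fine-tune only its final layer on a new one. The sketch below uses torchvision's ResNet-18 as the pretrained backbone and an invented 5-class target task purely as an example; it is not the scheme the article proposes.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet and freeze its weights.
backbone = models.resnet18(pretrained=True)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classifier with a new head for, say, 5 new classes.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Only the new head is trained on the new task's data.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
images, labels = torch.rand(8, 3, 224, 224), torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(backbone(images), labels)
loss.backward()
optimizer.step()
```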

Reinforcement learning: this is the holy grail of deep neural network research: teaching machines to learn to act in a real-world environment. It requires self-learning, continual learning, predictive power, and a lot of things we do not yet understand. There is a great deal of work in the field of reinforcement learning, but in the author's view it has only scratched the surface of the problem.

Reinforcement learning is often described as the "cherry on the cake", meaning it is only a thin layer of training on top of a plastic synthetic brain. But how do we get that general-purpose brain that then solves every problem easily? It is a chicken-and-egg problem. Today, to solve reinforcement learning problems one by one, we use standard neural networks: a deep network that takes large amounts of input data, such as video or audio, and compresses it into a representation; and a sequence-learning network, such as an RNN, to understand the task.
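The sketch below shows that standard pipeline in minimal PyTorch form: a small CNN compresses each video frame and a GRU integrates the sequence to score actions. The frame size, layer sizes, and number of actions are assumptions chosen to keep the example short.

```python
import torch
import torch.nn as nn

# The "standard" pieces described above: a CNN that compresses each
# video frame, and an RNN that integrates the sequence to choose actions.
class FrameEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten())
        self.fc = nn.Linear(32 * 9 * 9, 256)    # for assumed 84x84 input frames

    def forward(self, frame):
        return self.fc(self.conv(frame))

class RecurrentPolicy(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.encoder = FrameEncoder()
        self.rnn = nn.GRU(256, 128, batch_first=True)
        self.policy = nn.Linear(128, n_actions)

    def forward(self, frames):                  # frames: (batch, time, 3, 84, 84)
        b, t = frames.shape[:2]
        features = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        hidden, _ = self.rnn(features)
        return self.policy(hidden[:, -1])       # action scores for the last step

agent = RecurrentPolicy()
print(agent(torch.rand(1, 5, 3, 84, 84)).shape)   # torch.Size([1, 4])
```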

Both of these are obvious components of a solution, and clearly not sufficient on their own, but they are what everyone uses because they are the building blocks currently available. The results are unimpressive so far: yes, we can learn to play video games from scratch and master fully observable games such as chess and Go, but, needless to say, these are trivial compared to solving problems in the complicated real world. Imagine an AI that could play "Horizon Zero Dawn" better than humans; I would love to see that.

But that is what we want to see: machines that operate the way we do. Our proposal for reinforcement learning is to use predictive neural networks together with an associative memory that continuously stores recent experiences.

No more recurrent neural networks (RNNs): they parallelize particularly badly and are slow even on special custom hardware, because their memory bandwidth usage is very high and memory bandwidth is the limiting factor. Attention-based neural networks are more efficient, can be trained and deployed faster, and require fewer resources to scale. Attention could really change many architectures, yet it has not received the recognition it deserves. The combination of associative memory and attention is at the heart of the next wave of neural networks. We expect attention-based networks to gradually replace RNN-based speech recognition and to find their way into reinforcement learning frameworks and general artificial intelligence.
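For reference, the core attention operation mentioned here can be written in a few lines: every output is a weighted mix of the values, with weights given by query-key similarity, and all positions are processed in parallel rather than step by step as in an RNN. This is a generic scaled dot-product attention sketch, not a specific proposed architecture.

```python
import torch

def scaled_dot_product_attention(query, key, value):
    """Basic attention: outputs are weighted mixes of the values, with
    weights from query-key similarity. All positions are computed in
    parallel, which is the efficiency point made above."""
    scores = query @ key.transpose(-2, -1) / key.shape[-1] ** 0.5
    weights = torch.softmax(scores, dim=-1)
    return weights @ value

# Example: a sequence of 10 feature vectors of size 64 attending over itself.
x = torch.rand(1, 10, 64)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)   # torch.Size([1, 10, 64])
```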

Localization of information in classification neural networks: this is in fact a problem that has largely been solved, and the solution will be embedded into future neural network architectures.

Hardware

Deep learning hardware is at the core of progress. Let us not forget that the rapid expansion of deep learning from 2008 to 2012 and in recent years has depended mainly on hardware: cheap image sensors in every phone, helped by social media, allowed huge datasets to be collected, but that was only of secondary importance; it was GPUs that made accelerated training of deep neural networks possible. Over the past two years, machine learning hardware has boomed, especially hardware aimed at deep neural networks.

Several companies are working in this field, including NVIDIA, Intel, Nervana, Movidius, Bitmain, Cambricon, Cerebras, DeePhi, Google, Graphcore, Groq, Huawei, ARM, and Wave Computing. All of them are developing custom high-performance microchips that can train and run deep neural networks. The key is to deliver the lowest power consumption and the highest measured performance on the neural network operations that are actually useful today, rather than raw theoretical operations per second. But few people in this field understand how hardware really changes machine learning, neural networks, and AI, and few understand what matters in a microchip and how to develop one.

Training or inference: many companies are making microchips for training neural networks. This is NVIDIA's market; it has been the de facto training hardware so far. But training is only a small part of the life of a deep neural network: for each training run there are millions of deployments in actual applications. For example, an object detection network available in the cloud today is trained once and then applied to many images; once trained, it may serve billions of data items across millions of computers.

What we are trying to say is that, compared with the number of times a model is used, the amount of training hardware needed is small, and chips built for training require additional hardware and extra silicon. That makes them consume more power for the same performance, so they are not optimal for deployment. Training hardware matters, and it is easy to adapt from inference hardware, but it is not as important as many people think.
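A rough back-of-envelope comparison makes the point; the numbers below are invented purely for illustration and are not measurements from any real deployment.

```python
# Invented, illustrative numbers: a model is trained once on 1e7 images
# but is then deployed to run inference on 1e9 images across many devices.
ops_per_image_forward = 2e9          # assumed FLOPs for one forward pass
training_images = 1e7
inference_images = 1e9

training_ops = training_images * ops_per_image_forward * 3   # forward + backward, roughly 3x forward
inference_ops = inference_images * ops_per_image_forward

print(f"training:  {training_ops:.1e} FLOPs")
print(f"inference: {inference_ops:.1e} FLOPs")
print(f"inference / training ratio: {inference_ops / training_ops:.0f}x")
```

Under these assumed numbers, lifetime inference compute dwarfs training compute by more than an order of magnitude, which is why inference efficiency dominates deployment cost.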

Applications: hardware that trains faster and uses less power is important in this area because it lets us create and test new models and applications more quickly. But the really important step is the hardware required by applications, which is mainly inference hardware. Many applications are impossible today because of hardware, not software. For example, our phones could host voice-based assistants, but these are currently suboptimal because they cannot run continuously. Even our home assistants are tied to a power outlet and cannot follow us around unless we scatter microphones or devices everywhere. But perhaps the biggest application is to remove the phone screen from our lives altogether and embed it into our visual system. Without super-efficient hardware, all of this and more will be impossible.

Winners and losers: on the hardware side, the winners will be those who can deliver the best performance at minimal power and get their devices into the market quickly. Think of how the SoC in a mobile phone is replaced every year. Now imagine embedding a neural network accelerator into memory: this could conquer and penetrate the market faster, and that is what we would call a winner.

Applications

We touched on applications in the goals section above, but we need to discuss them in more detail. How will AI and neural networks enter our daily lives? Here is our list:

Classifying images and video: this already exists in many cloud services. The next step is to do the same thing on smart cameras at the edge, and there are many vendors here today. Neural network hardware will allow the cloud to be removed and more and more data to be processed locally; the winners will be those who protect privacy and save network bandwidth.

Voice assistants: they are becoming part of our lives, playing music and controlling basic devices in our smart homes. Conversation is a basic human activity that we tend to take for granted, and the small devices you can talk to are part of a revolution that is happening now. Voice assistants are getting better and better at serving us, but they are still tied to the power grid; the real assistant we want should be able to go with us everywhere. What about the phone? Here hardware wins again, because it will make that possible: Alexa, Cortana, and Siri will always be there for you. The phone will soon become your smart home device, another victory for the smartphone. But we also want it in our cars and with us as we move around the city. We need to process voice locally and rely less on the cloud: more privacy and lower bandwidth costs. The hardware for this is expected to be available within one to two years.

The real smart assistant: voice assistants are great, but what we really want is an assistant that can see what we see and analyze our surroundings as we walk around. Here, too, neural network hardware will fulfill the wish, because analyzing video is very expensive and currently beyond the reach of existing silicon. In other words, this is much harder to build than a voice assistant, but it is not impossible. Many smart startups such as AiPoly already have software like this, but lack the hardware to run it. Note also that replacing the phone screen with a wearable, glasses-like device would truly make our assistant part of us!

Cooking robots: the next big devices will be cooking and cleaning robots. Here, the hardware may soon be ready.
