Artificial intelligence is a holy grail of technology, one that scientists pursue through neural-network techniques. Accelerating AI would yield ever higher-performing systems.
Brain signals travel at around 130 metres per second. Light, on the other hand, travels at around 300,000 kilometres per second. Imagine artificial intelligence algorithms operating at the speed of light, the fastest speed physically possible. Many researchers are working to optimise neural networks so that computers can solve complex mathematical calculations at this blazing speed.
Why do we need artificial intelligence? One answer is that robots could take over dangerous jobs and save millions of lives, making the human race feel safer. What impact will artificial intelligence have on our lives? For some, the highest-performing artificial intelligence sets a dangerous precedent for a future too dependent on machines to make decisions about everything, from medical diagnoses to the behaviour of self-driving vehicles.
Some vendors already have their eyes on these blazing speeds, and I believe they are the best route we could take toward smarter devices.
Artificial intelligence involves all those operations characteristic of human intellect and performed by computers. These include planning, language comprehension, object and sound recognition, learning, and problem-solving. Our body, through various sensory inputs, can recognise certain situations and make relevant decisions.
Machine learning is one road to implementing artificial intelligence. Deep learning is an approach inspired by the structure of the brain, that is, by the interconnection of its many neurons.
Deep learning technology is based on artificial neural networks that are fed learning algorithms and ever-increasing amounts of data to improve the efficiency of training. The technique consists mainly of two phases: training and inference.
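The two phases can be sketched with a toy model; everything below (the synthetic data, the learning rate, the iteration count) is an illustrative assumption, not something from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points labelled by which side of a line they fall on.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0

def predict(X, w, b):
    """Inference phase: apply already-learned parameters to inputs."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Training phase: repeatedly adjust the parameters to reduce error.
for _ in range(500):
    p = predict(X, w, b)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean((predict(X, w, b) > 0.5) == y)
```

Training is the expensive, data-hungry phase; inference reuses the learned parameters and is comparatively cheap, which is why the two phases are often run on different hardware.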
The IoT is a set of connected sensors; thanks to artificial intelligence and control systems, it becomes possible to make decisions and drive actuators that control various movements (for example, robot arms).
A neural network is inspired by the functioning of the human brain. It is a calculation system made up of interconnected units (such as neurons), which process information by responding to external inputs, thus transmitting the relevant information between different units.
Artificial neural networks work on the same logic, but the “neurons” are just mathematical equations. Artificial neural networks contain thousands upon thousands of these “mathematical neurons” arranged in layers; each performs a calculation step and then passes its result on to the next layer (see figure 1).
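One way to picture these layered "mathematical neurons" is a forward pass: each layer computes a weighted sum plus a nonlinearity and hands its output to the next layer. The layer sizes and random weights below are illustrative placeholders, not values from any real model:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    """A simple nonlinearity: negative values are clipped to zero."""
    return np.maximum(0.0, x)

# Three layers; each "neuron" is one column of weights plus a bias.
layers = [
    (rng.normal(size=(4, 16)), np.zeros(16)),   # input  -> hidden
    (rng.normal(size=(16, 16)), np.zeros(16)),  # hidden -> hidden
    (rng.normal(size=(16, 3)), np.zeros(3)),    # hidden -> output
]

def forward(x, layers):
    """Each layer performs its calculation, then passes the result on."""
    for W, b in layers:
        x = relu(x @ W + b)
    return x

out = forward(rng.normal(size=(1, 4)), layers)
```

Note that each layer needs only the previous layer's output, which is what makes the layered structure so easy to stack and to parallelise.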
GPU, TPU, and Deep Learning
With increasing data flow (IoT and smart cities and homes), deep learning is becoming a key player in delivering big-data predictive-analytics solutions. Deep learning demands heavy computation, and GPUs can churn through the large amounts of data needed to train models.
The computational units of graphics cards have a different architecture from central processing units. Instead of processing one instruction at a time in serial mode, they carry out thousands of operations simultaneously, in parallel. As a result, the computing power of a typical GPU can be tens of times higher than that of a typical CPU.
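The serial-versus-parallel contrast can be felt even on a CPU, using NumPy's data-parallel array operations as a small-scale stand-in for what a GPU does across thousands of cores; the array size here is an arbitrary choice:

```python
import time
import numpy as np

# Element-wise multiplication of a million numbers.
a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Serial style: one multiplication at a time, like a single core
# stepping through instructions one by one.
t0 = time.perf_counter()
serial = [a[i] * b[i] for i in range(len(a))]
t_serial = time.perf_counter() - t0

# Data-parallel style: one operation expressed over the whole array,
# the programming model that GPUs exploit at massive scale.
t0 = time.perf_counter()
parallel = a * b
t_parallel = time.perf_counter() - t0
```

Both compute the same result, but the data-parallel version is dramatically faster because the work is expressed as a single bulk operation rather than a million individual steps.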
Over the years, however, GPU computing power has proved insufficient for some applications, and, above all, their high energy consumption has made them nearly obsolete for large-scale machine learning and artificial intelligence.
Tensor processing units (TPUs) offer 15 to 30 times the performance of the most powerful GPUs and are 30 to 80 times more energy-efficient than CPUs and GPUs. Google is active in this field and, some time ago, presented its second-generation TPUs.
Tensor processing units will boost the development of the most advanced machine learning and artificial intelligence applications, including speech recognition, instant translation, and imaging (see figure 2).
AI at the Speed of Light
Can hardware and software technologies bring deep learning into massive use? Can we operate at the speed of light?
A team of researchers designed an artificial neural network capable of analysing large volumes of data and identifying objects at the speed of light. This network, called a diffractive deep neural network (D2NN), uses a light-scattering technique that exploits the optical properties of objects. The technology draws on deep learning and consists mainly of passive diffractive layers that work collectively.
These layers form an “optical network” that modulates light. The network can identify an object because the light coming from the object is mainly diffused toward a single pixel assigned to that type of object.
The team ran computer simulations and used a 3D printer to create polymeric wafers with irregular surfaces that diffract light. The device is made up of five parallel 3D-printed sheets. Each wafer contains thousands of pixels through which light travels and is diffracted. The D2NN is therefore primarily a set of pixels, each associated with an object class. The researchers employed deep-learning techniques and a terahertz light source to train the model according to the light diffracted by each object.
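Purely as a conceptual toy, this pipeline can be mimicked in software: phase masks stand in for the printed wafers (in the real D2NN the masks are learned, here they are random placeholders), a Fresnel-style Fourier step stands in for free-space diffraction between sheets, and the brightest detector region picks the class. None of the numbers below come from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32        # grid size of each simulated wafer
LAYERS = 5    # five parallel diffractive sheets, as in the article
CLASSES = 10  # one detector region per object class

# Each layer is a pure phase mask; learned in the real device,
# random placeholders here.
masks = [np.exp(1j * rng.uniform(0, 2 * np.pi, size=(N, N)))
         for _ in range(LAYERS)]

def propagate(field):
    """Toy Fresnel-style step: phase curvature applied in Fourier space
    as a stand-in for free-space diffraction between sheets."""
    fx = np.fft.fftfreq(N)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * (FX**2 + FY**2) * 50.0)  # arbitrary distance
    return np.fft.ifft2(np.fft.fft2(field) * H)

def classify(image):
    """Pass light through every sheet, then read out detector intensities."""
    field = image.astype(complex)
    for mask in masks:
        field = propagate(field * mask)   # modulate, then diffract
    intensity = np.abs(field) ** 2
    # The region receiving the most light determines the class.
    regions = np.array_split(intensity.ravel(), CLASSES)
    return int(np.argmax([r.sum() for r in regions]))

label = classify(rng.random((N, N)))
```

The key design idea survives even in this sketch: every "computation" is a passive optical operation (phase modulation and diffraction), so once the masks are fixed, classification costs essentially no energy beyond the light itself.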
The network has proved to be scalable, since it can easily be resized with 3D-printing techniques, and it is easily configurable; above all, it works at the speed of light with minimal energy consumption.
The device was trained on 55,000 images and, after training, recognised them with an accuracy of 91.75% (see figure 3).
We need a robust research area that evaluates the social and economic effects of artificial intelligence systems, so as to enhance their positive effects and reduce risks to people. By measuring the impact of these technologies, we can strengthen the design and development of artificial intelligence, help public and private entities ensure the reliability and transparency of their systems, and reduce the risk of errors.
Artificial intelligence is destined to revolutionise technology. Digital transformation employs new tools such as machine learning, with the support of the Internet of Things, to make objects even smarter. In recent decades, artificial intelligence (AI) has boomed, finding application in fields such as voice recognition and self-driving cars. Several high-profile voices have warned us about its possible risks: from the danger of the “singularity,” the point at which machines gain full control over humans, to cyber-attacks, which threaten digital security and involve not only privacy and data theft but also the functional safety of people in an all-connected ecosystem (IoT and IIoT).
But really, it depends on how the machine has been programmed (or trained). Programmed properly, artificial intelligence will perform its tasks according to the rules rather than behaving like a hacker.