Computer vision is a subfield of artificial intelligence whose goal is to build computers that replicate the visual intelligence of the human brain. The exponential growth of available data, continuous improvements in hardware, and the development of new, more sophisticated algorithms have together enabled significant results in the field. The safety and convenience promised by vehicle automation depend on tight process control and correct data management; the goal is to maximise the capabilities of these systems and meet ever-stricter demands for vehicle safety and reliability. The demand for advanced driver assistance systems (ADAS) and new computer vision solutions is driven by the desire to reduce the number of road accidents, and it is something engineers should be keenly aware of.
Computer vision comprises a series of processes that create an approximate three-dimensional model of the real world from two-dimensional images. The aim is to replicate human vision and its interpretative process so that decisions can be made about a specific area of interest. Computer vision is a continually evolving market segment, and it embraces every potential field of technological application, including automotive and self-driving cars as well as the whole Industry 4.0 sector.
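The relationship between the 3D world and its 2D image runs through the pinhole camera model. The following is a minimal NumPy sketch; the intrinsic values (focal lengths, principal point) and the 3D point are assumed illustrative numbers, not parameters of any real camera:

```python
import numpy as np

# Illustrative intrinsic matrix: focal lengths and principal point are assumed values
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_3d, K):
    """Project a 3D point (camera frame, metres) onto the image plane (pinhole model)."""
    p = K @ point_3d          # homogeneous image coordinates
    return p[:2] / p[2]       # perspective divide -> pixel (u, v)

# A point half a metre right, 20 cm up, 2 m ahead of the camera
uv = project(np.array([0.5, -0.2, 2.0]), K)
```

Recovering 3D structure from 2D images is the inverse of this projection, which is why stereo pairs or motion are needed to resolve the lost depth.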
To identify the contents of an image, the computer uses methodologies such as feature detection and corner detection. In simple terms, computer vision algorithms look for lines that meet at an angle and for regions with a distinctive shade of colour; these corners and features are the building blocks from which the more detailed information contained in an image is derived. To facilitate recognition, the algorithm also performs structural analysis and segmentation of the image to locate the regions of interest, providing information on the spatial arrangement of colours or intensities.
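Corner detection can be illustrated with a Harris-style response computed from image gradients. This is a minimal NumPy sketch, not a production detector; the window size, the constant k, and the synthetic test image are assumptions for illustration:

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2 over a local window."""
    Iy, Ix = np.gradient(img.astype(float))          # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy        # structure-tensor entries

    def box(a):
        # Sliding-window sum via shifted copies (simple box filter)
        out = np.zeros_like(a)
        h = win // 2
        for dy in range(-h, h + 1):
            for dx in range(-h, h + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic test image: a bright square on a dark background
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
R = harris_response(img)
```

The response is strongly positive where two edges meet (the square's corners) and negative along straight edges, which is exactly the "lines that meet at an angle" intuition above.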
Computer vision is closely related to machine learning. Through supervised learning, computer vision algorithms are trained on massive sets of labelled data so that the data can be organised in a meaningful way. This step is necessary to compare new images or videos with similar ones, to establish what the analysed image represents, and to determine what can be learned from it.
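A toy illustration of training on labelled data: a nearest-centroid classifier over two-dimensional feature vectors. The labels and feature values are invented for illustration; real pipelines operate on far richer image features:

```python
import numpy as np

# Toy labelled dataset: 2-D feature vectors standing in for extracted image features
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])   # labels: 0 = "pedestrian", 1 = "vehicle" (illustrative)

# Training: one centroid per class, computed from the labelled examples
centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(x):
    """Assign the label of the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

However simple, this captures the essential loop: labelled examples define the classes, and a new image is interpreted by comparing it against what was learned.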
ADAS consist of embedded control hardware and software that use signals acquired by sensors to electronically control driving systems such as the engine, transmission, and brakes. Many companies, including Texas Instruments, Analog Devices, Maxim Integrated, and Microchip, have taken up this challenge. ADAS depend on a series of devices such as radar, LiDAR, ultrasonic sensors, photonic mixing devices, cameras, and other solutions that allow a vehicle to monitor near and far fields in every direction, and to evolve and improve the sensor algorithms that ensure the safety of passengers and pedestrians under varying traffic and weather conditions.
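One common way such heterogeneous sensors are combined is to fuse independent range estimates by inverse-variance weighting, a minimal sketch of sensor fusion; the measurements and variances below are illustrative numbers, not real sensor specifications:

```python
def fuse(range_radar, var_radar, range_cam, var_cam):
    """Inverse-variance weighted average of two independent range estimates (metres)."""
    w_r, w_c = 1.0 / var_radar, 1.0 / var_cam
    fused = (w_r * range_radar + w_c * range_cam) / (w_r + w_c)
    fused_var = 1.0 / (w_r + w_c)          # fused estimate is more certain than either input
    return fused, fused_var

# Radar says 20.0 m (low noise); camera says 21.0 m (higher noise)
fused_range, fused_var = fuse(20.0, 0.25, 21.0, 1.0)
```

The fused variance is smaller than either input variance, which is the statistical reason for carrying multiple sensor modalities in the first place.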
Modern ADAS act in real time, either warning the driver or operating the control systems directly. An ADAS should also be accurate and fast in data processing, robust, reliable, and have low error rates. In addition to these functional requirements, ADAS must be protected from malicious attacks, which could compromise the system and deliberately cause accidents with loss of life (see figure 1). The cameras must deliver image data to the processing unit, and receive commands in return, as quickly and efficiently as possible; among the significant trade-offs in the design of ADAS camera systems are image quality, bandwidth, and latency.
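The bandwidth side of that trade-off can be estimated directly from the sensor parameters. A small helper, where the resolution, bit depth, and frame rate are assumed example values rather than the figures of any specific ADAS camera:

```python
def camera_bandwidth_mbps(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) video bandwidth in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

# Example: 1280x960 sensor, 12-bit raw pixels, 30 frames per second
bw = camera_bandwidth_mbps(1280, 960, 12, 30)   # ~442 Mb/s of raw pixel data
```

Raising resolution or frame rate to improve image quality multiplies the link bandwidth, while compressing to save bandwidth adds latency: the three quantities cannot all be optimised at once.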
Pedestrian detection systems are based on cameras: a single camera or, in the most sophisticated systems, stereo cameras for better depth perception. A driver drowsiness alert system monitors the driver's face to assess the position of the head, the eyes, and other factors that may indicate a lack of attention.
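One common proxy for eye closure in the drowsiness-detection literature is the eye aspect ratio (EAR), computed from six eye landmarks. A minimal sketch with invented landmark coordinates; a real system would obtain these from a facial landmark detector:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks; the ratio drops toward 0 as the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)   # lid separation
    horizontal = np.linalg.norm(p1 - p4)                           # eye-corner width
    return vertical / (2.0 * horizontal)

# Illustrative landmark sets (coordinates are assumed, not from a real detector)
open_eye   = np.array([[0, 0], [1, 2], [2, 2], [3, 0], [2, -2], [1, -2]], float)
closed_eye = np.array([[0, 0], [1, 0.2], [2, 0.2], [3, 0], [2, -0.2], [1, -0.2]], float)
```

A sustained low EAR over consecutive frames, rather than a single low value, is what would typically trigger an inattention warning.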
Electronic control is handled by ECUs (electronic control units). To achieve the combination of performance, cost, and flexibility typical of a software-based approach, many system architects are considering field-programmable gate arrays (FPGAs) or SoC solutions. The IVS-70 camera (figure 2), for example, based on parallel computation algorithms running on heterogeneous SoCs, acquires and processes stereo video images thanks to AMD G-Series SoCs and Microsemi SmartFusion2 FPGAs.
In various market studies, the automotive industry is cited as the largest market for artificial vision. Qualcomm's vision intelligence SoC platform, for example, positions itself as a fixed point for the IoT, exploiting 10 nm FinFET process technology and offering resources for on-device camera processing wherever energy savings matter. These SoCs integrate an image signal processor (ISP) and the Qualcomm AI Engine on a heterogeneous computing architecture with a multi-core CPU, a vector processor, and a GPU (figure 3). Similarly, the R-Car V3H system-on-chip (SoC) by Renesas provides high-performance computing for computer vision with low power consumption, and it is also targeted at the next generation of electric vehicles.
Artificial intelligence, along with machine vision solutions, dramatically improves the technology of many industries. In the automotive sector, it makes possible new driving assistance solutions that improve road safety. With the rapid improvement of advanced driver assistance systems (ADAS) and automated driving technologies, self-driving vehicles are becoming a reality. Engineers should take note.