The Future of Innovation: Embracing Neural Networks

Bryce Johnstone from Imagination Technologies predicts that convolutional neural networks (CNNs) will be crucial for autonomous driving.

Neural networks are making their way into many fields. They customize your social media feeds, enhance your photos, and drive features like eye-tracking in AR and VR headsets. They also strengthen security through better facial recognition and crowd-behavior analysis, and can outperform humans at spotting fraud in online payments.

In the automotive world, neural networks will power driverless car systems for collision avoidance and identifying objects on the road. They’ll also handle tasks like monitoring driver alertness, tracking where the driver is looking, detecting road signs, and more. As autonomous vehicles become more common, the use of these networks will grow too.

In advanced driver-assistance systems (ADAS), moving up to higher levels of autonomy means cars need more sensors, especially cameras. These cars must make sense of their surroundings using technologies such as lidar, radar, and infrared. Sensor fusion is the process of combining input from all of these sensors into a single picture of the environment so the car can act accordingly.
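As a toy illustration of the fusion idea, consider combining distance estimates for one obstacle reported by several sensors. The sketch below uses a confidence-weighted average; the sensor readings and confidence values are made up for illustration and do not come from any real ADAS stack, which would use far more sophisticated techniques such as Kalman filtering.

```python
# Illustrative sensor-fusion sketch: fuse distance estimates for one
# obstacle from multiple sensors via a confidence-weighted average.
# All readings and confidences below are hypothetical.

def fuse_estimates(readings):
    """readings: list of (distance_m, confidence) pairs, one per sensor."""
    total_weight = sum(conf for _, conf in readings)
    return sum(dist * conf for dist, conf in readings) / total_weight

# e.g. lidar, radar, and camera each report a distance to the same object
readings = [(42.0, 0.9), (41.5, 0.7), (44.0, 0.4)]
fused = fuse_estimates(readings)  # weighted toward the high-confidence lidar
```

Weighting by confidence means a noisy camera estimate in fog, say, pulls the fused value less than a crisp lidar return.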

At level-two autonomy, drivers must still stay engaged and ready to take control. To reach levels four and five, the car needs to process far more data, fast enough to perform actions like automatic braking and steering on its own.

To interpret data from multiple cameras efficiently, designers use convolutional neural networks. CNNs are loosely inspired by the human brain, using layers of mathematical operations to make sense of the data they receive, whether it's images, speech, or text. For neural networks to be effective, they need extensive training with large amounts of data.

For instance, to create a road-sign recognition system, the network would need to process hundreds of thousands or even millions of road sign images from various angles and under different conditions. During this training phase, the network learns to recognize various road signs.
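The training loop itself boils down to repeatedly nudging weights to reduce error. The sketch below shows that idea on a deliberately tiny scale: gradient-descent logistic regression separating two classes of made-up 2-D feature vectors standing in for "stop sign" vs. "not a stop sign". A real system would instead train a deep CNN on millions of labelled images.

```python
import numpy as np

# Toy training loop: repeatedly adjust weights by gradient descent so a
# simple model separates two classes of hypothetical feature vectors.
# Stands in for CNN training, which follows the same update pattern at
# vastly larger scale.

X = np.array([[0.9, 0.8], [0.8, 0.9],    # "stop sign" examples
              [0.1, 0.2], [0.2, 0.1]])   # "not a stop sign" examples
y = np.array([1., 1., 0., 0.])           # ground-truth labels

w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(500):                         # the "training phase"
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)          # gradient of the loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                         # nudge weights downhill
    b -= lr * grad_b

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

After training, `preds` matches the labels: the model has "learned" the toy classes, just as a CNN learns road-sign categories from its image set.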

Once trained, the network performs inference: recognizing new data in real time, such as spotting a road sign while driving. Training a network to be highly accurate requires a lot of computing power, typically provided by arrays of GPUs in data centers.
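At inference time the weights are frozen; one forward pass turns raw network outputs into a decision. The sketch below shows only that final step: hypothetical class scores for a single camera frame converted into a label via softmax and argmax, a common last stage of CNN inference.

```python
import numpy as np

# Inference sketch: turn raw network scores for one frame into a label.
# The label set and score values are hypothetical.

LABELS = ["stop", "yield", "speed_limit"]

def softmax(scores):
    e = np.exp(scores - np.max(scores))  # shift for numerical stability
    return e / e.sum()

scores = np.array([2.1, 0.3, 5.4])       # made-up outputs from a trained net
probs = softmax(scores)                  # per-class probabilities
label = LABELS[int(np.argmax(probs))]    # pick the most likely sign
```

Unlike training, this step involves no weight updates, which is why inference can run in real time on far more modest hardware than the data-center GPUs used for training.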

CNNs can run on general computing resources like CPUs or GPUs, but GPUs generally handle neural-network workloads better, providing a significant performance boost at lower power consumption. Dedicated hardware optimized for specific networks and algorithms can offer even greater performance and power efficiency, allowing complex neural-network calculations to be executed efficiently at the network edge.
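One concrete trick behind the efficiency of dedicated edge hardware is quantization: storing weights as 8-bit integers instead of 32-bit floats, cutting memory traffic and enabling cheap integer arithmetic. The sketch below shows a minimal symmetric int8 quantizer; the weight values are illustrative, and production toolchains use considerably more careful calibration.

```python
import numpy as np

# Minimal sketch of symmetric int8 weight quantization, one reason
# dedicated accelerators run CNNs efficiently at the edge.
# Weight values below are hypothetical.

def quantize_int8(weights):
    scale = np.max(np.abs(weights)) / 127.0  # map the max weight to 127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.52, -1.27, 0.003, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)        # stored in a quarter of the space
w_approx = dequantize(q, scale)    # close to the original weights
```

The reconstruction error is bounded by half a quantization step, which well-trained networks typically tolerate with little accuracy loss.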

smartautotrends