How GPUs, AI and Deep Learning Make Smart Products Smarter – Part 1

September 6, 2017 By Alianna J. Maren, Ph.D.

Figure 1: Simulation-based training, such as NVIDIA’s Isaac simulator, allows robots and other AI-based applications to build deep, complex knowledge bases that include worst-case conditions before they are deployed in the real world. (Image courtesy of NVIDIA)

Introduction

This is an expanded, two-part version of the article that originally appeared in the September 2017 print edition of PD&D. It explores how advances in artificial intelligence (AI), advanced computing architectures, and low-cost, high-performance sensors are enabling the development of a growing number of commercial applications for autonomous vehicles, drones, robots, and other so-called “autonomous edge devices”.

Since AI combined with deep learning (DL) is a relatively new field, Part 1, presented here, provides a brief introduction to AI/DL systems and how they differ from conventional embedded systems. Part 2 looks at how the technology is applied in areas such as sensor fusion and autonomous vehicles.

AI vs. Standard Computing

“AI is just the modern way of doing software.” – Jensen Huang, co-founder & CEO, NVIDIA

Since AI-based systems are, at their core, still digital computers, the tools used to develop AI/DL applications have much in common with those used to program conventional computing applications. There are still functions and many other elements that programmers will recognize.

However, there are some aspects of how AIs work, and how they are prepared to support a specific application, that will seem strange, if not downright alien, to the average code-slinger. These differences arise from three basic characteristics of an AI:

(1) A world model that provides the AI with a “machine’s-eye view” of the world it is operating in.

(2) The AI’s ability to learn from new information and update its world model.

(3) The AI’s ability to make inferences that allow it to deal with situations that it has not specifically observed or learned about before.

These unique characteristics are responsible for one of the most fundamental differences between traditional software development and AI development. When programming a conventional computer, every response has to be pre-specified by the programmer. In contrast, a good deal of programming an AI involves allowing it to “learn” from examples of situations it will encounter, and training it to produce the responses it is expected to give.

The ability to learn and adapt without additional programming enables AI-based systems to do many things that were difficult or impossible to do with conventional computing. It will also become apparent that AI systems require much more powerful computing elements and present the developer with a number of new and complex technical challenges.

A Machine’s-Eye View

With AI, there are aspects where the program will “learn” what to do based on training examples set up by its developer. Part of the developer’s job is to assemble sufficient instances of the diverse kinds of data that the program can expect to encounter, and to identify the desired results for each of these cases. The AI program then learns to perform correctly using one of several possible learning algorithms.
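To make the contrast with conventional programming concrete, here is a minimal sketch in Python of learning from labeled examples; the data, labels, and learning rate are invented for illustration and are not from the article:

import numpy as np

# Hypothetical training set: input feature vectors and the desired label
# for each one. In a real system these would be sensor readings paired
# with human-supplied "correct answers."
X = np.array([[0.2, 0.9], [0.8, 0.1], [0.3, 0.8], [0.9, 0.2]])
y = np.array([1.0, 0.0, 1.0, 0.0])

w = np.zeros(2)  # learned weights -- the program starts knowing nothing
b = 0.0          # learned bias

for _ in range(1000):                         # the training loop
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # current predictions (sigmoid)
    grad = p - y                              # error signal per example
    w -= 0.1 * (X.T @ grad) / len(y)          # nudge weights toward correct answers
    b -= 0.1 * grad.mean()

print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b)))))  # approaches [1, 0, 1, 0]

Note that no response was ever pre-specified; the correct behavior emerges from the examples and the learning algorithm.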

An AI’s understanding of its environment relies on its world model, a complex data structure that contains its knowledge about things and the relationships between things. For example, an autonomous vehicle’s world model will know a great deal about roads, signs, other cars, and everything else that it can reasonably expect to encounter on the road. It allows the AI to know whether another car is stationary or moving, and whether the traffic light it “sees” is green or red. Its world model also contains information about relationships between things, such as the distance between itself and other cars, as well as how that distance is changing over time.

An AI builds its world model from a combination of information generated by the developer and raw sensor data that it collects, correlates and assembles into a coherent set of things and relationships.
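As a purely illustrative sketch, one minimal way those “things and relationships” might be represented in Python is shown below; the class and field names are hypothetical assumptions, not taken from any production system:

from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    """One 'thing' the AI knows about: a car, sign, pedestrian, etc."""
    obj_id: int
    category: str    # e.g. "car", "traffic_light", "pedestrian"
    position: tuple  # (x, y) in meters, relative to our vehicle
    velocity: tuple  # (vx, vy) in m/s; (0, 0) means stationary
    attributes: dict = field(default_factory=dict)  # e.g. {"light_state": "red"}

@dataclass
class WorldModel:
    """The AI's current picture of its surroundings."""
    objects: dict = field(default_factory=dict)  # obj_id -> TrackedObject

    def update(self, obs: TrackedObject):
        # Fold a new observation into the model, replacing stale state.
        self.objects[obs.obj_id] = obs

    def distance_to(self, obj_id: int) -> float:
        # A 'relationship': range from our vehicle (the origin) to an object.
        x, y = self.objects[obj_id].position
        return (x * x + y * y) ** 0.5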

In order to function effectively, the AI must combine the data it collects with analysis of its past performance to consistently update and refine its world model. In real-world, real-time applications such as autonomous vehicles, the AI updates its model based on inputs from multiple sensors, many of which provide visual data from cameras or image-like inputs (LIDAR, RADAR, etc.).

This task of assembling a unified “picture” from a heterogeneous collection of sensors is known as sensor fusion, a technique that will be discussed in greater depth in Part 2 of this article.
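To give a flavor of the underlying idea, the deliberately simplified Python sketch below fuses two hypothetical, noisy range estimates, say one from a camera and one from RADAR, by weighting each according to its certainty (inverse-variance weighting); real sensor-fusion pipelines are far more elaborate:

# Inverse-variance weighting: a classic, minimal form of sensor fusion.
# Each sensor reports a distance estimate plus its variance (uncertainty).
def fuse(est_a, var_a, est_b, var_b):
    w_a = 1.0 / var_a  # more certain sensors get more weight
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate beats either sensor alone
    return fused, fused_var

# Hypothetical readings: camera says 41.0 m (noisy), RADAR says 40.2 m (precise).
print(fuse(41.0, 4.0, 40.2, 0.25))  # -> (~40.25 m, variance ~0.235)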

The Power of Inference

In addition, the types of AI/DL applications currently under development will be required not only to learn, but also to make inferences from what they already know about the world in order to figure out how to respond to novel situations. This involves applying a method (or multiple methods) for identifying the already-learned situations that are most relevant to the new one, and then interpolating between them.

Making reliable inferences is especially challenging because it requires the AI to identify the situations it already knows about that are the closest match to its current inputs, using information that is noisy or incomplete. In the case of an autonomous vehicle, for example, a sudden change of pixel values between two objects that it has previously identified as parked cars can have multiple interpretations and correspondingly different responses. If the AI infers that the newly detected features are a paper bag blown by the wind, it will respond very differently than if it infers they are a child.
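One toy way to picture that matching-and-interpolating step, with made-up feature vectors and urgency scores standing in for a real learned model, is a nearest-neighbor lookup over already-learned situations:

import numpy as np

# Hypothetical library of learned situations: feature vector -> response urgency (0-1).
known_features = np.array([[0.9, 0.1, 0.0],   # "plastic bag drifting"
                           [0.2, 0.8, 0.7],   # "child stepping out"
                           [0.1, 0.2, 0.1]])  # "stationary parked car"
known_urgency = np.array([0.1, 1.0, 0.0])

def infer_urgency(current, k=2):
    """Blend (interpolate) the responses of the k closest learned situations."""
    dists = np.linalg.norm(known_features - current, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)  # closer matches count more
    return np.average(known_urgency[nearest], weights=weights)

# A noisy, ambiguous observation that falls between "bag" and "child":
print(infer_urgency(np.array([0.5, 0.5, 0.4])))  # ~0.5: hedge toward caution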

Extracting inferences from a model typically requires a great deal of computation. Some of these tasks may exceed the processing abilities of the AI’s on-board resources and must be off-loaded to a more powerful system (typically cloud-based) via a cellular connection or other wireless link. Developing strategies for efficiently segmenting computing tasks between local and remote resources presents several challenges, which are addressed in Part 2 of this article.
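A rough sketch of one such segmentation strategy appears below; the thresholds, latency figure, and function are invented purely to illustrate the trade-off between on-board deadlines and off-board horsepower:

# Hypothetical local/remote task scheduler for an autonomous edge device.
LOCAL_BUDGET_GOPS = 30.0  # compute the on-board processor can spare (assumed)
LINK_LATENCY_MS = 120.0   # round-trip time to the cloud (assumed)

def run_locally(task_cost_gops, deadline_ms):
    """Decide where a task runs: on-board if it must finish before the
    wireless round trip could complete, or if it is cheap enough anyway."""
    if deadline_ms < LINK_LATENCY_MS:  # safety-critical task, cannot wait
        return True
    return task_cost_gops <= LOCAL_BUDGET_GOPS

print(run_locally(task_cost_gops=5.0, deadline_ms=50.0))      # True: must stay local
print(run_locally(task_cost_gops=500.0, deadline_ms=5000.0))  # False: offload to cloud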

Thus, much of what the various auto manufacturers are developing as they train their AI systems is a combination of two things: learning across a diverse range of scenarios, which can be done through both actual experience and simulation, and developing and testing inference techniques that enable the AI to deal with situations that cannot be pre-learned.

 

GPUs – Not Your Grandma’s CPUs

Artificial intelligence’s evolution from an academic curiosity to a commercially viable technology has been largely due to the past decade’s advances in computing hardware. Most of the recent breakthroughs in AI applications are based on algorithms and processing techniques that were originally developed during the latter part of the 20th century but could only be run on the largest, fastest computers of the time. This has changed as computing densities increased and new computing architectures became available.

Arguably, the most common AI-friendly computing architecture is the GPU (graphics processing unit), originally developed as an array processor optimized for the pixel and vector manipulation tasks associated with graphics and video acceleration. Those same capabilities are also essential for accelerating neural networks, whose workloads are essentially large-scale vector and matrix computations.
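To see why, note that a single neural-network layer boils down to a matrix-vector product, exactly the kind of uniform, parallel arithmetic a GPU is built for; the layer sizes in this generic NumPy illustration are arbitrary:

import numpy as np

# One neural-network layer: every output neuron is a weighted sum of all
# inputs, i.e. a matrix-vector product -- the GPU's native workload.
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 128))  # 256 neurons, 128 inputs each
inputs = rng.standard_normal(128)

activations = np.maximum(weights @ inputs, 0.0)  # matrix-vector product + ReLU
print(activations.shape)  # (256,)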

Figure 2. A block diagram of an early GPU (Radeon 9700). (Image by ScotXW, courtesy of Wikipedia)

NVIDIA, in particular, has spearheaded the GPU’s evolution with a series of processors, each tailored to different processing environments and purposes. Among these is Xavier, a system-on-a-chip (SoC) designed for autonomous cars that will be capable of 20 TOPS (trillion operations per second) of performance while consuming only 20 watts of power. Xavier integrates a GPU based on NVIDIA’s new Volta architecture, along with a custom 8-core CPU and a new computer vision accelerator.

Figure 3. NVIDIA’s Xavier System-on-Chip. (Image courtesy of NVIDIA)

NVIDIA also pioneered the use of software tools like CUDA, a parallel computing platform and application programming interface (API) model that allows developers to create powerful GPU-based applications without having to master the intricacies of the GPU’s unique architecture and command set. In particular, CUDA allows developers to create applications for GPUs using general-purpose programming languages such as C, C++, and Fortran. A CUDA application can also divide its work, running sequential tasks on a standard CPU while assigning tasks with high levels of parallelism to the GPU, resulting in what NVIDIA refers to as “GPU computing.”
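The following minimal sketch shows that division of labor from Python, using the third-party Numba library’s CUDA bindings rather than CUDA C (an assumption of convenience; it requires the numba package and a CUDA-capable GPU): the host CPU handles setup and sequential control flow, while the data-parallel loop body runs across thousands of GPU threads:

from numba import cuda
import numpy as np

@cuda.jit
def add_kernel(a, b, out):
    # Each GPU thread handles exactly one element of the arrays.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.arange(n, dtype=np.float32)  # sequential setup runs on the CPU
b = 2.0 * a
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](a, b, out)  # parallel work runs on the GPU

print(out[:4])  # [0. 3. 6. 9.]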

In the second installment of this article, we will look at how these technologies are being applied to commercial applications, including sensor fusion, autonomous vehicles, and security systems.

About the Author: Alianna J. Maren, Ph.D., is a scientist and inventor with four artificial intelligence-based patents to her credit. She teaches AI and deep learning in Northwestern University’s School of Professional Studies, Master of Science in Predictive Analytics program. She is the senior author of the Handbook of Neural Computing Applications (Academic Press, 1990) and is working on a new book, Statistical Mechanics, Neural Networks, and Machine Learning. Her weekly blog, posted at www.aliannajmaren.com, addresses topics in neural networks, deep learning, and machine learning.

References:

  1. NVIDIA’s Isaac robot simulator (May 10, 2017): https://nvidianews.nvidia.com/news/nvidia-ushers-in-new-era-of-robotics-with-breakthroughs-making-it-easier-to-build-and-train-intelligent-machines
  2. Jensen Huang quote about AI is just the “modern way of doing software”: https://techcrunch.com/2017/05/05/ai-everywhere/
  3. NVIDIA’s Xavier (for autonomous vehicles): https://blogs.nvidia.com/blog/2016/09/28/xavier/
  4. NVIDIA’s Volta (for AI, inferencing): https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/
  5. SAE (Society of Automotive Engineers) levels for automobile autonomy: https://www.sae.org/news/3544/
  6. Industry timeline for different SAE levels: https://venturebeat.com/2017/06/04/self-driving-car-timeline-for-11-top-automakers/
  7. Qualcomm (total volume of data per vehicle per day by 2020): https://www.qualcomm.com/solutions/automotive/drive-data-platform
  8. Mentor (total numbers of sensors per car, shift to a single FPGA-based sensor fusion processor): https://www.techdesignforums.com/blog/2017/04/06/autonomous-vehicle-drs360/
  9. Qualcomm (wireless as well as other factors): https://www.theregister.co.uk/2017/01/12/ces_2017_all_wireless_energies_are_channelled_to_the_connected_car/
  10. NVIDIA teams with Microsoft – HGX-1: https://nvidianews.nvidia.com/news/nvidia-and-microsoft-boost-ai-cloud-computing-with-launch-of-industry-standard-hyperscale-gpu-accelerator
  11. NVIDIA’s Xavier: 1st AI car superchip: https://blogs.nvidia.com/blog/2016/09/28/xavier/
  12. NVIDIA Jetson TX2: https://devblogs.nvidia.com/parallelforall/jetson-tx2-delivers-twice-intelligence-edge/
  13. NVIDIA Jetson TX2 specs: https://www.jetsonhacks.com/2017/03/14/nvidia-jetson-tx2-development-kit/tx1vstx2/
  14. More NVIDIA Jetson TX2 specs (tech details): https://elinux.org/Jetson_TX2
  15. Qualcomm acquisition of NXP Semiconductors: https://www.theregister.co.uk/2017/01/12/ces_2017_all_wireless_energies_are_channelled_to_the_connected_car/
  16. Intel acquisition of Mobileye: https://optics.org/news/8/3/19
  17. Mobileye – 5th gen chip: https://www.mobileye.com/our-technology/evolution-eyeq-chip/ 
  18. NVIDIA teams with Bosch: https://blogs.nvidia.com/blog/2017/03/16/bosch/
  19. Toyota’s Research Institute –home robotics: https://www.tri.global/about/
  20. Audio Analytic: https://www.audioanalytic.com/
  21. Smart manufacturing and robotics (Jan. 27, 2017 article): https://roboticsandautomationnews.com/2017/01/27/smart-factories-to-drive-industrial-robot-market-says-report/10998/
  22. Cobots allow workers to produce entire systems, not just a single components (May, 2016): https://news.nationalgeographic.com/2016/05/financial-times-meet-the-cobots-humans-robots-factories/
  23. Ford’s use of robotics – automobile manufacturing in Germany https://www.cnbc.com/2016/10/31/ford-uses-co-bots-and-factory-workers-at-its-cologne-fiesta-plant.html
  24. Robotics on the factory floor in Wisconsin (Aug. 5, 2017 article): https://www.chicagotribune.com/news/nationworld/midwest/ct-wisconsin-factory-robots-20170805-story.html
  25. Cobots allow manufacturers to create more custom parts (February 23, 2016): https://www.packagingdigest.com/robotics/heres-how-collaborative-robots-provide-custom-automation-cost-benefits-to-manufacturers-2016-02-23
  26. Customization via cobots (Jan. 13, 2017): https://www.ien.com/automation/article/20849060/collaborative-robots-are-showing-up-in-the-strangest-places
