Will Artificial Intelligence Take Over the World?

August 11, 2016 By Karl Stephan, Consulting Engineer, Texas State University, San Marcos

The phrase “artificial intelligence” (AI for short) conjures up images of robots with superhuman brains conspiring to take over the world. So far, nothing like that has happened outside of science fiction. But some recent developments in the field may lead to big changes in the way engineers deal with computers, and could make millions of present-day jobs obsolete in the bargain.

The field of AI was conceived in the 1950s, about the same time computers became advanced enough in memory and processing speed to outperform humans in narrow but significant ways. One of the early leaders was MIT computer scientist Marvin Minsky (1927–2016), who confidently predicted in 1967 that “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved.” Back then, most AI programs consisted of explicit programming instructions like “if this happens, do that.” In the 1960s, Minsky and others also investigated an alternative to such explicit programming: a system called a “neural network.” But in a 1969 book, he concluded that neural networks would never be as useful for AI as the top-down, explicit-instruction method. This discouraged research on neural networks, and in the meantime AI funding crashed as results failed to live up to early promises.

AI neural networks use software to imitate the partly analog, partly digital way that neurons in the brain work. Despite Minsky’s verdict, some AI researchers never gave up on the idea. During the 1990s, their efforts to build flexible neural networks that could learn from mistakes, and thereby improve their performance, began winning international contests that pitted pattern-recognition programs against each other. Progress in the last five years or so with advanced multi-layer deep-learning neural networks has been spectacular.
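To make “learning from mistakes” concrete, here is a minimal sketch in Python of a single artificial neuron trained with a simple error-correction rule. The task (learning a logical AND), the learning rate, and the epoch count are invented for illustration; the deep networks described above stack many layers of such units and train them with backpropagation rather than this toy rule.

```python
# Toy single-neuron "learning from mistakes": learn the AND function
# from labeled examples. Illustrative only -- real deep-learning nets
# use many layers and gradient-based training (backpropagation).

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]   # one adjustable weight per input
bias = 0.0
learning_rate = 0.1    # arbitrary illustrative value

def predict(x):
    """Fire (output 1) if the weighted sum of inputs crosses zero."""
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Training: every time the neuron is wrong, nudge its weights in the
# direction that would have reduced the error.
for epoch in range(20):
    for x, target in examples:
        error = target - predict(x)              # -1, 0, or +1
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])          # expect [0, 0, 0, 1]
```

After a handful of passes through the examples, the repeated nudges settle on weights that get every case right. That improve-by-correction loop, vastly scaled up, is what drives the contest-winning networks mentioned above.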

In March of this year, a program called AlphaGo, developed using deep-learning techniques by a division of Google, defeated a world-champion Go player named Lee Sedol. (For those not familiar with the game, Go is played on a sort of checkerboard on steroids, and experts say it’s even more complicated than playing chess.)

Once you’ve written a deep-learning neural-network program, you’re not done yet: you still have to train it. For example, suppose you want your AI machine to recognize cat photos. Training amounts to showing the software a whole bunch of photos of cats, each one labeled “CAT,” along with photos of anything else, labeled “NOT A CAT.” After enough of this sort of thing, the software starts to learn the difference.
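In code, that workflow has roughly the shape of the sketch below, which uses scikit-learn’s small neural-network classifier on synthetic stand-in “photos.” The data generator, image size, and network size are all invented for illustration; a real cat detector would train a much deeper network on thousands of real labeled photographs.

```python
# Sketch of supervised training: show the network labeled examples
# ("CAT" = 1, "NOT A CAT" = 0) until it learns the difference.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=0)

def make_photos(n):
    """Synthetic stand-ins for labeled photos. The fake "cats" are
    brighter in the first half of the pixel vector, so there is a
    pattern for the network to pick up."""
    labels = rng.integers(0, 2, size=n)
    photos = rng.random((n, 64))             # 64 "pixels" per photo
    photos[:, :32] += 0.5 * labels[:, None]  # inject the cat pattern
    return photos, labels

X_train, y_train = make_photos(500)  # the "whole bunch" of labeled photos
X_test, y_test = make_photos(100)    # photos the network has never seen

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
net.fit(X_train, y_train)            # training: show it the labeled photos
print("accuracy on new photos:", net.score(X_test, y_test))
```

The essential point survives the simplification: nobody writes a rule saying what a cat looks like; the network is simply shown enough labeled examples that it finds the pattern itself.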

There is a philosophical problem with all this that troubles some people. In the old direct-programming days, if someone asked how your program did a certain thing, you could show logically, step by step, exactly how it executed. But with neural-network AI, even the software developers don’t know what’s really going on when the system recognizes a particular image, except that a whole lot of numbers are flying around in ways that get results (the toy sketch after this paragraph illustrates the contrast). And it’s results that the corporate world is interested in, not the philosophy. The corporate data-mining service CB Insights estimates that the total U.S. venture-capital money put into AI startup companies in 2010 was about $20 million. In 2013, that figure quadrupled to $80 million, and in 2015 it soared above $300 million. Hot AI researchers in academia are being lured away in droves to work for the likes of Google and smaller AI startups.
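Here is that toy contrast in Python; both classifiers, their feature names, and the weights are invented purely for illustration.

```python
# Old-style explicit programming: every decision is a readable rule,
# so you can point to the exact line that produced any answer.
def is_cat_explicit(photo):
    if photo["has_whiskers"] and photo["has_pointy_ears"]:
        return True
    return False

# Neural-network style: the "reasoning" is just learned numbers.
# (Weights invented here for illustration -- a real trained network
# has millions of them, and no one can say what any single one means.)
learned_weights = [0.73, -1.42, 0.05, 2.18]

def is_cat_learned(pixels):
    score = sum(w * p for w, p in zip(learned_weights, pixels))
    return score > 0   # it gets results, but offers no step-by-step story
```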

What does the advent of powerful new AI programs mean for working engineers and designers? I can think of two things right away.

One is that systems of all kinds will need more and better-integrated sensors: audio, video, and perhaps kinds no one has thought of yet. So sensors with wider bandwidths and closed-loop controls (think of Robby the Robot looking at you when you speak) will probably show up in tandem with increasingly sophisticated and effective AI systems. A glance at one of Google’s self-driving cars tells you that sensors are an important part of the AI game, and they will be integrated with the system to an unprecedented degree. The entire system, including its likely users and use environment, will have to be considered holistically as the AI software learns about its surroundings and what it is expected to do.

Once learning-capable AI systems are widely deployed, the training part may be something that ordinary engineers can handle, perhaps even better than task-specific programmers could. Already, one startup is developing neural-network AI systems to be deployed in the cloud so that, in principle, anyone with Internet access can use them.

Finally, there is the question of ethics. Certain AI neural-network programs have demonstrated “superhuman” pattern-recognition capabilities, meaning that they are better at recognizing certain images than human beings are. One can imagine all sorts of dire consequences from this trend. Let’s hope we don’t learn how to control these new machines the hard way: by allowing a tragedy to happen and then cleaning up the mess. The better way is to minimize the downside of powerful AI software intelligently, even as its great potential benefits are realized. But we humans will need to keep our wits about us, because the computers will be using theirs too.

This blog originally appeared in the July/August 2016 print issue of Product Design & Development. 
