Electrical Engineering News and Products



Memory-centric computing and memory system architectures

August 10, 2020 By Jeff Shepard

This three-part FAQ series began by considering “Memory Basics – Volatile, Non-volatile and Persistent.” Part two looked at “Memory Technologies and Packaging Options.” Recently developed persistent memory technologies, three-dimensional memory packaging, advanced multicore processors, the need to process ever-larger datasets and databases, and artificial intelligence are driving the emergence of a new category of data center architectures referred to as memory-centric computing. Memory-centric computing is a broad concept; this FAQ considers several examples of emerging memory-centric architectures developed by various vendors.

In-memory computing

In-memory computing has been used in large-scale database applications for several years. Earlier forms of in-memory computing were based on conventional von Neumann computer architectures and, while an improvement over prior systems, did not realize the full benefit of in-memory computing. More recently, new “computational memory” architectures and so-called “big memory computing” software have been introduced.

Memory-centric computing system architectures

(Left) Schematic of conventional von Neumann computer architecture, where the memory and computing units are physically separated. To perform a computational operation and to store the result in the same memory location, data is shuttled back and forth between the memory and the processing unit. (Right) IBM Research developed an alternative architecture where the computational operation is performed in the same memory location. (Image: Kurzweil)

A research team from IBM Research developed a computational memory architecture (pictured above) that is expected to speed up processing by 200 times with greater energy efficiency. IBM’s computational memory architecture eliminates most of the data transfers between memory and CPU. It is aimed at artificial intelligence applications and enables ultra-dense massively-parallel operation.

Earlier this year, MemVerge announced its Memory Machine™ software to support big memory computing. Big memory computing is the combination of DRAM, persistent memory, and MemVerge’s Memory Machine software technologies, where the memory is abundant, persistent, and highly available.

MemVerge’s Memory Machine™ software is scalable and is expected to be used for artificial intelligence, machine learning, and financial market data analytics. (Image: MemVerge)

Memory Machine is a Linux-based software subscription deployed on a single server or in a cluster. Once DRAM and persistent memory are virtualized, Memory Machine can set up DRAM as a “fast” tier in front of persistent memory and provide enterprise-class data services including snapshots, replication, cloning, and recovery.
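Memory Machine’s internals are proprietary, but the tiering behavior described above — serving hot data from a fast DRAM tier and demoting cold data to a larger persistent-memory tier — can be sketched in a few lines. All names below are hypothetical illustrations, not MemVerge’s API:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small 'DRAM' tier fronting a larger
    'persistent memory' tier. A conceptual sketch, not MemVerge's design."""

    def __init__(self, dram_capacity):
        self.dram = OrderedDict()   # fast tier, kept in LRU order
        self.pmem = {}              # slower but larger persistent tier
        self.dram_capacity = dram_capacity

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)  # mark as most recently used
        self._evict()

    def get(self, key):
        if key in self.dram:        # DRAM hit: fast path
            self.dram.move_to_end(key)
            return self.dram[key]
        value = self.pmem.pop(key)  # miss: promote from persistent tier
        self.put(key, value)
        return value

    def _evict(self):
        # Demote least-recently-used entries to the persistent tier.
        while len(self.dram) > self.dram_capacity:
            key, value = self.dram.popitem(last=False)
            self.pmem[key] = value

store = TieredStore(dram_capacity=2)
for k in ("a", "b", "c"):
    store.put(k, k.upper())
# "a" has been demoted to the persistent tier; "b" and "c" sit in DRAM.
```

A real implementation would also handle persistence across restarts and concurrent access; the sketch only shows the promotion/demotion idea.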

Heterogeneous-memory storage engine

Micron has developed an open-source heterogeneous-memory storage engine (HSE) designed to maximize the capabilities of new storage technologies by intelligently and seamlessly managing data across multiple storage classes. HSE is designed to overcome the limitations of HDD-era architectures, improving throughput by up to six times, reducing latency by up to 11 times, and extending SSD endurance by up to seven times compared to alternative solutions. Most storage engines, and therefore most storage applications, were written for hard drives and never optimized for SSDs, flash-based technologies, or storage-class memory. HSE offers a way to maximize the performance of those legacy applications.

Heterogeneous-memory storage engines are designed specifically for solid-state drives and storage-class memory. (Image: Micron)

HSE is designed with massive-scale databases in mind, such as billions of key counts, terabytes of data, and thousands of concurrent operations. HSE transparently exploits multiple classes of media — from NAND flash to Micron’s 3D XPoint persistent memory technology.
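HSE itself exposes a C API; as a rough illustration of the heterogeneous-placement idea — routing key-value data to different media classes — here is a toy Python sketch. The size-based policy and every name in it are assumptions for illustration, not HSE’s actual design:

```python
# Hypothetical policy: small (often hot) values go to a fast "staging"
# media class (e.g., 3D XPoint), large values to a dense "capacity"
# class (e.g., NAND flash).
SMALL_VALUE_LIMIT = 4096  # bytes; illustrative threshold

class HeterogeneousKVS:
    def __init__(self):
        self.staging = {}   # values on the fast media class
        self.capacity = {}  # values on the dense media class

    def put(self, key, value: bytes):
        tier = self.staging if len(value) <= SMALL_VALUE_LIMIT else self.capacity
        other = self.capacity if tier is self.staging else self.staging
        other.pop(key, None)  # drop any stale copy on the other class
        tier[key] = value

    def get(self, key):
        # Check the fast class first, then fall back to the dense class.
        if key in self.staging:
            return self.staging[key]
        return self.capacity.get(key)

kvs = HeterogeneousKVS()
kvs.put("config", b"small")       # lands on the staging class
kvs.put("blob", b"x" * 100_000)   # lands on the capacity class
```

The real engine makes such placement decisions transparently and per media class policy; the sketch only shows why a storage engine aware of its media classes can behave very differently from an HDD-era design.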

In-memory database

An in-memory database (IMDB) is a database management system that stores and retrieves data records residing in a computer’s main memory, e.g., random-access memory (RAM). With data in RAM, IMDBs have a speed advantage over traditional disk-based databases, which incur access delays because storage media like hard disk drives and solid-state drives (SSDs) have significantly slower access times than RAM. This makes IMDBs useful when fast reads and writes of data are crucial.
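A minimal, runnable illustration is SQLite, which can host an entire database in RAM via the special ":memory:" path:

```python
import sqlite3

# An in-memory database: the entire store lives in RAM and disappears
# when the connection closes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("t1", 21.5), ("t1", 22.0), ("t2", 19.8)],
)
(avg,) = conn.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = 't1'"
).fetchone()
print(avg)  # 21.75
conn.close()
```

Every read and write here touches only RAM, which is exactly where the IMDB speed advantage comes from; production IMDBs add persistence mechanisms (logging, snapshots, or persistent memory) so the data survives a restart.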

The term IMDB is also used loosely to refer to any general-purpose technology that stores data in memory. For example, an in-memory data grid, which is a distributed system that can store and process data in memory to build high-speed applications, has provisions for IMDB capabilities.

In-memory caching systems, which are used to speed up access to slow (especially disk-based) databases, are another example: they essentially emulate the performance of IMDBs. While these caches are not true IMDBs, since they do not store data long term, they work in conjunction with disk-based databases as a (mostly) functional equivalent. The caches temporarily hold frequently accessed data from the underlying database so that it can be delivered to applications faster.
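The cache-aside pattern behind such systems can be sketched as follows; the slow lookup function is a hypothetical stand-in for a query against a disk-based database:

```python
import time

def slow_disk_lookup(key):
    """Stand-in for a disk-based database query (hypothetical)."""
    time.sleep(0.01)  # simulate storage access latency
    return key.upper()

cache = {}  # in-memory cache sitting in front of the slow store

def get(key):
    # Cache-aside: serve from RAM when possible; on a miss, fetch from
    # the slow store and remember the result for next time.
    if key not in cache:
        cache[key] = slow_disk_lookup(key)
    return cache[key]

get("alpha")  # miss: pays the storage latency once
get("alpha")  # hit: served directly from memory
```

Real caching systems add eviction and invalidation policies so the cache stays consistent with the underlying database, which is exactly why they are only a "mostly" functional equivalent of a true IMDB.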

Near-memory computing

Near-memory computing was one of the earliest architectures aimed at reducing latency and improving throughput in computing systems, and it is still widely used today. Near-memory computing is based on heterogeneous system-in-package (SiP) products: highly integrated semiconductors that combine FPGAs or processors with various advanced components, all within a single package. FPGA-based SiP products address next-generation platforms that increasingly require higher bandwidth, greater flexibility, and more functionality, all while lowering power and footprint requirements.

An FPGA-based SiP approach provides many system-level advantages compared to conventional integration schemes. At the core of Intel’s SiP products is a monolithic FPGA, which provides users the ability to customize and differentiate their end system to meet their system requirements. Additional system-level benefits include:

  • Higher Bandwidth: SiP integration using Intel’s embedded multi-die interconnect bridge (EMIB) enables very high interconnect density between the FPGA and the companion die. This arrangement results in high-bandwidth connectivity between the SiP components.
  • Lower Power: Companion die (such as memory) are placed as close as possible to the FPGA. The interconnect traces between the FPGA and the companion die are thus very short and do not need as much power to drive them, resulting in lower overall power and optimum performance/watt.
  • Smaller Footprint: The ability to heterogeneously integrate components in a single package results in smaller form factors and fewer circuit board layers.
  • Increased Functionality: SiP helps reduce routing complexity at the PCB level because the components are already integrated within the package.
  • Mixed Process Nodes: SiP enhances the ability to incorporate different die geometries and silicon technologies.
  • Faster Time to Market: SiP enables faster time to market by integrating already proven technology and reusing common devices or tiles across product variants.

Near memory solutions integrate high-density DRAM close to the FPGA, within the same package. (Image: Intel)

That concludes this three-part FAQ series on memory technologies and applications. You may also enjoy reading part one, “Memory Basics – Volatile, Non-volatile and Persistent,” and part two, “Memory Technologies and Packaging Options.”

References

Heterogeneous-Memory Storage Engine, Micron
In-Memory Database, Hazelcast
In-Memory Processing, Wikipedia
In-Memory Computing, Kurzweil
Near-Memory Computing, Intel




Copyright © 2021 · WTWH Media LLC and its licensors. All rights reserved.