Samsung develops high-bandwidth memory with onboard AI processing

What just happened? Samsung this week announced it has developed what it is calling the industry’s first High Bandwidth Memory (HBM) with built-in artificial intelligence. The new processing-in-memory (PIM) architecture brings AI computing capabilities directly into the memory subsystem, accelerating large-scale processing in data centers, high-performance computing systems and AI-enabled mobile applications.

As Samsung explains, most modern computing systems are based on the von Neumann architecture. This sequential processing approach uses separate CPU and memory units, with data constantly being shuffled between the two, creating a bottleneck.

Samsung’s HBM-PIM puts a DRAM-optimized AI engine inside each memory bank. This enables parallel processing and minimizes data movement, improving performance and reducing power consumption. According to Samsung, when used in conjunction with Samsung’s HBM2 Aquabolt, the architecture can deliver more than twice the system performance while reducing energy consumption by more than 70 percent.
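The intuition behind the gains can be shown with a toy sketch. This is not Samsung’s design: the bank count, element counts and the simple multiply-and-sum workload below are all arbitrary assumptions, chosen only to contrast a conventional fetch-everything-to-the-CPU approach with computing partial results inside each bank and moving only those.

```python
# Toy illustration of why bank-level processing-in-memory reduces
# data movement. All sizes and the workload are hypothetical.

NUM_BANKS = 16          # assumed number of DRAM banks
ELEMS_PER_BANK = 1024   # assumed elements stored per bank

banks = [[1.0] * ELEMS_PER_BANK for _ in range(NUM_BANKS)]

def von_neumann_sum():
    """CPU pulls every element over the memory bus, then computes."""
    moved = 0
    total = 0.0
    for bank in banks:
        for x in bank:           # each element crosses the CPU-memory bus
            moved += 1
            total += x * 0.5
    return total, moved

def pim_sum():
    """Each bank's local engine reduces its own data in parallel;
    only one partial sum per bank crosses the bus."""
    moved = 0
    partials = []
    for bank in banks:
        partials.append(sum(x * 0.5 for x in bank))  # computed in-bank
        moved += 1               # only the partial result moves
    return sum(partials), moved

cpu_total, cpu_moved = von_neumann_sum()
pim_total, pim_moved = pim_sum()
assert cpu_total == pim_total    # same answer...
print(cpu_moved, pim_moved)      # ...with far fewer bus transfers
```

In this toy model the conventional path moves 16,384 values across the bus while the in-memory path moves only 16 partial sums; the real hardware’s 70 percent energy figure comes from the same principle, since moving data off-chip costs far more energy than computing on it locally.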

Samsung further noted that HBM-PIM doesn’t require any hardware or software changes, allowing faster integration into existing systems.

Samsung’s new HBM-PIM is currently being tested by AI solution partners, with validations expected to be completed within the first half of 2021. Samsung is presenting the new tech at the International Solid-State Circuits Virtual Conference (ISSCC) today in a paper titled, “A 20nm 6GB Function-in-Memory DRAM, Based on HBM2 with a 1.2TFLOPS Programmable Computing Unit Using Bank-Level Parallelism, for Machine Learning Applications.”
