Deep neural networks are driving much of the exciting progress in generative AI. But their architecture contains a built-in speed bump that keeps them from ever reaching maximal efficiency.
Because they run on hardware with separate units for memory and processing, neural networks must constantly shuttle data between the two components, a heavy demand on system resources that slows computation and reduces efficiency.
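To see why that data shuttling dominates, consider a single dense matrix-vector product, the core operation of a neural network layer. The sketch below (illustrative only; the layer size is hypothetical) counts arithmetic operations against bytes moved between memory and processor, showing that each weight fetched from memory is used in just one multiply-accumulate:

```python
def matvec_traffic(rows, cols, bytes_per_value=4):
    """Return (arithmetic ops, bytes moved) for one dense matrix-vector product."""
    ops = 2 * rows * cols  # one multiply and one add per weight
    # Traffic: read every weight once, read the input vector, write the output.
    bytes_moved = (rows * cols + cols + rows) * bytes_per_value
    return ops, bytes_moved

ops, traffic = matvec_traffic(4096, 4096)  # hypothetical layer size
intensity = ops / traffic  # arithmetic ops per byte of memory traffic
# For large layers this ratio stays well below 1 op/byte, so the hardware
# spends most of its time (and energy) moving weights rather than computing,
# which is exactly the overhead a memory-and-compute-combined design avoids.
```

With 32-bit values the ratio approaches 0.5 operations per byte no matter how large the layer grows, which is why shrinking the distance data travels, rather than adding raw compute, is the key to efficiency here.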
IBM Research came up with a better idea, turning for inspiration to the perfect model for a more efficient digital brain: the human brain.