2022 DesignCon Shows Evolution Of Electronic Chip Communication And Memory – Forbes

DesignCon is a trade show focused on electronic product design, electronic components, and the applications that drive demand for electronics. It has been held in Silicon Valley for decades. The in-person conference included three engaging keynotes. John Bowers, Fred Kavli Chair of Nanotechnology at UCSB, talked about how photonics could be used in high-capacity co-packaged electronics. Laurence Moroney, Artificial Intelligence Lead at Google, talked about practical applications for AI and machine learning. Jose Morey, a consultant for NASA, IBM, Hyperloop Transportation, and Liberty BioSecurity, gave an inspiring talk on mankind’s future in space, curing old age, and a future enabled by robots.

John Bowers showed the future evolution of co-packaged optics and electronic chips for data center communication, as shown below. True co-packaging will require chip stacking and heterogeneous integration of various types of chips, including optical engines. The PIPES project that UCSB is involved with is building technology for 10 Tbps links with 0.5 pJ/bit efficiency, including technologies such as quantum dot lasers.
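To put those targets in perspective, a link's power dissipation is simply its energy per bit times its data rate. A minimal sketch using the PIPES figures above (the function name is ours, for illustration):

```python
def link_power_watts(energy_pj_per_bit: float, rate_tbps: float) -> float:
    """Link power (W) = energy per bit (pJ -> J) * data rate (Tbps -> bit/s)."""
    return (energy_pj_per_bit * 1e-12) * (rate_tbps * 1e12)

# A 10 Tbps link at 0.5 pJ/bit dissipates about 5 W.
print(link_power_watts(0.5, 10.0))
```

At data center scale, that per-link wattage, multiplied across millions of links, is why the pJ/bit figure matters as much as raw bandwidth.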

Electronic products need memory and storage to work, and several sessions at DesignCon explored how storage and memory are evolving to meet the needs of current and future products. As the image from the Rambus talks below shows, memory technology is evolving to provide higher bandwidth, greater capacity, and new, more efficient and secure computer architectures, driven by new interconnects (e.g., CXL) and data center disaggregation.


Memory is a big part of server costs and must be efficiently utilized to provide the best total cost of ownership (see figure below). CPUs, memory and storage have different lifecycles and should be replaced separately. This has driven the use of pools of similar resources, such as a memory pool using CXL.
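The economics of pooling can be sketched numerically. If any workload may land on any server, each server's DIMMs must be sized for the largest demand; a shared CXL pool only needs to cover the aggregate. The demand figures below are hypothetical, purely for illustration:

```python
# Hypothetical per-server peak memory demand (GB) across a small cluster.
demands = [96, 512, 128, 384, 64, 256, 192, 448]

# Fixed provisioning: every server carries DIMMs sized for the largest workload.
fixed = len(demands) * max(demands)

# CXL pooling: a shared pool only needs to cover the aggregate demand.
pooled = sum(demands)

print(f"fixed: {fixed} GB, pooled: {pooled} GB, "
      f"saving: {100 * (1 - pooled / fixed):.0f}%")
```

This is a deliberately simplified model (it ignores pool access latency and headroom), but it shows why stranded, over-provisioned DRAM drives interest in CXL memory pools.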

In addition, data access and on-chip data movement are extremely costly in terms of energy (see below). This is causing systems and data center designers to rethink architectures to emphasize data locality and minimize data movement.
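The scale of that imbalance is often illustrated with figures like those from Mark Horowitz's ISSCC 2014 keynote for a 45 nm process; the numbers below are those commonly cited illustrative values, not figures from the DesignCon talks:

```python
# Illustrative per-operation energy costs (pJ), commonly cited from
# Horowitz, ISSCC 2014, for a 45 nm process. Not DesignCon data.
ENERGY_PJ = {
    "32-bit float add":        0.9,
    "32-bit SRAM read (8 KB)": 5.0,
    "32-bit off-chip DRAM read": 640.0,
}

compute = ENERGY_PJ["32-bit float add"]
for op, pj in ENERGY_PJ.items():
    print(f"{op}: {pj} pJ (~{pj / compute:.0f}x a float add)")
```

With an off-chip access costing hundreds of times more energy than the arithmetic it feeds, keeping data local is an energy optimization, not just a latency one.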

CXL is enabling disaggregation of memory, with changes to near-term memory access shown below. CXL offers memory bandwidth and capacity expansion, with “far memory” providing additional memory tiers that can include non-volatile memories.

Conventional memory systems for AI applications include on-chip memory (with the highest bandwidth, but limited capacity), HBM (with very high bandwidth and density but high cost) and GDDR (which has a good tradeoff between bandwidth, power efficiency, cost and reliability).

Memory is also playing a big role in edge computing, which can also reduce energy consumption by processing data close to where it is generated. While data centers play a big role in ML training, edge computing plays an important role in ML inference. The figure below shows Rambus’s view of memory types for servers, ML training, and inference. The sweet spot for inference favors GDDR6. Accelerator cards look like they will play an important role in edge AI and automotive applications.

Rambus is also offering root-of-trust solutions for automotive designs to prevent the hacking of vehicles, which are increasingly rolling computer systems. One of their talks covered advanced packaging options, including UCIe (a chiplet interconnect specification) and HBM solutions that are approaching 1 TB/s of bandwidth.
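The bandwidth figure follows from interface width times per-pin signaling rate. A rough sketch with representative per-generation rates (assumed here for illustration; exact rates vary by vendor and speed grade):

```python
def peak_bandwidth_gbs(pins: int, gbps_per_pin: float) -> float:
    """Peak bandwidth (GB/s) = interface width (pins) * per-pin rate (Gb/s) / 8."""
    return pins * gbps_per_pin / 8

# One 32-bit GDDR6 device at an assumed 16 Gb/s per pin: 64 GB/s.
print(peak_bandwidth_gbs(32, 16.0))

# One 1024-bit HBM3 stack at an assumed 6.4 Gb/s per pin: 819.2 GB/s,
# which is how a single stack lands near the 1 TB/s mark cited above.
print(peak_bandwidth_gbs(1024, 6.4))
```

The contrast makes the packaging trade-off concrete: HBM gets its bandwidth from an extremely wide interface, which is only practical with the 2.5D/3D integration the talk covered.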

The 2022 DesignCon covered electronic design and integration, including photonics for chip communication. Rambus presented talks on the need to process data closer to where it is stored and discussed the use of various xDDR and HBM memories for applications including edge AI training, edge inference, and ADAS.
