The list of startups building hardware to process artificial intelligence applications just grew longer with the addition of EnCharge AI Inc.
Today the company announced a $21.7 million Series A funding round to support the development of a semiconductor hardware and software stack it claims can deliver 15 times the performance of competitors in low-power environments.
Led by a team of engineering Ph.D.s and incubated at Princeton University, EnCharge AI’s in-memory computing technology was born out of Department of Defense funding and six years of development. The processor is basically a highly programmable application-specific integrated circuit that takes a unique approach to memory management to enable its high performance, said Chief Executive Naveen Verma (pictured).
Like most AI hardware startups, the company is targeting edge computing use cases. “This is an extremely high-value space,” Verma said. “We feel pretty convinced that there are a lot of critical applications to be unlocked here.”
The processor the company is developing uses charge-based memory, which differs from conventional memory design in that it reads data from the electrical current on a memory plane rather than from individual bit cells. This enables the use of capacitors, which are “extremely precise devices,” Verma said. In contrast, semiconductors “are super-messy things, sensitive to temperature and so on.”
The greatest efficiency is gained during a data reduction operation involving matrix multiplication. “Instead of communicating individual bits, you communicate the result,” Verma said. “You can do that by adding up the currents of all the bit cells, but that’s noisy and messy. Or you can do that accumulation using the charge. That lets you move away from semiconductors to very robust and scalable capacitors. That operation can now be done very precisely.”
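The reduction Verma describes can be illustrated in a few lines. This is only a conceptual sketch in ordinary software, not EnCharge AI's hardware design: the point is that a matrix-vector product collapses many multiply results into a single accumulated value, so the hardware can communicate one result instead of moving every bit.

```python
# Conceptual sketch only: models the multiply-accumulate reduction that
# EnCharge performs in the charge domain, using plain Python arithmetic.

def mac_reduce(weights, activations):
    """Multiply-accumulate: reduce many products to one result."""
    return sum(w * a for w, a in zip(weights, activations))

# One row of a weight matrix against an input vector.
row = [3, -1, 2]
x = [1, 4, 2]
print(mac_reduce(row, x))  # a single accumulated value, not three separate ones
```

In the chip, that summation happens physically by accumulating charge on capacitors rather than by digital addition, which is where the claimed precision and efficiency come from.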
15-fold performance boost
EnCharge AI says its test chips have achieved more than 150 trillion operations per second per watt (TOPS/W) at 8-bit compute precision. “The best in class available with today’s technologies are sitting at the 10 TOPS/W level,” Verma said. “We’re talking about 15X higher efficiency. This has been demonstrated in generations of test chips.”
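As a back-of-the-envelope check, the headline multiple follows directly from the two figures quoted in the article:

```python
# Figures as stated in the article; both are vendor claims, not independent benchmarks.
encharge_tops_per_watt = 150      # EnCharge AI test chips, 8-bit precision
best_in_class_tops_per_watt = 10  # "best in class" per Verma

speedup = encharge_tops_per_watt / best_in_class_tops_per_watt
print(f"{speedup:.0f}x")  # → 15x
```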
The processor, which is still in development, will be mounted on cards that plug into PCIe interfaces on a range of devices and will not be tied to a particular central processing unit chip. It can be used in concert with graphics processing units too.
The company is also building a software stack that supports the popular PyTorch framework, TensorFlow libraries and Open Neural Network Exchange operators. One critical element of the platform is a proprietary compiler that optimizes code for the custom-built microprocessor.
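EnCharge AI has not published its compiler internals, but a compiler like the one described typically lowers framework-level operators (such as ONNX operator names) onto the accelerator's native primitives, falling back to the host CPU for anything unsupported. The table and names below are entirely hypothetical, for illustration only:

```python
# Hypothetical operator-lowering sketch; not EnCharge AI's actual compiler.
# Maps ONNX-style operator names to made-up device primitives.

LOWERING_TABLE = {
    "MatMul": "imc.matmul",  # matrix multiply on the in-memory compute array
    "Conv":   "imc.conv2d",
    "Relu":   "vec.relu",    # elementwise op on a conventional vector unit
}

def lower(onnx_ops):
    """Lower ONNX operator names to device ops, falling back to the host CPU."""
    return [LOWERING_TABLE.get(op, f"cpu.{op.lower()}") for op in onnx_ops]

print(lower(["MatMul", "Relu", "Softmax"]))
# → ['imc.matmul', 'vec.relu', 'cpu.softmax']
```

Supporting PyTorch, TensorFlow and ONNX front ends mainly means maintaining such mappings for each framework's operator set, which is why the compiler is described as a critical element of the platform.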
EnCharge AI has already received five patents for its work. Although it doesn’t expect to ship a production-ready device until early 2024, it’s in active discussions with potential customers, according to Chief Operating Officer Echere Iroaga. “We’re not tailoring it for one customer but we want to make sure we have all the features that a specific set of customers wants,” he said.
The company currently has about 25 employees who come from Nvidia Corp., Advanced Micro Devices Inc., Intel Corp., Waymo LLC, SambaNova Systems Inc. and other chipmakers. “They have been there and done that with more traditional accelerators,” Verma said. “They’re the kind of people who can break the rules while building rigorous, industrial-strength hardware.”
Verma is a professor of electrical and computer engineering at Princeton. Chief Product Officer Kailash Gopalakrishnan was formerly an IBM Corp. Fellow and led its AI hardware and software development. Iroaga was most recently general manager of Macom Technology Solutions Holdings Inc.’s connectivity business unit.