
Stanford researchers design accelerator chip that speeds up AI inferencing

Researchers at Stanford have developed hardware that runs AI tasks quickly and energy-efficiently using purpose-built chips. A paper published in Nature Electronics describes the chips, each of which has data processors built next to its own memory storage. Algorithms meld eight of these cores into a single AI processing engine called the Illusion System.

AI accelerators like Illusion are a type of specialized hardware designed to speed up AI applications, particularly neural networks, deep learning, and machine learning. They’re multicore in design and focus on low-precision arithmetic or in-memory computing, both of which can boost the performance of large AI algorithms and lead to state-of-the-art results in natural language processing, computer vision, and other domains.

The researchers, including Stanford computer scientists Mary Wootters and Subhasish Mitra as well as electrical engineer H.-S. Philip Wong, developed Illusion as part of the Electronics Resurgence Initiative (ERI), a $1.5 billion program sponsored by the U.S. Defense Advanced Research Projects Agency. It builds on the team’s prior work with a newer memory technology, resistive random-access memory (RRAM), which stores data even when electricity is switched off — like flash memory, but faster and at lower power.

“If we could have built one massive, conventional chip with all the processing and memory needed, we’d have done so, but the amount of data it takes to solve AI problems makes that a dream,” Mitra said. “Instead, we trick the hybrids into thinking they’re one chip.”

The team built and tested its prototype, which incorporates RRAM, with collaborators at the French research institute CEA-Leti and at Nanyang Technological University in Singapore. In simulations, the researchers showed how systems with 64 hybrid chips — eight times the number in the prototype — could run AI applications seven times faster than current processors using one-seventh as much energy.

According to the researchers, these capabilities could one day enable Illusion to power augmented and virtual reality glasses that use AI to learn by spotting objects and people in the environment, providing wearers with contextual information. As a step toward this, the team developed new algorithms to recompile existing AI programs, written for today’s processors, to run on the new multichip systems. Collaborators from Facebook helped test AI programs that validated the efforts.

The next steps will entail increasing the processing and memory capabilities of the individual hybrid chips and demonstrating how to mass-produce them cheaply, Wong said. He believes Illusion could be ready for market within three to five years. “The fact that our fabricated prototype is working as we expected suggests we’re on the right track,” he added.
