
The GrAI One Development Kit enables early adopters to take advantage of low latency computation. | Credit: GrAI Matter Labs
GrAI Matter Labs, a Paris-based startup developing a neuromorphic computing architecture for edge AI inferencing, raised $14 million in a venture round led by iBionext. The company’s existing investors also participated in the round, along with new investor Bpifrance.
GrAI Matter Labs will use the funding to accelerate the design and market launch of its first full-stack GrAI AI system-on-chip platform. The company said the platform will be especially useful for visual inference in robotics, industrial automation, AR/VR and surveillance products and markets.
“We are excited to bring the fastest AI per Watt to every device on the edge,” said Ingolf Held, CEO of GrAI Matter Labs. “This funding will help us to partner with application specialists and integrators, and to deliver best-in-class visual inference performance, system-on-chip platforms and end-to-end applications to our customers.”
GrAI Matter Labs’ current accelerator chip, GrAI One, and the GrAI One HDK are available for product evaluation and application programming. The company said a key feature of its chips is that computation happens in the same block of silicon where the weights and data are stored, an approach known as in-memory computing. This means very little power and time are wasted moving data to and from the compute units.
The company also takes advantage of “sparsity,” which it described as “the idea that changes in the real world don’t happen everywhere, or all at once. By identifying where the changes happen and computing only the effects and consequences of those changes, we can save up to 95% of the power normally used in processing.” It added that for any single deep neural network decision, only about 40% of the neurons actually “fire,” or have a non-zero output. If a neuron’s output is zero, the company’s chips skip computing its effect on the rest of the network.
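The savings claim is easiest to see in code. Below is a minimal, hypothetical sketch in Python/NumPy of delta-driven sparse computation, the general idea the company describes; the function names, layer sizes and numbers are illustrative assumptions, not GrAI Matter Labs’ actual architecture or API.

# Minimal sketch (not GrAI Matter Labs' implementation) of sparsity:
# only inputs whose values changed propagate work to the next layer.
import numpy as np

def dense_layer(x, W, b):
    """Conventional layer: every multiply-accumulate is performed."""
    return np.maximum(W @ x + b, 0.0)          # ReLU activation

def sparse_update(prev_x, x, W, b, prev_pre):
    """Event-driven layer: recompute only the contributions of inputs
    that changed since the last frame, reusing the stored pre-activation."""
    changed = np.nonzero(x != prev_x)[0]       # indices of changed inputs
    pre = prev_pre.copy()
    for j in changed:                          # skip all unchanged inputs
        pre += W[:, j] * (x[j] - prev_x[j])    # apply only the delta
    return np.maximum(pre, 0.0), pre

# Hypothetical frame-to-frame example: two nearly identical input vectors.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 16)), rng.normal(size=8)
x0 = rng.normal(size=16)
x1 = x0.copy()
x1[3] += 0.5                                   # only one input changes

pre0 = W @ x0 + b                              # state kept from the previous frame
y_full = dense_layer(x1, W, b)
y_sparse, _ = sparse_update(x0, x1, W, b, pre0)
assert np.allclose(y_full, y_sparse)           # same output, far fewer multiplies

Because only the changed inputs contribute new multiply-accumulates, the work per frame scales with how much the scene changed rather than with the size of the network, which is the effect behind the power-savings figure quoted above.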
GrAI Matter Labs partnered with The Robot Report earlier in 2020 on a webinar called “Achieving Low Latency Edge Computation for Robotic Applications.” Latency plays a crucial role in many robotic applications, and achieving low latency helps solve real-world problems in SLAM, obstacle avoidance, gripping and inspection systems. The webinar introduced the concept of sparsity, explained how robotic applications can leverage it, and described an architecture that exploits it to achieve low latency. The webinar is available on demand and covers how different compute architectures provide different capabilities for robotics.
“GrAI Matter Labs has demonstrated a unique architecture capability with NeuronFlow and GrAI One – an industry-first silicon compute and machine learning architecture based on learnings from biology and the human brain,” said Bernard Gilly, Chairman of iBionext. “We are thrilled with the prospects of GrAI Matter Labs and look forward to growing the company into a major success.”
GrAI Matter Labs was founded in 2016 as Brainiac within the iBionext Start-up Studio in Paris.