Artificial Intelligence (AI) is rapidly transforming various industries, from autonomous driving and robotics to healthcare and customer service. As the demand for AI applications grows, so does the need for more powerful and energy-efficient processors. In this context, Lightmatter, a company at the forefront of photonic processors, has developed the Envise chip—an innovative solution that promises unprecedented performance and energy efficiency in AI inference.
Unleashing Unprecedented Power and Efficiency
The Envise chip is a game-changer in the world of AI inference. A 4U server built around 16 Envise chips consumes only 3 kW of power. This remarkable power efficiency enables the system to run the largest neural networks developed to date with exceptional performance. Lightmatter claims that the Envise server delivers three times the inferences per second (IPS) of the Nvidia DGX-A100 while achieving eight times the IPS per watt on BERT-Base SQuAD. If those numbers hold up, they highlight the potential of the Envise chip to redefine AI inference capabilities.
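Taken together, the two claims imply a third figure: since IPS per watt is just IPS divided by watts, the claimed throughput and efficiency ratios fix the relative power draw. A quick sketch of that arithmetic (working only with the relative ratios stated above, no absolute benchmark numbers assumed):

```python
# Back-of-the-envelope check of the two claims above.
# All values are ratios relative to the DGX-A100 baseline (= 1.0).

ips_ratio = 3.0           # claimed throughput advantage (inferences/s)
ips_per_watt_ratio = 8.0  # claimed efficiency advantage

# IPS/W = IPS / W, so the implied power draw relative to the baseline is:
power_ratio = ips_ratio / ips_per_watt_ratio

print(f"Implied relative power draw: {power_ratio:.3f}x the DGX-A100")
```

In other words, the two claims combined imply the Envise system draws roughly 3/8 (37.5%) of the baseline's power while doing three times the work.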
Unmatched Specifications
The Envise chip boasts several cutting-edge features that contribute to its remarkable performance. Its on-chip activation and weight storage eliminates the need to transfer data to external memory, enabling state-of-the-art neural network execution within the processor itself. Additionally, the chip uses a standards-based host and interconnect interface, offering seamless integration into existing systems. RISC cores on each Envise processor provide generic off-load capabilities, enhancing the chip's versatility, and its ultra-high-performance out-of-order, superscalar processing architecture further optimizes computation efficiency.