According to some experts, the growth in computing power required to develop future AI systems could hit a wall with mainstream chip technologies. While startups like Cerebras claim to be developing hardware that can efficiently handle next-generation systems, at least in terms of power consumption, researchers fear that sophisticated AI systems – which are already expensive to train and deploy – will become the sole domain of the corporations and governments with the necessary resources.
One proposed solution is photonic chips, which use light to send signals, as opposed to electricity, which traditional processors use. Photonic chips could theoretically lead to higher performance because light generates less heat than electricity, can travel faster and is less susceptible to temperature changes and electromagnetic fields.
Lightmatter, LightOn, Celestial AI, Intel, and Japan-based NTT are among the companies developing photonics technologies. So is Luminous Computing, which announced today that it has raised $105 million in a Series A round with participation from investors including Microsoft co-founder Bill Gates, Gigafund, 8090 Partners, Neo, Third Kind Venture Capital, Alumni Ventures Group, Strawberry Creek Ventures, Horsley Bridge, and Modern Venture Partners, among others. (Luminous’s post-transaction valuation is between $200 million and $300 million.)
“It’s an incredible time to be a part of the AI industry,” said Marcus Gomez, CEO and co-founder of Luminous, in a statement. “AI has become superhuman. We can interact with computers in natural language and ask them to write some code or even an essay and the output will be better than most humans could deliver. What’s frustrating is that we have the software to tackle monumental, revolutionary problems that humans can’t even begin to solve. We just don’t have the hardware to run these algorithms.”
Luminous was founded in 2018 by Michael Gao, CEO Marcus Gomez and Mitchell Nahmias. Nahmias’ research at Princeton became the cornerstone of Luminous’s hardware. Gomez, who previously founded a fashion tech startup called Swan, was formerly a research scientist at Tinder and spent time working on machine intelligence and research software at Google. Gao is CEO at AlphaSheets, a data analytics platform for enterprise clients.
“Over the past decade, demand for AI computing has increased by a factor of 10,000. Ten years ago, the largest models had 10 million parameters and could be trained in 1 to 2 hours on a single GPU; today, the largest models have over 10 trillion parameters and can take up to a year to train on tens of thousands of machines,” Gomez told VentureBeat via email. “Unfortunately, we’ve reached a dead end: the hardware just couldn’t keep up. Existing large AI models are notoriously difficult and expensive to train because the underlying hardware simply isn’t fast enough. Training large AI models is mostly limited to [big tech companies], since most companies cannot even afford to rent the necessary hardware. Worse, even for [big tech companies], hardware growth is slowing so much that it is almost impossible to keep increasing model size. AI progress is stagnating quickly.”
In traditional hardware, transistors control the flow of electrons through a semiconductor and perform operations by reducing information to a series of ones and zeros. In contrast, Luminous hardware calculates by splitting and mixing light beams into nanometer-wide channels. Photonic chip calculations are analog, not digital, which means they are inherently less accurate. But photonic chips can perform these calculations — including the calculations used to train AI models — quickly and in parallel, shifting data and multiplying large arrays of numbers instantly.
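The analog trade-off described above can be sketched in a few lines. The snippet below is purely illustrative and assumes nothing about Luminous’s actual hardware: it compares an exact digital multiply-accumulate – the core operation in AI training – with a version that adds small random noise to mimic analog imprecision.

```python
import numpy as np

rng = np.random.default_rng(0)

def digital_mac(weights, inputs):
    # Exact digital multiply-accumulate: the sum of element-wise products.
    return float(np.dot(weights, inputs))

def noisy_analog_mac(weights, inputs, noise_std=0.01):
    # Hypothetical analog version: each product picks up a small Gaussian
    # error, standing in for the imprecision of analog photonic computation.
    products = weights * inputs
    noise = rng.normal(0.0, noise_std, size=products.shape)
    return float(np.sum(products + noise))

w = np.array([0.5, -1.2, 0.8, 2.0])
x = np.array([1.0, 0.5, -0.25, 0.1])

exact = digital_mac(w, x)
approx = noisy_analog_mac(w, x)
# The analog result lands close to, but not exactly at, the digital one.
```

The appeal of the photonic approach is that many such operations – including the huge matrix multiplications in model training – can in principle run in parallel at the speed of light, at the cost of this kind of small analog error.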
“With … proprietary silicon photonics technology, [we’ve] developed a novel computing architecture that scales dramatically more efficiently, allowing users to train models 100x to 1,000x larger in a reasonable amount of time at a much lower cost and with a dramatically simpler programming model,” said Gomez. “In other words, [we’ve] developed a computer that makes training AI algorithms faster, cheaper and easier.”
According to Gomez, Luminous aims to make a single computer chip as powerful as 3,000 circuit boards equipped with Google’s third-generation Tensor Processing Units (TPUs). (TPUs are custom chips specifically designed to accelerate AI development and power products like Google Search, Google Photos, Google Translate, Google Assistant, Gmail, and Google Cloud AI APIs.) For reference, over 4,000 third-generation TPUs were required to train the language model used in the 2021 MLPerf hardware machine learning performance benchmark.
While Luminous keeps the exact technical specifications of its hardware top secret, Nahmias published a scientific article in January 2020 comparing the performance of photonic and electronic hardware on the “multiply-accumulate” operations that underpin AI systems. Nahmias and his coauthors found that photonic hardware – presumably including Luminous’s – significantly outperformed electronic hardware in terms of power, speed, and computational density.
“If you look at where modern AI computing is bottlenecked, it’s primarily in communications at every scale — between chips, between boards, and between racks in the data center. If you don’t solve the communication bottleneck, you have to live with these terrible trade-off curves,” Gomez added. “Luminous uses its … silicon photonics technology to directly solve the communication bottleneck at every level of the hierarchy, and when [we] say solve, [we] mean solve: [we’re] increasing bandwidth by 10x to 100x at every scale.”
Photonic chips have drawbacks that need to be addressed if the technology is to go mainstream. They are physically larger than their electronic counterparts and difficult to mass-produce, partly due to the immaturity of photonic chip manufacturing factories. Additionally, photonic architectures still rely heavily on electronic control circuits, which can cause bottlenecks.
“For large applications, including AI and machine learning, and large-scale analytics, power dissipation across many components is expected to be high — an order of magnitude higher than current systems,” wrote Nicole Hemsoth of The Next Platform in a January 2021 analysis of photonics technologies. “We’re probably at least five years to a decade away from silicon photonics-based computing.”
But pre-revenue Luminous – which has over 90 employees – claims to have built working prototypes of its chips, and the company intends to ship development kits to customers within the next few months. The latest round brings Luminous’s total capital raised to $115 million and will primarily be used to double the size of the engineering team, develop Luminous’s chips and software, and prepare for “commercial-scale” production, says Gomez.
“Luminous’s initial target customers are hyperscalers building their own data centers to power their own machine learning algorithms,” Gomez continued. “Luminous’s computer has the memory, processing power, and bandwidth needed to train these super-large algorithms, and it was designed from the ground up with the AI user in mind… For users who rely on large AI models to drive their core revenue, we completely unblock them from scaling their models, and we eliminate thousands of hours that would otherwise be wasted on programming complexity and engineering overhead.”