Google researchers published a new paper in Nature on Wednesday describing “an edge-based graph convolutional neural network architecture” that learned how to design the physical layout of a semiconductor in a way that allows “chip design to be performed by artificial agents with more experience than any human designer.” In short, Google used AI to design better-performing AI chips.
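The paper doesn’t spell out the architecture for lay readers, but an “edge-based” graph convolution generally means the network builds features for the wires connecting circuit blocks, not just for the blocks themselves, and then folds those edge features back into each block’s representation. The sketch below is a deliberately simplified, hypothetical illustration of that idea in plain NumPy; the sizes, weights, and netlist are made up and bear no relation to Google’s actual model.

```python
# Hypothetical sketch of an "edge-based" graph convolution over a chip netlist.
# Node features might encode a macro's width, height, or type; edges connect
# macros that are wired together. Illustration only, not Google's published model.
import numpy as np

rng = np.random.default_rng(0)

num_macros, feat_dim, hidden = 5, 4, 8
node_feats = rng.normal(size=(num_macros, feat_dim))       # one row per macro
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]            # netlist connectivity

W_edge = rng.normal(size=(2 * feat_dim, hidden))            # learned in practice
W_node = rng.normal(size=(hidden, feat_dim))

def edge_gcn_layer(node_feats, edges):
    """One round of message passing: each edge mixes its endpoints' features,
    then each node averages the embeddings of its incident edges."""
    edge_embs = []
    for u, v in edges:
        concat = np.concatenate([node_feats[u], node_feats[v]])
        edge_embs.append(np.maximum(concat @ W_edge, 0.0))  # ReLU
    edge_embs = np.stack(edge_embs)

    new_nodes = np.zeros_like(node_feats)
    counts = np.zeros(len(node_feats))
    for (u, v), e in zip(edges, edge_embs):
        msg = e @ W_node
        new_nodes[u] += msg
        new_nodes[v] += msg
        counts[u] += 1
        counts[v] += 1
    return new_nodes / np.maximum(counts[:, None], 1)

print(edge_gcn_layer(node_feats, edges).shape)  # (5, 4): updated macro features
```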
This is a significant advancement in chip design that could have serious implications for the field. Here’s how the researchers described their achievement in the abstract of the paper (the full text of which is unavailable to the public) as printed by Nature:
“Despite five decades of research, chip floorplanning has defied automation, requiring months of intense effort by physical design engineers to produce manufacturable layouts. Here we present a deep reinforcement learning approach to chip floorplanning. In under six hours, our method automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area.”
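Floorplanning is essentially a placement puzzle: decide where each large circuit block (macro) goes on the die so the wires connecting them stay short and the design hits its power, performance, and area targets. To make the reinforcement-learning framing concrete, here is a toy, hypothetical sketch of that loop; the grid size, macro names, and random “policy” are illustrative stand-ins, not Google’s method.

```python
# Toy illustration of reinforcement-learning floorplanning: an agent places one
# macro per step on a small grid, and the episode's reward is a wirelength proxy.
# All names and sizes here are hypothetical stand-ins for illustration only.
import itertools
import random

GRID = 8                                   # 8x8 placement grid
MACROS = ["cpu", "cache", "dma", "phy"]    # hypothetical macros to place
NETS = [("cpu", "cache"), ("cpu", "dma"), ("dma", "phy")]  # connectivity

def wirelength(placement):
    """Manhattan-distance proxy for total wirelength over all nets."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def run_episode(policy):
    placement, free = {}, set(itertools.product(range(GRID), range(GRID)))
    for macro in MACROS:                   # one macro placed per step
        cell = policy(macro, placement, free)
        placement[macro] = cell
        free.remove(cell)
    return -wirelength(placement)          # reward: shorter wires are better

random_policy = lambda macro, placement, free: random.choice(sorted(free))

# A real agent would update a neural policy from these rewards; here we sample.
rewards = [run_episode(random_policy) for _ in range(100)]
print("best reward over 100 random episodes:", max(rewards))
```

In the published system, a trained neural policy replaces the random one above and learns from those rewards, which is what lets it produce a competitive floorplan in under six hours.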
The method’s capabilities weren’t just conjecture: Google’s researchers said it was used to design the next generation of the tensor processing units (TPUs) the company uses for machine learning. So they essentially taught an artificial intelligence to design chips that improve the performance of artificial intelligence.
That loop appears to be intentional. The researchers said they “believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields.” Those advancements could have other benefits, too, especially if the AI-designed chips truly are better “in all key metrics.”
It would be interesting to know how this might affect Google’s reported plans to develop its own systems-on-a-chip (SoCs) for use in phones and Chromebooks. The company is already switching to custom processors for some tasks; it reportedly replaced millions of Intel CPUs with its own video transcoding units.
The method described in this paper likely wouldn’t be limited to TPUs; the company could presumably use it to improve other application-specific integrated circuits (ASICs), chips built for a single, particular function. That could make those ASICs far easier to develop, letting Google ditch more off-the-shelf solutions.
Other developers should be able to benefit from the research, too, because Google has made TPUs available via a dedicated board as well as Google Cloud. Assuming the company doesn’t keep these next-generation TPUs to itself, developers ought to be able to take advantage of this artificial intelligence ouroboros before too long.