
One of the topics we've returned to repeatedly at ExtremeTech is the difficulty of performance scaling in both CPUs and GPUs. Semiconductor manufacturers across the world are grappling with this problem, as are CPU and GPU designers. To date, there have been no miracle cures or easy solutions, but companies have turned to a wide range of alternative approaches to help boost baseline semiconductor performance, including new types of packaging technology. HBM, chiplets, and even technologies like Intel's new 3D interconnect, Foveros, are all part of a broad industry effort to find new ways to connect chips rather than focusing solely on making chips smaller.

Now, a pair of researchers are arguing that it's time to go one step further and dump the printed circuit board altogether. In an article for IEEE Spectrum, Puneet Gupta and Subramanian S. Iyer write that it's time to replace PCBs with silicon itself, and manufacture entire systems on a single wafer. If that argument sounds familiar, it's because the pair are two of the authors of a paper we covered earlier this year on the merits of wafer-scale processing for GPUs using an interconnect technology they developed known as Si-IF (Silicon Interconnect Fabric).

Wafer-scale processing is the idea of using an entire silicon wafer to build one enormous part: a GPU, in this case. The research team's work showed that this is potentially a viable approach today in terms of yield, performance, and power consumption, with better results than would be expected from building the equivalent number of separate GPUs using conventional manufacturing techniques. In the article, the pair note the many problems associated with keeping motherboards around in the first place, starting with the need to mount the actual physical chip in a package up to 20x larger than the CPU. The die of the CPU, after all, is typically much smaller than the physical substrate it's mounted on.

The authors argue that using physical packages for chips (as we do when we mount them on a PCB) increases the distance chip-to-chip signals need to travel by a factor of 10, creating a mammoth speed and memory bottleneck. This is part of the problem with the so-called "memory wall": RAM clocks and performance have increased far more slowly than CPU performance, in part because of the need to wire up memory at some physical distance from the CPU package. It's also part of why HBM is able to provide such enormous memory bandwidth, since moving memory closer to the CPU allows for much wider signal paths. Packaged chips are also harder to keep cool. Why do we do all this? Because PCBs require it.
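To put that distance factor in perspective, here's a minimal back-of-envelope sketch in Python. The link lengths and the signal velocity below are illustrative assumptions on our part, not figures from the article; the point is simply that cutting chip-to-chip distance by roughly 10x cuts wire flight time by the same factor.

```python
# Back-of-envelope sketch: how much wire flight time is saved by shrinking
# chip-to-chip distance. All numbers here are illustrative assumptions,
# not figures from the article.

C = 3.0e8              # speed of light in a vacuum, m/s
VELOCITY_FACTOR = 0.5  # assumed signal speed as a fraction of c

def propagation_delay_ns(distance_m: float) -> float:
    """Time for a signal to traverse distance_m at the assumed velocity."""
    return distance_m / (C * VELOCITY_FACTOR) * 1e9

# Assumed link lengths: centimeters of PCB trace between packaged chips,
# versus a link roughly 10x shorter between bare dies bonded on silicon.
pcb_trace_m = 0.05    # 5 cm, assumption
si_if_link_m = 0.005  # 5 mm, assumption (the article's ~10x reduction)

print(f"PCB trace delay:  {propagation_delay_ns(pcb_trace_m):.3f} ns")
print(f"Si-IF link delay: {propagation_delay_ns(si_if_link_m):.3f} ns")
```

Propagation delay is only one piece of the bottleneck (drivers, termination, and signal integrity matter too), but it scales directly with distance, which is what the authors are attacking.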

But according to the two researchers, silicon interposers and similar technologies are fundamentally the wrong path. Instead, they propose bonding processors, memory, analog, RF, and all other chiplets directly on-wafer. Instead of solder bumps, chips would use micrometer-scale copper pillars placed directly on the silicon substrate. Chip I/O ports would be directly bonded to the substrate with thermal compression in a copper-to-copper bond. Heatsinks could be placed on both sides of the Si-IF to cool products more readily, and silicon is a better conductor of heat than PCB material.
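The attraction of micrometer-scale pillars is connection density. Here's a rough sketch of how I/O density scales with pad pitch; both pitch values are illustrative assumptions, not numbers from the Si-IF work.

```python
# Rough sketch of how many I/O connections fit per square millimeter when
# micrometer-scale copper pillars replace solder bumps. Both pitches are
# illustrative assumptions, not numbers from the Si-IF paper.

def ios_per_mm2(pitch_um: float) -> float:
    """I/O pads per square millimeter for a square grid at the given pitch."""
    pads_per_mm = 1000.0 / pitch_um
    return pads_per_mm ** 2

solder_bump_pitch_um = 200.0   # assumed typical flip-chip bump pitch
copper_pillar_pitch_um = 10.0  # assumed micrometer-scale pillar pitch

print(f"Solder bumps:   {ios_per_mm2(solder_bump_pitch_um):,.0f} I/Os per mm^2")
print(f"Copper pillars: {ios_per_mm2(copper_pillar_pitch_um):,.0f} I/Os per mm^2")
```

Because pad count grows with the square of the pitch reduction, even a modest shrink in pitch translates into vastly wider (and therefore faster or lower-clocked, more efficient) chip-to-chip links.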

[Chart: latency comparison from the wafer-scale GPU paper]

The sheer difficulty of current scaling makes its own argument for exploring ideas like this. The old Moore's Law/Dennard scaling mantra of "smaller, faster, cheaper" isn't working anymore. It's entirely possible that replacing PCBs with a better substrate could allow for substantially better performance scaling, at least in certain scenarios. Wafer-scale systems would be far too expensive to be installed in anyone's home, but they could power the servers of the future. Companies like Microsoft, Amazon, and Google are betting billions on the idea that the next wave of high-performance computing will be cloud-driven, and giant wafer-based computers could find a happy home in industrial data centers. Based on the results of the GPU testing we covered earlier this year, there seems to be merit in the idea. The graph above, from the GPU paper, shows latency, bandwidth, and energy requirements for wafer-scale integration versus conventional methods.

Performance Isn't the Only Reason the PC Exists

It's also important, however, to acknowledge that the PC ecosystem doesn't only exist to improve performance. PCs are designed to be flexible and modular in order to facilitate use in a vast array of circumstances. I have an older X79-based machine with a limited number of USB 3.0 ports. Years ago, I decided to supplement this meager number with a four-port USB 3.0 card. If my GPU needs an upgrade, I upgrade it; I don't buy an all-new system. If my motherboard fails, I can theoretically swap to a different board. A RAM failure means throwing away a stick of DDR3 and dropping in a new one, not a wholesale part replacement. That flexibility of approach is part of the reason why PCs have declined in cost and why the platform can be used for so many different tasks in the first place.

The authors do a commendable job of laying out both the strengths and shortcomings of adopting their Si-IF solution for semiconductor manufacturing, and the article, while long, is absolutely worth reading. They note that companies would need to design redundant systems at the wafer level to ensure that any failures in situ were kept to an absolute minimum. But at the moment, the entire semiconductor industry is more or less aligned in the opposite direction from an idea like this. It might still be possible to buy a system with AMD CPUs and Nvidia GPUs, for example, but that would depend on an unprecedented collaboration between a client foundry (TSMC, Samsung, or GlobalFoundries) and two different clients.
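On the redundancy point, a minimal sketch helps show why built-in spares matter so much at wafer scale. The model and numbers below are our own illustrative assumptions, not the authors' figures: assume the wafer carries a set of required chiplet sites plus some spares, each site is good independently with some per-site yield, and the system works if enough sites are good.

```python
# Hedged sketch of why on-wafer redundancy matters for wafer-scale systems.
# The binomial model and all numbers are illustrative assumptions, not the
# authors' data.

from math import comb

def system_yield(needed: int, spares: int, site_yield: float) -> float:
    """P(at least `needed` good sites out of `needed + spares`)."""
    total = needed + spares
    return sum(
        comb(total, k) * site_yield**k * (1 - site_yield)**(total - k)
        for k in range(needed, total + 1)
    )

site_yield = 0.95  # assumed per-chiplet-site yield
print(f"100 sites, no spares:  {system_yield(100, 0, site_yield):.2%}")
print(f"100 sites, 10 spares:  {system_yield(100, 10, site_yield):.2%}")
```

Under these assumed numbers, a wafer with no spare sites almost never comes out fully functional, while a modest pool of spares pushes the odds above 90 percent, which is why the authors emphasize wafer-level redundancy.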

I suspect we could see companies looking into this type of buildout in the long term, but it's not something I would expect to ever replace the more conventional PC ecosystem. The benefits of upgradability and flexibility are huge, as is the economy of scale these capabilities collectively create. But big data center providers might opt for this approach long-term if it yields results. Moore's Law alone isn't going to provide the answers companies are seeking. Getting rid of motherboards is a radical idea, but if it works and can ever be done affordably, somebody will probably take a shot at it.

Now Read:

  • Intel Uses New Foveros 3D Chip-Stacking to Build Core, Atom on Same Silicon
  • MIT Develops 3D Chip That Integrates CPU, Memory
  • Building GPUs Out of Entire Wafers Could Turbocharge Performance, Efficiency