
About a year and a half ago, quantum control startup Quantum Machines and Nvidia announced a deep partnership that would integrate Nvidia’s DGX Quantum computing platform with Quantum Machines’ advanced quantum control hardware. We hadn’t heard much about the results of this partnership for a while, but it is now starting to bear fruit, bringing the industry one step closer to the holy grail of error-corrected quantum computers.
In a presentation earlier this year, the two companies showed that off-the-shelf reinforcement learning models running on Nvidia’s DGX platform could be used to better control the qubits on a Rigetti quantum chip by maintaining system calibration.
Yonatan Cohen, co-founder and CTO of Quantum Machines, noted that his company has long sought to control quantum processors with classical compute engines. Those engines used to be small and limited, which is no longer an issue with Nvidia’s extremely powerful DGX platform. Implementing quantum error correction, he said, is the holy grail, but the collaboration is not there yet. Instead, it focused on calibration, specifically calibrating the so-called “π pulses” that control the rotation of qubits inside a quantum processor.
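To make the calibration target concrete, here is a small illustrative sketch, not code from either company: in the standard single-qubit picture, a pulse of rotation angle θ implements the gate Rx(θ), and a π pulse (θ = π) flips |0⟩ to |1⟩; an amplitude error makes θ miss π, leaving residual population behind.

```python
import math

# Illustrative model only: a pulse of angle theta implements the single-qubit
# rotation R_x(theta). A "pi pulse" (theta = pi) flips |0> to |1> up to a
# global phase; an amplitude error makes theta miss pi, so the flip is
# incomplete.
def rx(theta):
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

ket0 = [1, 0]  # ground state |0>
for err in (0.0, 0.05, 0.10):  # fractional amplitude error
    out = apply(rx(math.pi * (1 + err)), ket0)
    p1 = abs(out[1]) ** 2      # excited-state population after the pulse
    print(f"amplitude error {err:.0%}: P(|1>) = {p1:.4f}")
```

Even a 10% amplitude error still leaves the pulse roughly 97.5% effective, which is why the resulting fidelity loss only becomes visible, and costly, at the error rates quantum error correction cares about.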
At first glance, calibration may seem like a one-time problem: calibrate the processor once before it begins executing an algorithm. But it’s not that simple. “If you look at the performance of today’s quantum computers, you can achieve some level of high fidelity,” Cohen said. “But when users use their computers, they typically don’t maintain the best fidelity. It always drifts. If you can use this kind of technology and the underlying hardware to recalibrate frequently, you can improve performance and maintain the (high) fidelity needed for quantum error correction for a long time.”
Continuously adjusting these pulses in near real time is extremely computationally intensive, but because every quantum system is slightly different, it is also a control problem well suited to reinforcement learning.
“As quantum computers scale and improve, they create bottlenecks and genuinely compute-intensive problems,” said Sam Stanwyck, product manager for Nvidia’s quantum computing group. “Quantum error correction is truly a huge undertaking. It is necessary to unlock fault-tolerant quantum computing, but we also need a way to apply exactly the right control pulses to get the most out of the qubits.”
Stanwyck also emphasized that prior to DGX Quantum, no system supported the minimum latency required to perform these calculations.
As a result, even small improvements in calibration can lead to significant improvements in error correction. “When it comes to quantum error correction, the return on investment in calibration is exponential,” explains Quantum Machines product manager Ramon Szmuk. “A 10% better calibration will exponentially improve the logical error (performance) of a logical qubit made up of many physical qubits. So there’s a lot of motivation here to calibrate very well and quickly.”
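Szmuk’s point can be made concrete with the standard surface-code scaling approximation p_L ≈ A·(p/p_th)^((d+1)/2), a textbook formula rather than figures from the article: because the exponent grows with code distance d, a fixed 10% reduction in the physical error rate p buys an ever-larger multiplicative reduction in the logical error rate.

```python
# Standard surface-code scaling approximation (illustrative values, not
# figures from the article): logical error rate
#   p_L ~ A * (p / p_th) ** ((d + 1) / 2)
# for physical error rate p, threshold p_th, and code distance d.
def logical_error(p, d, p_th=0.01, A=0.1):
    return A * (p / p_th) ** ((d + 1) / 2)

p = 0.005           # baseline physical error rate (assumed for illustration)
p_better = p * 0.9  # 10% improvement from better calibration

for d in (3, 11, 21):
    gain = logical_error(p, d) / logical_error(p_better, d)
    print(f"d={d:2d}: logical error improves {gain:.1f}x")
```

Note that the ratio is independent of the constants A and p_th; only the exponent matters, so the payoff compounds as codes get larger.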
It is worth emphasizing that this is only the beginning of the optimization process and of the collaboration. What the team actually did was take a few off-the-shelf reinforcement learning algorithms and see which worked best (in this case, TD3). In total, the code to run the experiment was only about 150 lines. That, of course, rests on all the work both teams did to integrate the various systems and build the software stack. For developers, though, all of that complexity can stay hidden, and both companies expect to release more and more open source libraries over time to take advantage of the platform.
Szmuk emphasized that although the team worked only with very basic quantum circuits in this project, the approach generalizes to deeper circuits as well. “If we can do this with one gate and one qubit, we can also do it with 100 qubits and 1,000 gates.”
“I would say the individual results are small steps, but they are small steps toward solving the most important problems,” Stanwyck added. “Useful quantum computing requires tight integration with accelerated supercomputing, which may be the most difficult engineering challenge. Being able to actually do this on a quantum computer, and to tune the pulses in a way that is not just optimized for a small quantum computer but is a scalable, modular platform, I think puts us on the way to solving one of the most important problems in quantum computing.”
Stanwyck also said the two companies plan to continue their collaboration and make these tools available to more researchers. With Nvidia’s Blackwell chips coming out next year, the project will also gain a considerably more powerful computing platform to work with.