NVIDIA Ising is the world’s first family of open quantum AI models for quantum processor calibration and quantum error correction, built to accelerate the path to useful quantum computers.
Ising Decoding is reported as up to 2.5x faster than PyMatching, the current open-source baseline decoder.
The accurate decoder variant delivers up to 3x higher accuracy than PyMatching on NVIDIA’s launch benchmarks.
Ising Calibration automates continuous chip calibration workflows that traditionally take days.
“AI is essential to making quantum computing practical.”
Jensen Huang, Founder and CEO, NVIDIA
Published April 14, 2026. Core sources used here: NVIDIA Newsroom, CUDA-Q documentation, NVIDIA quantum platform pages, and NVIDIA’s official Hugging Face model hub.
Hybrid Quantum Stack
Calibration
NVIDIA Ising interprets measurements and telemetry from quantum processors so AI agents can steer calibration loops continuously.
Decoding
Two decoder tracks are optimized either for low-latency correction or higher-accuracy recovery on surface-code style workloads.
Integration
The release fits into hybrid GPU-QPU workflows for simulation, inference, and real-time control across the quantum stack.
NVIDIA Ising borrows its name and intuition from the classical Ising model, turning that heritage into practical tooling for calibration and quantum error correction.
In statistical mechanics, the Ising model simplifies a complex physical system into interacting spins. NVIDIA uses that historical framing as the name for NVIDIA Ising, a model family designed to simplify another hard system: noisy, calibration-heavy quantum hardware.
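To make the analogy concrete, the textbook Ising model can be computed in a few lines. This is a minimal sketch of the classical 1D model only, with no connection to NVIDIA's code; the function name and parameters are illustrative.

```python
def ising_energy(spins, J=1.0, h=0.0):
    """Energy of a 1D Ising chain: E = -J * sum(s_i * s_{i+1}) - h * sum(s_i)."""
    interaction = sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))
    field = sum(spins)
    return -J * interaction - h * field

# With ferromagnetic coupling (J > 0), aligned spins minimize the energy
aligned = [1, 1, 1, 1]
mixed = [1, -1, 1, -1]
print(ising_energy(aligned))  # -3.0
print(ising_energy(mixed))    # 3.0
```

The point of the abstraction is the same one NVIDIA invokes: a handful of discrete interactions stand in for a messy physical system, which makes reasoning and computation tractable.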
NVIDIA Ising is not a single chatbot-style foundation model. It is a product family that spans a large calibration model plus task-specific error-correction decoders, each aimed at making hybrid quantum-classical workflows more reliable.
For visitors who are not quantum specialists, the easiest way to think about NVIDIA Ising is this: it helps turn raw chip signals into actionable control decisions fast enough to matter.
Both the classical Ising model and the new Ising release focus on taming complicated physical behavior with structured abstractions.
Quantum systems need constant recalibration and fast correction. NVIDIA Ising turns that pressure into an automation problem that can scale.
Classical Origin
The original Ising model reduces a physical system into discrete interactions, making hard behavior easier to reason about and compute.
Quantum Stack
Modern QPUs generate measurement streams, calibration drift, and decoding pressure that quickly overwhelm static operating procedures without AI-driven automation.
AI Control Plane
NVIDIA Ising converts those signals into calibration actions and decoder outputs that can feed directly into hybrid GPU-QPU systems.
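The control-plane idea above can be sketched as a toy closed loop: a parameter drifts, a noisy measurement observes the detuning, and a controller folds that signal back into a correction. This is a generic proportional-control illustration under assumed numbers, not NVIDIA's calibration algorithm; all names and values here are hypothetical.

```python
import random

def measure(effective_freq, target, noise=0.02):
    """Simulated readout: observed detuning from target, plus measurement noise."""
    return (effective_freq - target) + random.gauss(0.0, noise)

def calibration_loop(target=5.0, drift=0.01, gain=0.5, steps=200, seed=7):
    """Toy closed loop: each cycle, the correction is nudged toward the drift."""
    random.seed(seed)
    true_freq, correction = 5.3, 0.0       # start well off target
    for _ in range(steps):
        true_freq += drift                 # hardware drifts every cycle
        detuning = measure(true_freq - correction, target)
        correction += gain * detuning      # proportional update from the signal
    return abs((true_freq - correction) - target)

residual = calibration_loop()
print(f"residual detuning after the loop: {residual:.3f}")
```

Even this trivial loop settles near a small steady-state error of roughly drift/gain; the engineering challenge NVIDIA Ising targets is doing this continuously, across many coupled parameters, from rich measurement data rather than a single scalar.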
Ecosystem Keywords
Each surface of NVIDIA Ising maps to a concrete engineering bottleneck in useful quantum computing.
NVIDIA Ising Calibration is a vision-language model that interprets processor measurements and helps automate the continuous calibration loop.
NVIDIA Ising Decoding provides fast and accurate variants for quantum error correction workloads, targeting the latency and fidelity limits of traditional decoders.
NVIDIA says the NVIDIA Ising models, data, and frameworks are available through GitHub, Hugging Face, and build.nvidia.com for practical experimentation.
NVIDIA Ising complements CUDA-Q software and NVQLink hardware so inference, simulation, and control can run in one hybrid quantum-classical system.
Useful quantum applications do not arrive just from more qubits. They also need AI-driven control systems, better correction loops, and better AI infrastructure around the hardware.
NVIDIA Ising addresses the two repetitive tasks that dominate the path from fragile lab hardware to scalable quantum systems: calibration and error correction. Both are data-heavy and time-sensitive, which makes them good candidates for AI acceleration.
That is why NVIDIA Ising matters beyond a single benchmark. It reframes quantum AI as operational infrastructure for quantum hardware, not as an adjacent analytics tool.
The strategic implication is larger than one model family: if inference can sit directly in the control loop, the quantum stack becomes more software-defined, more automatable, and more production-ready.
Bottlenecks
Quantum processors drift, accumulate errors, and demand constant tuning. The release targets this operating burden before it overwhelms growing systems.
What Changes
With NVIDIA Ising, inference helps decide how hardware should be tuned and how syndromes should be decoded inside a broader CUDA-Q workflow.
Early Adoption
NVIDIA’s launch announcement names research labs, universities, and quantum companies that are already using NVIDIA Ising Calibration or deploying NVIDIA Ising Decoding.
Ecosystem Keywords
The stack is relevant anywhere a hybrid workflow needs better calibration, faster correction, or tighter coupling between NVIDIA Ising and quantum programs.
Use CUDA-Q applications and NVIDIA Ising-calibrated hardware loops to support more stable chemistry and materials exploration workflows.
Hybrid optimization pipelines benefit when NVIDIA Ising decoder latency and calibration quality stop being the limiting step in repeated experiments.
CUDA-Q Academic and the open NVIDIA Ising releases make the stack useful for benchmarking, teaching, and reproducing hybrid quantum-classical experiments.
Researchers working on surface codes, decoder benchmarking, and control software can use NVIDIA Ising decoders as part of real-time correction pipelines.
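For readers new to decoding, the core task is a mapping from measured syndromes to corrections. The sketch below uses a lookup table for the 3-bit repetition code, the simplest possible stand-in for what surface-code decoders such as PyMatching or the Ising decoders do at scale; the matrix and table are illustrative, not from NVIDIA's release.

```python
import numpy as np

# Parity-check matrix for the 3-bit repetition code (two parity checks)
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)

# Precomputed minimum-weight correction for every possible syndrome
SYNDROME_TABLE = {
    (0, 0): np.array([0, 0, 0], dtype=np.uint8),  # no error
    (1, 0): np.array([1, 0, 0], dtype=np.uint8),  # flip on bit 0
    (1, 1): np.array([0, 1, 0], dtype=np.uint8),  # flip on bit 1
    (0, 1): np.array([0, 0, 1], dtype=np.uint8),  # flip on bit 2
}

def decode(syndrome):
    """Look up the most likely single-bit error for the observed syndrome."""
    return SYNDROME_TABLE[tuple(int(s) for s in syndrome)]

error = np.array([0, 1, 0], dtype=np.uint8)  # a single bit-flip
syndrome = (H @ error) % 2                   # both checks fire: [1, 1]
correction = decode(syndrome)
print((error ^ correction).tolist())         # [0, 0, 0] -> error removed
```

Lookup tables stop scaling almost immediately as codes grow, which is exactly why matching-based and, now, learned decoders exist: the syndrome-to-correction map must be computed, fast, rather than enumerated.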
The NVIDIA Ising release is best understood as a combined model-plus-tooling story: open weights, CUDA-Q libraries, TensorRT-backed decoders, and QPU-GPU integration.
Decoding Speed
Launch benchmark versus PyMatching for the fast decoder variant.
Decoding Accuracy
Launch benchmark versus PyMatching for the accurate decoder variant.
Calibration Loop
NVIDIA Ising Calibration shifts calibration from periodic manual intervention toward AI-assisted control.
NVIDIA Ising Calibration is published on Hugging Face as a 35B-A3B vision-language model tuned for calibration tasks and for interpreting quantum hardware measurements.
NVIDIA Ising Decoding has two SurfaceCode decoder releases listed publicly: one optimized for speed and one optimized for accuracy.
CUDA-QX QEC exposes decoder frameworks, TensorRT decoder support, and real-time decoding patterns for deployment in production-style experiments.
NVQLink positions GPUs and QPUs as a tightly coupled system for low-latency control, inference, and error-correction workflows.
Illustrative Python
import numpy as np
import cudaq_qec as qec

# Parity-check matrix for a small illustrative code
H = np.array([[1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=np.uint8)

# Build a TensorRT-backed decoder from an exported ONNX model
decoder = qec.get_decoder(
    "trt_decoder",
    H,
    onnx_load_path="ising-decoder.onnx",
    engine_save_path="ising-decoder.plan",
    precision="fp16",
)

The quickest path into NVIDIA Ising is to install CUDA-Q, pull the open models, then wire the model family into CUDA-Q or CUDA-QX QEC examples.
Use the official quick start to set up CUDA-Q locally and confirm the base hybrid quantum toolchain is working.
Start with the NVIDIA Ising calibration model and the published SurfaceCode decoder variants on Hugging Face.
Use CUDA-Q applications, QEC examples, or academic notebooks to connect NVIDIA Ising inference to calibration and decoding tasks.
Install the platform and verify the first environment setup.
Browse CUDA-Q application walkthroughs for chemistry, optimization, and more.
Use academic notebooks and teaching material for demos, study, and labs.
Ask implementation questions and follow ecosystem conversations around CUDA-Q.
Use the official docs, model weights, and platform entry points to start with calibration or QEC workflows.