Run tensor network calculations directly from Claude Code using MCP tools and the Tenax Toolkit plugin
The Model Context Protocol (MCP) lets AI assistants call external tools. Tenax provides an MCP server that gives Claude Code direct access to tensor network algorithms — no boilerplate, no copy-paste.
Ask Claude to “find the ground state energy of a 20-site Heisenberg chain” and it will call run_dmrg with the right parameters, interpret the results, and explain the physics.
Add the Tenax MCP server to your Claude Code configuration (~/.claude/settings.json):
{
  "mcpServers": {
    "tenax": {
      "command": "uv",
      "args": [
        "--directory", "/path/to/tenax-mcp",
        "run", "tenax-mcp"
      ]
    }
  }
}
Tenax exposes 9 tools through the MCP server. Claude Code can call any of them directly during a conversation.
Run DMRG ground state search for a 1D quantum Hamiltonian. Specify the chain length, Hamiltonian terms, bond dimension, number of sweeps, and convergence tolerance. Returns the ground state energy, energy per sweep, and convergence status.
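For small systems you can sanity-check a DMRG result against exact diagonalization. The sketch below uses plain NumPy (not the Tenax API) to compute the exact ground-state energy of an 8-site open Heisenberg chain — the kind of reference number the run_dmrg output should reproduce:

```python
# Illustrative cross-check in plain NumPy (not the Tenax API):
# exact diagonalization of an 8-site open spin-1/2 Heisenberg chain.
import numpy as np

N = 8  # sites; 2**8 = 256 states, trivial to diagonalize exactly

# spin-1/2 operators
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])
sm = sp.T
iden = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator at site i into the full Hilbert space."""
    mats = [op if j == i else iden for j in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# H = sum_i S_i . S_{i+1} with open boundaries
H = np.zeros((2**N, 2**N))
for i in range(N - 1):
    H += site_op(sz, i) @ site_op(sz, i + 1)
    H += 0.5 * (site_op(sp, i) @ site_op(sm, i + 1)
                + site_op(sm, i) @ site_op(sp, i + 1))

e0 = np.linalg.eigvalsh(H)[0]
print(f"exact ground-state energy (N={N}, open): {e0:.6f}")
```

A converged DMRG run on the same chain should match this energy to within the requested tolerance; for longer chains, where exact diagonalization is impossible, the per-site energy approaches the Bethe-ansatz value 1/4 − ln 2 ≈ −0.4431.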
Run the Tensor Renormalization Group on a 2D classical model. Currently supports the 2D Ising model. Provide inverse temperature or temperature, coupling constant, and TRG bond dimension. Returns free energy per site, exact solution, and relative error.
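The "exact solution" the tool compares against is Onsager's closed form for the infinite square lattice. As a plain-SciPy illustration (independent of the Tenax API), the free energy per site can be evaluated directly from the Onsager double integral:

```python
# Onsager's exact free energy per site for the 2D square-lattice Ising
# model — the reference a TRG result's relative error is measured against.
# Plain NumPy/SciPy sketch, not the Tenax API.
import numpy as np
from scipy.integrate import dblquad

def onsager_free_energy(beta, J=1.0):
    """f = -(1/beta) * [ln(2)/2 + I/(8 pi^2)], with I the Onsager integral."""
    K = beta * J
    def integrand(t1, t2):
        return np.log(np.cosh(2 * K) ** 2
                      - np.sinh(2 * K) * (np.cos(t1) + np.cos(t2)))
    integral, _ = dblquad(integrand, 0.0, 2 * np.pi, 0.0, 2 * np.pi)
    return -(np.log(2) / 2 + integral / (8 * np.pi ** 2)) / beta

f = onsager_free_energy(beta=0.5)  # low-temperature side of the transition
print(f"exact free energy per site at beta=0.5: {f:.6f}")
```

The critical point sits at beta_c = ln(1 + √2)/2 ≈ 0.4407 (for J = 1); TRG accuracy typically degrades near it, which is exactly what the reported relative error makes visible.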
Run Higher-Order TRG on a 2D classical model. Uses higher-order SVD for more accurate coarse-graining than standard TRG. Supports configurable direction order (alternating, horizontal, vertical). Returns free energy per site, exact solution, and relative error.
Build an MPO Hamiltonian from operator terms and return diagnostic info (bond dimensions, number of terms, total Hilbert space dimension) without running a calculation.
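To make the bond-dimension diagnostic concrete, here is a minimal plain-NumPy sketch (not Tenax's AutoMPO) of the standard bond-dimension-3 MPO for the transverse-field Ising model, verified against the exact Hamiltonian on three sites:

```python
# Minimal MPO sketch in plain NumPy (not Tenax's AutoMPO) for
# H = -J sum Z_i Z_{i+1} - h sum X_i. The bulk tensor has bond
# dimension 3 — the kind of figure the diagnostic tool reports.
import numpy as np

J, h = 1.0, 0.5
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Bulk tensor W[left, right, s, s'] in lower-triangular form.
W = np.zeros((3, 3, 2, 2))
W[0, 0] = I2        # "no term started yet"
W[1, 0] = Z         # right half of a Z Z bond term
W[2, 0] = -h * X    # on-site field term
W[2, 1] = -J * Z    # left half of a Z Z bond term
W[2, 2] = I2        # "term already completed"

l = np.array([0.0, 0.0, 1.0])  # left boundary: select last row
r = np.array([1.0, 0.0, 0.0])  # right boundary: select first column

# Contract the 3-site MPO into a dense 8x8 matrix.
H_mat = np.einsum('a,absx,bcty,cduz,d->stuxyz', l, W, W, W, r).reshape(8, 8)
print(H_mat.shape)  # (8, 8)
```

Contracting the MPO into a dense matrix is only feasible for a handful of sites, which is why the tool reports bond dimensions and Hilbert-space size instead of materializing the operator.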
Find the optimal contraction path and FLOP cost for a tensor network. Provide tensor labels and dimensions; returns the optimal contraction order, FLOP count, and speedup over naive contraction.
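NumPy's built-in np.einsum_path gives a quick feel for what such a path search produces. The shapes below are chosen so that contraction order genuinely matters:

```python
# Illustrative contraction-order search with NumPy's np.einsum_path,
# analogous in spirit to the path-finding tool's output.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 1000))   # contracting A with B first keeps
B = rng.standard_normal((1000, 5))    # intermediates tiny; starting with
C = rng.standard_normal((5, 1000))    # B and C is orders of magnitude worse

path, report = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='optimal')
print(report)  # lists naive vs. optimized FLOP counts and the speedup
```

Here (A·B)·C costs about 10^5 multiply–adds, while A·(B·C) costs over 10^7, so the optimizer's ordering is far from cosmetic at realistic tensor-network bond dimensions.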
Check tensor network validity — dimension matching, charge consistency, and flow directions. Returns a validation result with a list of any issues found.
Generate complete, runnable Tenax Python code from a high-level description. Supports DMRG, TRG, HOTRG, iDMRG, iPEPS (1-site, 2-site, split-CTM), fermionic iPEPS, standard CTM with Tensor protocol, and quasiparticle excitations. Returns a ready-to-run Python script.
List available built-in spin operators and their matrix representations. Supports spin-1/2 (Sz, Sp, Sm, Id) and spin-1 (Sz, Sp, Sm, Id in the |+1⟩, |0⟩, |-1⟩ basis).
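The spin-1/2 matrices behind that listing are small enough to write out and check directly. A plain-NumPy sketch with the defining SU(2) ladder relations:

```python
# The spin-1/2 operators the listing refers to, with algebra checks
# (plain NumPy): [Sz, S+] = S+ and [S+, S-] = 2 Sz.
import numpy as np

Sz = np.array([[0.5, 0.0], [0.0, -0.5]])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # S+ raises |down> to |up>
Sm = Sp.conj().T                         # S- = (S+)^dagger
Id = np.eye(2)

comm = lambda a, b: a @ b - b @ a
print(np.allclose(comm(Sz, Sp), Sp))      # True
print(np.allclose(comm(Sp, Sm), 2 * Sz))  # True
```

The spin-1 versions satisfy the same commutation relations with 3x3 matrices in the |+1⟩, |0⟩, |-1⟩ basis.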
Convert a tensor network description to .net file format (Cytnx-compatible). Provide tensor names and leg labels; returns a ready-to-use network file.
The Tenax Toolkit is a Claude Code plugin that gives Claude 18 domain-specific skills for tensor network simulations. Install it once and Claude automatically knows how to guide you through DMRG, iPEPS, TRG, symmetry-aware tensors, and more.
In Claude Code, first add the plugin from the marketplace (one-time setup), then install it:
claude plugin marketplace add tenax-lab/tenax-toolkit
claude plugin install tenax-toolkit
That’s it — all 18 skills are immediately available. Claude will automatically invoke them when your questions match (e.g., asking about DMRG triggers tenax-dmrg-workflow, asking about symmetries triggers tenax-symmetry).
Complete DMRG ground-state calculation: finite DMRG, iDMRG, and 2D cylinder DMRG with AutoMPO Hamiltonians.
iPEPS pipeline: simple update, AD-based optimization with CTM environments, and quasiparticle excitation spectra.
TRG and HOTRG for 2D classical statistical mechanics: partition functions, free energy, and phase transitions.
Fermionic iPEPS with graded tensors: FermionParity, FermionicU1, spinless fermions, and the t-V model.
Core tensor operations: DenseTensor, SymmetricTensor, label-based contraction, SVD, QR, and eigendecomposition.
Symmetry system: U(1) and Z_n, TensorIndex with charges and FlowDirection, block-sparse operations.
Build Hamiltonian MPOs from natural-language model descriptions using AutoMPO.
Design tensor network contractions using NetworkBlueprint and .net topology files.
Compute expectation values, correlation functions, entanglement entropy, and order parameters.
Diagnose shape mismatches, JAX tracing issues, gradient problems, and convergence failures.
Design and run performance benchmarks across CPU, CUDA, TPU, and Metal backends.
Run exact diagonalization and DMRG side-by-side to validate results and study truncation error.
Generate scaffolded tensor network homework problems for graduate courses.
Install Tenax, configure JAX backends, and run your first calculation.
Translate ITensor (Julia/C++) code and concepts to Tenax.
Translate TeNPy code and concepts to Tenax.
Translate Cytnx code and concepts to Tenax.
Translate quimb code and concepts to Tenax.
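As a taste of the measurement skills above, here is the smallest possible worked example of an entanglement-entropy calculation in plain NumPy (not the Tenax API): the two-site singlet, whose entropy across the cut is exactly ln 2.

```python
# Worked entanglement-entropy example in plain NumPy (not the Tenax API):
# the two-site singlet state has exactly S = ln 2 across its single cut.
import numpy as np

# |psi> = (|up,down> - |down,up>) / sqrt(2), written as a 2x2
# coefficient matrix psi[s1, s2]
psi = np.array([[0.0, 1.0], [-1.0, 0.0]]) / np.sqrt(2)

# Schmidt decomposition across the cut = SVD of the coefficient matrix
s = np.linalg.svd(psi, compute_uv=False)
p = s ** 2                               # Schmidt weights (sum to 1)
entropy = -np.sum(p * np.log(p))         # von Neumann entropy
print(f"entanglement entropy: {entropy:.6f}")  # ln 2 ~ 0.693147
```

The same SVD-of-the-cut recipe is what entanglement-entropy measurements on MPS and PEPS wavefunctions reduce to, just with larger coefficient matrices.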