JAX-based tensor network library with symmetry-aware block-sparse tensors and label-based contraction
Block-sparse storage for U(1), Zn, and fermionic symmetries. Only allowed charge sectors are stored.
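The storage idea can be sketched in a few lines of plain Python (a hypothetical illustration, not the tenax API): each leg carries a list of (charge, dimension) sectors, and a block is allocated only when its charges sum to zero.

```python
from itertools import product
import numpy as np

# Hypothetical sketch of U(1) block-sparse storage (not the tenax API).
# A block is kept only if its sector charges sum to zero (charge conservation).
def allowed_blocks(legs):
    """Enumerate sector combinations with zero total charge."""
    blocks = {}
    for combo in product(*[range(len(sectors)) for sectors in legs]):
        charges = tuple(legs[i][s][0] for i, s in enumerate(combo))
        if sum(charges) == 0:
            dims = tuple(legs[i][s][1] for i, s in enumerate(combo))
            blocks[charges] = np.zeros(dims)
    return blocks

# Two legs with charges -1, 0, +1 (dimension 2 each): of the 9 possible
# sector pairs, only the 3 charge-neutral ones are stored.
legs = [[(-1, 2), (0, 2), (1, 2)], [(-1, 2), (0, 2), (1, 2)]]
blocks = allowed_blocks(legs)
print(sorted(blocks))  # [(-1, 1), (0, 0), (1, -1)]
```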
Legs identified by string labels; shared labels across tensors are automatically contracted.
Integrated with opt_einsum for finding optimal contraction orders in multi-tensor networks.
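The label-based contraction idea can be illustrated with plain numpy.einsum (a minimal sketch under assumed semantics, not the tenax API): map each string label to an einsum axis, contract every label shared by both tensors, and keep the rest.

```python
import numpy as np

# Hypothetical sketch of label-based contraction (not the tenax API):
# shared labels become summed einsum indices, unique labels survive.
def contract(a, a_labels, b, b_labels):
    labels = list(dict.fromkeys(a_labels + b_labels))  # stable unique labels
    sym = {lab: chr(ord("a") + i) for i, lab in enumerate(labels)}
    out = [lab for lab in labels if (lab in a_labels) ^ (lab in b_labels)]
    spec = ("".join(sym[l] for l in a_labels) + ","
            + "".join(sym[l] for l in b_labels) + "->"
            + "".join(sym[l] for l in out))
    return np.einsum(spec, a, b), out

A = np.random.rand(3, 4)  # labels ("i", "k")
B = np.random.rand(4, 5)  # labels ("k", "j")
C, labels = contract(A, ("i", "k"), B, ("k", "j"))
print(labels, C.shape)    # shared label "k" is contracted away
```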
JAX's jit, grad, and vmap work out of the box. GPU, TPU, and Metal acceleration.
Fused Cython BLAS for CPU block-sparse contractions, JIT-compiled DMRG sweeps on GPU/TPU, and multi-GPU sharding via GSPMD.
MCP server for running calculations from Claude. Built-in skills for ground states, debugging, benchmarking, and migration from ITensor, TeNPy, Cytnx, and quimb.
Finite 1D chains and 2D cylinder ground states
Infinite chains and infinite cylinders
2D classical partition functions via tensor coarse-graining
2D ground states with simple update, AD optimization (Adam, L-BFGS, CG), QR projectors, 2-site unit cells, and split-CTMRG
fPEPS with simple update and AD optimization for spinless fermions with FermionParity and FermionicU1 symmetries
Quasiparticle spectra via iPEPS at arbitrary momenta
Real-time and imaginary-time MPS evolution with 1-site and 2-site variants
Build Hamiltonians from symbolic operator descriptions
Run DMRG, TRG, HOTRG, and more directly from Claude Code. Ask questions in natural language and have the corresponding tensor network calculations run for you.
Built-in prompt templates for ground states, benchmarking, debugging, teaching, and migrating from other libraries.
from tenax import AutoMPO, DMRGConfig, build_random_mps, dmrg
L = 20
auto = AutoMPO(L)
# Spin-1/2 Heisenberg chain: Sz·Sz + (1/2)(S+S- + S-S+) on nearest neighbors
for i in range(L - 1):
    auto += (1.0, "Sz", i, "Sz", i + 1)
    auto += (0.5, "Sp", i, "Sm", i + 1)
    auto += (0.5, "Sm", i, "Sp", i + 1)
mpo = auto.to_mpo()
mps = build_random_mps(L, bond_dim=4)
result = dmrg(mpo, mps, DMRGConfig(max_bond_dim=100, num_sweeps=10))
print(f"Ground state energy: {result.energy:.8f}")
Every algorithm works with JAX's grad and jit. AD-based iPEPS optimization uses implicit differentiation through the CTM fixed point for stable gradients.
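The implicit-differentiation idea can be sketched with a scalar fixed point in plain numpy-free Python (an illustration of the technique, not the tenax/CTMRG implementation). For x* = f(x*, t), the implicit function theorem gives dx*/dt = (df/dt) / (1 - df/dx) at the fixed point, so the gradient never backpropagates through the iteration itself.

```python
# Hypothetical scalar sketch of implicit differentiation (not tenax code).
# Fixed point of f(x, t) = 0.5 * x + t  =>  x* = 2 t.
def f(x, t):
    return 0.5 * x + t

def fixed_point(t, x0=0.0, iters=100):
    """Iterate the contraction map to convergence."""
    x = x0
    for _ in range(iters):
        x = f(x, t)
    return x

t = 0.3
x_star = fixed_point(t)

# Implicit function theorem: dx*/dt = (df/dt) / (1 - df/dx) at x*.
df_dx, df_dt = 0.5, 1.0
grad = df_dt / (1.0 - df_dx)  # = 2.0, independent of iteration count

print(x_star, grad)  # x* ≈ 0.6, dx*/dt = 2.0
```

The same structure carries over to CTM-style environments: the fixed-point condition is differentiated once, instead of unrolling hundreds of power-method steps, which is what makes the gradients stable.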
Same code runs on CPU, NVIDIA GPU (CUDA 12/13), Google Cloud TPU, and Apple Silicon (Metal). No code changes — just install the right backend.
CLI-driven performance benchmarks for every algorithm across all backends. Compare wall-clock timings with a single command and export to JSON or CSV.
Covers the full range: finite DMRG, iDMRG (chains and infinite cylinders), 2D cylinder DMRG, iPEPS ground states, and quasiparticle excitation spectra.
Tenax shares core ideas with ITensor, TeNPy, Cytnx, quimb, and TensorKit.jl. Our migration guides map concepts and code patterns so you can translate your existing work.
pip install tenax-tn
pip install "tenax-tn[cuda13]"
pip install "tenax-tn[cuda12]"
pip install "tenax-tn[tpu]"
pip install "tenax-tn[metal]"
git clone https://github.com/tenax-lab/tenax.git
cd tenax && pip install -e ".[dev]"