Tenax

JAX-based tensor network library with symmetry-aware block-sparse tensors and label-based contraction

$ pip install tenax-tn

Experimental project — This library is under active development and largely written with the assistance of Claude Code (AI). While we test extensively, AI-generated code can contain subtle bugs. Please verify results against known benchmarks before using them in research.

Key Features

Symmetry-aware tensors

Block-sparse storage for U(1), Zn, and fermionic symmetries. Only allowed charge sectors are stored.
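The idea behind block-sparse storage can be sketched in plain numpy (illustrative only; Tenax's actual tensor classes and method names are not shown here). Indices on each leg carry a charge, matrix elements are structurally zero unless row and column charges agree, and operations touch only the stored blocks:

```python
import numpy as np

# Conceptual sketch of U(1) block-sparse storage: one dense block per
# allowed charge sector, keyed by the charge q.
row_sectors = {0: 2, 1: 1}   # charge -> number of row indices in that sector
col_sectors = {0: 2, 1: 2}   # charge -> number of column indices

rng = np.random.default_rng(0)
A = {q: rng.normal(size=(row_sectors[q], col_sectors[q])) for q in row_sectors}
B = {q: rng.normal(size=(col_sectors[q], row_sectors[q])) for q in col_sectors}

def block_matmul(X, Y):
    """Multiply sector-by-sector; forbidden sectors are never touched."""
    return {q: X[q] @ Y[q] for q in X if q in Y}

C = block_matmul(A, B)   # result is again block-diagonal in the charge

def to_dense(X, rows, cols):
    """Embed the blocks into the equivalent (mostly zero) dense matrix."""
    D = np.zeros((sum(rows.values()), sum(cols.values())))
    r = c = 0
    for q in sorted(rows):
        D[r:r + rows[q], c:c + cols[q]] = X[q]
        r += rows[q]
        c += cols[q]
    return D

# The dense product agrees, but wastes work on structurally-zero entries.
dense_C = to_dense(A, row_sectors, col_sectors) @ to_dense(B, col_sectors, row_sectors)
```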

Label-based contraction

Legs identified by string labels; shared labels across tensors are automatically contracted.
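A toy version of the mechanism, written against plain einsum rather than Tenax's real tensor class: legs carry string labels, and any label shared by both tensors is summed over automatically.

```python
import numpy as np

def contract(a, a_labels, b, b_labels):
    """Contract over every label the two tensors share (a toy sketch)."""
    letters = {}
    def spec(labels):
        return ''.join(letters.setdefault(l, chr(97 + len(letters))) for l in labels)
    sa, sb = spec(a_labels), spec(b_labels)
    # Labels appearing in exactly one tensor survive as output legs.
    out = [l for l in list(a_labels) + list(b_labels)
           if (l in a_labels) != (l in b_labels)]
    return np.einsum(f'{sa},{sb}->{spec(out)}', a, b), out

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4))   # legs ("left", "bond")
B = rng.normal(size=(4, 5))   # legs ("bond", "right")
C, C_labels = contract(A, ("left", "bond"), B, ("bond", "right"))
# Shared label "bond" is contracted; C has legs ("left", "right").
```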

Optimal contraction paths

Integrated with opt_einsum for finding optimal contraction orders in multi-tensor networks.
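Why path optimization matters, shown with numpy's `einsum_path` (which implements the same cost analysis opt_einsum popularized): contracting a chain of matrices in the wrong order creates enormous intermediates.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 1000))
B = rng.normal(size=(1000, 8))
C = rng.normal(size=(8, 1000))

# For 'ij,jk,kl->il', contracting (A, B) first yields a tiny 8x8
# intermediate; contracting (B, C) first would build a 1000x1000 one.
path, info = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='optimal')
result = np.einsum('ij,jk,kl->il', A, B, C, optimize=path)
```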

Pure JAX

jit, grad, vmap work out of the box. GPU, TPU, and Metal acceleration.
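Since everything is ordinary JAX, the standard transformations compose as usual. A minimal sketch using plain jax.numpy (not Tenax tensors):

```python
import jax
import jax.numpy as jnp

# Any scalar-valued function built from JAX primitives can be compiled,
# differentiated, and batched.
def energy(theta, x):
    return jnp.sum(jnp.sin(theta * x) ** 2)

fast_energy = jax.jit(energy)                    # XLA-compiled
d_energy = jax.grad(energy)                      # derivative w.r.t. theta
batched = jax.vmap(energy, in_axes=(0, None))    # map over a batch of thetas

x = jnp.linspace(0.0, 1.0, 5)
thetas = jnp.array([0.1, 0.2, 0.3])
```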

High performance

Fused Cython BLAS for CPU block-sparse contractions, JIT-compiled DMRG sweeps on GPU/TPU, and multi-GPU sharding via GSPMD.

AI integration

MCP server for running calculations from Claude. Built-in skills for ground states, debugging, benchmarking, and migration from ITensor, TeNPy, Cytnx, and quimb.

Algorithms

DMRG

Finite 1D chains and 2D cylinder ground states

iDMRG

Infinite chains and infinite cylinders

TRG / HOTRG

2D classical partition functions via tensor coarse-graining
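TRG and HOTRG start from the mapping of a classical partition function onto a tensor network. A minimal illustration of that mapping (no coarse-graining step shown): the 2x2 periodic Ising model, contracted exactly as a network of four site tensors and checked against the brute-force sum over spins.

```python
import numpy as np

beta = 0.4
# Bond Boltzmann weight M[s, s'] = exp(beta * s * s') for spins (+1, -1).
M = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])
evals, evecs = np.linalg.eigh(M)
W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T   # M = W @ W.T (W symmetric)

# Site tensor T[u, d, l, r]: sum over the physical spin at each site,
# with one factor of W absorbed onto each of the four legs.
T = np.einsum('su,sd,sl,sr->udlr', W, W, W, W)

# Contract the 2x2 torus; periodicity links each neighboring pair twice.
Z_tn = np.einsum('abcd,efdc,bagh,fehg->', T, T, T, T)

# Brute force over the 2^4 spin configurations (8 bonds in total).
spins = (1.0, -1.0)
Z_brute = 0.0
for sA in spins:
    for sB in spins:
        for sC in spins:
            for sD in spins:
                E = 2 * (sA * sB + sC * sD + sA * sC + sB * sD)
                Z_brute += np.exp(beta * E)
```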

iPEPS

2D ground states with simple update, AD optimization (Adam, L-BFGS, CG), QR projectors, 2-site unit cells, and split-CTMRG

Fermionic iPEPS

fPEPS with simple update and AD optimization for spinless fermions with FermionParity and FermionicU1 symmetries

Excitations

Quasiparticle excitation spectra on top of iPEPS ground states at arbitrary momenta

TDVP

Real-time and imaginary-time MPS evolution with 1-site and 2-site variants
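What imaginary-time evolution converges to, shown without any MPS machinery: exact application of exp(-tau*H) to a two-site Heisenberg model projects onto the ground state (the singlet, energy -3/4).

```python
import numpy as np

# Two-site spin-1/2 Heisenberg Hamiltonian.
Sz = np.diag([0.5, -0.5])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])
Sm = Sp.T
H = np.kron(Sz, Sz) + 0.5 * (np.kron(Sp, Sm) + np.kron(Sm, Sp))

evals, evecs = np.linalg.eigh(H)

def evolve(psi, tau):
    """Apply exp(-tau * H) via the eigendecomposition, then renormalize."""
    psi = evecs @ (np.exp(-tau * evals) * (evecs.T @ psi))
    return psi / np.linalg.norm(psi)

rng = np.random.default_rng(3)
psi = evolve(rng.normal(size=4), tau=50.0)  # large tau: excited states decay
energy = psi @ H @ psi                      # approaches the singlet energy
```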

AutoMPO

Build Hamiltonians from symbolic operator descriptions
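What the symbolic operator terms denote, made concrete with a dense construction (feasible only for small L). Each triple of the form shown in the code example below places a coefficient times a two-site operator product on the chain:

```python
import numpy as np

Sz = np.diag([0.5, -0.5])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])
Sm = Sp.T

def two_site(op_i, op_j, i, L):
    """op_i on site i and op_j on site i+1, identity elsewhere."""
    mats = [np.eye(2)] * L
    mats[i], mats[i + 1] = op_i, op_j
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg(L):
    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L - 1):
        H += 1.0 * two_site(Sz, Sz, i, L)   # term (1.0, "Sz", i, "Sz", i+1)
        H += 0.5 * two_site(Sp, Sm, i, L)   # term (0.5, "Sp", i, "Sm", i+1)
        H += 0.5 * two_site(Sm, Sp, i, L)   # term (0.5, "Sm", i, "Sp", i+1)
    return H

E0 = np.linalg.eigvalsh(heisenberg(2))[0]   # two-site singlet: -0.75
```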

AI-Powered Workflow

MCP Server

Run DMRG, TRG, HOTRG, and more directly from Claude Code. Ask questions in natural language and get tensor network results back.

Claude Code Skills

Built-in prompt templates for ground states, benchmarking, debugging, teaching, and migrating from other libraries.

Learn more about MCP & AI workflow

Code Example

from tenax import AutoMPO, DMRGConfig, build_random_mps, dmrg

L = 20
auto = AutoMPO(L)
for i in range(L - 1):
    auto += (1.0, "Sz", i, "Sz", i + 1)
    auto += (0.5, "Sp", i, "Sm", i + 1)
    auto += (0.5, "Sm", i, "Sp", i + 1)

mpo = auto.to_mpo()
mps = build_random_mps(L, bond_dim=4)
result = dmrg(mpo, mps, DMRGConfig(max_bond_dim=100, num_sweeps=10))
print(f"Ground state energy: {result.energy:.8f}")

See all examples

Why Tenax?

Fully differentiable

Every algorithm works with JAX's grad and jit. AD-based iPEPS optimization uses implicit differentiation through the CTM fixed point for stable gradients.
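The idea behind differentiating through a fixed point, on a scalar toy problem rather than a CTM environment: x*(theta) solves x = tanh(theta*x + 0.3), and the implicit function theorem gives the derivative from the converged point alone, with no backpropagation through the iteration history.

```python
import numpy as np

def solve(theta, iters=200):
    """Converge the toy fixed-point map x -> tanh(theta * x + 0.3)."""
    x = 0.0
    for _ in range(iters):
        x = np.tanh(theta * x + 0.3)
    return x

theta = 0.5
x_star = solve(theta)
sech2 = 1.0 - np.tanh(theta * x_star + 0.3) ** 2   # derivative of tanh at the fixed point

# Implicit function theorem: dx*/dtheta = (df/dtheta) / (1 - df/dx),
# both partials evaluated at the converged point only.
grad_implicit = (x_star * sech2) / (1.0 - theta * sech2)

# Check against finite differences through the full iteration.
eps = 1e-6
grad_fd = (solve(theta + eps) - solve(theta - eps)) / (2 * eps)
```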

Run anywhere

Same code runs on CPU, NVIDIA GPU (CUDA 12/13), Google Cloud TPU, and Apple Silicon (Metal). No code changes — just install the right backend.

Benchmark suite

CLI-driven performance benchmarks for every algorithm across all backends. Compare wall-clock timings with a single command and export to JSON or CSV.

From 1D to 2D

Covers the full range: finite DMRG, iDMRG (chains and infinite cylinders), 2D cylinder DMRG, iPEPS ground states, and quasiparticle excitation spectra.

Coming From Another Library?

Tenax shares core ideas with ITensor, TeNPy, Cytnx, quimb, and TensorKit.jl. Our migration guides map concepts and code patterns so you can translate your existing work.

View migration guides

Installation

CPU (default)

pip install tenax-tn

NVIDIA GPU (CUDA 13)

pip install "tenax-tn[cuda13]"

NVIDIA GPU (CUDA 12)

pip install "tenax-tn[cuda12]"

Google Cloud TPU

pip install "tenax-tn[tpu]"

Apple Silicon GPU

pip install "tenax-tn[metal]"

From source

git clone https://github.com/tenax-lab/tenax.git
cd tenax && pip install -e ".[dev]"