Tenax shares core ideas with other tensor network libraries — label-based contraction, symmetry-aware tensors, DMRG. These guides map concepts and code patterns from each library to their Tenax equivalents.

Note: These migration tables were generated with AI assistance from web sources and may contain inaccuracies regarding other libraries’ APIs. If you spot an error, please open an issue.

## ITensor

ITensor and Tenax share label-based contraction and AutoMPO. The main differences are the language (Julia/C++ vs Python/JAX) and Tenax’s explicit symmetry and flow direction on every index.

### Concept Mapping

| ITensor (Julia) | Tenax (Python) | Notes |
| --- | --- | --- |
| `Index(dim, "label")` | `TensorIndex(sym, charges, flow, label)` | Tenax carries symmetry + flow |
| `ITensor(idx1, idx2)` | `DenseTensor(data, indices)` | Tenax requires explicit data |
| `randomITensor(...)` | `DenseTensor.random_normal(indices, key)` | JAX needs explicit RNG key |
| `dag(idx)` | `idx.dual()` | Flips the FlowDirection |
| `A * B` | `contract(A, B)` | Both label-based |
| `svd(T, i1, i2)` | `truncated_svd(T, left_labels, right_labels)` | By labels, not Index objects |
| `AutoMPO()` | `AutoMPO(L, d)` | Very similar API |
| `dmrg(H, psi0, sweeps)` | `dmrg(mpo, mps, config)` | Config replaces Sweeps object |
| `siteinds("S=1/2", N)` | `build_random_mps(L, physical_dim=2)` | No site-type system |
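
The `A * B` / `contract(A, B)` row is the heart of the mapping: both libraries pair indices by label rather than by position. A minimal plain-NumPy sketch of that matching rule (an illustration of the idea, not code from either library):

```python
import numpy as np

def contract_by_labels(a, a_labels, b, b_labels):
    """Contract two arrays over every label they share.

    Plain-NumPy illustration of the label matching that ITensor's
    `A * B` and Tenax's `contract(A, B)` perform; shared labels are
    summed over, unshared labels survive in the output.
    """
    shared = set(a_labels) & set(b_labels)
    # Assign one einsum letter per distinct label, in first-seen order.
    all_labels = list(a_labels) + list(b_labels)
    letters = {lab: chr(ord("a") + i)
               for i, lab in enumerate(dict.fromkeys(all_labels))}
    out_labels = [lab for lab in all_labels if lab not in shared]
    spec = ("".join(letters[l] for l in a_labels) + ","
            + "".join(letters[l] for l in b_labels) + "->"
            + "".join(letters[l] for l in out_labels))
    return np.einsum(spec, a, b), out_labels
```

With labels `("i", "k")` and `("k", "j")` this reduces to an ordinary matrix product over the shared `"k"`.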

### Key Differences

### Side-by-Side: DMRG

ITensor (Julia):

```julia
using ITensors
N = 20
sites = siteinds("S=1/2", N)
ampo = AutoMPO()
for j in 1:N-1
    ampo += ("Sz", j, "Sz", j + 1)
    ampo += (0.5, "S+", j, "S-", j + 1)
    ampo += (0.5, "S-", j, "S+", j + 1)
end
H = MPO(ampo, sites)
psi0 = randomMPS(sites; linkdims=10)
sweeps = Sweeps(10)
setmaxdim!(sweeps, 10, 20, 50, 100)
energy, psi = dmrg(H, psi0, sweeps)
```

Tenax (Python):

```python
from tenax import AutoMPO, DMRGConfig, build_random_mps, dmrg

L = 20
auto = AutoMPO(L=L, d=2)
for i in range(L - 1):
    auto += (1.0, "Sz", i, "Sz", i + 1)
    auto += (0.5, "Sp", i, "Sm", i + 1)
    auto += (0.5, "Sm", i, "Sp", i + 1)
mpo = auto.to_mpo()
mps = build_random_mps(L, physical_dim=2, bond_dim=10)
config = DMRGConfig(max_bond_dim=100, num_sweeps=10)
result = dmrg(mpo, mps, config)
```

### What You Gain

### What You Lose

## TeNPy

TeNPy uses an object-oriented hierarchy (Site → Lattice → Model → Engine). Tenax replaces this with a functional API: build the Hamiltonian directly with AutoMPO and run algorithms as pure functions.

### Concept Mapping

| TeNPy | Tenax | Notes |
| --- | --- | --- |
| `SpinHalfSite()` | `spin_half_ops()` | Returns operator dict, no Site object |
| `MPS.from_lat_product_state(...)` | `build_random_mps(L, d, chi)` | No lattice/product-state builder |
| `MPOModel` / `CouplingMPOModel` | `AutoMPO(L, d)` | Functional, not class-based |
| `TwoSiteDMRGEngine(psi, model, params)` | `dmrg(mpo, mps, config)` | Functional API |
| `eng.run()` | `result = dmrg(mpo, mps, config)` | Returns a result dataclass |
| `psi.entanglement_entropy()` | Manual from singular values | No built-in method |
| `npc.tensordot(A, B, axes)` | `contract(A, B)` | Tenax uses label matching |
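
Since Tenax has no built-in `entanglement_entropy()` method, the `psi.entanglement_entropy()` row translates to a few lines over the singular values that `truncated_svd` returns at a bond. A plain-NumPy sketch of that by-hand computation:

```python
import numpy as np

def entanglement_entropy(singular_values):
    """Von Neumann entropy S = -sum(p * ln p), with p_i = s_i^2 / sum(s^2).

    The by-hand replacement for TeNPy's psi.entanglement_entropy(),
    computed from the singular values at a bond (plain NumPy sketch,
    not Tenax library code).
    """
    s = np.asarray(singular_values, dtype=float)
    p = s**2 / np.sum(s**2)   # Schmidt coefficients -> probabilities
    p = p[p > 1e-15]          # drop exact zeros before taking the log
    return float(-np.sum(p * np.log(p)))
```

A bond with two equal singular values (a maximally entangled cut) gives S = ln 2; a product state gives 0.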

### Key Differences

### Side-by-Side: DMRG

TeNPy:

```python
from tenpy.models.xxz_chain import XXZChain
from tenpy.networks.mps import MPS
from tenpy.algorithms.dmrg import TwoSiteDMRGEngine

model = XXZChain({"L": 20, "Jxx": 1.0, "Jz": 1.0, "bc_MPS": "finite"})
psi = MPS.from_lat_product_state(model.lat, [["up"], ["down"]])
eng = TwoSiteDMRGEngine(psi, model, {"trunc_params": {"chi_max": 100}})
E, psi = eng.run()
```

Tenax:

```python
from tenax import AutoMPO, DMRGConfig, build_random_mps, dmrg

L = 20
auto = AutoMPO(L=L, d=2)
for i in range(L - 1):
    auto += (1.0, "Sz", i, "Sz", i + 1)
    auto += (0.5, "Sp", i, "Sm", i + 1)
    auto += (0.5, "Sm", i, "Sp", i + 1)
mpo = auto.to_mpo()
mps = build_random_mps(L, physical_dim=2, bond_dim=16)
result = dmrg(mpo, mps, DMRGConfig(max_bond_dim=100, num_sweeps=10))
```

### What You Gain

### What You Lose

## Cytnx

Cytnx and Tenax share a label-based contraction philosophy and .net file support. The biggest differences are the backend (C++ vs JAX), Tenax’s removal of row/column rank, and Tenax’s built-in algorithms.

### Concept Mapping

| Cytnx | Tenax | Notes |
| --- | --- | --- |
| `UniTensor` | `DenseTensor` / `SymmetricTensor` | No row/col rank in Tenax |
| `Bond` | `TensorIndex` | Carries symmetry, charges, flow, label |
| `Bond.BD_IN` / `BD_OUT` | `FlowDirection.IN` / `OUT` | Same concept |
| `Network` | `NetworkBlueprint` | Same .net file format |
| `Contract(A, B)` | `contract(A, B)` | Both label-based |
| `Svd(T)` | `truncated_svd(T, left_labels, right_labels)` | Explicit label partition |
| `T.set_labels(...)` | `T.relabel(old, new)` | Immutable in Tenax |
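
One rule carries over from Cytnx's `BD_IN`/`BD_OUT` unchanged: two matched labels may only contract when their flows are opposite, so that conserved charge leaving one tensor enters the other. A toy sketch of that legality check (assumed semantics for illustration, not Cytnx or Tenax code):

```python
from enum import Enum

class Flow(Enum):
    """Stand-in for Cytnx's BD_IN/BD_OUT and Tenax's FlowDirection.IN/OUT."""
    IN = 1
    OUT = -1

def dual(flow: Flow) -> Flow:
    """Flip the flow, as idx.dual() does to a TensorIndex in Tenax."""
    return Flow(-flow.value)

def can_contract(flow_a: Flow, flow_b: Flow) -> bool:
    """Matched labels contract only when their flows are opposite."""
    return flow_a.value == -flow_b.value
```

This is why bra-side indices are built with `dual()`: it flips every flow so the contraction with the ket side is legal.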

### Key Differences

### Side-by-Side: Network Contraction

Cytnx:

```cpp
auto net = Network("dmrg_eff_ham.net");
net.PutUniTensor("L", L_env);
net.PutUniTensor("W", W);
net.PutUniTensor("R", R_env);
auto result = net.Launch();
```

Tenax:

```python
from tenax import NetworkBlueprint

bp = NetworkBlueprint("dmrg_eff_ham.net")  # Same .net file format
bp.put_tensor("L", L_env)
bp.put_tensor("W", W)
bp.put_tensor("R", R_env)
result = bp.launch()
```

### What You Gain

### What You Lose

## quimb

Both quimb and Tenax use graph-based tensor network containers with label-based contraction, but they differ in backend (NumPy/autoray vs JAX), symmetry support, and algorithm scope.

### Concept Mapping

| quimb | Tenax | Notes |
| --- | --- | --- |
| `qtn.Tensor(data, inds, tags)` | `DenseTensor(data, indices)` | No tags; labels live on TensorIndex |
| `qtn.TensorNetwork(...)` | `TensorNetwork()` | Similar graph container |
| `tn ^ all` | `tn.contract()` | Method, not operator |
| `A & B` | `contract(A, B)` | Pairwise contraction |
| `A.reindex({"old": "new"})` | `A.relabel("old", "new")` | Immutable in Tenax |
| `qtn.DMRG2(ham)` | `dmrg(mpo, mps, config)` | Functional API |
| `qtn.SpinHam1D(S=0.5)` | `AutoMPO(L, d=2)` | Explicit site indices |
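
The `reindex` / `relabel` row hides a behavioural difference: quimb tensors can be updated in place, while Tenax tensors are immutable, so `relabel` always returns a new tensor and leaves the original untouched. A toy sketch of that immutable-update pattern (illustrative stand-in, not the actual Tenax class):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ToyTensor:
    """Minimal immutable tensor stand-in; real Tenax tensors also carry data."""
    labels: tuple

    def relabel(self, old: str, new: str) -> "ToyTensor":
        # Return an updated copy; `self` is never mutated.
        return replace(self, labels=tuple(new if l == old else l
                                          for l in self.labels))
```

Immutability fits JAX's functional style: a tensor captured by a transformed function can never change out from under it.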

### Key Differences

### Side-by-Side: DMRG

quimb:

```python
import quimb.tensor as qtn

builder = qtn.SpinHam1D(S=0.5)
builder += 1.0, "Z", "Z"
builder += 0.5, "+", "-"
builder += 0.5, "-", "+"
H = builder.build_mpo(20)

dmrg = qtn.DMRG2(H, bond_dims=[10, 20, 50, 100])
dmrg.solve(tol=1e-9)
```

Tenax:

```python
from tenax import AutoMPO, DMRGConfig, build_random_mps, dmrg

L = 20
auto = AutoMPO(L=L, d=2)
for i in range(L - 1):
    auto += (1.0, "Sz", i, "Sz", i + 1)
    auto += (0.5, "Sp", i, "Sm", i + 1)
    auto += (0.5, "Sm", i, "Sp", i + 1)
mpo = auto.to_mpo()
mps = build_random_mps(L, physical_dim=2, bond_dim=10)
result = dmrg(mpo, mps, DMRGConfig(max_bond_dim=100, num_sweeps=10))
```

### What You Gain

### What You Lose

## TensorKit.jl

TensorKit.jl and Tenax both support symmetry-aware block-sparse tensors with fermionic statistics. TensorKit uses a category-theoretic framework (fusion trees, R-symbols, ribbon twists) that generalises to non-Abelian and anyonic symmetries. Tenax takes a more direct approach, with explicit Koszul signs and a JAX backend for autodiff and GPU acceleration.

### Concept Mapping

| TensorKit.jl (Julia) | Tenax (Python) | Notes |
| --- | --- | --- |
| `TensorMap{S}(data, cod ← dom)` | `SymmetricTensor(blocks, indices)` | No codomain/domain partition in Tenax |
| `GradedSpace[Irrep](d₀ => n₀, ...)` | `TensorIndex(sym, charges, flow, label)` | Tenax carries the label on the index |
| `FermionParity` (sector, `isodd::Bool`) | `FermionParity` (symmetry, charges 0/1) | Equivalent Z₂ grading |
| `FermionNumber = U1Irrep ⊠ FermionParity` | `FermionicU1(grading=...)` | TensorKit uses a Deligne product; Tenax uses a single class with configurable grading |
| `FermionSpin = SU2Irrep ⊠ FermionParity` | — | No non-Abelian symmetry in Tenax |
| `A ⊠ B` (Deligne product of sectors) | `ProductSymmetry(sym1, sym2)` | TensorKit supports arbitrary products; Tenax is limited to 2 factors |
| `BraidingStyle`: `Bosonic()`, `Fermionic()` | `BraidingStyle`: `BOSONIC`, `FERMIONIC` | Same concept; type hierarchy vs enum |
| `Rsymbol(a, b, c)` | `sym.exchange_sign(q_a, q_b)` | R-symbol vs explicit sign function |
| `twist(a)` | `sym.twist_phase(q)` | Both return (−1)^p for odd sectors |
| `braid(t, perm, levels)` | `t.transpose(labels)` | TensorKit distinguishes over/under crossings; Tenax uses symmetric braiding only |
| `permute(t, (i...,), (j...,))` | `t.transpose(labels)` | TensorKit repartitions codomain/domain; Tenax has no partition |
| `@tensor C[...] := A[...] * B[...]` | `contract(A, B)` | TensorKit’s fermionic `@tensor` is still TODO; Tenax handles fermionic signs automatically |
| `tsvd(t)` | `truncated_svd(t, left_labels, right_labels)` | Both handle fermionic signs in factorisation |
| — | `AutoMPO`, `dmrg`, `idmrg`, `trg`, `ipeps` | Algorithms live in MPSKit.jl for TensorKit |
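
For fermion parity, both the R-symbol and the twist reduce to simple signs: exchanging two odd sectors contributes −1 (the Koszul rule), and the twist of a parity-p sector is (−1)^p. A standalone plain-Python sketch of those two rules, mirroring what the table describes `sym.exchange_sign` and `sym.twist_phase` as returning (illustration, not Tenax's actual implementation):

```python
def exchange_sign(p_a: int, p_b: int) -> int:
    """Koszul sign for exchanging sectors of parity p_a and p_b.

    Returns -1 iff both sectors are odd; for FermionParity this is the
    full content of TensorKit's R-symbol.
    """
    return -1 if (p_a % 2) and (p_b % 2) else 1

def twist_phase(p: int) -> int:
    """Ribbon twist of a parity-p sector: (-1)**p, so -1 for odd sectors."""
    return -1 if p % 2 else 1
```

Every fermionic sign Tenax inserts during transpose, contraction, and SVD is some product of these two elementary signs.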

### Key Differences

### Side-by-Side: Symmetric Tensor Creation

TensorKit.jl (Julia):

```julia
using TensorKit

V = GradedSpace[FermionParity](0 => 2, 1 => 3)
W = GradedSpace[FermionParity](0 => 1, 1 => 2)
t = TensorMap(randn, V ⊗ V ← W)
```

Tenax (Python):

```python
import jax
import numpy as np

from tenax import FermionParity, FlowDirection, SymmetricTensor, TensorIndex

sym = FermionParity()
# Charges listed per basis state, matching GradedSpace(0 => 2, 1 => 3)
# for V and GradedSpace(0 => 1, 1 => 2) for W in the Julia example:
T = SymmetricTensor.random_normal(
    indices=(
        TensorIndex(sym, np.array([0, 0, 1, 1, 1], dtype=np.int32), FlowDirection.IN,  label="v1"),
        TensorIndex(sym, np.array([0, 0, 1, 1, 1], dtype=np.int32), FlowDirection.IN,  label="v2"),
        TensorIndex(sym, np.array([0, 1, 1], dtype=np.int32), FlowDirection.OUT, label="w"),
    ),
    key=jax.random.PRNGKey(0),
)
```

### What You Gain

### What You Lose