LEGACY VIEW · This is the original detailed page. The new homepage is at vraxion.github.io/VRAXION/
v5.0.0-β.2 · Public beta · Block A done

The network grows itself.

VRAXION is a gradient-free substrate that self-wires its own graph. Inference emerges as the fixed point of destructive interference.

peak · 24.6%
tests · 175 ✓
unsafe · 0
shipped · Apr 2026
[Diagram · INSTNCT live substrate: signal in → substrate (active) → read-out]

FIXED-POINT ATTRACTOR
24.6% · peak next-char, English
0 · gradient (backprop) steps in substrate
83 edges · empty-start, beats 3,400 prefilled
175 · tests, zero unsafe Rust
§ 01 · Building blocks

The model stack — one block at a time.

Each layer is frozen as a public artifact only after 100% lossless round-trip. Block A is the first fundamental building block: raw byte in, 16-dim latent out, decoder reconstructs the exact byte.

BLOCK A FROZEN

Byte Unit (L0)

1 raw byte → 16-dim latent. Tied-mirror autoencoder. 100% lossless on all 256 bytes.

arch · 8 → 16 → 16, C19, binary weights
deploy · 4 KB int8 LUT + 6.5 KB JSON
status · 100.00% lossless · reload-verified
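
What the freeze gate means operationally, as a minimal Rust sketch: the hypothetical encode/decode signatures below stand in for the baked artifact, and Block A ships only if the round trip is exact on every byte.

    /// Hypothetical L0 interfaces (not the artifact API): 1 raw byte in,
    /// 16-dim latent out, tied-mirror decoder reconstructs the byte.
    fn encode(_byte: u8) -> [f32; 16] { unimplemented!() }
    fn decode(_latent: &[f32; 16]) -> u8 { unimplemented!() }

    /// Block A freeze condition: lossless on all 256 inputs.
    fn block_a_lossless() -> bool {
        (0u8..=255).all(|b| decode(&encode(b)) == b)
    }
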
BLOCK B FROZEN

Byte-Pair Merger (L1)

2 L0 outputs (32-dim) → single 32-dim merged. 100% lossless on all 65,536 pairs.

arch · 32 → 81 → 32, single-W mirror
deploy · 3.36 KB Huffman-packed
status · 100.00% lossless · 5/5 verified
BLOCK C FROZEN

Word Tokenizer V2

UTF-8 bytes → token IDs. Space-aware hybrid: whole-word + subword + byte fallback.

vocab · 32,294 · FineWeb-EDU trained
deploy · 4.24 MB JSON vocab
status · 30.43% Huffman on 10 MB · lossless
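
A minimal sketch of that fallback cascade, operating on raw bytes. The greedy longest-match rule, the byte-token ID layout (0..256), and the vocab type are assumptions; the real V2 space handling and merge rules are not reproduced here.

    use std::collections::HashMap;

    /// Whole-word hit, else greedy longest-prefix subwords, else raw bytes.
    fn tokenize(word: &[u8], vocab: &HashMap<Vec<u8>, u32>) -> Vec<u32> {
        // 1. Whole word: a single token.
        if let Some(&id) = vocab.get(word) {
            return vec![id];
        }
        let mut ids = Vec::new();
        let mut rest = word;
        // 2. Subwords: longest known prefix first.
        'outer: while !rest.is_empty() {
            for end in (1..=rest.len()).rev() {
                if let Some(&id) = vocab.get(&rest[..end]) {
                    ids.push(id);
                    rest = &rest[end..];
                    continue 'outer;
                }
            }
            // 3. Byte fallback: IDs 0..256 assumed reserved for raw bytes.
            ids.push(rest[0] as u32);
            rest = &rest[1..];
        }
        ids
    }
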
BLOCK D SCAFFOLD

Word Embedder

Token IDs → 64-dim vectors. Lookup table (random-init); trained end-to-end with Brain.

dim · 32,294 × 64 = 2.07M params
memory · 8.27 MB f32 / 2.07 MB int8
status · shape verified · training pending
BLOCK E SCAFFOLD

Nano Brain

[N, 64] → [N, vocab] next-token logits. Causal transformer, tied embedder/output head.

arch · 2 × (MHA + FFN), 64 dim, 4 heads
params · 2.18M tied (8.73 MB f32)
status · forward verified · training pending
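
Tying in one picture: the same [vocab × 64] table embeds token IDs on the way in and scores hidden states on the way out, so no second output matrix exists. A sketch, with a naive row layout assumed:

    /// Sketch of embedder/output-head tying: one table, two roles.
    struct TiedHead {
        table: Vec<Vec<f32>>, // vocab rows × 64-dim (layout assumed)
    }

    impl TiedHead {
        /// Input side: token ID → 64-dim vector (row lookup).
        fn embed(&self, token: usize) -> &[f32] {
            &self.table[token]
        }
        /// Output side: logit[v] = dot(hidden, table[v]); no second matrix.
        fn logits(&self, hidden: &[f32]) -> Vec<f32> {
            self.table.iter()
                .map(|row| row.iter().zip(hidden).map(|(w, h)| w * h).sum())
                .collect()
        }
    }
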
FROZEN — public artifact, 100% lossless
SCAFFOLD — shape verified, training pending
§ 02 · Thesis

Inference is what survives the interference.

Signal enters the substrate, incompatible paths cancel, and the surviving pattern is read out. Four stages, one fixed point, zero gradients.

◆ STAGE 01

Enter

Signal projects as sparse distributed representation into the recurrent substrate.

◆ STAGE 02

Propagate

Spikes traverse the directed graph along scout-ranked parent shortlists.

◆ STAGE 03 · NULL ZONE

Cancel

Incompatible modes annihilate through destructive interference.

◆ STAGE 04 · FIXED POINT

Read out

The surviving attractor is the answer. Deterministic, reproducible, fast.
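
The four stages, compressed to pseudocode: keep applying the substrate update until the state stops changing, then the state is the answer. The generic update function and the convergence threshold below are placeholders for the engine's actual spike and interference rules.

    /// Conceptual sketch only: inference as the fixed point of repeated
    /// propagate-and-cancel passes over the substrate state.
    fn infer(mut state: Vec<f32>, step: impl Fn(&[f32]) -> Vec<f32>) -> Vec<f32> {
        loop {
            let next = step(&state); // propagate, then destructive interference
            let delta: f32 = next.iter().zip(&state).map(|(a, b)| (a - b).abs()).sum();
            state = next;
            if delta < 1e-6 {
                return state; // fixed point reached: read out the attractor
            }
        }
    }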

§ 03 · The Grower

From empty. To circuit.

The grower doesn't optimize weights on a fixed topology. It changes the topology itself — neuron by neuron, threshold by threshold — using a scout oracle to rank candidates before expensive search runs.

  • Bias-free threshold neurons. Stored as dot ≥ threshold (sketched after this list). 36 bits.
  • Scout-first search. Single-signal + connect-all + pair-lift rank parents before ternary.
  • Append-only evidence. Every run writes metrics.json, golden_check.json.
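
The fire rule from the first bullet, sketched in Rust. Ternary weights are an assumption (taken from the "before ternary" scout step); the 36-bit packed encoding itself is not reproduced.

    /// Bias-free threshold neuron: fires iff dot(w, x) ≥ threshold.
    struct Neuron {
        weights: Vec<i8>, // assumed ternary: each in {-1, 0, +1}
        threshold: i32,   // replaces a bias term entirely
    }

    impl Neuron {
        fn fire(&self, input: &[i8]) -> bool {
            let dot: i32 = self.weights.iter()
                .zip(input)
                .map(|(&w, &x)| i32::from(w) * i32::from(x))
                .sum();
            dot >= self.threshold
        }
    }
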
[Animation · graph growth: step 128/128 → 83 edges, 80% acc]

Fitness · jackpot selection · smooth cosine-bigram

evolve_language.rs · english · 1+9 ES
[Plot · accuracy vs. training steps, 0–200k: 24.6% peak for 1+9 ES vs. 21.2% for 1+1 ES]
jackpot · Δ +3.4pp
fitness · Δ +2.6pp
W-mutation · 0% accept
parity · rust ↔ python
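
The 1+9 loop itself, as a sketch. Here mutate stands in for the grower's topology edits and fitness for the smooth cosine-bigram score; the jackpot selection rule that earned the +3.4pp is not reproduced.

    /// Plain (1+9) evolution strategy: nine mutants per generation,
    /// the parent survives unless a child strictly beats it.
    fn evolve<G>(
        mut parent: G,
        mutate: impl Fn(&G) -> G,
        fitness: impl Fn(&G) -> f64,
        generations: usize,
    ) -> G {
        let mut parent_fit = fitness(&parent);
        for _ in 0..generations {
            let best_child = (0..9)
                .map(|_| mutate(&parent))
                .map(|c| {
                    let f = fitness(&c);
                    (c, f)
                })
                .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
            if let Some((child, f)) = best_child {
                if f > parent_fit {
                    parent = child;
                    parent_fit = f;
                }
            }
        }
        parent
    }
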
§ 04 · Quantization

Champions at every compression.

FineWeb char-LM benchmark · nf=1024 · matched-compute controls applied.

◆ CLOUD / SERVER

QAT int8

Straight-through estimator. Essentially lossless, 4× compression.

86.40%
+0.20pp over float · matched compute
4× compression · int8 weights
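
The forward half of QAT, sketched: weights are snapped to the int8 grid on the way through, and the straight-through estimator then treats the rounding as identity when gradients flow back. Symmetric per-tensor scaling is an assumption.

    /// Fake-quant forward for symmetric int8: quantize then dequantize,
    /// so the network trains against the precision it will deploy with.
    fn fake_quant_int8(w: &[f32]) -> Vec<f32> {
        let max_abs = w.iter().fold(0.0f32, |m, &x| m.max(x.abs()));
        let scale = if max_abs > 0.0 { max_abs / 127.0 } else { 1.0 };
        w.iter()
            .map(|&x| (x / scale).round().clamp(-127.0, 127.0) * scale)
            .collect()
    }
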
◆ MOBILE / EDGE

Staged INQ int4

Incremental network quantization, staged protocol.

84.75%
−1.65pp · 8× compression
8× compression · int4 weights
◆ IOT / FPGA

QAT binary (b1.58)

Ternary weights. Deploys to POPCOUNT hardware natively.

71.50%
−14.9pp · 32× compression
32× compression · 1.58-bit weights
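
Why ternary lands on POPCOUNT natively: store the +1 and −1 weight positions as two bitmasks, pack binary activations into machine words, and a dot product costs two ANDs and two popcounts per 64-bit word. Zero weights sit in neither mask, so they cost nothing. A sketch:

    /// Ternary dot product for one 64-bit lane of binary activations.
    /// `plus` marks the +1 weight positions, `minus` the -1 positions.
    fn ternary_dot(x: u64, plus: u64, minus: u64) -> i32 {
        (x & plus).count_ones() as i32 - (x & minus).count_ones() as i32
    }
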
§ 05 · Validated findings

[Diagram · L0 byte interpreter, locked: input 8 bits → hidden 4 neurons (t₁–t₄) → POPCOUNT output · 36 bits · 100% round-trip]

Shipped. Locked. Reproducible.

The public story is kept truthful with three labels: current mainline (code on main), validated finding (reproducible, not promoted), experimental (not yet default).

  • Smooth cosine-bigram fitness. +2.6pp. Broke the 17–18% ceiling.
  • 1+9 jackpot selection. +3.4pp. 24.6% peak.
  • Empty-start ≫ prefilled. 80% at 83 edges beats 64% at 3,400.
  • L0 byte interpreter · locked. 8 → 4 neurons, 36 bits, 100% round-trip.
§ 05b · Interactive playground

Inspect the baked models.

Live visualizers for the L0 Byte Unit and L1 Byte-Pair Merger champions. Architecture diagrams, weight heatmaps, C19 neuron curves, and live round-trip tests — all running in-browser with the actual baked weights.

L0 · Byte Unit

Byte Tokenizer Unit

2-layer tied-weight mirror autoencoder. 8-bit input, 24 C19 neurons, 16-dim latent. 100% lossless. 288-byte int4 artifact.

100% lossless · 288 B int4 · 24 C19
Architecture · Baked Winner
CHAMPION
L1 · Byte-Pair Merger

Byte-Pair Merger

Single-W mirror tied autoencoder. 32-bit input (2 bytes), 81 C19 neurons, 3.36 KB Huffman-packed. 100% lossless on all 65,536 byte pairs.

100% · 65,536 pairs · 3,440 B Huffman · 2,867 params
Architecture · Baked Huffman Artifact
§ 06 · Public beta contract

Two gates. Both reproducible.

Public beta isn't green on vibes. One engine-freeze gate, one computation benchmark.

B0 · ENGINE FREEZE · Gate: pass

Grower regression.

[Pipeline · run_cmd seed.42 → GROWER → 175 tests ✓ → cargo bench · zero unsafe → metrics.json · 24.6% → golden PASS ✓]

python tools/run_grower_regression.py

B1 · BYTE / OPCODE v1 · Gate: pending

Exact translator.

[Diagram · byte (8 bits) + opcode (4 bits) → 8 frozen bit-heads (head₁…head₄, head₅₋₈) → LUT, exact COPY / NOT / INC / DEC · 100% round-trip]

python tools/run_byte_opcode_acceptance.py
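
A sketch of reference semantics such a gate can check against. The opcode numbering below is an assumption; only the four named ops appear on this page.

    /// Hypothetical reference LUT: a 4-bit opcode selects an exact
    /// byte transform. Unlisted opcodes are left undefined here.
    fn reference(opcode: u8, byte: u8) -> u8 {
        match opcode & 0x0F {
            0x0 => byte,                 // COPY
            0x1 => !byte,                // NOT
            0x2 => byte.wrapping_add(1), // INC
            0x3 => byte.wrapping_sub(1), // DEC
            _ => byte,
        }
    }

    /// The acceptance bar: exact agreement on every (opcode, byte) pair.
    fn round_trip_ok(model: impl Fn(u8, u8) -> u8) -> bool {
        (0u8..=3).all(|op| (0u8..=255).all(|b| model(op, b) == reference(op, b)))
    }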

The beta is live.

Clone, run the five-minute proof, file a finding — or an honest critique. Apache 2.0 noncommercial.
