v5.0.0-β.1 · Public beta

The network grows itself.

VRAXION is a gradient-free substrate that self-wires its own graph. Inference emerges as the fixed point of destructive interference.

peak 24.6% · tests 175 ✓ · unsafe 0 · shipped Apr 2026
FIXED-POINT ATTRACTOR
  • 24.6% peak next-char · english
  • 0 gradient steps in the substrate
  • 83 edges · empty-start beats 3 400
  • 175 tests · zero unsafe Rust
§ 01 · Thesis

Inference is what survives the interference.

Signal enters the substrate, incompatible paths cancel, and the surviving pattern is read out. Four stages, one fixed point, zero gradients.

◆ STAGE 01

Enter

Signal projects as sparse distributed representation into the recurrent substrate.

◆ STAGE 02

Propagate

Spikes traverse the directed graph along scout-ranked parent shortlists.

NULL ZONE
◆ STAGE 03

Cancel

Incompatible modes annihilate through destructive interference.

FIXED POINT
◆ STAGE 04

Read out

The surviving attractor is the answer. Deterministic, reproducible, fast.
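The four stages can be caricatured as one loop: apply the substrate update until the state stops changing, then read out what survives. A toy Rust sketch, where the update rule is an illustrative nearest-neighbour cancellation and not VRAXION's actual dynamics:

```rust
/// Toy fixed-point read-out: iterate an update on a binary state vector
/// until it stops changing, then return the survivor. The update rule is
/// supplied by the caller; nothing here is the real substrate.
fn fixed_point(
    mut state: Vec<bool>,
    step: impl Fn(&[bool]) -> Vec<bool>,
    max_iters: usize,
) -> Option<Vec<bool>> {
    for _ in 0..max_iters {
        let next = step(&state);
        if next == state {
            return Some(state); // attractor reached: this is the read-out
        }
        state = next;
    }
    None // no fixed point within budget
}

fn main() {
    // "Cancel" caricature: a spike survives only if a neighbour also spikes.
    let step = |s: &[bool]| {
        (0..s.len())
            .map(|i| {
                let left = i > 0 && s[i - 1];
                let right = i + 1 < s.len() && s[i + 1];
                s[i] && (left || right)
            })
            .collect::<Vec<bool>>()
    };
    let attractor = fixed_point(vec![true, true, false, true, false, true, true], step, 100);
    assert_eq!(attractor, Some(vec![true, true, false, false, false, true, true]));
}
```

The isolated spike at index 3 annihilates; the two clusters at the edges survive as the attractor.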

§ 02 · The Grower

From empty. To circuit.

The grower doesn't optimize weights on a fixed topology. It changes the topology itself — neuron by neuron, threshold by threshold — using a scout oracle to rank candidates before expensive search runs.

  • Bias-free threshold neurons. Stored as dot ≥ threshold. 36 bits.
  • Scout-first search. Single-signal, connect-all, and pair-lift scouts rank parents before the expensive ternary search runs.
  • Append-only evidence. Every run writes metrics.json and golden_check.json.
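The first bullet's firing rule, dot ≥ threshold with no bias term, can be sketched directly. Types and field names below are ours for illustration; the real engine packs a neuron into 36 bits:

```rust
/// Bias-free threshold neuron: fires iff the dot product of its weights
/// with the input meets the threshold. There is no bias term; the
/// threshold plays that role. (Illustrative representation only.)
struct ThresholdNeuron {
    weights: Vec<i8>, // e.g. ternary {-1, 0, +1}
    threshold: i32,
}

impl ThresholdNeuron {
    fn fires(&self, input: &[i8]) -> bool {
        let dot: i32 = self
            .weights
            .iter()
            .zip(input)
            .map(|(&w, &x)| w as i32 * x as i32)
            .sum();
        dot >= self.threshold
    }
}

fn main() {
    let n = ThresholdNeuron { weights: vec![1, -1, 1], threshold: 1 };
    assert!(n.fires(&[1, 1, 1]));  // dot = 1 ≥ 1
    assert!(!n.fires(&[0, 1, 0])); // dot = -1 < 1
}
```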
graph.growth · step 128/128 · 83 edges · 80% acc

Fitness · jackpot selection · smooth cosine-bigram

evolve_language.rs · english · 1+9 ES
Accuracy over 200k steps: 1+9 ES peaks at 24.6%; 1+1 ES baseline at 21.2%.
jackpot · Δ +3.4pp
fitness · Δ +2.6pp
W-mutation · 0% accept
parity · rust ↔ python
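The evolution loop behind these numbers is a (1+λ) ES with λ = 9: mutate the parent nine times per generation and keep the best child only if it beats the parent. The repo's "jackpot" selection is its own variant on top of this; the sketch below is only the textbook baseline with a toy genome and fitness:

```rust
/// Textbook (1+λ) evolution strategy: each generation, generate λ
/// children by mutating the parent; the parent is replaced only if the
/// best child strictly beats it. Toy scalar genome for illustration.
fn one_plus_lambda(
    mut parent: f64,
    fitness: impl Fn(f64) -> f64,
    mutate: impl Fn(f64, u64) -> f64,
    lambda: u64,
    generations: u64,
) -> f64 {
    let mut parent_fit = fitness(parent);
    for g in 0..generations {
        let mut best = parent;
        let mut best_fit = parent_fit;
        for k in 0..lambda {
            let child = mutate(parent, g * lambda + k);
            let f = fitness(child);
            if f > best_fit {
                best = child;
                best_fit = f;
            }
        }
        parent = best;
        parent_fit = best_fit;
    }
    parent
}

fn main() {
    // Toy problem: maximise -(x - 3)²; deterministic "mutation" via an LCG.
    let fitness = |x: f64| -(x - 3.0) * (x - 3.0);
    let mutate = |x: f64, seed: u64| {
        let r = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let u = (r >> 11) as f64 / (1u64 << 53) as f64; // uniform in [0, 1)
        x + (u - 0.5) * 0.5 // step in [-0.25, 0.25)
    };
    let best = one_plus_lambda(0.0, fitness, mutate, 9, 200);
    assert!(fitness(best) > fitness(0.0)); // strictly improved over the start
}
```

Selection is elitist, so fitness never decreases; the 1+9 variant trades nine evaluations per generation for a much better chance of escaping flat regions than 1+1.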
§ 03 · Quantization

Champions at every compression.

FineWeb char-LM benchmark · nf=1024 · matched-compute controls applied.

◆ CLOUD / SERVER

QAT int8

Straight-through estimator. Essentially lossless, 4× compression.

86.40%
+0.20pp over float · matched compute
4× compression · int8 weights
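QAT simulates int8 weights during training by quantizing in the forward pass, while the straight-through estimator passes gradients through the rounding unchanged. A sketch of the forward fake-quantization only, with an illustrative symmetric per-tensor scale (not necessarily the scheme used here):

```rust
/// Symmetric per-tensor int8 fake-quantization: snap each weight to the
/// int8 grid, then dequantize back to float, as in the QAT forward pass.
/// The straight-through estimator is a backward-pass convention and has
/// no forward-pass code.
fn fake_quant_int8(weights: &[f32]) -> Vec<f32> {
    let max_abs = weights.iter().fold(0.0f32, |m, &w| m.max(w.abs()));
    if max_abs == 0.0 {
        return weights.to_vec();
    }
    let scale = max_abs / 127.0;
    weights
        .iter()
        .map(|&w| {
            let q = (w / scale).round().clamp(-127.0, 127.0); // int8 grid point
            q * scale // dequantize back to float
        })
        .collect()
}

fn main() {
    let out = fake_quant_int8(&[1.27, 0.4, 0.0]);
    assert!((out[0] - 1.27).abs() < 1e-3); // extremes survive exactly
    assert!((out[1] - 0.4).abs() < 6e-3);  // error bounded by half a step
    assert_eq!(out[2], 0.0);
}
```

At deploy time only the int8 values and the scale ship, giving the 4× compression quoted above.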
◆ MOBILE / EDGE

Staged INQ int4

Incremental network quantization, staged protocol.

84.75%
−1.65pp · 8× compression
8× compression · int4 weights
◆ IOT / FPGA

QAT binary (b1.58)

Ternary weights. Deploys to POPCOUNT hardware natively.

71.50%
−14.9pp · 32× compression · 1.58-bit weights
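"Deploys to POPCOUNT hardware natively" because a ternary weight row reduces to two bitmasks. A sketch under our own encoding (64-wide lanes, names ours): for a binary input x, the dot product is popcount(x AND plus) minus popcount(x AND minus).

```rust
/// Ternary (b1.58) dot product via POPCOUNT: encode a weight row as two
/// bitmasks, one for the +1 positions and one for the -1 positions.
/// (Illustrative 64-lane encoding; not the engine's actual layout.)
struct TernaryRow {
    plus: u64,  // bit i set ⇔ weight i == +1
    minus: u64, // bit i set ⇔ weight i == -1
}

impl TernaryRow {
    fn dot(&self, x: u64) -> i32 {
        // Each popcount is a single instruction on most hardware.
        (x & self.plus).count_ones() as i32 - (x & self.minus).count_ones() as i32
    }
}

fn main() {
    let row = TernaryRow { plus: 0b1010, minus: 0b0100 };
    assert_eq!(row.dot(0b1110), 1); // (+1 hits: 2) − (−1 hits: 1)
    assert_eq!(row.dot(0), 0);
}
```

No multiplies anywhere, which is why the 1.58-bit model is the IoT / FPGA champion despite the accuracy drop.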
§ 04 · Validated findings
L0 byte interpreter · locked · 36 bits · 100% round-trip
Diagram: input (8 bits) → hidden (4 threshold neurons, t₁–t₄) → output (POPCOUNT).

Shipped. Locked. Reproducible.

The public story is kept truthful with three labels: current mainline (code on main), validated finding (reproducible, not promoted), experimental (not yet default).

  • Smooth cosine-bigram fitness. +2.6pp. Broke the 17–18% ceiling.
  • 1+9 jackpot selection. +3.4pp. 24.6% peak.
  • Empty-start ≫ prefilled. 80% at 83 edges beats 64% at 3 400.
  • L0 byte interpreter · locked. 8 → 4 neurons, 36 bits, 100% round-trip.
§ 05 · Public beta contract

Two gates. Both reproducible.

Public beta isn't green on vibes. One engine-freeze gate, one computation benchmark.

B0 · ENGINE FREEZE · gate: pass

Grower regression.

Gate evidence: seed 42 · 175 tests ✓ · cargo bench · zero unsafe · metrics.json 24.6% · golden PASS ✓

python tools/run_grower_regression.py

B1 · BYTE / OPCODE v1 · gate: pending

Exact translator.

Diagram: byte (8 bits) → opcode (4 bits) via 8 frozen bit-heads (head₁–head₄, head₅₋₈) · LUT-exact COPY / NOT / INC / DEC · 100% round-trip.

python tools/run_byte_opcode_acceptance.py
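The B1 bar is exactness: the translator must match a reference lookup table on every one of the 256 bytes for COPY / NOT / INC / DEC. A sketch of that acceptance check, with our own names (we read "100% round-trip" as zero mismatches out of 256):

```rust
/// Reference semantics for the four v1 opcodes, plus an exhaustive
/// acceptance check: a candidate translator passes only if it matches
/// the reference LUT on all 256 bytes. Names here are illustrative.
#[derive(Clone, Copy)]
enum Op {
    Copy,
    Not,
    Inc,
    Dec,
}

fn reference(op: Op, b: u8) -> u8 {
    match op {
        Op::Copy => b,
        Op::Not => !b,
        Op::Inc => b.wrapping_add(1),
        Op::Dec => b.wrapping_sub(1),
    }
}

/// Build the 256-entry LUT for one opcode and verify a candidate
/// against it exhaustively; "exact" means zero mismatches.
fn exact_on_all_bytes(op: Op, candidate: impl Fn(u8) -> u8) -> bool {
    let lut: Vec<u8> = (0u8..=255).map(|b| reference(op, b)).collect();
    (0u8..=255).all(|b| candidate(b) == lut[b as usize])
}

fn main() {
    assert!(exact_on_all_bytes(Op::Inc, |b| b.wrapping_add(1)));
    assert!(!exact_on_all_bytes(Op::Not, |b| b)); // inexact candidate fails
    // INC and DEC invert each other on every byte.
    assert!((0u8..=255).all(|b| reference(Op::Dec, reference(Op::Inc, b)) == b));
}
```

Exhaustive checking is cheap here (4 × 256 cases), so the gate can demand exactness rather than a statistical pass.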

The beta is live.

Clone, run the five-minute proof, file a finding — or an honest critique. Apache 2.0 noncommercial.
