The network grows itself.
VRAXION is a gradient-free substrate that self-wires its own graph. Inference emerges as the fixed point of destructive interference.
Each layer is frozen as a public artifact only after a 100% lossless round-trip. Block A is the first building block: raw byte in, 16-dim latent out, and the decoder reconstructs the exact byte.
1 raw byte → 16-dim latent. Tied-mirror autoencoder. 100% lossless on all 256 bytes.
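The tied-mirror shape can be sketched in a few lines of NumPy. This is illustrative only: the real L0 unit uses 24 C19 threshold neurons with baked weights, while here plain tanh units and a random matrix stand in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 8-bit byte input -> 16-dim latent. One weight
# matrix plays both roles: W encodes, W.T decodes (the "tied mirror").
W = rng.normal(scale=0.5, size=(16, 8))

def byte_to_bits(b):
    return np.array([(b >> i) & 1 for i in range(8)], dtype=float)

def encode(x):
    return np.tanh(W @ x)   # 8 -> 16

def decode(z):
    return W.T @ z          # 16 -> 8, transpose of the same W

# Round-trip plumbing over a sample byte (untrained here, so this
# shows the wiring, not the 100%-lossless result).
x = byte_to_bits(167)
x_hat = decode(encode(x))
assert x_hat.shape == (8,)
```

The lossless claim is checked by running all 256 bytes through encode/decode and comparing bit-exactly.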
2 L0 outputs (32-dim) → single 32-dim merged. 100% lossless on all 65,536 pairs.
UTF-8 bytes → token IDs. Space-aware hybrid: whole-word + subword + byte fallback.
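A minimal sketch of the space-aware hybrid lookup. The vocab tables below are invented placeholders; the real tokenizer ships its own baked tables.

```python
# Hypothetical tables for illustration; real IDs come from the baked artifact.
WORD_VOCAB = {"the": 300, "network": 301}
SUBWORD_VOCAB = {"net": 400, "work": 401, "s": 402}
BYTE_OFFSET = 0  # assume token IDs 0..255 are reserved for raw bytes

def tokenize(text):
    ids = []
    for word in text.split(" "):
        if word in WORD_VOCAB:                 # 1) whole-word hit
            ids.append(WORD_VOCAB[word])
            continue
        rest = word
        while rest:                            # 2) greedy longest subword
            for end in range(len(rest), 0, -1):
                if rest[:end] in SUBWORD_VOCAB:
                    ids.append(SUBWORD_VOCAB[rest[:end]])
                    rest = rest[end:]
                    break
            else:                              # 3) byte fallback, one char
                ids.extend(BYTE_OFFSET + b for b in rest[:1].encode("utf-8"))
                rest = rest[1:]
    return ids

print(tokenize("the networks"))  # → [300, 400, 401, 402]
```

Anything the word and subword tables miss degrades gracefully to raw UTF-8 bytes, so no input is ever unrepresentable.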
Token IDs → 64-dim vectors. Lookup table (random-init); trained end-to-end with Brain.
[N, 64] → [N, vocab] next-token logits. Causal transformer, tied embedder/output head.
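Tying the embedder to the output head means one matrix serves both directions. A sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 1000, 64                           # illustrative sizes

E = rng.normal(scale=0.02, size=(vocab, dim))   # one matrix, two roles

def embed(token_ids):
    return E[token_ids]                         # [N] -> [N, 64]

def output_logits(hidden):
    return hidden @ E.T                         # [N, 64] -> [N, vocab]

h = embed(np.array([3, 7, 42]))   # stand-in for the transformer output
logits = output_logits(h)
assert logits.shape == (3, vocab)
```

Tying halves the parameter count at the boundary and keeps input and output token geometry in the same space.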
Signal enters the substrate, incompatible paths cancel, and the surviving pattern is read out. Four stages, one fixed point, zero gradients.
Signal projects as sparse distributed representation into the recurrent substrate.
Spikes traverse the directed graph along scout-ranked parent shortlists.
Incompatible modes annihilate through destructive interference.
The surviving attractor is the answer. Deterministic, reproducible, fast.
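As a toy illustration only (the real substrate propagates spikes over a self-wired graph, not a dense matrix), the settle-to-fixed-point readout can be mimicked with an antisymmetric coupling whose opposing modes cancel:

```python
import numpy as np

def settle(signal, A, iters=500, tol=1e-9):
    """Iterate until the state stops changing: the fixed point is the answer."""
    x = np.zeros_like(signal)
    for _ in range(iters):
        x_next = np.tanh(signal + A @ x)   # inject signal, let modes interfere
        if np.max(np.abs(x_next - x)) < tol:
            return x_next
        x = x_next
    return x

rng = np.random.default_rng(0)             # seeded: fully reproducible
n = 64
A = rng.normal(scale=0.05, size=(n, n))
A = (A - A.T) * 0.5                        # antisymmetric: opposing modes cancel
s = rng.normal(size=n)

ans1 = settle(s, A)
ans2 = settle(s, A)
assert np.allclose(ans1, ans2)             # deterministic, reproducible
```

With a small coupling the update is a contraction, so the iteration lands on a unique attractor regardless of run order.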
The grower doesn't optimize weights on a fixed topology. It changes the topology itself — neuron by neuron, threshold by threshold — using a scout oracle to rank candidates before expensive search runs.
dot ≥ threshold. 36 bits.
metrics.json, golden_check.json.
FineWeb char-LM benchmark · nf=1024 · matched-compute controls applied.
Straight-through estimator. Essentially lossless, 4× compression.
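The straight-through estimator in one picture: the forward pass snaps weights onto the int4 grid, the backward pass pretends the rounding was the identity. A NumPy sketch with a hypothetical scale:

```python
import numpy as np

def quantize_int4(w, scale):
    # Forward: snap to the signed int4 grid [-8, 7], then rescale.
    q = np.clip(np.round(w / scale), -8, 7)
    return q * scale

def ste_grad(upstream_grad):
    # Backward: treat round() as identity and pass the gradient through.
    return upstream_grad

w = np.array([0.11, -0.52, 0.98])
scale = 0.125                       # hypothetical quantization step
w_q = quantize_int4(w, scale)       # → [0.125, -0.5, 0.875]
g = ste_grad(np.array([1.0, 1.0, 1.0]))  # flows to w unchanged
```

Rounding has zero gradient almost everywhere, so without the pass-through trick nothing upstream of the quantizer would ever train.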
Incremental network quantization, staged protocol.
Ternary weights. Deploys to POPCOUNT hardware natively.
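Why ternary deploys natively: with weights in {-1, 0, +1} split into two bit masks, a dot product against binary activations reduces to two AND-plus-POPCOUNT operations. A Python sketch (real hardware would run popcnt on packed registers):

```python
def popcount(x):
    return bin(x).count("1")

def ternary_dot(x_bits, plus_mask, minus_mask):
    # x_bits: activations packed as a bitmask (1 = active input)
    return popcount(x_bits & plus_mask) - popcount(x_bits & minus_mask)

# Weights over 8 inputs: [+1, 0, -1, +1, 0, 0, -1, +1] (bit 0 = input 0)
plus_mask  = 0b10001001   # +1 at positions 0, 3, 7
minus_mask = 0b01000100   # -1 at positions 2, 6
x = 0b10001101            # active inputs at positions 0, 2, 3, 7

print(ternary_dot(x, plus_mask, minus_mask))  # → 2
```

No multiplies anywhere: the whole inner product is bitwise AND plus population count, which is a single instruction per word on most CPUs.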
The public story is kept truthful with three labels: current mainline (code on main), validated finding (reproducible, not promoted), experimental (not yet default).
Live visualizers for the L0 Byte Unit and L1 Byte-Pair Merger champions. Architecture diagrams, weight heatmaps, C19 neuron curves, and live roundtrip tests — all running in-browser with the actual baked weights.
2-layer tied-weight mirror autoencoder. 8-bit input, 24 C19 neurons, 16-dim latent. 100% lossless. 288 bytes, int4-packed.
Single-W tied mirror autoencoder. 32-dim input (two 16-dim L0 latents covering 2 bytes), 81 C19 neurons, 3.36 KB Huffman-packed. 100% lossless on all 65,536 byte pairs.
Public beta isn't green on vibes. One engine-freeze gate, one computation benchmark.
python tools/run_grower_regression.py
python tools/run_byte_opcode_acceptance.py
Clone, run the five-minute proof, file a finding — or an honest critique. Apache 2.0 noncommercial.