The work you’re referring to is a very recent research direction (2025–2026) associated with Christos Tzamos and collaborators, often discussed under “computers inside transformers” or “WASM-in-transformer”.
Here’s a clear breakdown of what it is and why it’s important:
🧠 Core Idea: WebAssembly inside a Transformer
Recent work (e.g., from Percepta AI) shows that a transformer can embed and execute a WebAssembly (WASM) interpreter directly inside its weights.
- Instead of calling external tools or running code separately, the model itself executes programs step-by-step internally
👉 In effect, the transformer becomes both:
- a neural model (learning patterns)
- AND a symbolic computer (executing programs)
As the authors describe it, they “implemented a WebAssembly interpreter inside the transformer weights.”
⚙️ What exactly is happening?
1. Compile a computer into the model
   - A RAM-like abstract machine plus a WASM interpreter is encoded into the transformer
   - Represented using attention and matrix operations
2. Feed in a program
   - The input tokens include:
     - the program (in WASM or compiled form)
     - the initial memory/state
3. Transformer executes it
   - Each forward pass simulates steps of execution
   - Produces:
     - an execution trace
     - the final output
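The three steps above can be sketched in miniature. The following is a hypothetical toy interpreter, not the actual Percepta/Tzamos construction: each call to `step` plays the role of one forward pass, deterministically advancing the machine state, and the run accumulates an execution trace plus a final output. The instruction set and state layout here are illustrative assumptions.

```python
# Toy sketch: one "forward pass" = one deterministic execution step.
# (Illustrative stack machine, not the paper's actual encoding.)

def step(state):
    """Apply a single execution step and return the next state."""
    pc, stack, program = state["pc"], state["stack"], state["program"]
    op = program[pc]
    if op[0] == "push":
        return {"pc": pc + 1, "stack": stack + [op[1]],
                "program": program, "halted": False}
    if op[0] == "add":
        new_stack = stack[:-2] + [stack[-2] + stack[-1]]
        return {"pc": pc + 1, "stack": new_stack,
                "program": program, "halted": False}
    if op[0] == "halt":
        return {**state, "halted": True}
    raise ValueError(f"unknown op {op[0]}")

def run(program, max_steps=100):
    """Step until halt, collecting the trace of intermediate states."""
    state = {"pc": 0, "stack": [], "program": program, "halted": False}
    trace = [state]
    for _ in range(max_steps):
        if state["halted"]:
            break
        state = step(state)
        trace.append(state)
    return trace, state["stack"]

# "Program tokens" compute 2 + 3; the run yields a trace and a result.
trace, result = run([("push", 2), ("push", 3), ("add",), ("halt",)])
```

Because every step is a pure function of the current state, re-running the same program always produces the same trace, which is the property the deterministic-execution claim relies on.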
👉 No Python, no external runtime: everything happens inside the transformer
📊 Key Results (from reports)
- Deterministic execution (not probabilistic text generation)
- Runs tens of thousands of steps per second on CPU (~33K tokens/sec)
- Demonstrates that transformers can act as general-purpose computers
🧠 Why WebAssembly?
WebAssembly (WASM) is ideal because it is:
- low-level, structured bytecode (like a portable assembly)
- deterministic and safe
- easy to model as state transitions
So it becomes a natural bridge between:
- symbolic computation
- neural computation
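"Easy to model as state transitions" can be made concrete: each WASM instruction is a pure function from state to state. Below is a minimal sketch under assumed simplifications (the opcode names mirror real WASM numeric instructions, but the stack-as-list state representation is purely illustrative, not the paper's encoding).

```python
# WASM-style ops as a pure transition function delta(stack, instr) -> stack.
# i32 arithmetic wraps modulo 2^32, as in the real WASM spec.

def delta(stack, instr):
    op = instr[0]
    if op == "i32.const":
        return stack + [instr[1] & 0xFFFFFFFF]          # push a 32-bit constant
    if op == "i32.add":
        a, b = stack[-2], stack[-1]
        return stack[:-2] + [(a + b) & 0xFFFFFFFF]      # wrap-around add
    if op == "i32.mul":
        a, b = stack[-2], stack[-1]
        return stack[:-2] + [(a * b) & 0xFFFFFFFF]      # wrap-around multiply
    raise ValueError(f"unknown opcode {op}")

def execute(instrs, stack=()):
    """Fold the transition function over an instruction sequence."""
    stack = list(stack)
    for instr in instrs:
        stack = delta(stack, instr)
    return stack

out = execute([("i32.const", 6), ("i32.const", 7), ("i32.mul",)])
```

Since `delta` has no hidden state or randomness, the whole semantics is a deterministic fold, which is exactly the shape of computation that is natural to encode as repeated layer applications.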
🧩 Conceptual Shift
This work challenges the usual architecture:
| Traditional LLM | Tzamos-style system |
|---|---|
| Predicts next token | Executes programs |
| Uses external tools (Python, APIs) | Internal execution |
| Probabilistic | Deterministic |
| Text interface | Computational substrate |
👉 This suggests:
Transformers are not just language models; they can be universal computers.
🔬 Relation to Theory
This connects to known results:
- Transformers are Turing complete (in theory)
- This work shows a practical construction
- It bridges:
  - neural networks
  - programming languages
  - compilers
🧪 Implications for Research (very relevant to your work)
Given your interests (AI + biology + hypothesis-driven models), this paradigm could enable:
1. Hybrid symbolic–neural models
   - Embed biological simulation rules inside transformers
   - Combine:
     - learned representations
     - mechanistic models
2. Deterministic reasoning inside LLMs
   - Replace probabilistic, sampled chain-of-thought with executable programs inside the model
3. “Model = program + data”
   - Instead of prompting and hoping for correct reasoning, you compile algorithms directly into the weights
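As a deliberately simplified illustration of "compiling an algorithm into weights" (a toy sketch, not the actual construction): a function over a small finite domain can be baked into a 0/1 matrix so that evaluating it is just a matrix-vector product on a one-hot input followed by an argmax, i.e. the function table lives entirely in the weights. The domain size and the function chosen here are arbitrary assumptions.

```python
# Toy "compilation": a fixed function over a finite domain becomes a
# weight matrix, and evaluation is one linear layer plus argmax.
# (Pure-Python stand-in for a matmul; illustrative only.)

DOMAIN = 8
f = lambda x: (x + 3) % DOMAIN          # the "algorithm" to compile

# W[i][j] = 1 iff f(i) == j  -- the function table baked into weights
W = [[1 if f(i) == j else 0 for j in range(DOMAIN)] for i in range(DOMAIN)]

def forward(x):
    """One-hot encode x, multiply by W, decode the output by argmax."""
    one_hot = [1 if k == x else 0 for k in range(DOMAIN)]
    logits = [sum(one_hot[i] * W[i][j] for i in range(DOMAIN))
              for j in range(DOMAIN)]
    return logits.index(max(logits))
```

The point of the sketch is the shift in mindset: the behavior is not elicited by a prompt, it is fixed by the weights, so the same input always yields the same output.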
⚠️ Limitations / Open Questions
- Scaling to large, real-world programs
- Training vs. hand-encoding interpreters
- Efficiency vs. specialized runtimes
- Debuggability
🧭 Bottom Line
The “Christos Tzamos + WebAssembly + transformer” work shows:
A transformer can contain and execute a full programming language runtime internally, effectively turning it into a neural computer.
This is a major step toward:
- neural-symbolic unification
- tool-free AI systems
- and potentially programmable foundation models