github.com/bioc is a GitHub organization mirroring all Bioconductor software, one repository per package.
This site is to serve as my note-book and to effectively communicate with my students and collaborators. Every now and then, a blog may be of interest to other researchers or teachers. Views in this blog are my own. All rights of research results and findings on this blog are reserved. See also http://youtube.com/c/hongqin @hongqin
Sunday, March 29, 2026
Friday, March 27, 2026
pennAITech funding statement
As a reminder, for any presentations or publications showcasing your pilot work, please make sure to include a funding acknowledgment: “Research reported in this publication/ presentation was supported by the National Institute On Aging of the National Institutes of Health under Award Number P30AG073105. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.”
Tuesday, March 24, 2026
WebAssembly inside a Transformer; “computers inside transformers”
This is a very recent research direction (2025–2026) associated with Christos Tzamos and collaborators, often discussed as “computers inside transformers” or “WASM-in-transformer”.
Here’s a clear breakdown of what it is and why it’s important:
Core Idea: WebAssembly inside a Transformer
Recent work (e.g., from Percepta AI) shows that a transformer can embed and execute a WebAssembly (WASM) interpreter directly inside its weights.
- Instead of calling external tools or running code separately
- The model itself executes programs step-by-step internally
In effect, the transformer becomes both:
- a neural model (learning patterns)
- AND a symbolic computer (executing programs)
They “implemented a WebAssembly interpreter inside the transformer weights”
⚙️ What exactly is happening?
1. Compile a computer into the model
- A RAM-like abstract machine + WASM interpreter is encoded into the transformer
- Represented using attention + matrix operations
2. Feed in a program
- The input tokens include:
  - the program (in WASM or compiled form)
  - the initial memory/state
3. Transformer executes it
- Each forward pass simulates steps of execution
- Produces:
  - an execution trace
  - the final output
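The three steps above can be sketched as a deterministic state-transition loop. This is a toy register machine in plain Python, not the actual transformer construction: each call to `step` plays the role of one forward pass, and the per-step memory snapshots play the role of the execution trace.

```python
# Toy sketch (NOT the real WASM-in-transformer construction): a program
# executed as deterministic state transitions, one "step" per forward pass.
# State = (program counter, memory); each step applies one instruction.

def step(state, program):
    """Apply one instruction deterministically, like one forward pass."""
    pc, mem = state
    op, *args = program[pc]
    if op == "set":              # set reg, value
        mem[args[0]] = args[1]
    elif op == "add":            # add dst, src  ->  dst += src
        mem[args[0]] += mem[args[1]]
    elif op == "halt":
        return (pc, mem), True
    return (pc + 1, mem), False

def run(program):
    state, halted = (0, {}), False
    trace = []                   # execution trace, one snapshot per step
    while not halted:
        state, halted = step(state, program)
        trace.append(dict(state[1]))
    return state[1], trace

prog = [("set", "x", 2), ("set", "y", 3), ("add", "x", "y"), ("halt",)]
final, trace = run(prog)
print(final)   # {'x': 5, 'y': 3}
```

The point of the toy: given the same program and initial state, the output is fully determined, which is exactly the property the reports emphasize (execution, not probabilistic text generation).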
No Python, no external runtime — everything happens inside the transformer
Key Results (from reports)
- Deterministic execution (not probabilistic text generation)
- Runs tens of thousands of steps per second on CPU (~33K tokens/sec)
- Demonstrates that transformers can act as general-purpose computers
Why WebAssembly?
WebAssembly (WASM) is ideal because:
- Low-level, structured bytecode (like a portable assembly)
- Deterministic and safe
- Easy to model as state transitions
So it becomes a natural bridge between:
- symbolic computation
- neural computation
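Those properties are easy to see in miniature. A WASM function body is a stack machine: the whole machine state is one value stack, and every instruction is a pure function from stack to stack. The sketch below models just two i32 instructions in Python (toy semantics, not a real WASM runtime):

```python
# Minimal sketch of why WASM-style bytecode is easy to model as state
# transitions: the machine state is a single value stack, and each
# instruction deterministically maps stack -> stack.

def execute(bytecode):
    stack = []
    for instr in bytecode:
        op = instr[0]
        if op == "i32.const":
            stack.append(instr[1])
        elif op == "i32.add":
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & 0xFFFFFFFF)   # 32-bit wraparound
        elif op == "i32.mul":
            b, a = stack.pop(), stack.pop()
            stack.append((a * b) & 0xFFFFFFFF)
    return stack

# (2 + 3) * 4, written as stack-machine bytecode
code = [("i32.const", 2), ("i32.const", 3), ("i32.add",),
        ("i32.const", 4), ("i32.mul",)]
print(execute(code))   # [20]
```

Because each instruction is a small, total, deterministic transition, the interpreter loop is exactly the kind of function a transformer layer stack can be engineered (or trained) to compute.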
Conceptual Shift
This work challenges the usual architecture:
| Traditional LLM | Tzamos-style system |
|---|---|
| Predicts next token | Executes programs |
| Uses external tools (Python, APIs) | Internal execution |
| Probabilistic | Deterministic |
| Text interface | Computational substrate |
This suggests:
Transformers are not just language models — they can be universal computers
Relation to Theory
This connects to known results:
- Transformers are Turing complete (in theory)
- This work shows a practical construction
- Bridges:
  - neural networks
  - programming languages
  - compilers
Implications for Research (very relevant to your work)
Given your interests (AI + biology + hypothesis-driven models), this paradigm could enable:
1. Hybrid symbolic–neural models
- Embed biological simulation rules inside transformers
- Combine:
  - learned representations
  - mechanistic models
2. Deterministic reasoning inside LLMs
- Replace fragile chain-of-thought with executable programs inside the model
3. “Model = program + data”
- Instead of prompting a model to reason, you compile algorithms into its weights
⚠️ Limitations / Open Questions
- Scaling to large real-world programs
- Training vs hand-encoding interpreters
- Efficiency vs specialized runtimes
- Debuggability
Bottom Line
The “Christos Tzamos + WebAssembly + transformer” work shows:
A transformer can contain and execute a full programming language runtime internally, effectively turning it into a neural computer.
This is a major step toward:
- neural-symbolic unification
- tool-free AI systems
- and potentially programmable foundation models
openai life sciences
https://openaifoundation.org/news/update-on-the-openai-foundation
At the Foundation, we’ve identified three initial focus areas where we think this work could make a real difference:
AI for Alzheimer’s: Alzheimer’s is one of the hardest and most heartbreaking diseases families face – and one of the toughest problems in medicine. AI’s ability to reason across complex data could help researchers uncover new insights. We will be partnering with leading research institutions, with an initial focus on mapping disease pathways, detecting biomarkers for clinical care and clinical trials, and accelerating personalization of treatments – including, where possible, repurposing existing FDA-approved molecules.
Public Data for Health: Many of medicine’s biggest advances have been made possible by shared scientific data, and public access to data is essential to deliver the promise of AI for scientific breakthroughs. We will help partners create and expand open, high-quality datasets – and, where appropriate, help responsibly open previously closed ones – so researchers everywhere can leverage AI and use data to drive progress across diseases.
Accelerating Progress on High-Mortality and High-Burden Diseases: We believe AI can help lead to scientific breakthroughs, and lower the cost and risk of developing or repurposing therapies, particularly in high-mortality and high-burden disease areas that are underfunded. We will bring together AI researchers and disease experts, starting with a focused workshop to identify how best to empower scientists with AI tools and surface promising opportunities.
Saturday, March 7, 2026
DNABERT waterfield
todo: DNABERT2
- DNABERT is installed.
- Found repo: https://github.com/jerryji1993/DNABERT.git
- Cloned to: DNABERT (commit b6da04e)
- Created Conda env: dnabert (Python 3.6)
- Installed:
- DNABERT in editable mode (pip install -e .)
- Example dependencies
- pybedtools/pysam (via mamba + pip wheel fallback)
- PyTorch stack (torch 1.10.2+cu102, torchvision 0.11.2)
Verification passed in dnabert:
- torch 1.10.2+cu102
- transformers 2.5.0
- pybedtools 0.8.1
- pysam 0.19.0
Use it with:
source ~/miniforge3/etc/profile.d/conda.sh
conda activate dnabert
cd /home/xxx/github/DNABERT
Tuesday, March 3, 2026
waterfield GPUs
GPU Inventory Summary
| GPU Type | Partition | Total Nodes | GPUs per Node | Total GPUs |
|---|---|---|---|---|
| NVIDIA RTX P6000 | rtxp6000flex-1 | 30 | 1 | 30 |
| | rtxp6000flex-2 | 12 | 2 | 24 |
| | rtxp6000flex-4 | 8 | 4 | 32 |
| | rtxp6000flex-8 | 8 | 8 | 64 |
| NVIDIA H100 | h100flex-1 | 30 | 1 | 30 |
| | h100flex-2 | 8 | 2 | 16 |
| | h100flex-4 | 8 | 4 | 32 |
| | h100flex-8 | 8 | 8 | 64 |
| NVIDIA H200 | h200flex-8 | 8 | 8 | 64 |
| NVIDIA B200 | b200flex-8 | 2 | 8 | 16 |
| NVIDIA MegaGPU* | reserved-clync008 | 1 | 8 | 8 |
Total Counts by Model
RTX P6000 Series: 150 GPUs
H100 Series: 142 GPUs
H200 Series: 64 GPUs
B200 Series: 16 GPUs
Reserved/Special: 8 GPUs
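A quick arithmetic cross-check of these totals against the partition table above (plain Python; the nodes-per-partition and GPUs-per-node numbers are copied straight from the table):

```python
# Per-model GPU totals, computed as nodes * GPUs-per-node for each partition.
inventory = {
    "RTX P6000": [30 * 1, 12 * 2, 8 * 4, 8 * 8],  # flex-1/2/4/8 partitions
    "H100":      [30 * 1, 8 * 2, 8 * 4, 8 * 8],
    "H200":      [8 * 8],
    "B200":      [2 * 8],
    "Reserved":  [1 * 8],                          # reserved-clync008
}
totals = {model: sum(gpus) for model, gpus in inventory.items()}
print(totals)
# {'RTX P6000': 150, 'H100': 142, 'H200': 64, 'B200': 16, 'Reserved': 8}
```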