Thursday, April 23, 2026

codex, chatgpt, gemini, claude

As of 2026, ChatGPT, Claude, and Gemini all have serious native Mac apps, but they excel at different things.

Executive Summary

Best For                                     Winner
Coding / software engineering                Claude Mac app
General productivity / desktop assistant     ChatGPT Mac app
Google ecosystem / docs / search-heavy work  Gemini Mac app
Deep research reports                        ChatGPT / Gemini
Large codebase reasoning                     Claude
Creative generation (image/video/music)      Gemini
Most polished Mac experience                 ChatGPT

1. Coding (Developers / Researchers / Engineers)

Claude Mac App = Best for serious coding

Strengths

  • Excellent long-context repo understanding

  • Strong multi-file refactoring

  • Better architectural reasoning

  • Great for debugging large codebases

  • Claude Code / terminal workflows pair well with desktop use

Best for:

  • Python projects

  • Research code

  • HPC scripts

  • Large repos

  • Reading unfamiliar codebases

Weakness:

  • Less polished productivity ecosystem than ChatGPT

ChatGPT Mac App = Best all-around coding assistant

Strengths

  • Great code generation

  • Strong debugging

  • Better multimodal (screenshots / diagrams / docs)

  • Better tool ecosystem (files, canvas, browsing, voice)

  • Great for coding + writing + research in one app

Best for:

  • Solo builders

  • Data scientists

  • Full-stack coding

  • Rapid prototyping

Gemini Mac App = Improving, but third for coding

Strengths

  • Good with Google Cloud / Vertex AI / Workspace stack

  • Strong context windows

  • Good code explanations

  • Gemini Code Assist ecosystem in VS Code / JetBrains

Weakness:

  • Native Mac coding workflows still newer than ChatGPT / Claude

Coding ranking (2026)

  1. Claude

  2. ChatGPT

  3. Gemini


2. Research (papers / grants / analysis)

ChatGPT = Best overall research companion

Why

  • Strong synthesis

  • Better reasoning across sources

  • Great writing refinement

  • Better tables / structure / grant drafting

  • Good with PDFs, notes, brainstorming

Gemini = Best search-connected research

Why

  • Strong web grounding

  • Excellent Google Search integration

  • Great with Google Docs / Drive workflow

  • Strong summarization of current web content

Claude = Best deep reading

Why

  • Reads long PDFs well

  • Great nuanced summaries

  • Strong close reading of manuscripts

Research ranking

  1. ChatGPT

  2. Gemini

  3. Claude

(Claude can be #1 when the task is deep reading of one huge set of papers.)


3. Productivity (daily Mac use)

ChatGPT Mac App = Best overall

Strong native shortcuts, voice, screenshots, file handling, desktop workflow, broad tools.

Gemini Mac App = Best if you live in Google Workspace

  • Gmail

  • Docs

  • Drive

  • Chrome

  • Search-heavy workflows

Google’s new Mac app includes quick launch and screen context sharing. (blog.google)

Claude Mac App = Focused thinker

Great if your productivity = reading/writing/thinking rather than automating many tasks.

Productivity ranking

  1. ChatGPT

  2. Gemini

  3. Claude


If You Are an ODU Professor / Researcher (my honest recommendation)

Use all three strategically:

ChatGPT Mac App

Daily command center:

  • email drafting

  • grant writing

  • slides

  • admin tasks

  • mixed coding + research

Claude Mac App

Heavy intellectual work:

  • NSF proposal logic

  • manuscript review

  • codebase reasoning

  • deep edits

Gemini Mac App

Google-centric workflows:

  • Drive docs

  • Search

  • quick desktop helper

  • Google Cloud workflows


If You Only Pick One

User Type                      Best Choice
Professor / researcher         ChatGPT
Hardcore coder                 Claude
Google Workspace power user    Gemini
Student                        ChatGPT
Creative multimedia            Gemini

My Honest 2026 Verdict

ChatGPT = best Swiss Army knife

Claude = best intellect for coding/writing depth

Gemini = best ecosystem + future potential


For you specifically (Hong Qin)

Given your profile (NSF proposals, research, coding, students, grants):

Ideal stack:

  • ChatGPT Mac = primary daily assistant

  • Claude Mac = difficult writing/coding reasoning

  • Gemini Mac = Google docs/search support


My prediction for late 2026

Google may close the gap quickly because Gemini Mac app just launched and appears to be the start of a bigger desktop strategy. (blog.google)


If you'd like, I can also compare Claude Code vs Codex vs Gemini CLI on Mac for research professors in 2026 — that may actually matter more than the chat apps.

waterfield claude hands-off message

 Job 9414 has been running for 3 minutes. With a 48h time limit it should finish around 2026-04-26 ~02:00 UTC, Saturday evening ET. I'll set the poll to wake every 2 hours, detect completion, then run the full analysis inline.


The polling loop is live. Here's what I set up and why:


---


What was scheduled


ScheduleWakeup, not a remote CCR routine, wakes this Claude Code CLI session every 2 hours directly on the [REDACTED_CLUSTER] cluster. A remote CCR agent was the wrong tool here: it runs in Anthropic's cloud with no access to your SLURM queue, log files, or output/ directories.


Each poll cycle:


1. Runs `squeue -u [REDACTED_USER]` to check jobs 9414 (Mamba, 48h) and 9415 (HypTransformer, 72h).

2. If neither is done, reschedules 2 hours later.

3. When job 9414 finishes, runs Phase 1 analysis: driver gene rankings, AD gene overlap, accuracy check, and reschedules for Phase 2.

4. When job 9415 finishes, runs Phase 2 analysis: conformal efficiency gaps, hypothesis comparison, and writes the full results doc.

5. If accuracy is below the majority baseline, auto-creates and submits the 3-class binned Braak fallback script.

6. Updates `memory/session_state.md` and `memory/project_sea_ad.md` with findings.


Expected timeline: Job 9414 runs about 48h, completing around 2026-04-26 02:00 UTC, then job 9415 runs up to 72h after that.
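A minimal Python sketch of the poll-cycle decision logic described above (hypothetical helper: the squeue parsing is simplified and the action names are illustrative, not the session's actual code):

```python
JOBS = {"9414": "phase1", "9415": "phase2"}  # job ID -> analysis phase (from the plan above)

def running_jobs(squeue_output: str) -> set:
    """Parse `squeue -u <user>` text output; return job IDs still queued/running."""
    ids = set()
    for line in squeue_output.strip().splitlines()[1:]:  # skip the header row
        cols = line.split()
        if cols:
            ids.add(cols[0])
    return ids

def next_action(squeue_output: str, done: set) -> str:
    """Decide what this wake-up cycle should do."""
    active = running_jobs(squeue_output)
    for job, phase in JOBS.items():
        if job not in active and job not in done:
            return f"run_{phase}_analysis"   # job finished since the last poll
    if active & set(JOBS):
        return "reschedule_2h"               # still running: sleep another 2 hours
    return "write_results_doc"               # everything done
```

Each 2-hour wakeup would feed fresh `squeue` output into `next_action` and mark a phase as done once its analysis has run.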


To monitor manually in the meantime:


```bash
squeue -u [REDACTED_USER]
tail -f logs/seaad_drvmamba_braak_h100_9414.out
```


Tuesday, April 21, 2026

Monday, April 20, 2026

PACE

 PACE's data is helping us better understand how the ocean and atmosphere exchange carbon dioxide. In addition, it is revealing how aerosols might fuel phytoplankton growth in the surface ocean. Novel uses of PACE data will benefit our economy and society. For example, it will help identify the extent and duration of harmful algal blooms. PACE is extending and expanding NASA's long-term observations of our living planet. PACE's cutting-edge technology is pioneering how we take Earth's pulse for decades to come.

https://pace.gsfc.nasa.gov/


Saturday, April 18, 2026

claude session state md

 

 Saved to ./SESSION_STATE.md in the project root. On reconnect, just tell Claude:

  "read SESSION_STATE.md and resume"

  It covers: active job IDs, how to check progress, what files to expect, action items for every outcome (still running / success / failure), and all environment/partition details.

Thursday, April 9, 2026

claude --dangerously-skip-permissions -p "xxxx"

claude --dangerously-skip-permissions -p "Do todo items 1, 2, and 3 from todo/todo-claude-20260409.md autonomously. Working directory is /Users/hqin/xxx"  



Sunday, March 29, 2026

github.com/bioc

 github.com/bioc is a github org with all bioconductor software, one repo per package. 

Friday, March 27, 2026

pennAITech funding statement

 As a reminder, for any presentations or publications showcasing your pilot work, please make sure to include a funding acknowledgment: “Research reported in this publication/presentation was supported by the National Institute On Aging of the National Institutes of Health under Award Number P30AG073105. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.”

Tuesday, March 24, 2026

WebAssembly inside a Transformer; “computers inside transformers”

 The work you’re referring to is a very recent research direction (2025–2026) associated with Christos Tzamos and collaborators, often discussed under “computers inside transformers” or “WASM-in-transformer”.

Here’s a clear breakdown of what it is and why it’s important:


🔧 Core Idea: WebAssembly inside a Transformer

Recent work (e.g., from Percepta AI) shows that a transformer can embed and execute a WebAssembly (WASM) interpreter directly inside its weights.

  • Instead of calling external tools or running code separately
  • The model itself executes programs step-by-step internally

👉 In effect, the transformer becomes both:

  • a neural model (learning patterns)
  • AND a symbolic computer (executing programs)

They “implemented a WebAssembly interpreter inside the transformer weights”.


⚙️ What exactly is happening?

1. Compile a computer into the model

  • A RAM-like abstract machine + WASM interpreter is encoded into the transformer
  • Represented using attention + matrix operations

2. Feed in a program

  • The input tokens include:
    • program (in WASM or compiled form)
    • initial memory/state

3. Transformer executes it

  • Each forward pass simulates steps of execution
  • Produces:
    • execution trace
    • final output

👉 No Python, no external runtime — everything happens inside the transformer
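A toy Python sketch (my illustration, not the actual Percepta construction) of the core trick: deterministic state transitions driven by matrix multiplication over one-hot state vectors, the kind of operation attention layers can implement:

```python
# Toy "program": a counter that increments until it reaches a halt state.
N_STATES = 4

# Transition matrix: T[i][j] = 1 means state i moves to state j on each "forward pass".
T = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 1]]  # state 3 is a halt state: 3 -> 3

def step(state_vec):
    """One execution step: one-hot state vector times the transition matrix."""
    return [sum(state_vec[i] * T[i][j] for i in range(N_STATES))
            for j in range(N_STATES)]

def run(start_state, n_steps):
    """Execute n_steps deterministic transitions and decode the final state."""
    vec = [1 if i == start_state else 0 for i in range(N_STATES)]
    for _ in range(n_steps):
        vec = step(vec)
    return vec.index(1)  # one-hot back to an integer state
```

The point of the sketch: because the transition is a fixed matrix product, the "execution" is exact and repeatable, not sampled, which is the property the WASM-in-transformer work scales up to a full interpreter.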


🚀 Key Results (from reports)

  • Deterministic execution (not probabilistic text generation)
  • Runs tens of thousands of steps per second on CPU (~33K tokens/sec)
  • Demonstrates that transformers can act as general-purpose computers

🧠 Why WebAssembly?

WebAssembly (WASM) is ideal because:

  • Low-level, structured bytecode (like a portable assembly)
  • Deterministic and safe
  • Easy to model as state transitions

So it becomes a natural bridge between:

  • symbolic computation
  • neural computation

🧩 Conceptual Shift

This work challenges the usual architecture:

Traditional LLM                        Tzamos-style system
Predicts next token                    Executes programs
Uses external tools (Python, APIs)     Internal execution
Probabilistic                          Deterministic
Text interface                         Computational substrate

👉 This suggests:

Transformers are not just language models — they can be universal computers


🔬 Relation to Theory

This connects to known results:

  • Transformers are Turing complete (in theory)
  • This work shows a practical construction
  • Bridges:
    • neural networks
    • programming languages
    • compilers

🧪 Implications for Research (very relevant to your work)

Given your interests (AI + biology + hypothesis-driven models), this paradigm could enable:

1. Hybrid symbolic–neural models

  • Embed biological simulation rules inside transformers
  • Combine:
    • learned representations
    • mechanistic models

2. Deterministic reasoning inside LLMs

  • Replace:
    • fragile chain-of-thought
  • With:
    • executable programs inside the model

3. “Model = program + data”

  • Instead of prompting → reasoning
  • You compile algorithms into weights

⚠️ Limitations / Open Questions

  • Scaling to large real-world programs
  • Training vs hand-encoding interpreters
  • Efficiency vs specialized runtimes
  • Debuggability

🧭 Bottom Line

The “Christos Tzamos + WebAssembly + transformer” work shows:

A transformer can contain and execute a full programming language runtime internally, effectively turning it into a neural computer.

This is a major step toward:

  • neural-symbolic unification
  • tool-free AI systems
  • and potentially programmable foundation models

openai life sciences

 

https://openaifoundation.org/news/update-on-the-openai-foundation

At the Foundation, we’ve identified three initial focus areas where we think this work could make a real difference:

  • AI for Alzheimer’s: Alzheimer’s is one of the hardest and most heartbreaking diseases families face – and one of the toughest problems in medicine. AI’s ability to reason across complex data could help researchers uncover new insights. We will be partnering with leading research institutions, with an initial focus on mapping disease pathways, detecting biomarkers for clinical care and clinical trials, and accelerating personalization of treatments – including, where possible, repurposing existing FDA-approved molecules.

  • Public Data for Health: Many of medicine’s biggest advances have been made possible by shared scientific data, and public access to data is essential to deliver the promise of AI for scientific breakthroughs. We will help partners create and expand open, high-quality datasets – and, where appropriate, help responsibly open previously closed ones – so researchers everywhere can leverage AI and use data to drive progress across diseases.

  • Accelerating Progress on High-Mortality and High-Burden Diseases: We believe AI can help lead to scientific breakthroughs, and lower the cost and risk of developing or repurposing therapies, particularly in high-mortality and high-burden disease areas that are underfunded. We will bring together AI researchers and disease experts, starting with a focused workshop to identify how best to empower scientists with AI tools and surface promising opportunities.

Saturday, March 7, 2026

DNABERT waterfield

 

todo: DNABERT2


DNABERT is installed.


  - Found repo: https://github.com/jerryji1993/DNABERT.git

  - Cloned to: DNABERT (commit b6da04e)

  - Created Conda env: dnabert (Python 3.6)

  - Installed:

      - DNABERT in editable mode (pip install -e .)

      - Example dependencies

      - pybedtools/pysam (via mamba + pip wheel fallback)

      - PyTorch stack (torch 1.10.2+cu102, torchvision 0.11.2)


  Verification passed in dnabert:


  - torch 1.10.2+cu102

  - transformers 2.5.0

  - pybedtools 0.8.1

  - pysam 0.19.0


  Use it with:


  source ~/miniforge3/etc/profile.d/conda.sh

  conda activate dnabert

  cd /home/xxx/github/DNABERT




Tuesday, March 3, 2026

waterfield GPUs

 

GPU Inventory Summary

GPU Type            Partition            Total Nodes   GPUs per Node   Total GPUs
NVIDIA RTX P6000    rtxp6000flex-1       30            1               30
                    rtxp6000flex-2       12            2               24
                    rtxp6000flex-4       8             4               32
                    rtxp6000flex-8       8             8               64
NVIDIA H100         h100flex-1           30            1               30
                    h100flex-2           8             2               16
                    h100flex-4           8             4               32
                    h100flex-8           8             8               64
NVIDIA H200         h200flex-8           8             8               64
NVIDIA B200         b200flex-8           2             8               16
NVIDIA MegaGPU*     reserved-clync008    1             8               8

Total Counts by Model

  • RTX P6000 Series: 150 GPUs

  • H100 Series: 142 GPUs

  • H200 Series: 64 GPUs

  • B200 Series: 16 GPUs

  • Reserved/Special: 8 GPUs
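The per-model totals can be sanity-checked against the per-partition rows with a short Python snippet (all figures taken from the inventory table above):

```python
# (partition, total_nodes, gpus_per_node) per GPU model, from the inventory table.
inventory = {
    "RTX P6000": [("rtxp6000flex-1", 30, 1), ("rtxp6000flex-2", 12, 2),
                  ("rtxp6000flex-4", 8, 4), ("rtxp6000flex-8", 8, 8)],
    "H100": [("h100flex-1", 30, 1), ("h100flex-2", 8, 2),
             ("h100flex-4", 8, 4), ("h100flex-8", 8, 8)],
    "H200": [("h200flex-8", 8, 8)],
    "B200": [("b200flex-8", 2, 8)],
    "MegaGPU (reserved)": [("reserved-clync008", 1, 8)],
}

# Total GPUs per model = sum over partitions of nodes * GPUs-per-node.
totals = {model: sum(nodes * per_node for _, nodes, per_node in rows)
          for model, rows in inventory.items()}
```

Running this reproduces the summary counts: 150 RTX P6000, 142 H100, 64 H200, 16 B200, and 8 reserved.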

Thursday, February 19, 2026

Moderna's mRNA flu vaccine

 The story of Moderna's mRNA flu vaccine (mRNA-1010) has recently shifted from a regulatory stalemate to a potential path forward, largely due to a rare public disagreement and subsequent reversal by the FDA.

The Conflict and Reversal

Initially, the FDA had expressed significant hesitation regarding Moderna's data, leading to what reports described as a public dispute between the agency and the pharmaceutical company. Moderna's vaccine had shown strong results against some flu strains (Influenza A) but was less effective against others (Influenza B).

However, in a notable turn of events, the FDA reversed its course and agreed to officially review the vaccine. This change of heart reportedly followed significant pressure from the White House, as the administration looks to modernize flu shot technology beyond traditional egg-based methods.

Key Milestones in the Story

  • The Technology: Unlike traditional flu shots, Moderna uses mRNA technology (the same used in their COVID-19 vaccine), which allows for faster manufacturing and the ability to update the vaccine more quickly to match circulating strains.

  • Mixed Data: Clinical trials showed the vaccine was highly effective at generating antibodies for the H1N1 and H3N2 strains but struggled to meet the "non-inferiority" bar for the B/Victoria and B/Yamagata strains compared to existing vaccines.

  • The Resolution: After the public back-and-forth, the FDA will now consider the new flu shot for approval, aiming to resolve the technical and regulatory disagreements that had previously stalled its progress.

Why This Matters

If approved, Moderna's vaccine would be the first mRNA flu shot on the market. This is seen as a crucial step toward their ultimate goal: a combination vaccine that protects against the flu, COVID-19, and potentially RSV in a single annual shot.


Would you like me to look up the latest clinical trial efficacy numbers for Moderna's combination flu/COVID vaccine?

In clinical trials, Moderna's flu vaccine (mRNA-1010) faced a specific regulatory hurdle: while it successfully induced a strong immune response against the "A" strains of the flu, it initially fell short of the required performance standards for the "B" strains.

To understand why this mattered, you have to look at the four specific targets of a "quadrivalent" (four-strain) flu shot:

1. The Targets: A vs. B Strains

Seasonal flu vaccines are designed to protect against four distinct versions of the virus that circulate in humans:

  • Influenza A (H1N1 and H3N2): These are the "heavy hitters." They are more common, mutate rapidly, and are responsible for most flu-related hospitalizations and all historical pandemics. mRNA-1010 performed exceptionally well here, showing results that were equal to or better than traditional shots.

  • Influenza B (Victoria and Yamagata): These lineages only infect humans and generally mutate more slowly. While they cause fewer hospitalizations in adults than Type A, they are a major cause of illness in children.

2. The "Non-Inferiority" Bar

In vaccine science, a new product doesn't always have to be "better" than what already exists; it often just needs to prove it is "not unacceptably worse." This is the non-inferiority bar.

  • The Test: Researchers compare the antibody levels (geometric mean titers) produced by the new vaccine against those produced by an already approved "gold standard" vaccine.

  • The Result: For the B/Victoria and B/Yamagata strains, Moderna’s early data showed that the antibody levels were lower than those produced by the traditional vaccines. They didn't miss by a lot, but they failed to meet the strict statistical threshold (the "bar") required to prove they were "just as good" as the shots people already get.
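As a rough Python sketch of the GMT comparison: the 1.5-fold margin below is the commonly cited FDA criterion for the upper confidence bound of the GMT ratio, the titers are made-up numbers (not trial data), and a real analysis tests the confidence interval rather than the point estimate:

```python
import math

def geometric_mean_titer(titers):
    """GMT: exponential of the mean log titer."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

def non_inferior(gmt_new, gmt_comparator, margin=1.5):
    """Point-estimate version of the GMT-ratio criterion: the comparator may not
    exceed the new vaccine by more than the margin. (Real trials apply this to
    the upper bound of the ratio's 95% CI.)"""
    return (gmt_comparator / gmt_new) <= margin

# Illustrative (made-up) antibody titers:
gmt_new = geometric_mean_titer([160, 320, 640, 320])        # new vaccine
gmt_comparator = geometric_mean_titer([320, 640, 640, 320]) # licensed comparator
```

"Missing by a little" means the ratio landed just above the margin, which is how a vaccine can produce respectable antibody levels and still fail the statistical bar.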

3. Why did the mRNA struggle with Type B?

This is a technical challenge scientists are still solving. mRNA vaccines work by teaching your cells to make a specific protein (the hemagglutinin or HA). For reasons not fully understood, the mRNA instructions for the Type B HA proteins didn't seem to "take" as strongly as the instructions for Type A.

Note: Since 2020, the B/Yamagata strain has actually disappeared from global circulation (likely due to COVID-19 lockdowns). As a result, the FDA and WHO have recently recommended moving back to trivalent (three-strain) vaccines that exclude Yamagata entirely.

Current Status

Moderna has since updated its formulation. Their latest Phase 3 data (from June 2025) showed that the newer version of mRNA-1010 finally achieved "superiority" or "non-inferiority" across all targeted strains, including B/Victoria. This breakthrough is what finally prompted the FDA to agree to review the vaccine for approval.


Would you like me to look up the specific side effects reported in these mRNA flu trials compared to traditional egg-based shots?