Thursday, December 14, 2023

Q star algorithm

 

https://twitter.com/DrJimFan/status/1728100123862004105

In my decade spent on AI, I've never seen an algorithm that so many people fantasize about. Just from a name, no paper, no stats, no product. So let's reverse engineer the Q* fantasy. VERY LONG READ:

To understand the powerful marriage between Search and Learning, we need to go back to 2016 and revisit AlphaGo, a glorious moment in AI history. It has 4 key ingredients:

1. Policy NN (Learning): responsible for selecting good moves. It estimates the probability of each move leading to a win.

2. Value NN (Learning): evaluates the board and predicts the winner from any given legal position in Go.

3. MCTS (Search): stands for "Monte Carlo Tree Search". It simulates many possible sequences of moves from the current position using the policy NN, and then aggregates the results of these simulations to decide on the most promising move. This is the "slow thinking" component that contrasts with the fast token sampling of LLMs. (A minimal sketch follows right after this section.)

4. A groundtruth signal to drive the whole system. In Go, it's as simple as the binary label "who wins", which is decided by an established set of game rules. You can think of it as a source of energy that *sustains* the learning progress.

How do the components above work together? AlphaGo does self-play, i.e. playing against its own older checkpoints. As self-play continues, both the Policy NN and the Value NN improve iteratively: as the policy gets better at selecting moves, the value NN obtains better data to learn from, and in turn it provides better feedback to the policy. A stronger policy also helps MCTS explore better strategies. That completes an ingenious "perpetual motion machine". In this way, AlphaGo was able to bootstrap its own capabilities and beat the human world champion, Lee Sedol, 4-1 in 2016. An AI can never become superhuman by imitating human data alone.
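To make the MCTS ingredient (item 3 above) concrete, here is a minimal UCT-style sketch on a toy game of Nim. Everything in it (the game, the random rollout standing in for a learned value network, the constants, the function names) is an illustrative assumption, not AlphaGo's actual implementation.

```python
# A minimal UCT-style Monte Carlo Tree Search sketch on a toy game of Nim
# (take 1-3 stones, taking the last stone wins), just to make the "slow
# thinking" search loop concrete. The random rollout is a crude stand-in
# for AlphaGo's learned value network; nothing here is AlphaGo's real code.
import math
import random


class Node:
    def __init__(self, pile, to_move, parent=None, action=None):
        self.pile, self.to_move = pile, to_move
        self.parent, self.action = parent, action
        self.children, self.visits, self.wins = [], 0, 0.0


def legal_actions(pile):
    return [a for a in (1, 2, 3) if a <= pile]


def uct_child(node, c=1.4):
    # Balance exploitation (win rate) against exploration (rarely visited moves).
    return max(node.children, key=lambda ch: ch.wins / (ch.visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))


def rollout(pile, to_move):
    # Random playout: a crude stand-in for a learned value network.
    player = to_move
    while pile > 0:
        pile -= random.choice(legal_actions(pile))
        if pile == 0:
            return player        # this player took the last stone and wins
        player = 1 - player
    return 1 - to_move           # pile was already empty: the previous mover won


def mcts(root_pile, to_move, n_simulations=2000):
    root = Node(root_pile, to_move)
    for _ in range(n_simulations):
        # 1. Selection: descend with UCT until reaching a leaf.
        node = root
        while node.children:
            node = uct_child(node)
        # 2. Expansion: add a child for every legal move at the leaf.
        if node.pile > 0:
            for a in legal_actions(node.pile):
                node.children.append(Node(node.pile - a, 1 - node.to_move, node, a))
            node = random.choice(node.children)
        # 3. Simulation: estimate the winner from this position.
        winner = (1 - node.to_move) if node.pile == 0 else rollout(node.pile, node.to_move)
        # 4. Backpropagation: credit wins from the perspective of the player who moved in.
        while node is not None:
            node.visits += 1
            if winner != node.to_move:
                node.wins += 1.0
            node = node.parent
    # Play the most-visited move, as AlphaGo-style MCTS typically does.
    return max(root.children, key=lambda ch: ch.visits).action


print("MCTS recommends taking", mcts(11, to_move=0), "stones")
```

In AlphaGo the rollout and the move priors are replaced by the Value NN and Policy NN, which is exactly what makes the search both stronger and cheaper than brute-force simulation.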

-----

Now let's talk about Q*. What are the corresponding 4 components?

1. Policy NN: this will be OAI's most powerful internal GPT, responsible for actually implementing the thought traces that solve a math problem.

2. Value NN: another GPT that scores how likely each intermediate reasoning step is to be correct. OAI published a paper in May 2023 called "Let's Verify Step by Step", coauthored by big names like Ilya Sutskever: arxiv.org/abs/2305.20050. It's much less well known than DALL-E or Whisper, but it gives us quite a lot of hints. The paper proposes "Process-supervised Reward Models", or PRMs, which give feedback for each step in the chain-of-thought. In contrast, "Outcome-supervised Reward Models", or ORMs, only judge the entire output at the end. ORMs are the original reward model formulation for RLHF, but they are too coarse-grained to properly judge the sub-parts of a long response. In other words, ORMs are not great for credit assignment. In RL literature, we call the ORM a "sparse reward" (given only once at the end), while the PRM is a "dense reward" that more smoothly shapes the LLM toward our desired behavior. (A toy PRM-vs-ORM scoring sketch follows after this list.)

3. Search: unlike AlphaGo's discrete states and actions, LLMs operate on a much more sophisticated space of "all reasonable strings", so we need new search procedures. Expanding on Chain of Thought (CoT), the research community has developed a few nonlinear CoT variants:

- Tree of Thought: literally combining CoT and tree search: arxiv.org/abs/2305.10601

- Graph of Thought: yeah, you guessed it already. Turn the tree into a graph and voilà! You get an even more sophisticated search operator: arxiv.org/abs/2308.09687 (A toy search sketch in this style follows after this list.)

4. Groundtruth signal: a few possibilities:

(a) Each math problem comes with a known answer. OAI may have collected a huge corpus from existing math exams or competitions.

(b) The ORM itself can be used as a groundtruth signal, but then it could be exploited and "lose energy" to sustain learning.

(c) A formal verification system, such as the Lean Theorem Prover, can turn math into a coding problem and provide compiler feedback: lean-lang.org (A tiny Lean example follows below.)

And just like AlphaGo, the Policy LLM and Value LLM can improve each other iteratively, as well as learn from human expert annotations whenever available. A better Policy LLM will help the Tree of Thought search explore better strategies, which in turn collects better data for the next round.
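To make item 2 concrete, here is a toy sketch of the PRM vs ORM distinction. The per-step verifier is a made-up stub; in the actual paper the scores come from a trained reward model, not a string heuristic.

```python
# A toy illustration of process supervision (PRM) vs outcome supervision
# (ORM), in the spirit of "Let's Verify Step by Step". The per-step scorer
# below is a made-up stub, not the paper's trained reward model.
import math
from dataclasses import dataclass


@dataclass
class ScoredSolution:
    steps: list[str]          # the chain-of-thought, one reasoning step per entry
    step_scores: list[float]  # PRM view: dense feedback, one score per step
    outcome_score: float      # ORM view: a single sparse score for the whole answer


def stub_step_verifier(step: str) -> float:
    """Placeholder for a learned per-step verifier (an assumption, not the real PRM)."""
    return 0.95 if "=" in step else 0.6   # toy heuristic only


def score_solution(steps: list[str]) -> ScoredSolution:
    step_scores = [stub_step_verifier(s) for s in steps]
    # The paper aggregates per-step correctness probabilities into one
    # solution-level score (the probability that every step is correct);
    # a product is one simple way to express that.
    outcome_score = math.prod(step_scores)
    return ScoredSolution(steps, step_scores, outcome_score)


solution = [
    "Let x be the unknown number.",
    "Then 2x + 3 = 11, so 2x = 8.",
    "Therefore x = 4.",
]
scored = score_solution(solution)
print(scored.step_scores)    # dense, step-level feedback (PRM-style)
print(scored.outcome_score)  # one scalar at the very end (all an ORM-style signal gives)
```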
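And for item 3, a toy Tree-of-Thoughts-style search: a beam search over partial chains of thought, where a stub proposer stands in for the Policy LLM and a stub scorer stands in for the Value LLM / PRM. This is a sketch of the idea under those assumptions, not the ToT paper's implementation.

```python
# A toy Tree-of-Thoughts-style search: at every node, propose a few candidate
# next reasoning steps, score the partial chains with a value function, and
# keep only the most promising branches. The proposer and scorer are stubs;
# in the speculation above, the Policy LLM would propose steps and the
# Value LLM / PRM would score them.
import heapq
import random


def propose_steps(chain: list[str], k: int = 3) -> list[str]:
    """Stub for the Policy LLM: return k candidate next reasoning steps (assumption)."""
    depth = len(chain)
    return [f"step {depth + 1}, option {i}" for i in range(k)]


def score_chain(chain: list[str]) -> float:
    """Stub for the Value LLM / PRM: score a partial chain of thought (assumption)."""
    return random.random()   # toy scorer; a real one would actually read the chain


def tree_of_thought_search(max_depth: int = 3, beam_width: int = 2) -> list[str]:
    frontier = [[]]                       # partial chains kept at the current depth
    best_chain, best_score = [], float("-inf")
    for _ in range(max_depth):
        candidates = []
        for chain in frontier:
            for step in propose_steps(chain):
                new_chain = chain + [step]
                s = score_chain(new_chain)
                candidates.append((s, new_chain))
                if s > best_score:
                    best_score, best_chain = s, new_chain
        # Keep only the beam_width highest-scoring partial chains.
        frontier = [chain for _, chain in heapq.nlargest(beam_width, candidates)]
    return best_chain


print(tree_of_thought_search())
```

In the iterative loop the thread describes, the highest-scoring chains found by this kind of search would then be fed back as training data for both the Policy LLM and the Value LLM in the next round.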
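And for option (c) in item 4, a tiny Lean 4 illustration of how formal verification turns math into a compile-or-fail coding problem. This is my own toy example, not anything from OAI's pipeline.

```lean
-- A toy Lean 4 example of option (c): the theorem statement is the math
-- problem, the proof is the model's proposed answer, and whether the file
-- compiles is the binary groundtruth signal.
theorem two_plus_two : 2 + 2 = 4 := by
  rfl

theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A wrong answer simply fails to compile: e.g. claiming `2 + 2 = 5 := by rfl`
-- is rejected by the compiler, giving clean automatic feedback.
```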

Demis Hassabis said a while back that DeepMind's Gemini will use "AlphaGo-style algorithms" to boost reasoning. Even if Q* is not what we think, Google will certainly catch up with their own version. If I can think of the above, they surely can.

Note that what I described is just about reasoning. Nothing says Q* will be more creative at writing poetry, telling jokes, or role playing. Improving creativity is a fundamentally human thing, so I believe natural data will still outperform synthetic data.
