
In terms of reasoning models like QwQ or Qwen 3, I didn't spend much time trying to improve their results beyond coming up with various ways to constrain their reasoning token output with prompts.
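For what it's worth, the prompt-side constraining was about as simple as this (a minimal sketch against an OpenAI-compatible local server such as llama.cpp or vLLM; the endpoint, model name, and word limit are placeholders, not anything canonical):

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

    resp = client.chat.completions.create(
        model="qwq-32b",  # whatever name your server exposes
        messages=[
            {"role": "system", "content": (
                "Think step by step inside <think></think>, but keep the "
                "thinking under roughly 200 words. Stop reasoning as soon "
                "as you are confident and give the final answer."
            )},
            {"role": "user", "content": "Is 2^61 - 1 prime?"},
        ],
        max_tokens=1024,  # hard ceiling in case the prompt is ignored
    )
    print(resp.choices[0].message.content)

The max_tokens ceiling matters because these models don't reliably honor word limits stated in the prompt.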

Even though Gemma 3 27B QAT is not a reasoning model, it's so good at instruction following that it works well in LLM chains/routes: you can use it for classification/language-optimization steps before instructing it how to reason about the prompt in the next step. You can even have it output intermediate answers interspersed between multiple think tags in the same response. For these models I basically define thinking as any tokens that help the model arrive at the conclusion but are not fully formed parts of the answer.
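Concretely, the kind of two-step chain I mean looks something like this (again just a sketch; model name, prompts, and the example question are all illustrative):

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

    def chat(system, user):
        r = client.chat.completions.create(
            model="gemma-3-27b-it-qat",  # whatever your server exposes
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}],
        )
        return r.choices[0].message.content

    raw = "why does my recursive fib blow up for n=50"

    # Step 1: classify and normalize the prompt before any reasoning.
    cleaned = chat(
        "Classify the question (math/code/other) and restate it as one "
        "precise sentence. Output exactly: CATEGORY: ... QUESTION: ...",
        raw)

    # Step 2: tell it how to reason, with interspersed think tags and
    # plain-text intermediate answers between them.
    answer = chat(
        "Work in stages. Before each stage, put your working inside "
        "<think></think> tags, then state that stage's intermediate "
        "answer in plain text, then continue. Finish with FINAL: <answer>.",
        cleaned)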

Instructing it to preferentially use certain words (tokens) and types of phrasing is known to improve results in general, not just in LLMs, and I've seen improved results by encouraging certain types of language. AutoThink's use of the highest-performing tokens from a dataset _could_ be a nice way to optimize toward that in a more general way.
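A toy version of that selection idea (my own sketch, nothing like AutoThink's actual internals) would be to score candidate steering phrases against a small dev set and keep the winners for future system prompts:

    # Reuses the chat() helper from the snippet above; the dev_set
    # pairs and phrases are placeholders.
    dev_set = [("What is 17 * 24?", "408"),
               ("Is 91 prime?", "no")]

    phrases = ["Verify each step before moving on.",
               "Restate the problem precisely first.",
               "Consider edge cases explicitly."]

    def score(phrase):
        hits = sum(expected.lower() in chat(phrase + " Answer concisely.", q).lower()
                   for q, expected in dev_set)
        return hits / len(dev_set)

    best = max(phrases, key=score)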

It seems like there's a risk of using so many pivotal tokens that it almost overfits responses to benchmark questions, though. So, while I've personally seen careful word/token selection improve result quality, and see it as a potentially low-cost, high-return optimization, I'd still want to see how AutoThink generalizes.


