“even when we provide the algorithm in the prompt—so that the model only needs to execute the prescribed steps—performance does not improve”
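For context on what "provide the algorithm in the prompt" means here: one of the paper's testbeds is, if I recall correctly, Tower of Hanoi, which has a fully mechanical recursive solution, so "executing the prescribed steps" requires no search or insight at all. A minimal sketch of that kind of algorithm (my own illustration, not the paper's actual prompt text):

```python
# The standard recursive Tower of Hanoi procedure. Executing it is pure
# step-following: move n-1 disks aside, move the largest, move n-1 back on top.
def hanoi(n, src, aux, dst, moves):
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # clear the top n-1 disks onto aux
    moves.append((src, dst))             # move the largest disk directly
    hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 disks onto dst

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves))  # 2^3 - 1 = 7 moves
```

The point of contention is whether a model handed a procedure like this can reliably trace it for larger n, since the step count grows as 2^n - 1.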
That indicates that this particular model does not follow instructions, not that it is fundamentally, architecturally incapable of doing so.
Not “this particular model”. Frontier LRMs: OpenAI’s o1/o3, DeepSeek-R1, Claude 3.7 Sonnet Thinking, and Gemini Thinking.
The paper shows that Large Reasoning Models as defined today cannot interpret instructions. Their architecture does not allow it.
Only those particular models. It does not prove the architecture doesn’t allow it at all. It’s still possible that this is solvable with a different training technique and that none of those models are using the right one; that’s what they would need to rule out.
This proves the issue is widespread, not that it is fundamental.
Isn’t “model” defined as architecture + weights? Those models certainly don’t share the same architecture. I might just be confused about your point, though.