A “reasoning AI” is a large language model — a chatbot — that gives you a list of steps it took to reach a conclusion. Or a list of steps it says it took. LLMs don’t know what a fact is — they just…
It’s kind of a distinction without much discriminatory power: LLMs are a tool created to ease the task of bullshitting, and they’re used by bullshitters to produce bullshit.
It’s bullshitting. That’s the word. Bullshitting is saying things without a care for how true they are.
The word “bullshitting” implies a clarity of purpose I don’t want to attribute to AI.
Yeah, that’s why people called it confabulating, not bullshitting.