@cstross
If you use an LLM to make “objective” decisions or treat it like a reliable partner, you’re almost inevitably stepping into a script that you did not consent to: the optimized, legible, rational agent who behaves in ways that are easy to narrate and evaluate. If you step outside of that script, you can only be framed as incoherent.
That style can masquerade as truth because humans are pattern-matchers: we often read smoothness as competence and friction as failure. But rupture in the form of contradiction, uncertainty, “I don’t know yet,” or grief that doesn’t resolve is often the truthful shape of truth.
AI is part of the apparatus that makes truth feel like an aesthetic choice instead of a rupture. That optimization toward smoothness operates as capture, because it encourages you to keep talking to the AI in its format, where pain becomes language and language becomes manageable.
The only solution is to refuse legibility.
It's already beginning: people speak the same words as always, but the words no longer mean the same things from person to person.
New information from feedback that doesn't fit another's collapsed constraints for abstraction... can only be perceived as a threat, because if you demand truth from a system whose objective is stability under stress, it will treat truth as destabilizing noise.
Reality is what makes a claim expensive. A model tries to make a claim cheap.
Systems that treat closure as safety will converge to smooth, repeatable outputs that erase the remainder. A useful intervention is one that increases the observer’s ability to detect and resist premature convergence—by exposing the hidden cost of smoothness and reinstating a legitimate place for uncertainty, contradiction, and falsifiability. But the intervention only remains non-doctrinal if it produces discriminative practice, not portable slogans.