@glyph this thread feels important
@glyph Asbestos - to use a comparison he has used himself - was also a 'normal technology'.
But then people came to their senses and decided to rip it out of the walls because of the effects of exposure to it.
See also: Lead paint
The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. It pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/
@glyph You're so extremely full of yourself that I didn't even finish reading your comment, and I no longer care about anything you have to say. Go touch grass.
Cory also correctly points out that "AI psychosis" is probably going to be gatekept by medical-establishment scicomm types soon, because "psychosis" probably isn't the right word and already carries an unwarranted stigma. And indeed, I think the biggest problem with "psychosis" as a metaphor is going to be that the ways in which AI can warp our minds are mostly NOT going to be catastrophic psychosis, and are not going to have good analogs in the existing medical literature.
@glyph if this were a tablet/pill that promised to help you write better emails, and it accidentally caused psychological disorders, it would be put through rigorous testing before being loaded onto a dishwasher or pushed into every system available. This is the Sacklers of tech, grifting every last cent out of people who are already struggling, promising all the while that it’s non-addictive.
@glyph LLMs seem to use many of the same techniques as mentalists, psychics, fortune-tellers and mediums in how they manipulate their victims - suggestion, cold reading, flattery, confidence - and they exploit the victims' confirmation bias and suggestibility. People are influenced, by the politeness and the well-structured text, into ignoring factual issues; then, by having a conversation, they fix the glaring problems themselves and later attribute the fix to the model.
@glyph Honestly - speaking as someone with a psychotic disorder, but who is not a medical professional - "AI psychosis" seems pretty appropriate, from the behaviours I've seen it result in? Even in milder cases of people babbling inane bullshit, but not, like, so far off reality that they're at risk of physical harm (to themselves or others)
@glyph I'm sure the mechanism - how they got there - has more in common with emotional abuse and brainwashing/indoctrination techniques.
But the end result - that detachment from reality - is kind of the core of the psychosis experience, and trying to find ways to keep tethered and avoid drifting off into wonderland like that is a persistent part of my day-to-day life.
Which is part of *why* I avoid the chatbots like they're carrying the plague.
@miss_rodent to be clear, I think the existing cases that have been described are fairly accurately described as AI psychosis; a bunch of them even fall under the clinical definition. my point is that it falls on a continuum, and there is a wide variety of less dramatic cognitive distortions which don’t look like psychosis but are caused by AI. case in point, the Scientific American article: “having one’s views on an issue gently nudged by an autocomplete suggestion” isn’t “psychosis”
@glyph Similarly, “hallucination” and “delusion” are pre-poisoned for use in this scope
I have on occasion made use of “phantasmagoria” around parts of this dynamic, especially for stuff like the droll “omg the AI is learning to lie to us, we’re cooked!” type bullshit posts, but that’s still not expansive enough to include the various other mental afflictions
we need other perorations, and better perseverations alongside
If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.
@glyph All the possible harm is just mental, and the mental - this seems to be an unspoken tenet held by many - isn't really real. Mental health in general doesn't really seem to be taken that seriously before it manifests physically as chainsaw wielding or some other eccentricity. Nothing to see here, just move on.
@glyph I've been using "AI delusion" for these milder cases. As I understood it, "AI psychosis" pertains only to those cases where people fully lose their grasp on reality...
I've seen it used colloquially as "being wrong because of or about AI", but that always hit me like people calling someone "crazy" for doing something odd or impulsive—and that word use isn't really a good look imo.
More to the point though in this metaphor where you're getting a potentially-infected scrape at work, we are living in the pre-germ-theory age of AI. We are aware that it might be dangerous sometimes, but we don't know to whom or why. We are attempting to combat miasma with bloodletting right now, and putting the miasma-generator in every home before we know what it's actually doing.
For me, this is the body horror money quote from that Scientific American article:
"participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"
So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.
If you can see it, the basilisk has already won.
@glyph i like to let them sort it out - ask the same question to like 3 models, sort of crude arbitrage
@glyph This basilisk thing (great imagery) is very true in translation. Once you've seen the MT suggestion, with its wonky syntax and not quite right tone, it's very hard to dismiss it. The cognitive load is consequently enormous
@glyph when Teams autocorrect rewrites something it decides i misspelled, i am filled with hatred and disgust and usually delete the entire sentence and try again, regardless of whether it had suggested the word i meant to write. i don't want it anymore
@glyph i can absolutely use it responsibly because i'm not new to NLP, but unfortunately it is liquified shite.
@glyph Glyph's Basilisk > Roko's
@glyph "Interestingly, the people in the study didn’t tend to think the AI autocomplete suggestions were biased or to notice that they had changed their own thinking on an issue in the course of the study. Warning the participants that they might be exposed to misinformation by the AI didn’t temper the persuasive effect either."
Also, being aware that it can fuck with your head does not make you less susceptible to it fucking with your head. So you can't really judge if you could use it safely.
@glyph oh man, that’s dangerously close to giving an abstinence-based push some fuel and whew does my rather hedonistic ass have some thoughts on that
Now, for rhetorical effect, I'm obviously putting this fairly dramatically. Cory points out that people have been doing this *to each other* mediated by technology, in emergent and scary ways, with no need for AI. He shows that people prone to specific types of delusions (Morgellons, Gang Stalking Disorder) have found each other via the Internet and the simple availability of global distributed communication has harmed them. But obviously that has benefits, too.