A friend is pissy about Calibre adding A.I. to the ebook manager, so they are creating a new fork called Clbre, with the A.I. stripped out.
"The co-degeneration thesis is not a prediction about distant futures. It describes dynamics already in motion, already documented in peer-reviewed research, already observable in the declining quality of online discourse and the increasing unreliability of AI systems that should, by simple scaling laws, only be improving.
The feedback loops are active. Engagement-optimized content degrades training data. Degraded models produce degraded outputs. Humans consuming and delegating to these systems experience cognitive effects that reduce their capacity to recognize and correct the degradation. The cycle continues.
But this is not a counsel of despair. The research also suggests intervention points. Model collapse can be prevented through data accumulation strategies that preserve genuine human content. Cognitive debt can be mitigated through usage protocols that maintain human engagement. Platform incentives can be restructured through regulation, competition, or user demand.
The question is whether institutional actors—corporations, governments, investors, educators—recognize the dynamics in time to intervene effectively, or whether they continue optimizing for metrics that accelerate the degradation."
https://substack.com/inbox/post/180851372?r=6p7b5o&utm_medium=ios&triedRedirect=true
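The degradation loop the excerpt describes (engagement-optimized filtering feeding back into retraining) can be sketched as a toy simulation. This is purely illustrative, not the setup of any of the cited research; the names `engagement_filter` and `retrain` and all the numbers are invented for the sketch.

```python
from collections import Counter

def engagement_filter(counts: Counter, keep_frac: float = 0.5) -> Counter:
    """Engagement-optimized curation: keep only the most frequent items."""
    k = max(1, int(len(counts) * keep_frac))
    return Counter(dict(counts.most_common(k)))

def retrain(counts: Counter, corpus_size: int = 10_000) -> Counter:
    """'Model' regenerates a corpus proportional to the surviving distribution."""
    total = sum(counts.values())
    return Counter({item: round(corpus_size * c / total) for item, c in counts.items()})

# Zipf-like initial corpus of 64 distinct phrases
corpus = Counter({f"phrase_{i}": 10_000 // (i + 1) for i in range(64)})

diversity = [len(corpus)]
for _ in range(4):
    corpus = retrain(engagement_filter(corpus))
    diversity.append(len(corpus))

print(diversity)  # the count of distinct phrases shrinks every generation
```

Each pass through the loop discards the tail of the distribution, and the retrained "model" can only reproduce what survived, which is the one-way ratchet the post is pointing at.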
@datum @pluralistic Pharmaceutical companies in the #EU are required by law to have a "Qualified Person" (QP). Their sole purpose is to sign off on the entire (incredibly complex) manufacturing process of the drugs the company produces every day and to take the blame (i.e. get fired) in case of errors or costly product recalls.
Overseeing and signing off on #AI processes, however, sounds even worse, as it is literally impossible for a person to understand the inner workings of the black box...
The value of this #AI critique by @pluralistic is its precision, not just the focus on the #AIbubble but the drivers and consequences of that bubble. Thank you.
https://pluralistic.net/2025/12/05/pop-that-bubble/
The value of this #AI critique by @pluralistic is its precision, not just the focus on the #AIbubble but the drivers and consequences of that bubble. Thank you.
https://pluralistic.net/2025/12/05/pop-that-bubble/
I feel like the general populace might not realize the importance of this idea that @pluralistic shares:
what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.
this human (sometimes only nominally) in the loop is central to law as a whole not being broken
it's also of utmost importance for weapons of war, where AI actually is having life and death impact right now, with non-hypotheticals like "how do we make sure the system doesn't kill innocents without repercussions"
because if there are no repercussions, the system will end up externalizing harms on the way to maximizing other metrics
There's a new official Government of Canada petition to bring in the same likeness rights that Denmark recently passed, protecting people from having their body or voice used by AI without their consent.
If you're Canadian, sign and share!
https://www.ourcommons.ca/petitions/en/Petition/Details?Petition=e-7002
#DigitalPrivacy #Privacy #AI #AIethics #CApoli #Canada #CopyrightLaw #Copyright
Happy 25th anniversary to this Daily Mail article from the year 2000, proclaiming that internet "may be just a passing fad as millions give up on it".
@stefan Remember that time when the US built all those data centers for AI and then realized that LLMs didn’t work after all?
@pluralistic Such a great piece! Looking forward to the book too!
It hit so many things that are so important, I particularly like this part
#AI #AISlop
#TIL about the term #ReverseCentaur in @pluralistic's piece: "The Reverse Centaur's Guide to Criticizing #AI"
https://pluralistic.net/2025/12/05/pop-that-bubble/
Basically a very dark pattern where the human isn't using the AI tool as a helper, but is kept in the loop to take the blame for its failures.
While on one level this story of an AI-generated picture of a bridge collapse (after the earthquake centred near Morecambe Bay on Wednesday night) leading to train cancellations seems relatively minor, it's an example of the sort of disruption that the use of AI is already generating. We may soon find that having no real trust in any images causes all sorts of social problems (which is not to say that disruption is not already evident).
NVIDIA, meanwhile, has announced CUDA Tiles, a new programming paradigm which it says will make writing portable yet performant (NVIDIA) GPU-accelerated programs easier - and the first language to benefit is Python, via cuTile Python.
Right, last #Hackster round-up of the week, and we're starting with an all-in-one development board from RT-Thread which comes running an on-board local large language model, because of course it does:
Today's threads (a thread)
Inside: The Reverse Centaur’s Guide to Criticizing AI; and more!
Archived at: https://pluralistic.net/2025/12/05/pop-that-bubble/
1/