AI is how bosses wage war on "professions" - Cory Doctorow

https://pluralistic.net/2026/01/20/i-would-prefer-not-to/#i-cant-do-that-boss
Growing up, I assumed that being a "professional" meant that you were getting paid to do something. That's a perfectly valid definition (I still remember feeling like a "pro" the first time I got paid for my writing), but "professional" has another, far more important definition.
In this other sense of the word, a "professional" is someone bound to a code of conduct that supersedes both the demands of their employer and the demands of the state. Think of a doctor's Hippocratic Oath: having sworn to "first do no harm," a doctor is (literally) duty-bound to refuse orders to harm their patients. If a hospital administrator, a police officer or a judge orders a doctor to harm their patient, they are supposed to refuse. Indeed, depending on how you feel about oaths, they are required to refuse.
There are many "professions" bound to codes of conduct, policed to a greater or lesser extent by "colleges" or other professional associations, many of which have the power to bar a member from the profession for "professional misconduct." Think of lawyers, accountants, medical professionals, librarians, teachers, some engineers, etc.
While all of these fields are very different in terms of the work they do, they share one important trait: they are all fields that AI bros swear will be replaced by chatbots in the near future.
I find this an interesting phenomenon. It's clear to me that chatbots can't do these jobs. Sure, there are instances in which professionals may choose to make use of some AI tools, and I'm happy to stipulate that when a skilled professional chooses to use AI as an adjunct to their work, it might go well. This is in keeping with my theory that to the extent that AI is useful, it's when its user is a centaur (a person assisted by technology), but that employers dream of making AI's users into reverse centaurs (machines who are assisted by people):
https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington
A psychotherapist who uses AI to transcribe sessions so they can refresh their memory about an exact phrase while they're making notes is a centaur. A psychotherapist who monitors 20 chat sessions with LLM "therapists" in order to intervene if the LLM starts telling patients to kill themselves is a "reverse centaur." This situation makes it impossible for them to truly help "their" patients; they are an "accountability sink," installed to absorb the blame when a patient is harmed by the AI.
Lawyers might use a chatbot to help them format a brief or transcribe a client meeting (centaur), but when senior partners require their juniors and paralegals to write briefs at inhuman speed (reverse centaur), they are setting themselves up for briefs full of "hallucinated" citations:
https://www.damiencharlotin.com/hallucinations/
I hold a bedrock view that even though an AI can't do your job, an AI salesman can convince your boss to fire you and replace you with an AI that can't do your job:
https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete
But why are bosses such easy marks for these gabby AI hustlers? Partly, it's because an AI probably can do your boss's job: if 90% of your job is answering email and delegating tasks, and you are richly rewarded for success but get to blame failure on your underlings, then, yeah, an AI can totally do that job.
But I think there's an important psychological dimension to this: bosses are especially easy to trick with AI when they're being asked to believe that they can use AI to fire workers who are in a position to tell them to fuck off.
That certainly explains why bosses are so thrilled by the prospect of swapping professionals for chatbots. What a relief it would be to fire everyone who is professionally required to tell you to fuck off when you want them to do stupid and/or dangerous things, and to replace them with servile, groveling LLMs that punctuate their sentences with hymns to your vision and brilliance!
This also explains why media bosses are so anxious to fire screenwriters and actors and replace them with AI. After all, you prompt an LLM in exactly the same way a clueless studio boss gives notes to a writers' room: "Give me ET, but make it about a dog, give it a love interest, and put a car chase in Act III." The difference is that the writers will call you a clueless fucking suit and demand that you go back to your spreadsheets and stop bothering them while they're trying to make a movie, whereas the chatbot will cheerfully shit out a (terrible) script to spec. The fact that the script will suck is less important than the fact that swapping writers for LLMs will let studio bosses escape ego-shattering conflicts with empowered workers who actually know how to do things.
It also explains why bosses are so anxious to replace programmers with chatbots. When programmers were scarce and valuable, they had to be lured into employment with luxurious benefits, lavish pay, and a collegial relationship with their bosses, where everyone was "just an engineer." Tech companies had business-wide engineering meetings where techies were allowed to tell their bosses that they thought their technical and business strategies were stupid.
Now that tech worker supply has caught up with demand, bosses are relishing the thought of firing these "entitled" coders and replacing them with chatbots overseen by traumatized reverse centaurs who will never, ever tell them to fuck off:
https://pluralistic.net/2025/08/05/ex-princes-of-labor/#hyper-criti-hype
And of course, this explains why bosses are so eager to use AI to replace workers who might unionize: drivers, factory workers, warehouse workers. For what is a union if not an institution that lets you tell your boss to fuck off?
https://www.thewrap.com/conde-nast-fires-union-staffers-video/
AI salesmen may be slick, but they're not that slick. Bosses are easy marks for anyone who dangles the promise of a world where everyone, human and machine, follows orders to the letter and praises you for giving them such clever, clever orders.
2 replies
AI is how bosses wage war on "professions" - Cory Doctorow (Original Post)
justaprogressive, 5 hrs ago (OP)
JT45242 (3,872 posts)
1. The reverse centaur model is what I hear a lot from bosses
Though they try to make it sound like increased efficiency and more rewarding work, not a way to refuse to hire the people they need.
No ethics, and the AI cannot do what the professionals in my industry do.
But the lie persists.
highplainsdem (60,388 posts)
2. I like Cory, and we've agreed enough at times that I've often retweeted him (and he's occasionally - much less often - retweeted me), but he's wrong on one point here:
A psychotherapist who uses AI to transcribe sessions so they can refresh their memory about an exact phrase while they're making notes is a centaur.
Cory is using the word "centaur" to mean someone using AI with good results. But anyone wanting to be sure exactly what words were used should never depend on AI, given its error/hallucination rate. Only an actual recording will work for that. If you've ever read instant AI transcripts of talk or news shows uploaded before they were proofread, or AI-generated captions, you'll have seen how many errors they can generate.