Personas of librarians and their response to AI - part 1
Note: This is all tongue in cheek and meant to poke fun at ourselves (librarians). Any resemblance to any individual is purely coincidental.
Part 2 will roast the more “pro AI” librarian personas. Even of these 5, I have bits of 2 and 5 at the very least.
Group 1: The Refusenik
"I've never used it and I never will."
Proudly never uses "AI" or “Gen AI” knowingly—or may have used it once, in the stone ages (late 2022). Still opines as an expert on everything AI and on its usefulness (or more precisely, the lack thereof). Vocabulary limited to "autocomplete on steroids," "blurry JPEG of the web," "stochastic parrot." Can name-drop Emily Bender but hasn't actually read a single paper or engaged with her actual arguments and counterarguments.
(Often overlaps with Group 4. May evolve into Group 2 when forced to engage.)
Group 2: The Taxonomist
"Actually, that's not really AI, that's an LLM"
Lives for Venn diagrams of ML/AI/deep learning. Conducts "research" surveying other equally clueless librarians on definitions. Plenty of digital ink spilt on definitional questions ("What really is Gen AI?") with no practical implications. Main purpose: signalling superiority, because they know what "AI" is and you don't.
(Shares DNA with Group 5—both gatekeep through jargon. Offers Group 1 a way to engage without using the actual technology. Often Group 3 in disguise.)
Group 3: The Gen AI Exceptionalist
"I'm not against AI—just generative AI."
Fine with AI in principle—just not generative AI. Machine learning for classification? Grand. Neural networks for relevance ranking? No problem. Automated metadata enrichment? Wonderful. But the moment an LLM is mentioned, starts foaming at the mouth as if it's an existential threat. Cannot articulate why generation is categorically different from prediction, ranking, or classification—all involve probabilistic outputs. Doesn't realise LLMs can perform classification, ranking, and extraction too—so the boundary they've drawn doesn't even map onto the technology they fear. May invoke hallucination, but is untroubled by false positives in traditional search. May invoke copyright, but is unconcerned about training data for embeddings. The line is vibes-based but defended with conviction.
(A more politically palatable version of Group 1. Often co-presents with Group 4.)
Group 4: The Ethical Absolutist
"But have you considered the environmental cost?"
Every discussion must centre on bias, environmental cost, and exploited labour. Valid concerns deployed as conversation-enders, not factors to weigh. Cannot distinguish "limitations to consider" from "must never use." Nuance is complicity.
Rare variants will acknowledge AI is useful but question the cost; most will deny AI is useful at all, adding insult to injury.
Subvariants focus on energy cost, IP/copyright, the now-debunked water cost, and more recently, the impact on learning (cognitive offloading). They rarely mention the one thing actually driving the resistance—the sneaking fear for their own jobs.
(Provides intellectual cover for Groups 1 and 3. May privately be Group 1 but knows outright refusal looks bad.)
Group 4b variant: The Schrödinger Sceptic
"It's just a stochastic parrot that's coming for our jobs."
AI is simultaneously useless and an existential threat. Outputs are worthless slop that nobody should trust, yet somehow capable of replacing skilled professionals. The technology that can't summarise an article properly will nonetheless render librarians obsolete by Thursday.
Will pivot between positions within a single conversation depending on which better supports the current objection. Points out hallucinations as proof of fundamental brokenness; warns of mass unemployment in the next breath.
Never pauses to ask: if it's truly that bad, why would anyone pay to replace us with it?
Group 5: The Seen-it-all Pseudo Technical Sage
"Oh, this? We were doing this in the 90s."
Plays the wise sage: everything is "old hat." Name-drops neural search, discriminative AI, BM25, SLMs, ANN—and when challenged, invokes more jargon. Pretends to understand top NLP conference papers but misinterprets them when sharing on Bluesky or LinkedIn. Cannot actually explain what any of these terms really mean, but delivers dismissals with the weary confidence of someone who's seen it all before. They haven't.
Often leverages a related but distinct domain (e.g. information literacy or evidence synthesis) to stake a claim as the “AI expert” despite never actually studying AI.
(The credentialed cousin of Group 2. Both gatekeep; this one has a bigger CV to hide behind.)
Now let’s roast the pro-AI librarians in part 2