Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
I’m suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent.
State law requires consent before someone’s name can be used for commercial purposes.
And here is the complaint, via evacide.
OT: an interesting musing I found on fedi:

the Pentagon’s CTO has AI psychosis now. sighhhhhhhhh
The whole argument can just be countered with “if the Pentagon believes Claude is sentient and a danger to the military, then why make a deal with OpenAI to use ChatGPT, another LLM similar to Claude? Wouldn’t that also be in danger of becoming sentient? And why are Pete Hegseth and Donald Trump planning to force Anthropic to comply after 6 months if they believe Claude shouldn’t be in the military? Why did you ask Anthropic to let you use Claude for mass surveillance and autonomous weapons if you believed it was sentient and a danger??”
It just reeks of bullshit. “uhm actually we made Anthropic a supply chain risk because Claude is actually very dangerous and not because we’re doing banana republic shit to anyone who disagrees with us. we are a very responsible and safe government. please dont impeach trump.”
I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and so want to soft-nationalize the frontier companies so as to control the coming AGI. Considering that one of the uses the DoD allegedly wants LLMs for is fully autonomous weapons, they at the very least have a very distorted view of what the technology is capable of. Or they want an accountability sink so they can kill people with even less accountability. …probably both.
I find it darkly hilarious that the doomer crit-hype is finally coming around to bite them, not in the form of heavy handed shut-it-all-down regulation to stop skynet, but in the form of authoritarian wackos wanting to make sure they are the ones “in charge” of skynet.
I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and so want to soft-nationalize the frontier companies so as to control the coming AGI.
That is absolutely the reason, or at least part of it. See: Pete Hegseth Got His Happy Meal and how AGI-is-nigh doomers own-goaled themselves
It’s possible the attempt to shove AI into every nook and cranny of the Pentagon didn’t especially pan out, and since his face was all over that project, he’s desperate for a scapegoat.
Like, for sure he’d have had the logistics of the entire US Army running smoothly despite layoffs by now, if it weren’t for the wokies at Anthropic acting up.
Reading comments ’cause I was bored, I had the misfortune to stumble upon this horribly formatted piece of work, allegedly written by Claude.
FT reports from Amazon insiders that they’re investigating the role AI-assisted development has played in a spate of recent issues across both the store and AWS.
FT also links to several previous stories they’ve reported on related issues, and I haven’t had the time to breach the paywalls to read further, but the line that caught my eye was this:
The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of “Sev2s” — incidents requiring a rapid response to avoid product outages — each day as a result of job cuts.
To be honest, this is why I’m skeptical of the argument that the AI-linked job losses are a complete fabrication. Not because the systems are actually there to directly replace the lost workers, but because the decision-makers at these companies seem to legitimately believe that these new AI tools will let their remaining workforce cover any gaps left by the layoffs they wanted to do anyways. It sounds like Amazon is starting to feel the inverse relationship between efficiency and stability, and I expect it’s only a matter of time before the wider economy starts to feel it too. Whether the owning class recognizes what’s happening is, of course, a different story.
So oil prices are down again, and on nothing but a promise from Trump and a promise from the EU. The economy has proved remarkably resilient, from where I sit; the attack on Iran was, like, wild nonsense number 17 from the US regime that I thought would trigger a major recession, and didn’t.
I mean don’t get me wrong, things are much worse now than 3 years ago, clearly. But they’re not like, Great Depression worse. They’re not even 2008 worse. It’s just a certain level of degradation (cost of living is higher, purchasing power is lower, concentration of wealth is higher etc.) that people got used to as the new normal. People can get used to lots of things.
To make the IT analogy, I think the global economy is like Twitter. Sure, it feels like a Jenga tower held up by thoughts and prayers, but it’s holding up. When Musk took over I really did think his catastrophic management philosophy would completely break Twitter, but no, it trudges on. Yes, moderation is now nonexistent, and I’m told it’s down more often, and often in “soft downtime” like notifications not working, or DMs, or some other feature, or it’s working but slow, and so on. But clearly the site is up most of the time and more or less functional. Users just get used to degraded quality as the new normal.
I predict AWS will 1) get slower and costlier thanks to “AI”, with higher downtime, at higher stress for the workers; 2) the leadership will refuse to see or admit or even consciously be aware of this; 3) the worsened services will be the new normal. I predict similar developments for the socioeconomic situation of the world, too; though I’m not ruling out a spiral into complete recession, either.
I somewhat agree, although when the “other shoe drops” and these things start impacting the money men, they may start to realise AI isn’t the magic cure they thought it was (he says, kind of hopefully).
6 hours of downtime for Amazon shopping. A very simple back-of-a-napkin calculation: they made $213.4bn in sales in Q4 2025, so divide that by 90 days, then by 24 hours, and multiply by 6… we are talking roughly a $0.59bn loss for 6 hours of downtime. That is not an insignificant amount of money. I imagine most bosses would be screaming for heads after losing that much money in sane, non-hyper-scaled businesses.
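Spelling that napkin math out, here’s a quick sanity check in Python, assuming (crudely) that revenue accrues evenly across a 90-day quarter:

```python
# Back-of-a-napkin check: pro-rate the quoted $213.4bn quarterly
# revenue evenly over a 90-day quarter, then count 6 dark hours.
quarterly_revenue_bn = 213.4            # USD billions, the Q4 figure cited above
hours_in_quarter = 90 * 24              # 2,160 hours
revenue_per_hour_bn = quarterly_revenue_bn / hours_in_quarter
outage_hours = 6
lost_bn = revenue_per_hour_bn * outage_hours
print(round(lost_bn, 2))  # → 0.59
```

Even spread perfectly evenly, six dark hours prices out at over half a billion dollars, and an even spread is generous to Amazon, since downtime during peak shopping hours would cost more.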
It’s also a trend that I don’t see stopping without a major structural change. I don’t think there’s a point at which they’re going to say “we’ve cut enough corners and are going to stop risking stability and service degradation.” The principal structure driving the economy, especially in the tech sector, is organized around looking for new corners to cut and insulating the people who make those choices from accountability for their actual consequences.
to follow this one up: there is now a new study about AI agents being dogshit at keeping code working over the long term
Unfortunately the paper structure screams “AI senpai, notice me!”
AI coding agents seem bad at this job so far, but if you optimize for our benchmark…
Silicon Valley is buzzing about this new idea: AI compute as compensation
These people are genuinely unhinged.
As the recent Harper’s article says:
"…people who should be in The Hague are giving [startups] twenty million dollars. Something bad is gonna happen here, something really fucking bad is gonna happen…”
this is just wages paid in crypto but adapted to new era in a way that doesn’t make sense
“Selling your soul to the company store is not just fun, it is also invigorating!”
Man, that Harper’s piece is a full DnD alignment chart of the most online Bay Area weirdos you’ve ever seen.
DAIR, the AI-critical research organization founded by Timnit Gebru, is looking for a communications lead
Revealed: UK’s multibillion AI drive is built on ‘phantom investments’
Previously, on Awful, I predicted that Oracle would be all-in on the bubble:
Microsoft knows that there’s no money to be made here, and is eager to see how expensive that lesson will be for Oracle; Oracle is fairly new to the business of running a public cloud and likely thinks they can offer a better platform than Azure, especially when fueled by delicious Arabian oil-fund money.
But, uh, there’s not going to be any Arabian money while we’re dancing in the desert, blowing up the sunshine. The lawnmower is now running low on gas. Today, Oracle continues to make astoundingly bad business decisions:
Oracle is the only major player funding the AI buildout with debt, carrying over $100 billion on its books while free cash flow has gone negative.
new development in ontology: “the ontology that makes ai models valuable is american”
“Our lethal capacities. Our ability to fight war.”
These are two different things. But I fear he doesn’t get that.
Actually the race-realism use last week, combined with this one, makes me realize that for them it’s just a fancy way of saying “world-view” [or what they consider to exist, and be true, which is not the craziest use of the word, but I would say unhelpful, and probably a small in-group marker].
It’s just a way of calling biases/prejudice legitimate.
And you know what, inasmuch the models have a “world-view” it IS annoyingly american in many ways. (at least the wrong kind of american.)
I was low-key hoping for a technical philosophical article, which argues that to find any of this shit useful you need a distinctly american understanding of reality.
I mean, given how the current guy took a chainsaw to American soft power, industrial capacity, economic prospects, and so on, I guess our wildly overfunded military is probably the only comparative advantage we unambiguously hold onto.
you gotta give him a morsel of credit, he’s got his buzzword and he’s stickin’ to it
I was not ready
AI was going to give us all universal healthcare but we didn’t believe hard enough and now all we have is this.
Chris Stokel-Walker at Fast Company reports:
High-level information about the private work of students and staff using ChatGPT Edu at several universities can be viewed by thousands of colleagues across their institutions due to a misunderstanding of what is being shared, according to a University of Oxford researcher who identified the issue.
The problem affects Codex Cloud Environments in ChatGPT Edu and exposes the names and some metadata associated with the public and private GitHub repositories that users within a university have connected to their ChatGPT Edu accounts. […] “Anyone at the university, or a large number of people at least—including me—can see a number of projects [people have] been working on with ChatGPT,” says Luc Rocher, an associate professor at the University of Oxford, who identified the issue and raised it with both the University of Oxford and OpenAI through responsible disclosure. He later approached Fast Company after what he felt was an inadequate response from both.
Just one of many reasons that the mere existence of “ChatGPT Edu” means that many people need to be tased in the nads
Systemd
Jesus.
I’ve been advocating for a hall of fame of projects that explicitly reject LLMs; ctrl+f “Gentoo” on this very comment thread for the few examples I heard about.
Eh, straight pip with venv and pip-tools for support worked fine anyway. (Edit: wrong uv!) As for systemd… time to look at the BSDs? Was Debian among the anti-slop projects? Would be nice if they took an interest in preventing the slopification of one of their core systems.
Different uv! Libuv is the event loop/scheduler that powers node.js. Could be a funky new way to compromise a whole bunch of node applications.
Edit: typo - although “nose applications” being compromised sounds bad too.
Ah, thanks! My expectations of node aren’t much affected I guess. Bun.js maybe?
libuv is a very common way to get a portable event loop. If you’re logged into GH and can use their search, then you can look at the over fifty packages in nixpkgs depending on it. I used it when I developed (the networking and JIT parts of) the reference implementation for Monte, to give a non-nixpkgs example.
Turns out that uv also sucks now!
this is a good post and some of y’all may enjoy it too: https://dotart.blog/cobbles/ai-and-that-guy-at-the-bar
It was very good, and I’m glad I clicked through to the link to Robert Kingett’s story “The Colonization of Confidence”, which deserves its own highlight.
Even if the constant reminders that I’m trapped in the machine are painful.
Hey, he’s posted here before!
That story is rad as hell. I was ready to run through a wall for those folks at the end. Appreciate you, Robert!
I noticed that too which is an extra reason why I figured I’d drop the link and name in. His posts about receiving an LLM-generated happy birthday is something I think about surprisingly frequently.
I swear every time his stuff floats through here I end up standing as I read it and wildly gesticulating at my living room or ranting extemporaneously to my basement about something it made me think of or feel. After reading this piece I hope that comes off as more complimentary to his work than showing myself to be a freaking weirdo.
Starting this Stubsack off, I’ve found another FOSS project that hit the digital krokodil - ntfy.sh v2.18.0 was written by AI
I feel like at this point I want to highlight the ones that took a clear stance against LLM code. On a chardet thread, people listed:
- Gentoo
- Servo
- Loupe
- Qemu
- postmarketOS
- GoTo Social
- Zig
guess i’ll have to write my own unifiedpush provider
I’m still happy with Pushover. Hasn’t changed in a decade (and a half?! Been using it since 2012, damn), and works pretty well.
It’s not self-hosted but when there are push notification services on the path, nothing really is.
there’s also overpush, which is meant as a self-hostable drop-in replacement for pushover and does not use ai afaict.
Ooh, that’s really cool. Also, very sobering section on the various e2e methods that are a pretty thorough indictment of all the chat systems out there.
That didn’t really surprise me, tbh; I follow a blog that is ostensibly a furry blog but gets repeatedly sidetracked into cryptography, and its conclusion for anything e2ee is essentially that only Signal is worth using if you’re looking for actual e2ee. But yeah, encryption is generally pretty bad on this kind of thing.
Yeah, soatok has been doing really good work there. It’s disappointing that the Matrix folks don’t seem to take it seriously.