Cute, no one’s really afraid of LLMs like that - it’s not going to create AGI, it’s going to waste shittons of resources to create shitty midjourney images and wreck the fucking environment. and spike the cost of computation. for what? where’s the fucking killer app already? come on it’s been YEARS, what, so I don’t have to type an email reply? that’s what all this bullshit is for? what a load of horse shit
https://bsky.app/profile/bennjordan.bsky.social/post/3mcm7wmwm3k2d
that’s ONE fucking data center. they want hundreds more.
get fucked with this “ai doomers” strawman bullshit
There are definitely morons who think that LLMs are a few months away from Terminator. They don’t have much overlap with the people who complain about real issues with LLMs, like the ones you mention.
yeah and there are people who eat paint chips, too. we don’t give them any credence either.
The way it’s advertised, AI is there to replace people thinking - everyone in AI ads is so dumb and seemingly incapable of independent thought.
They’re desperate for people to be reliant on it, but there are no real use cases beyond basic ideation.
AI why is my indoor plant growing towards the window? Doesn’t it love me?
Real AI ad.
Yep. The big names - goog, ms, meta, x etc. - flailing wildly, trying again and again to inject the shit into their products, smacks of desperation.
All this bullshit is for line go up. And it’s mostly working, so far.
However, the bankers heavily involved in financing AI datacenters have become nervous and started approaching insurance firms for coverage in case the projects fail… And the hedge funds have had low, zero, or negative ROI for the last ~4 years thanks to the Metaverse, then NFTs, and now AI not paying off yet… So new funds are drying up on two fronts, and if they don’t magically become profitable in the next year then the line is gonna go down, hard.
Mind sharing some sources? Not that I don’t believe you, I just want to read more good news.
Asking for sources is always welcome with me.
Here’s a deep dive from Ed Zitron into the whole AI/LLM industry that details the heavy investment from several key banks (Deutsche Bank being one), and the shrinking finance availability from traditional sources (bank loans, hedge funds, managed funds). It’s long, but it’s really worth a read if you have a spare hour or so.
https://www.wheresyoured.at/the-enshittifinancial-crisis/
A glaring tell that I don’t recall him highlighting is that the hyperscalers have largely outsourced the risk of AI investment to others. Meta, Google, and Microsoft are making comparatively small bets on AI - they’re spending cash generated by their other business lines, which is still significant (measured in the low billions) but doesn’t require them to take loans or leverage themselves. In other words they’re playing it very cautiously, all the while shoving AI into every product to look all-in on ‘the next big thing’, which is propping up their stock prices in the investor frenzy.

Most of the capital the AI boom actually requires is going into hardware, datacenters, and direct investment in the software development - and that’s mostly being avoided by the big guys. It lets them minimize risk while still getting a decent win if AI takes off. Conversely, if/when the bubble bursts they’ll take a hit, but they’ll still be making money from their other streams, so it’ll be a bump in the road for them - compared to what will happen to OpenAI, Anthropic, Stability, the datacenters, and their financiers.
https://archive.is/WwJRg (NYTimes article).
Tbh the datacentres are the least concern, given how readily those structures and equipment can be pivoted to other uses. Be a helluva fire sale though.
The bigger issue for them is getting built on time - they have tight contracts with the hyperscalers that let the hyperscalers simply withhold their interval payments, or even pull out of the contracts entirely, if delivery dates slip.
They’re bespoke too - which is why the hyperscalers are commissioning ‘AI datacenter’ builds instead of approaching existing datacenters. AI racks can draw up to a megawatt per rack. That’s insane. The power delivery has to be custom-designed and built with UPS vendors and the power companies.
https://blog.se.com/datacenter/2025/10/16/the-1-mw-ai-it-rack-is-coming-and-it-needs-800-vdc-power/
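For a sense of scale, here’s a quick back-of-envelope sketch of what 1 MW per rack means in amps - this is just Ohm’s-law arithmetic on the figures from that post; the ~10 kW “legacy rack” number is my own rough assumption for comparison, not something the post states.

```python
# Back-of-envelope check on the "1 MW per AI rack" figure.
# The 415 V AC and 800 V DC feeds are the distribution schemes
# discussed in the linked Schneider Electric post; the 10 kW
# legacy-rack figure is an illustrative assumption.

RACK_POWER_W = 1_000_000   # ~1 MW per AI rack (per the linked post)
LEGACY_RACK_W = 10_000     # ~10 kW, a rough typical legacy rack (assumption)

def current_amps(power_w: float, voltage_v: float) -> float:
    """Current draw for a given power and voltage: I = P / V."""
    return power_w / voltage_v

# At a conventional ~415 V feed, 1 MW needs roughly 2,400 A per rack,
# which is why the post argues for 800 V DC distribution instead.
print(f"1 MW @ 415 V: {current_amps(RACK_POWER_W, 415):,.0f} A")
print(f"1 MW @ 800 V: {current_amps(RACK_POWER_W, 800):,.0f} A")
print(f"AI rack vs legacy rack: {RACK_POWER_W / LEGACY_RACK_W:.0f}x the power")
```

Even at 800 V that’s still over a kiloamp per rack, which is why none of this drops into an existing facility’s busways and cooling.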
Yes, they could be pivoted away from AI to host ‘something else’, but that won’t save the companies that built them, because replacement tenants would only use a small fraction of the power delivery, and the $20,000 AI GPUs have pretty limited use cases outside AI. The result will be a massive oversupply, forcing datacenter hosting prices to drop drastically just to get any businesses into their tenancies. That will push the hosting companies (which are up to their gills in loans) under - they’re the ones taking the big risks on AI, not Meta/Google/MS/etc.
… are you under the impression that doomers aren’t real? I mean, maybe they don’t really believe the bullshit they’re spewing, but they talk endlessly about the dangers of AI and seem to actually believe LLMs are actively dangerous. Have you just not heard of these dorks? They’re, like, near-term human extinction folks who think AGI is just around the corner and will kill us all.
There’s TONS of valid issues. You’re painting everyone who criticizes AI as a doomer, and it’s specious, lazy, and does nothing to help your argument.
Just because a tiny portion of people who despise LLMs think there’s an AGI/AI/Superintelligence risk doesn’t mean that worry is shared throughout the vast majority of AI’s critics.
Your argument is weak, and calling them ‘dorks’ doesn’t support your thesis.
No I’m not? I’m painting anyone who is a doomer as a doomer - as in, specifically, the people who think AGI will kill us all. They don’t care about the valid issues; they care about this stupid nonsense they read on lesswrong.com.
This is a real subset of people and the meme is making fun of them, because they’re just feeding into the AI hype bubble.
No one is saying that the valid issues surrounding this tech bubble aren’t real, but that has little to do with the doomer cohort.
Elon Musk is one of them, kinda famously. Remember when there was that whole movement to rein in OpenAI for 6 months? Elon backed that, while starting xAI.
It’s a weird intersection: promoting the idea that LLMs are a form of superintelligence and therefore dangerous, while also working on your own version of it (that remains under your control, of course).