It’s important to remember that a lot of things are labeled as AI that are not.
For example, I worked in health insurance claims processing for 11 years, up until about 10 years ago. We used software to help identify duplicate claims. It compared procedure codes, dates of service, and diagnosis codes, and any potential duplicate that wasn’t an exact match went to a human, with the matching and mismatching information displayed for review.
Often it took a phone call to determine whether the claim was a correction, whether the date was wrong, or any number of other things. Sometimes a human could figure it out without calling, though.
Nowadays, such a program gets called “AI”. But it was just a program that helped identify possible duplicates and then displayed the information a human would need to decide whether it was one or not. The program could also auto-deny a claim if it met criteria that flagged it as “almost certainly” a duplicate, and it could auto-decide that a claim was not a duplicate at all. In those cases, the duplicate review prompt never even came up for a human to see. It was fully automated.
Most claims actually were processed fully with software. Only a small percentage required human intervention/review.
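Roughly speaking, the logic was along these lines. This is just an illustrative sketch; the field names, the threshold, and the rules are made up here, and the real system was far more detailed:

```python
# Illustrative sketch of the duplicate-screening flow described above.
# Field names and the review threshold are placeholders, not the actual system.

FIELDS = [
    "member_id", "provider_id", "date_of_service",
    "procedure_code", "diagnosis_code", "billed_amount",
]
REVIEW_THRESHOLD = 4  # how many matching fields before a human needs to look

def screen_claim(new_claim, prior_claims):
    """Return (decision, field_matches) for a newly submitted claim."""
    for prior in prior_claims:
        matches = {field: new_claim[field] == prior[field] for field in FIELDS}
        if all(matches.values()):
            # "Almost certainly" a duplicate: auto-denied, no human ever sees it
            return "auto_deny", matches
        if sum(matches.values()) >= REVIEW_THRESHOLD:
            # Close but not exact: show matched/mismatched fields to a person
            return "human_review", matches
    # Nothing close on file: processed automatically as not a duplicate
    return "auto_process", None
```

The point is that every one of those rules was written and tuned by a person, which is exactly why I wouldn’t call it AI.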
But yeah that was in place before I started there and was still in place when I left 10 years ago.
We didn’t call it AI back then. We called it automated claims processing.
If we only count AI programs like LLMs and “machine learning algorithms,” that’s a completely different thing, and it’s kind of shitty in comparison to the claims software I used at the insurance company, which was tailored and designed by a human at every step.
That distinction needs to be made when comparing how useful they are, because lumping effective software in with AI inflates AI’s value.
Whether it is actually helping them do anything is a separate question.
I mean, I’m in the middle of setting up a personal Home Assistant instance. I worked in software for years, but always in a Windows shop, and never much on the networking side. ChatGPT walked me through installing a Linux distro on a Lenovo laptop, configuring BIOS and OS settings to make it a passable server, installing and configuring VM software, installing the HA OS in a virtual machine, and troubleshooting that installation when it didn’t work.
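For what it’s worth, the step I was most nervous about (getting the HA OS running under KVM) boils down to something like the sketch below. This is not the exact set of commands ChatGPT gave me; the VM name, image path, and sizes are just placeholders, and in practice you’d run virt-install directly from a shell rather than wrapping it in Python:

```python
# Rough sketch of the "HA OS in a VM" step, wrapped in Python only to show the flow.
# Assumes KVM/libvirt and virt-install are already installed; paths and names are placeholders.
import subprocess

HAOS_IMAGE = "/var/lib/libvirt/images/haos.qcow2"  # downloaded Home Assistant OS disk image

def create_haos_vm():
    # --import boots the existing disk image instead of running an installer,
    # and Home Assistant OS expects UEFI boot.
    subprocess.run([
        "virt-install",
        "--name", "haos",
        "--memory", "4096",
        "--vcpus", "2",
        "--disk", f"path={HAOS_IMAGE},format=qcow2,bus=virtio",
        "--import",
        "--os-variant", "generic",
        "--network", "network=default",
        "--boot", "uefi",
        "--graphics", "none",
        "--noautoconsole",
    ], check=True)

if __name__ == "__main__":
    create_haos_vm()
```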
This is the sort of computer thing that has always been unbearably frustrating for me, and without ChatGPT I would probably have gotten bogged down somewhere between installing KVM and getting the HA OS up and running, worked on it in my spare time for a week, and then given up and put a curse on the whole business.
ChatGPT is the sole reason I was able to switch from Windows to Linux. It walked me through the installation process and has helped me troubleshoot every issue I’ve had.
I’m anti-AI, essentially, but I think this touches on what may be an important arc in all this (very speculatively at least).
Namely, maybe humanity had ~20 years to make tech “good” (or not bad), from 1990 to 2010 say, and failed. Or maybe missed the mark.
What that would look like, I’m not sure exactly, but I wonder how much your general sentiments are distributed amongst tech people — how much the average person who’s substantially touched tech is just over all of the minutiae, yak shaving, boilerplate, poor documentation, inconsistencies, backwards incompatibilities … etc etc. Just how much we’ve all been burnt out on the idea of this as a skill and now just feel it’s more like herding cats.
All such that AI isn’t just making up for all the ways tech is bad, but is a big wake-up call about what we even want it to be.
I can see the point you are making. But at the same time, a lot of the tech I touched is already quite mature, and is probably decently documented.
I totally understand the feeling you are describing of just herding cats. Without an LLM, this project would have taken 10x as long, with nine tenths of that time spent reading forum posts and GitHub bug reports and Stack Overflow questions that look like they might solve the problem but actually don’t.
But at the same time, I’m in a pretty common position in software where I don’t know anything about a mature and well-designed tool, but I don’t really want to learn how it works because, odds are, I will only use it once; or at least, by the time I use it again, I will have forgotten everything about it. And the LLM was able to do my googling for me and tell me “do this,” which was far faster and more pleasant. So I think this use case is quite reasonable.
Bruh, do you know how long it would take me to write an Excel macro? The M code for a Power Query? Fuck yeah it’s helping.
I find it helps. Not enough to pay what they want, or even what they need to break even, but it’s not useless. It’s not in any way intelligent, but it’s good at tidying up notes and summarizing conversations and tests. It needs thorough manual review, which is annoying, but still better than doing the summary manually.
And even if there is some productivity positive, there’s also the question of whether there’s a negative that’s hidden, not understood, or not spoken about. E.g., thinking you’ve done your job when it’s actually sloppy, forcing someone else to clean up after you.
And this is as good as it will ever get. The cycle of enshittification will occur with AI too. It will not be free forever. They will introduce, or are already introducing, ad placements into generative applications. Generated output will degrade as the models begin to mix AI-generated content into the corpus of user-generated data, etc., etc.
For some, it is mandatory.
I don’t understand how it could be only 30% in IT. I do not fancy AI, but there is no doubt it can help a lot in some tasks. It finds my syntax errors much faster than other means, it always makes suggestions, and it is an excellent formatter (of anything). And if you invest enough time in it, it can help automate entire tasks.



