• 7 Posts
  • 762 Comments
Joined 2 years ago
Cake day: September 11th, 2023

  • I fucking hate when people ask an LLM “what were you thinking”, because the answer is meaningless, and it just showcases how little people understand about how these models actually work.

    Any activity inside the model that could be considered any remote approximation of “thought” is completely lost as soon as it outputs a token. The only memory it has is the context window, the past history of inputs and outputs.

    All it’s going to do when you ask it that is look back over the past output and attempt to rationalize what it previously produced.

    And actually, even that is excessively anthropomorphizing the model. In reality, it’s just generating a plausible response to the question “what were you thinking”, given the history of the conversation (there’s a minimal sketch of this at the end of this comment).

    I fucking hate this version of “AI”. I hate how it’s advertised. I hate the managers and executives drinking the Kool-Aid. I hate that so much of the economy is tied up in it. I hate that it has the energy demand and carbon footprint of a small nation-state. It’s absolute insanity.
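
    To make the statelessness concrete, here’s a minimal sketch in plain Python. The `generate_reply` function is a made-up stand-in for the model, not any real vendor’s API: the only thing carried between turns is the transcript the caller chooses to resend, so “what were you thinking” is just more text to continue.

    ```python
    # Minimal sketch of a stateless chat loop. `generate_reply` is a
    # hypothetical stand-in for an LLM: it sees only the text it is handed.
    transcript = []  # (role, text) pairs -- this history IS the model's only "memory"

    def generate_reply(context):
        # Placeholder: a real model just predicts plausible next tokens
        # conditioned on this context; nothing internal survives between calls.
        return "plausible continuation of: " + " | ".join(text for _, text in context)

    def ask(user_text):
        transcript.append(("user", user_text))
        reply = generate_reply(transcript)  # everything it "knows" is in this argument
        transcript.append(("assistant", reply))
        return reply

    ask("Refactor this function for me.")
    # Asking "what were you thinking?" just appends another message; the model
    # rationalizes from the visible history, not from any recalled internal state.
    ask("What were you thinking when you wrote that?")
    ```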




  • I’ve long maintained that actually writing code is only a small part of the job. Understanding the code that exists and knowing what code to write is 90% of it.

    I don’t personally feel that gen AI has a place in my work, because I think about the code as I’m writing it. By the time I have a complete enough understanding of what I want the code to do in order to write it into a prompt, the work is already mostly done, and banging out the code that remains and seeing it come to life is just pure catharsis.

    The idea of having to hand-hold an LLM through figuring out the solution itself just doesn’t sound fun to me. If I had to do that, I’d rather be teaching an actual human to do it.





  • But at a certain point, it seems like you spend more time babysitting and spoon-feeding the LLM than you do writing productive code.

    There’s a lot of busywork that I could see it being good for, like if you’re asked to generate 100 test cases for an API with a bunch of tiny variations, but that kind of work is inherently low value. And in most cases you’re probably better off using a tool designed for the job, like a fuzzer.
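
    For that sort of busywork, a property-based testing tool already does the generation for you. Here’s a minimal sketch using Hypothesis; the function under test, `parse_amount`, is a made-up example.

    ```python
    # Sketch: let a property-based testing library generate the "100 tiny
    # variations" instead of asking an LLM to type them out.
    from hypothesis import given, strategies as st

    def parse_amount(raw: str) -> int:
        """Toy stand-in for the API endpoint under test."""
        return int(raw.strip())

    @given(st.integers(min_value=0, max_value=10_000))
    def test_parse_amount_roundtrip(n):
        # Hypothesis drives hundreds of generated inputs through this one test.
        assert parse_amount(f"  {n} ") == n
    ```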


  • Technus@lemmy.zip to Programming@programming.dev · LLMS Are Not Fun
    125 upvotes · 6 downvotes · 23 days ago

    I’ve maintained for a while that LLMs don’t make you a more productive programmer; they just let you write bad code faster.

    90% of the job isn’t writing code anyway. Once I know what code I wanna write, banging it out is just pure catharsis.

    Glad to see there’s other programmers out there who actually take pride in their work.