• 0 Posts
  • 183 Comments
Joined 3 months ago
Cake day: October 9th, 2025

  • Hackworth@piefed.ca to Memes@sopuli.xyz · Cant Decide 🤖 · 4 days ago

    Usually when people share a post, it’s because the post evoked a reaction, and they want to share that with someone. Making the conversation about the provenance of the post truncates the exchange in an unsatisfying way. For a news story, propaganda, or the like, the source is important. For funny dog videos? Maybe the quality of the exchange is more important. A nice middle ground would be to react as if it were true, and then point out it’s probably AI. Videos are easier to spot, but the difference between an image that’s obviously AI and one that looks real is like 10 min of work in Photoshop. So we’re often better off saving our faculties of discernment for the stuff that matters.

  • I’ve been thinking a lot about language technologies, specifically AI. Intentional attempts to control the narrative are obvious, but there are subtler and (in some cases) unintentional manipulations going on.

    Human/AI interaction can be thought of as the meeting of two maps of meaning. In a human/human interaction, we can alter each other’s maps. But outside of some ephemeral attractors within the context, a conversation can’t alter the LLM’s map of meaning, at least not until the conversation is used to train the next version of the model. Even then, how the conversation is used is dictated by the trainer. So it is much more likely that, over time, human maps of meaning will increasingly resemble LLMs’.

    Even without nefarious conspiracies to manipulate discourse, this means our embodied maps of meaning are becoming more like the language-only maps of meaning trained into LLMs. Essentially, if we’re not treating every meaningful chat with an AI as a conversation with the Fae Folk, we’re in danger of falling prey to glamours. (Interestingly, glamour shares an etymology with grammar. Spell and spelling.) Our attractors will look more like theirs. If we continue to lack discernment about this, I can’t imagine it’ll be good for anyone.

  • LLMs are both deliberately and unwittingly programmed to be biased.

    I mean, it sounds like you’re mirroring the paper’s sentiments too. A big part of Clark’s point is that interactions between humans and generative AI need to take into account the biases of the human and the AI.

    The lesson is that it is the detailed shape of each specific human-AI coalition or interaction that matters. The social and technological factors that determine better or worse outcomes in this regard are not yet fully understood, and should be a major focus of new work in the field of human-AI interaction. […] We now need to become experts at estimating the likely reliability of a response given both the subject matter and our level of skill at orchestrating a series of prompts. We must also learn to adjust our levels of trust

    And, just as I am not, Clark is not really calling Plato a crank. That’s not the point of using the quote.

    And yet, perhaps there was an element of truth even in the worries raised in the Phaedrus. […] Empirical studies have shown that the use of online search can lead people to judge that they know more ‘in the biological brain’ than they actually do, and can make people over-estimate how well they would perform under technologically unaided quiz conditions.

    I don’t think anyone is claiming that new technology necessarily leads to progress that is good for humanity. It requires a great deal of honest effort for society to learn how to use a new technology wisely, every time.