

Transformer architectures similar to those used in LLMs are the foundation for AlphaFold 2 and medical vision models like Med-ViT. There’s not really a clean way to distinguish “good” and “bad” AI by architecture. It’s all about the use.


The AI-generated exhibit was about the dangers of AI psychosis, though it did not address indigestion.
Many medical applications of ML do use transformer architectures, so it’s fundamentally the same technology.


Answer this quick survey to read your SMS.
Usually when people share a post, it’s because the post evoked a reaction, and they want to share that with someone. Making the conversation about the provenance of the post truncates the exchange in an unsatisfying way. For a news story, propaganda, or the like, the source is important. For funny dog videos? Maybe the quality of the exchange is more important. A nice middle ground would be to react as if it were true, and then point out it’s probably AI. Videos are easier to spot, but the difference between an image that’s obviously AI and one that looks real is like 10 min of work in Photoshop. So we’re often better off saving our faculties of discernment for the stuff that matters.


I’ve looked into it a little. If all you want to do is listen, I don’t think ya need a cert, at least around here. And the transmit one isn’t that hard to get. They removed the Morse requirement, though you can still get a higher-tier certification for learning it. There are a surprising number of ham antennas and generators in my neighborhood.


Suno.com is basically this. It even allows users to comment on the songs.


I downloaded 17 years’ worth of my comments before overwriting and deleting my old Reddit account. Been thinking about QLoRA fine-tuning Qwen on those comments. Not for use on the internet or anything, just so I can streamline the process of arguing with myself.
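Roughly, a minimal QLoRA sketch with transformers + peft + bitsandbytes would look something like this (the Qwen checkpoint, the comments.jsonl dump, and the hyperparameters are all placeholders, not something I’ve actually run):

```python
# Minimal QLoRA fine-tuning sketch: 4-bit quantized base model, trainable LoRA adapters.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "Qwen/Qwen2.5-7B-Instruct"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# The "Q" in QLoRA: load the frozen base model in 4-bit NF4
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)

# Only the small low-rank adapter matrices get trained; the base weights stay frozen
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)

# comments.jsonl: one {"text": "..."} object per old comment (hypothetical file)
ds = load_dataset("json", data_files="comments.jsonl", split="train")
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    args=TrainingArguments(output_dir="qwen-argues-with-me",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1, learning_rate=2e-4, bf16=True),
)
trainer.train()
model.save_pretrained("qwen-argues-with-me/adapter")  # save just the LoRA weights
```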


As a practitioner of that dark art, I fear you know not what you summon. You don’t really want Lemmy to be popular, not in a way that traditional marketing is going to make it popular.
I’ve been thinking a lot about language technologies, specifically AI. Intentional attempts to control the narrative are obvious, but there are subtler and (in some cases) unintentional manipulations going on.
Human/AI interaction can be thought of as the meeting of two maps of meaning. In a human/human interaction, we can alter each other’s maps. But outside of some ephemeral attractors within the context, a conversation can’t alter the LLM’s map of meaning. At least until the conversation is used to train the next version of the model. But even then, how that is used is dictated by the trainer. So it is much more likely that, over time, human maps of meaning will increasingly resemble LLMs’.
Even without nefarious conspiracies to manipulate discourse, this means our embodied maps of meaning are becoming more like the language-only maps of meaning trained into LLMs. Essentially, if we’re not treating every meaningful chat with an AI as a conversation with the Fae Folk, we’re in danger of falling prey to glamours. (Interestingly, glamour shares an etymology with grammar. Spell and spelling.) Our attractors will look more like theirs. If we continue to lack discernment about this, I can’t imagine it’ll be good for anyone.


If I recall, Gmail accounts were automatically integrated with Gemini. You have to disable smart features to disengage Gemini from Gmail, and that also forces you to turn off spell check and spam filtering.


[image of Clippy]


If you put [brackets] around the word before your (parened link), it’ll make it an actual link.
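For example (the URL here is just a placeholder): typing `[funny dog video](https://example.com)` renders as a clickable link labeled “funny dog video.”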


LLMs are both deliberately and unwittingly programmed to be biased.
I mean, it sounds like you’re mirroring the paper’s sentiments too. A big part of Clark’s point is that interactions between humans and generative AI need to take into account the biases of the human and the AI.
> The lesson is that it is the detailed shape of each specific human-AI coalition or interaction that matters. The social and technological factors that determine better or worse outcomes in this regard are not yet fully understood, and should be a major focus of new work in the field of human-AI interaction. […] We now need to become experts at estimating the likely reliability of a response given both the subject matter and our level of skill at orchestrating a series of prompts. We must also learn to adjust our levels of trust […]
And just as I am not, Clark is not really calling Plato a crank. That’s not the point of using the quote.
> And yet, perhaps there was an element of truth even in the worries raised in the Phaedrus. […] Empirical studies have shown that the use of online search can lead people to judge that they know more ‘in the biological brain’ than they actually do, and can make people over-estimate how well they would perform under technologically unaided quiz conditions.
I don’t think anyone is claiming that new technology necessarily leads to progress that is good for humanity. It requires a great deal of honest effort for society to learn how to use a new technology wisely, every time.


I talked about the way in which Plato’s concerns were valid and expressed similar fears about misuse. The linked article is about how to approach the specific technology.
Meanwhile, Anthropic in the last month:
- The assistant axis: situating and stabilizing the character of large language models
- Next-generation Constitutional Classifiers: More efficient protection against universal jailbreaks
- Introducing Bloom: an open source tool for automated behavioral evaluations