• 18 Posts
  • 2.32K Comments
Joined 3 years ago
Cake day: June 5th, 2023

  • In a pure debate sense, this would be true; even an unpopular or suspicious person is still capable of making a valid point. It should be considered, however, that internet arguments are not formal debates. They can at times use the form and language of one, but most people are not skilled in that kind of formalized arguing, and most people are not arguing in an actual attempt to use the debate to identify stronger versus inconsistent positions, rather than just trying to push people towards one's own ideas or to put down ideas one finds reprehensible.

    Now, I don't personally tend to find much point in looking through profiles; it takes too much time for little benefit, in my view. But it can sometimes tell you if an account is not worth the time and emotional investment to interact with, or if it has signs that it might not be. The nature of social media is such that there are always far more users trying to get your attention than you have attention to spare. As such, if there's even one notable red flag that an account isn't worth the time and potential frustration to engage with, it can make pragmatic sense to move on (depending on how much one is willing to put up with, I guess).

    From that perspective, telling other people what seemed like a red flag to you lets them consider whether that thing makes the account worth their time, without them having to find it themselves, and therefore potentially does those other people a favor. That sounds a bit harsh (at least to me) because plenty of things others might consider suspect, like a new account, can't always be helped (everyone starts off new, after all), and being ignored, or having other people call out that thing as a reason to ignore you, is frustrating. But that's just the nature of giving massive numbers of people the ability to talk to everyone else: most people won't want, or have the time, to listen to you, and you're not entitled to their time, however unfair their reason for dismissing you might be.


  • I sometimes find it interesting to contemplate how best to treat swear words. On the one hand, the thing that seems most obvious to me is to just say "they're just words to express frustration or anger at something. Those are real and valid emotions, and it's silly to treat expletives like they're somehow morally wrong to say, or like it's a bad thing if children learn them, etc." On the other hand, part of the appeal of using them is that they feel at least a bit taboo, allowing a sentiment like "I feel so upset with this situation that I'm going to break this social rule to show it". If there's no taboo at all, which is what you get when they're used all the time and entirely casually, they no longer serve that purpose very well, and people start looking for new things that feel a little bit transgressive to say. Which paradoxically implies that there isn't much point in saying them unless you're not "allowed" to say them, except they can't be too offensive either, or they'll feel disproportionate and rude.


  • The thing with comparing sci-fi AI to modern LLMs is that virtually no science fiction AI that I know of actually acts the way LLMs do. Fictional AIs tend to be good at things modern AI is bad at, like logical reasoning or advanced math, but bad at things AI can already do, like generating images that at least look like they could be human art, or writing text that appears emotionally charged. They also tend to be directly programmed, in ways that a singular (usually genius, but still) individual can pick through and understand, rather than being trained in a black-box manner that is very difficult for a human to reverse-engineer.

    That isn't surprising; sci-fi writers aren't oracles, after all, and just having AI of some kind probably makes a story more realistic than assuming the technology never gets invented even far into the future. But in my view, these kinds of sci-fi AI are basically a different, hypothetical technology aiming for the same end result. As such, I don't really expect even a very advanced iteration on what we have to look like Star Trek AI, any more than modern cars tend to fly or run off miniature nuclear reactors the way sci-fi of decades ago imagined cars of the future. I don't think it will look like Skynet either. I do think we might get some interesting science fiction in the coming decades exploring what a very advanced version of the technology we do have might end up like, though. It probably won't be terribly accurate either, but I'd bet it will be closer than works where the basis for extrapolating AI tech is "what if the calculator could talk and think".


  • So, what you're talking about (the past and future appearing different to different observers) sounds like a different concept from what this comic was about. If I'm understanding your meaning, that's something that comes up in relativity, and as such has a bit more grounding to it than any position on whether other points in time already exist somewhere along a "time axis" or whether the future is truly unwritten. Either position on the nature of time results in a universe that looks exactly the same from our perspective, and therefore can only be speculated on, but relativity makes physically testable predictions that can be experimentally verified. I can't really explain it adequately, as I only understand the basics myself (though from what I know, nobody ever actually observes the future; it's more that different observers see the time between connected events compressed relative to others).


  • This sounds like an idea called "eternalism" or "block time". I tend to suspect it might be the case, just because it requires assuming fewer unique properties of the time dimension that aren't shared by the space dimensions, but obviously that's not really evidence for it as such. It can be an interesting idea to think through the implications of, though, whether true or not.