I want to curate my feed to mostly just contain serious people. Doesn’t matter whether I agree with their views or not - it’s all about how those views are expressed. So many people on social media are just thrilled to have a megaphone and the ability to make noise. I don’t like noise.

For example, right now there are people who opened the thread just because I put “AI” in the title and they can’t wait to share their views about AI in the comments as if it’s meaningfully relevant to what I’m saying. They know what I mean, but you can’t just miss an opportunity to score a few points dunking on AI, can you?

Whatever gene makes people want to shout these thought-terminating clichés, upvote others who do it, and find some sense of belonging from it is clearly missing from me. I’d rather just not even hear about it - it’s extremely exhausting and has never achieved anything worthwhile.

It doesn’t really matter whether most people like the smell of your farts or not - you’re still poisoning the air.

  • 4am@lemmy.zip · 7 days ago

    Assuming I’m not a serious person because I’m against using AI is already shortsighted and echo-chamber-y

  • foggy@lemmy.world · 7 days ago

    You’ve had it for over a decade. Go check out Facebook.

    It’s maximized for engagement, not your pleasure.

      • foggy@lemmy.world · edited · 7 days ago

        It is AI-powered content filtering that does not rely on keywords.

        The title of your post is “I want AI powered content filtering that doesn’t just rely on keywords.”

        More examples of exactly that:

        Spotify, Amazon, Instagram, YouTube, TikTok…

          • ramble81@lemmy.zip · 7 days ago

            Except if you’re relying on AI, you’ve already given up that level of control and are letting the system decide for you.

            In your example, how would you continue to train the model for feedback loops and false positives? That isn’t an AI question, but just a general system design question.

            At that point, would you go through a weekly recap of posts to see what may have slipped by? The real issue is going to be detecting sarcasm.

  • HubertManne@piefed.social · 7 days ago

    I still feel chatbots are kind of like graphical vs. non-graphical interfaces, and I would actually like to see a real free-software version integrated into a free-software operating system on free/libre hardware. It should be just capable enough to converse and deal with the operating system, basically being trained on that. Then users should be able to add capability - for example, a plugin or extension that lets it query Wikipedia or Project Gutenberg.

  • NaibofTabr@infosec.pub · edited · 7 days ago

    I want to curate my feed to mostly just contain serious people.

    So I’ll just point out that this desire is kind of the whole point of features that allow you to follow specific users on a social media system. This works better when manually curated, rather than relying on some algorithm to do a bad job of it for you. So far, every application of such models to social media has tended to amplify noise rather than reduce it, mostly because it does not and cannot replicate human awareness.

    The desire for such a function to be performed automatically (without your direct attention and involvement) is diametrically opposed to the desire for high-quality output. Quality requires a level of attention that cannot be automated. The program doesn’t care, it’s not capable of that.

    Reddit-imitating platforms like Lemmy aren’t really built for following specific users, the intent is more of a public square. The benefit is that it’s harder to end up in an echo chamber (as long as you avoid places like hexbear and .ml where the admins enforce echo chamber conditions intentionally). The cost is that you will always be exposed to some noise. You don’t get freedom without some chaos.

    Platforms that might provide better what you want would be Mastodon or BlueSky, where you can follow professionals who voice public opinions on topics that you find relevant, and read responses from other users.

    Whatever gene makes people want to shout these thought-terminating clichés, upvote others who do it, and find some sense of belonging from it is clearly missing from me. I’d rather just not even hear about it - it’s extremely exhausting and has never achieved anything worthwhile.

    Hmm, well I’ll point out a couple of things:

    1. Self-expression is not really about “achieving” anything.
      There is a level of attention-seeking behavior that can get… cringy? for lack of a better word… but also, like, welcome to the human race, I guess? People want to feel included in the social group, and in the conversation of the moment, and that’s entirely normal behavior. Attention-seeking behavior happens because people don’t want to feel lonely, and that’s OK.
    2. I think you’re displaying a degree of entitlement here, where you expect other people to express themselves in a way that you find agreeable.
    3. There is some utility in this overall, as a sort of barometer of public opinion, though you have to be aware of the context of the community you’re in. (e.g. a commonly expressed opinion on Lemmy does not necessarily reflect a common opinion of people in your workplace or neighborhood)

    Also, at the risk of repeating a tired cliché, be the change you want to see in the world.

    • Iconoclast@feddit.uk (OP) · 7 days ago

      where you expect other people to express themselves in a way that you find agreeable.

      I’m making no demands on other people here. They can be jerks or virtue-signal as much as they want. I’d just prefer not to see it myself because I find no value in it. I’m not trying to stop them - I’m trying to exclude myself from it.

      And this is me trying to be the change. In the ideal case, more and more people would start enabling those filters and all of a sudden the people making noise would see their engagement dropping because more and more of their posts are getting caught in content filters. That would send the signal that if you want your voice to be heard, you actually need to put some effort into it.

  • MentalEdge@sopuli.xyz · 7 days ago

    You don’t have a problem with the “noise”.

    You have a problem with other opinions.

    You just call it “noise” because it lets you feel like you have a valid point.

  • StillAlive@piefed.world · 7 days ago

    It doesn’t really matter whether most people like the smell of your farts or not - you’re still poisoning the air.

    My farts aren’t that dangerous 🫣

  • DandomRude@piefed.social · 7 days ago

    I doubt that this is (reliably) possible using LLMs, because the underlying logic essentially only calculates probabilities based on the sequence of words.

    You would need logic that strictly defines what you want to see. However, that would likely produce a whole series of “false positives” - content wrongly filtered out - because it is hardly possible to know in advance what users will post and in what form.

    An example: It should be perfectly possible to set it up so that you don’t see any posts that express negative views on the topic of AI (sentiment analysis), but in doing so, comments or posts that are actually quite witty would almost certainly be filtered out as well.

    This is the crux of the matter with LLMs: In and of themselves, they have numerous, very useful applications, yet simply because of the term “artificial intelligence,” there is often a misrepresentation of what these models are capable of. They don’t think like humans do; they are merely a tool.
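    The sentiment-filter idea above can be sketched in a few lines. This is a minimal illustration, not an existing tool: the `classify_sentiment` function is a crude keyword stand-in for a real LLM or sentiment-analysis call, which is precisely why the witty-but-negative post below gets caught as a false positive.

```python
# Minimal sketch of sentiment-based feed filtering. A real system would
# replace classify_sentiment() with an LLM or sentiment-analysis API call;
# here a crude keyword rule stands in, which is exactly how false
# positives arise.

NEGATIVE_MARKERS = {"slop", "garbage", "useless", "hate"}

def classify_sentiment(text: str) -> str:
    """Stand-in for a real model: flags any post containing a negative marker."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return "negative" if words & NEGATIVE_MARKERS else "neutral"

def filter_feed(posts: list[str]) -> list[str]:
    """Keep only posts not classified as negative."""
    return [p for p in posts if classify_sentiment(p) != "negative"]

posts = [
    "Interesting benchmark results for the new model.",
    "AI slop is everywhere these days.",
    "I hate to admit it, but this witty take made me laugh.",  # false positive
]
visible = filter_feed(posts)
```

    The third post is good-faith and funny, yet it is hidden along with the genuine negativity - the false-positive problem described above, regardless of whether the classifier is a keyword rule or a far more capable model.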

    • Iconoclast@feddit.uk (OP) · 7 days ago

      If the tool works by just having a text field where you write the instructions for it, then yeah, it would have the same issues as other LLMs - what you ask for isn’t always what you get. There would definitely need to be some kind of logger so you could review it later and see how it’s actually performing.

      You can never get rid of false positives - at least not before we reach AGI - but that doesn’t really matter. I’m already getting a huge number of them relying on keyword filtering and blocking, which are both broad, blunt, and inaccurate tools. It doesn’t need to be perfect - just better. It’s not like I’m going to miss some life-changing information just because it falsely flagged some content. I’m already missing ALL the content on Reddit, Instagram, TikTok, and Facebook. A few Lemmy posts I might’ve potentially found interesting don’t weigh much on that scale.

      Here’s a good example of a thread where about 80% of the comments are pure noise. In the ideal case I would open that thread and only see the few civil comments written in good faith with a “truth-seeking” attitude. The rest is just dunking.
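    The “logger so you could review it later” idea above could look roughly like this. Everything here is a hypothetical sketch: `flag_as_noise` stands in for the actual classifier, and the audit log is just a list of records the user could skim in a weekly recap to catch false positives.

```python
# Sketch of the "review what got filtered" idea: every hidden post is
# recorded with a reason and timestamp so the user can skim a recap and
# spot false positives. flag_as_noise() is a hypothetical stand-in for
# the real classifier.

from datetime import datetime, timezone

audit_log: list[dict] = []

def flag_as_noise(text: str) -> bool:
    # Hypothetical stand-in rule: treat very short one-liner dunks as noise.
    return len(text.split()) < 3

def filter_with_audit(posts: list[str]) -> list[str]:
    visible = []
    for post in posts:
        if flag_as_noise(post):
            audit_log.append({
                "post": post,
                "reason": "classified as noise",
                "at": datetime.now(timezone.utc).isoformat(),
            })
        else:
            visible.append(post)
    return visible

shown = filter_with_audit(["lol", "A detailed comparison of two filtering approaches."])
```

    The point of the log is that a misclassification is never silently lost - it stays reviewable, which is what makes an imperfect filter tolerable.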

      • DandomRude@piefed.social · 7 days ago

        As I said, something along these lines should be possible for consumers as well - AI sentiment analysis certainly exists (for example, for social media managers in companies).

        However, to my knowledge, such a content-filtering tool does not exist out of the box. That is likely because it would consume a great deal of energy (or tokens in cloud models, which also equates to energy), since every single post and comment would have to be evaluated by an LLM - and that for every user of such a tool, if each user can specify their own filtering criteria.

        A service like this, which could be easily deployed as a kind of SaaS solution without technical knowledge, would therefore have to be a paid service if it were to function effectively for the user.

        I don’t think such services exist, though - perhaps for mainstream social media platforms or as browser extensions for some use cases, but likely not outside the mainstream.

        You could build something like this yourself, though - using a locally operated model or by utilizing an API. But you’d have to develop it yourself, I’m afraid.
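        A DIY version of this might be wired up as follows. This is a sketch under stated assumptions, not a real product: `judge` is injected so the plumbing runs standalone here, but in an actual build it would wrap a call to a locally run model or a hosted API, and every name below is illustrative. Note also the cost point above - `judge` runs once per post, so tokens/energy scale with feed volume.

```python
# Rough shape of a DIY per-user content filter: each post, together with
# the user's own criteria, is turned into a prompt and sent to a judging
# function. In a real build, judge() would call a local model or a hosted
# API; here a trivial stub is injected so the code runs standalone.

def build_prompt(criteria: str, post: str) -> str:
    return (
        f"Filtering criteria: {criteria}\n"
        f"Post: {post}\n"
        "Answer SHOW or HIDE."
    )

def filter_posts(posts: list[str], criteria: str, judge) -> list[str]:
    """judge(prompt) -> 'SHOW' or 'HIDE'; stands in for an LLM call per post."""
    return [p for p in posts if judge(build_prompt(criteria, p)) == "SHOW"]

# Stub judge for demonstration only: hides anything mentioning "dunk".
def stub_judge(prompt: str) -> str:
    return "HIDE" if "dunk" in prompt.lower() else "SHOW"

kept = filter_posts(
    ["A thoughtful critique of the proposal.", "lol another AI dunk thread"],
    "Hide low-effort noise; keep good-faith discussion.",
    stub_judge,
)
```

        Keeping `judge` as a swappable parameter is the useful part of the design: the same plumbing works whether the backend is a keyword rule, a local model, or a paid API.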

  • Left as Center@jlai.lu · 7 days ago

    Isn’t that more the point of serious newspapers or at least press agencies than social media feeds?

    • Iconoclast@feddit.uk (OP) · 7 days ago

      The serious people are here already. I just want to silence the rest.

      Consuming news is passive - I’d rather engage in discussing the solutions.