• 0 Posts
  • 60 Comments
Joined 2 months ago
Cake day: January 1st, 2026





  • Cling to semantics if you need to, but the spirit of what I said was true.

    Is it? That doesn’t seem like a valid argument.

    Hitler embraced the construction of the autobahn. Therefore, the autobahn is evil.

    operates the same way (guilt by association fallacy). I agree bluesky “was always going to shit” for entirely different reasons like repeating the same mistakes of twitter.

    Maybe you could offer a more logical argument for your conclusion instead of dragging the discussion into irrationality?


  • Explain “pedophiles”.

    Post needs text alternative.

    Images of text break many things that plain text does not. What’s lost when an image of text lacks a text alternative (such as a link to the source):

    • usability
      • we can’t quote the text without pointless bullshit like retyping it or OCR
      • text search is unavailable
      • the system can’t
        • reflow text to varied screen sizes
        • vary presentation (size, contrast)
        • vary modality (audio, braille)
    • accessibility
      • lacks semantic structure (tags for titles, heading levels, sections, paragraphs, lists, emphasis, code, links, accessibility features, etc)
      • some users can’t read the image due to lack of alt text (markdown image description)
      • users can’t adapt the text for dyslexia or vision impairments
      • systems can’t read the text to them or send it to braille devices
    • web connectivity
      • we have to do failure-prone bullshit to find the original source
      • we can’t explore wider context of the original message
    • authenticity: we don’t know the image hasn’t been tampered with
    • searchability: the “text” isn’t meaningfully indexable by search engines
    • fault tolerance: no text fallback if
      • image breaks
      • image host is geoblocked due to insane regulations.

    Contrary to its age & humble appearance, text is an advanced technology that provides all these capabilities absent from images.
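    For instance, the markdown image description mentioned above already has a slot for a text alternative; a hypothetical post (filenames & URLs here are made up for illustration) could pair the image with alt text & a source link like this:

    ```markdown
    <!-- bare image: screen readers, search engines, & braille devices get nothing -->
    ![](screenshot.png)

    <!-- better: alt text carries the content; a link preserves the original source -->
    ![Post by @example: "Quoted text of the post goes here."](screenshot.png)
    [Original post](https://example.com/post/123)
    ```

    Even this only restores a fraction of what posting the text itself would provide, but it at least keeps the content quotable, searchable, & readable.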







  • Moby Dick

    Public domain.

    You could also try understanding the law

    §107. Limitations on exclusive rights: Fair use

    Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include-

    1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
    2. the nature of the copyrighted work;
    3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
    4. the effect of the use upon the potential market for or value of the copyrighted work.

    with particular attention to factors 1 (especially transformation) & 4.

    If that’s not for you, though, then you should definitely try that with a copyrighted work (Disney?) & report back on how that went.



  • While I agree, I don’t think the language “every operating system provider has to create” means it would be installed even if you don’t want it. Parental control software exists for Linux, it’s available from the package manager, and we can opt out of installing it.

    I doubt “every operating system” is meant literally. Embedded OSs for specialized hardware (eg, routers, satellites, rockets, missiles, drones, calculators, industrial lasers) aren’t typically meant for children to browse the web. If TempleOS supported networking, it might be in trouble. Viable legislation would probably be restricted to OSs designed to allow children to access content over the internet.

    The main thrust of the suggestion is to prefer parental controls over age verification. Better ways to ensure availability of parental controls (like government services to provide the software free) fit that broad idea.

    I can make its logic gates do anything I want, as long as it’s not sending CP or malware over the Internet.

    That stipulation doesn’t need to be stated. It can be programmed to do anything, and that’s fine. Laws already exist for illegal activity. Anyone who’d fuss over the absence of that stipulation lacks credibility.


  • Cool: agreed. Your objection was ambiguous.

    If we had to choose, though, I’d consider the professor’s suggestion preferable to age verification. While I disagree with mandating it, it’d pretty much do nothing, because it’s already reality: most mainstream OSs include parental controls. The “criteria” would establish standards for parental controls, which isn’t altogether a bad idea. A better idea would be to promote a standard & replace mandates with public services to provide parental control technologies free & to educate parents.

    In the late 90s, when US Congress attempted to regulate minors’ access to adult content, those laws commissioned studies that drew similar conclusions even then. The studies & federal courts concluded that to meet the government’s compelling interest in “protecting minors from harmful content”, there were more narrowly tailored alternatives to criminalization & age verification that are less restrictive to fundamental rights & are at least as effective:

    • client-side filters to block content from the receiving end
    • government programs to train parents & provide them resources to “protect” their children from “harmful content”
    • public education campaigns.

    They pointed out that while client-side filters may have false positives & negatives

    • they can be monitored & corrected
    • they’re a more complete solution that can restrict all internet protocols (not just web) from any geographic source (not only in legal jurisdiction) with content of any type (including dynamic such as live chat)
    • they allow restriction of other kinds of content (eg, violence, hate speech)
    • they can vary restrictions per child (eg, age-appropriateness)
    • they let parents disable them
    • they don’t obstruct access by adults.

    Criminalizing access to adult content at the source obstructs everyone’s access & burdens them with loss of privacy & with security risk.

    Despite their age, those studies’ findings remain relevant.

    • COPA Commission

      In October 1998 Congress enacted the Child Online Protection Act and established the Commission on Online Child Protection to study methods to help reduce access by minors to certain sexually explicit material, defined in the statute as harmful to minors. Congress directed the Commission to evaluate the accessibility, cost, and effectiveness of protective technologies and methods, as well as their possible effects on privacy, First Amendment values and law enforcement. This report responds to the Congressional request.

    • National Research Council

      In November 1998, the U.S. Congress mandated a study by the National Research Council (NRC) to address pornography on the Internet (Box P.1).

    COPA Commission summary

    The COPA Commission found Age Verification ID to have the highest adverse impact on cost, privacy, fundamental rights, and law enforcement and to score poorly on effectiveness and accessibility. They found other technologies & methods to be more effective & accessible with much lower adverse impact including

    • client-side filtering
    • family education programs
    • acceptable use policies
    • top-level domains for materials “not harmful” to minors
    • “greenspaces” containing only child-appropriate materials.

    Some recommendations to highlight

    Public Education:

    • Government and the private sector should undertake a major education campaign to promote public awareness of technologies and methods available to protect children online.
    • Government and industry should effectively promote acceptable use policies.

    Consumer Empowerment Efforts:

    • Resources should be allocated for the independent evaluation of child protection technologies and to provide reports to the public about the capabilities of these technologies.
    • Industry should take steps to improve child protection mechanisms, and make them more accessible online.
    • A broad, national, private sector conversation should be encouraged on the development of next-generation systems for labeling, rating, and identifying content reflecting the convergence of old and new media.
    • Government should encourage the use of technology in efforts to make children’s experience of the Internet safe and useful.

    Industry Action:

    • The ISP industry should voluntarily undertake “best practices” to protect minors.
    • The online commercial adult industry should voluntarily take steps to restrict minors’ ready access to adult content.

    NRC summary

    The NRC found “no single or simple answer”: it agreed on the capability of filters to prevent inadvertent exposure or exposure by those not highly motivated to seek it, but it also stressed social & educational strategies to address motivation, coping, & responsible behavior.

    Social and educational strategies are intended to teach children how to make wise choices about how they behave on the Internet and to take control of their online experiences: where they go; what they see; what they do; who they talk to. Such strategies must be age-appropriate if they are to be effective. Further, such an approach entails teaching children to be critical, skeptical, and self-reflective of the material that they are seeing.

    An analogy is the relationship between swimming pools and children. Swimming pools can be dangerous for children. To protect them, one can install locks, put up fences, and deploy pool alarms. All of these measures are helpful, but by far the most important thing that one can do for one’s children is to teach them to swim.

    Perhaps the most important social and educational strategy is responsible adult involvement and supervision.

    Internet safety education is analogous to safety education in the physical world, and may include teaching children how sexual predators and hate group recruiters typically approach young people, how to recognize impending access to inappropriate sexually explicit material, and when it is risky to provide personal information online. Information and media literacy provide children with skills in recognizing when information is needed and how to locate, evaluate, and use it effectively, irrespective of the media in which it appears, and in critically evaluating the content inherent in media messages. A child with these skills is less likely to stumble across inappropriate material and more likely to be better able to put it into context if and when he or she does.

    Education, supervision, & parental controls/filters seem a more compelling solution. However, bring that up in regard to legislation to age-restrict social media, and the tune at lemmy dramatically changes: it seems inconsistent.




  • Words can get someone involuntarily committed to a mental hospital. Words can be used to take away rights. Words can affect national policy. Words were what Adolf Hitler used to send people to the concentration camps, and they’re what Donald Trump is using to do the same thing today. Words are extraordinarily dangerous.

    Nah, none of those. All instances of harm require unnecessary action taken by choice. Words can be disregarded. Acting on words is the actor’s choice.

    When we legitimise words that dehumanise the mentally ill

    They’re not doing that. Moreover, using such words alone doesn’t do what you claim. There are a number of steps between a word you find offensive & adverse action: that argument is a slippery slope. Unless the words incite imminent action, people have an unbounded amount of time to think & arrive at a decision before taking action. Any amount of discussion can occur during that time to influence & inform decisions. Rather than an overgeneralized attack on using a word, a more focused & coherent argument to support human rights could be raised.

    Relying on offense & emotion to steer judgement discounts people’s capacity to reason & infantilizes them, which is condescending. Offense & emotion are not reliable guides of judgement. Speculation that such an approach would promote better outcomes is not a valid argument; that it would work better than reason is poorly supported. We could at least as plausibly appeal to reason rather than to offended emotion, with the bonus of not irrationally overgeneralizing.

    People can interpret context to draw distinctions, & you’re overgeneralizing. The overgeneralization underpinning your offended opinion isn’t a valid argument. Neither is the speculation offered to support it. Telling people their words mean something they do not, disrespecting their moral agency & ability to think, & insulting their intelligence to discern meaning is unpersuasive. Promoting a rational argument that more specifically supports the outcomes you favor would be more persuasive.