Now that the web is awash in garbage, I keep thinking about how the UX could be made tolerable again. Consider this scenario:

Bob sees an article of interest and decides to share it. I have no idea why Bob’s experience was decent enough that he felt the article was worth sharing, because when I follow the link I get Tor hostility, CAPTCHAs, popups, dysfunction that demands JavaScript permissions, etc. In short, Bob’s link goes to a shit hole.

So how can we fix this?

What if Bob copies the full text of the article and creates an archive of sorts in the fediverse? That solves the enshittification problem, but it risks harassment from copyright police. Or does it? The fair use doctrine specifically permits a work to be quoted for the purpose of commentary. It’s also easily justified because the web has become so exclusive (e.g. Tor blocking) that a case can be made for including a copy of the article along with Bob’s commentary. Because what happens now? Alice the Tor user is blocked from the page and can only read people’s comments, which have no context because the web is broken. Bob copying the original text enables Alice to appreciate Bob’s work (his commentary).

I also wonder whether bilingual people can go a step further in mitigating copyright harassment. Suppose Juan reads the English article, machine-translates it into Spanish, corrects the flaws because he’s fluent in Spanish, and then posts the Spanish version. Do copyrights survive translation? If Juan comments in Spanish, then surely the Spanish translation is critical to non-English speakers understanding Juan’s post.

I think this idea would benefit the permacomputing movement, because avoiding web enshittification is a way to access content with fewer resources. The original poster may have to run a shit ton of heavy JavaScript to reach the text, but then everyone following his thread can function with a simple text client.
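To make that concrete, here’s a rough sketch (Python, standard library only, all names mine) of the kind of crude text extraction a posting client could do before publishing the copy — nothing like a real readability engine, just enough to show the principle:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude reader mode: keep visible text, drop scripts, styles, and chrome."""
    SKIP = {"script", "style", "nav", "header", "footer", "aside"}

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside skipped elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        # Only keep text that is outside every skipped element
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

if __name__ == "__main__":
    page = ("<html><head><style>p{}</style></head>"
            "<body><nav>menu</nav><p>The article text.</p>"
            "<script>alert(1)</script></body></html>")
    print(extract_text(page))   # -> The article text.
```

The resulting plain text is what would actually get posted, so readers on Lynx or a simple fediverse client never touch the original site’s JavaScript at all.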

#askFedi #lawFedi

  • cerement · 8 months ago

    a couple early mitigations before the full text-conversion step

    • archiving
      • archive.today/archive.is/archive.ph (i.e. sites designed for getting around paywalls)
      • archive.org – if you want to ensure revenue doesn’t go back to the original site
    • extracting text – Firefox Reader View, Chrome Reader Mode
    • advertising and annoyances – uBlock Origin – in this day and age, ad-blockers are as necessary to your computer’s health as antivirus
    • text-based browsers – ex. Lynx (“hey Bob, the site you shared is completely blank, was there something there?”)
      • (as an aside, you can also run your work Outlook in text-only mode, so you don’t have to put up with co-workers’ color choices and font selections)
    • and remember that it is perfectly acceptable to yell at “Bob” for sharing such a shit hole in the first place (“this meeting could’ve been an email, a short email”)
    • activistPnkOP · 8 months ago

      Those are quite useful tools overall, but I would certainly nix archive.today because it’s just another jail (Cloudflare).

      archive.org – if you want to ensure revenue doesn’t go back to the original site

      #InternetArchive has become an indispensable resource for me. Though I’m curious what you mean by stopping revenue to the original site. The mirroring is so complete (JavaScript and all) that often the ads and popup garbage from the original site still appear in the archive, which stands to help the original site get revenue.

      It’s worth noting an anti-archive tactic I’ve recently discovered: a newspaper detects the archive.org crawler and feeds it an /abstract/ of the article instead of the full text. I was revolted. The news site was abusing archive.org to place bait that ultimately pushes visitors to the original site. There needs to be an effort to recognize these shenanigans and remove the false mirrors.

      and remember that is perfectly acceptable to yell at “Bob” for sharing such a shit hole in the first place

      I’ve done that, but it’s a bit tricky: one man’s shit hole is another man’s treasure, and the culprits who share bad sites are often naïve, low-tech people who would just be demoralized without really being able to grasp the technical problem or see the web through the lens of advanced tools (like uMatrix, Tor, or Lynx). If the link is posted by someone who presents themselves as technical or as an authority on privacy, then I’ll say something. But I don’t want to chew out someone’s grandma for sharing a Tor-hostile link, so overall the system has to be designed so that low-tech users are not discouraged.

  • m-p{3}@lemmy.ca · 8 months ago

    An option I like for sharing articles that are bloated or behind a paywall is to run them through 2read, which read-ifies the page and uploads a copy to IPFS. But the copyright issue still applies, as the copyright holder could go after the IPFS nodes hosting the content.

    Now if there were a way to hide those nodes behind Tor, that could solve the issue.

    • activistPnkOP · 8 months ago

      Thanks for the tip! I think integrating IPFS is the right idea. The 2read extension is a great approach that could inspire how a possible solution would work. In the end, what I had in mind was clients and servers that integrate a solution which ideally would not constrain people to using a browser.