• 3 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 2nd, 2023





  • I was curious about the definition as well, so I looked up the published opinion. You can find it on the official website at https://www.supremecourt.gov/opinions/slipopinion/22 (look for “Twitter, Inc. v. Taamneh”), or use the direct link: https://www.supremecourt.gov/opinions/22pdf/21-1496_d18f.pdf

    Basically it looks like most of the case revolved around figuring out the definition of “Aiding and Abetting” and how it applied to Facebook, Twitter, and Google. It’s worth reading, or at least skipping to the end where they summarize it.

    When they analyzed the algorithms, they found that:

    As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.

    The only way I could see them being held liable for the algorithm is if a big tech company had tweaked it so that it specifically recommended terrorist content more heavily than it otherwise would.

    The code doesn’t have a concept of right or wrong; it doesn’t even understand the content it’s recommending. It just sees that users who watch this video also typically watch that other video, so it recommends that.
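    To make that concrete, here’s a minimal sketch of that kind of content-blind co-occurrence logic (not any company’s actual code, just an illustration; all the data is made up):

```python
from collections import defaultdict
from itertools import permutations

# Hypothetical watch histories: user -> list of video IDs.
# The recommender never looks inside the videos, only at which IDs co-occur.
histories = {
    "user_a": ["v1", "v2", "v3"],
    "user_b": ["v1", "v3"],
    "user_c": ["v2", "v3", "v4"],
}

# Count how often two videos show up in the same user's history.
co_views = defaultdict(lambda: defaultdict(int))
for videos in histories.values():
    for a, b in permutations(set(videos), 2):
        co_views[a][b] += 1

def recommend(video_id, top_n=3):
    """Return the videos most often co-watched with video_id."""
    ranked = sorted(co_views[video_id].items(), key=lambda kv: -kv[1])
    return [v for v, _ in ranked[:top_n]]

print(recommend("v1"))  # ['v3', 'v2'] -- purely statistical, content-blind
```

    Whether “v3” is a cooking video or propaganda never enters into it; the recommendation comes purely from viewing statistics.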

    If you took out “algorithm,” put in “employee,” and it would still count as aiding and abetting, then slotting “algorithm” back in should not be a defense.

    Alright, let me try a hypothetical here. Let’s say I hosted a public billboard in a town square and used some open source code to program a robot to automatically pick up fliers from a bin that anyone could submit fliers to. People can tag the top part of their flier with a specific color. The robot has an algorithm that reads the color and then puts up the fliers on a certain day of the week corresponding to that color.
    If someone slipped some terrorist propaganda into the bin, who is at fault for the robot’s actions?
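    The robot’s entire decision process might look something like this (a rough sketch; the colors, schedule, and flier contents are made up for illustration):

```python
from datetime import date

# Hypothetical color-to-weekday schedule (purely illustrative).
COLOR_TO_WEEKDAY = {"red": 0, "blue": 2, "green": 4}  # Monday, Wednesday, Friday

def should_post(flier_color: str, today: date) -> bool:
    """Post a flier if today matches the weekday assigned to its color tag.

    The robot only reads the color tag; it has no idea what the flier says.
    """
    return COLOR_TO_WEEKDAY.get(flier_color) == today.weekday()

# The robot puts up anything from the bin whose tag matches today's schedule.
bin_contents = [("red", "bake sale flier"), ("red", "propaganda flier")]
for color, text in bin_contents:
    if should_post(color, date(2023, 6, 5)):  # a Monday
        print("posting:", text)
```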

    Should the developer who published the open source code be held liable for the robot’s actions?

    Should the person who hosts the billboard be liable for the robot’s actions?

    Edit: fixed a grammatical error and added suggestion to read summary.



  • Ah, I see what you’re getting at.

    Maybe that was critical to the Supreme Court case, but it wasn’t presented in the news that way (that I saw)

    Yeah, that’s the problem with a lot of news organizations. They like to spin stories to support whatever agenda/narrative they want to push rather than report what the case was actually about.

    I would suggest this video by Steve Lehto: https://youtu.be/2EzX_RdpJlY
    He’s a lawyer who mostly comments on legal issues that end up in the news and his insight is invaluable. He talks about these 2 cases specifically in this video.

    #2 was very specifically about whether you would be considered to be aiding and abetting terrorists in a terror attack if the algorithm pushed their content to others.

    It sounds like there are a ton of other cases that have been submitted to the Supreme Court, so there may well be one that addresses your concerns.

    And frankly, I’m tired of Big Tech getting a pass on problems they have / create “because it’s too hard” to be responsible where no one else gets the “too hard” defense.

    I get your frustration; I’m assuming most everyone here is here because we’re fed up with what they’ve done with social media.
    But in this case a loss for Big Tech would have had even worse repercussions for smaller platforms like Lemmy.


  • That’s fine, but let’s dig into it a bit more.

    Where do you draw the line between what’s considered “terrorist content” and what is just promoting things that terrorists also promote?

    And how do you implement a fix for the algorithm so that absolutely nothing that crosses that line ever gets through?

    Just look at how well email filters work against junk mail and scams.

    Now let’s apply this to Lemmy and federated instances. If you personally are hosting an instance, of course you’re going to do your best to keep it free from content like that. Let’s say you’re running some open source code that has an algorithm for highlighting posts that align with the user’s previously liked content.
    If someone posts something that crosses the line, it gets around your filters, and it somehow gets highlighted to other users before you can remove it, then you are suggesting that the person in charge of that instance should be held directly responsible and criminally charged with aiding and abetting terrorism.
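    For a sense of why “nothing ever gets through” is unrealistic, here’s a deliberately naive sketch of the kind of keyword filter plus like-based highlighting an instance might run (the terms, tags, and posts are all made up):

```python
# A deliberately naive moderation + highlighting pipeline (illustrative only).
BLOCKED_TERMS = {"banned phrase"}

def passes_filter(post_text: str) -> bool:
    """Keyword filters are trivial to evade with misspellings, slang, or images."""
    text = post_text.lower()
    return not any(term in text for term in BLOCKED_TERMS)

def highlight_for(liked_tags: set, posts: list) -> list:
    """Rank posts by how much their tags overlap with what the user previously liked."""
    visible = [p for p in posts if passes_filter(p["text"])]
    return sorted(visible, key=lambda p: -len(liked_tags & set(p["tags"])))

posts = [
    {"text": "New hiking trail photos", "tags": ["outdoors"]},
    {"text": "b4nned phr4se meetup tonight", "tags": ["outdoors"]},  # slips past the filter
]
ranked = highlight_for({"outdoors"}, posts)
print([p["text"] for p in ranked])  # both rank equally; the evasive post got through
```

    Any real filter is more sophisticated than this, but the same cat-and-mouse problem applies: people determined to get content through will find phrasings the filter has never seen.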






  • This is much easier said than done. In large parts of the United States you can’t reliably commute by public transit. For me personally, without a car, a one-way 40-mile trip to the major city near me would take 5 hours. That’s 2 different trains and 2 different buses.

    Add to that the fact that the station closest to me only gets a few trains a day, and my options are very limited.

    Even if we ignore the current train schedule and assume that trains come by every 5 minutes, it would still be a 2-hour trip that costs me $20 one way. I could then bike the rest of the way and avoid the last 2 buses.

    There are rail passes I could get, but those would cost $477/month. It’s cheaper to lease a Tesla at that point.

    Owning a car is pretty much the only reasonable way of getting around for many parts of the U.S.


  • Project Zomboid! Easily one of the most feature-rich zombie games I have ever played. It’s basically “the Sims” on steroids with zombies.

    Your laptop should be able to handle it easily. It takes a while to figure things out, but you can tweak the zombie settings to your preference. It also has a multiplayer option if you’re looking for a community while playing.








  • Yeah, another use I know I’ll have for it (or at least for Bing’s Chat) will be summarizing large documents, especially in the sense of becoming a more informed voter.

    I don’t have time to read through the thousands of pages of legalese that our lawmakers come up with. But instead of having to wait for, or rely only on, summaries from others, I can run a bill through an AI to get summaries of each section and then read into anything that piques my interest.
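    In practice that could be as simple as the sketch below (assuming the OpenAI Python client; the file name, model, prompt, and section-splitting rule are all placeholders):

```python
from openai import OpenAI  # assumes the openai package; other LLM APIs would work too

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_section(section_text: str) -> str:
    """Ask the model for a plain-language summary of one section of a bill."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize this section of proposed legislation in plain language."},
            {"role": "user", "content": section_text},
        ],
    )
    return response.choices[0].message.content

# Naive split on "SEC." headings; a real bill would need smarter chunking.
with open("bill.txt") as f:
    sections = f.read().split("SEC.")

summaries = [summarize_section(s) for s in sections if s.strip()]
```

    Each generated summary still needs to be checked against the original text, which ties into the caveat at the end of this comment.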

    It might even be interesting to train a smaller LLM that does this more efficiently.

    The next step would be an LLM that pays more attention to unintended consequences of laws due to the way they’re written. But for something really effective I imagine that would require the assistance of a large number of experts in the field, and/or a lot of research on laws being overturned, loopholes being fixed, etc.

    Even then, it’s important that we understand that these tools are far from perfect, and we should question their results rather than accepting them at face value.



  • Glad someone mentioned the lawyer who screwed up by citing ChatGPT’s fake sources in a filing. It will be interesting to see what comes from this.

    Additionally, not a lot of people realize that they’ve agreed to an indemnification clause when using ChatGPT (or what that means).

    Basically, OpenAI can send you the legal bills for any lawsuits that come from your use of ChatGPT. So if you “jailbroke” ChatGPT and posted an image of it telling you the recipe for something illegal, OpenAI could end up with a lawsuit on their hands, and they could then bill you for all of the legal fees incurred.

    Possibly the first case of this we’ll see will be related to the defamation claim that a certain mayor in Australia could bring against OpenAI: https://gizmodo.com/openai-defamation-chatbot-brian-hood-chatgpt-1850302595

    Even if OpenAI wins the lawsuit, they will most likely bill the user who posted the image of ChatGPT defaming the mayor.