• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: July 1st, 2023



  • I’m not going to go point by point because I think it’s not productive to act as if this kind of argument has only two sides. When we talk about subjects in a persuasive fashion, where we’re trying to win someone over to our side, it frequently has the opposite effect, entrenching us in our already polarized views.

    We need to concern ourselves with moral relativism to make appropriate decisions. In an ethical sense, I believe sexual assault of a human is at least an order of magnitude worse than milking a cow. But that opinion comes largely from the fact that I’m a human and I’m not a cow.

    If we want to sway someone’s opinion, I think we should focus less on absolutes and more on quantities. We should meet people where they are. Maybe instead of driving home all the disturbingly true reasons we should never milk or even breed cattle, we should use those same arguments to highlight the absurdly destructive impact of doing those things at the scale which we are.

    If half of society has a burger and a milkshake once a month, there is a significant environmental impact from milking those cows and raising those cattle for slaughter, as well as a very real moral cost. There is also some emotional benefit to the human of consuming fats and proteins from those sources, and there are both positive and negative nutritional effects as well.

    It’s already difficult to compare costs and benefits from such wildly different categories when it’s just one burger a month. Humans are emotional beings and even a well-reasoned argument may not trump the emotional feeling one gets from a hamburger and a shake.

    But consider how those factors change if those same people go from one beef product and one dairy product a month to one every other day, or even more frequently. How much more land it takes, how much more suffering the livestock go through in conditions designed for maximum profit and minimum concern for moral costs. The additional methane production, the deforestation, the added risk of heart attacks. All the bad parts multiplied wholesale, while the good parts all experience diminishing returns.

    If you take one of those semi-daily beef and dairy consumers and give them a hard line, where any consumption of beef or dairy is unacceptable, is that going to have a positive or a negative effect on the system as a whole? Some may be convinced to quit consuming, but I suspect their impact will be swallowed by those who feel called out in such a way that they would rather consume even more out of principle than face the hard truth that their lifestyle is wrong. It’s easy for humans to build walls of cognitive dissonance, where we know what we’re doing is harmful, but we make excuses for ourselves to avoid facing that reality.

    If we want the masses to face their collective reality, we need to meet people where they are. Maybe burgers and milkshakes will always be part of your life. But there are alternatives that can be a different part of a life rich in variety. If someone currently eats a burger every other day, maybe they can strive for once a week. And if that goes well, once a month. And then, once they have a greater familiarity with the culinary variety that’s possible, they may forget about that meal entirely.

    We should remember that we’re all just people. We don’t need to be on different sides. You don’t need to be wrong and neither do I. We’re just earthly passengers connecting electronically in a wide cosmos. Our lives are all so different and yet uncannily familiar. So we’ll get more mileage out of sharing our experiences than prescribing them to others. Because if we feel we’re being talked down to, we’ll decide we’ve already picked a side. But if we’re just sharing, then we’re all on the same endless side. In that spirit, none of what I’m saying is meant to invalidate anything you’ve said. Only add to it.

    And just to add, I don’t mind if there’s a bit of feces in my milk. It looks perfectly white, so I imagine it’s in low enough quantity that it’s not a health risk after pasteurization, and as far as I know, the quantity is also low enough that it doesn’t affect taste. But I think cows should have good lives even at the expense of productivity, and milking should be a voluntary behavior, perhaps in exchange for appropriate compensation, rather than something that’s forced on them. Just my two cents (plus about a buck fifty).





  • With the platform at the size that it is currently, I’m inclined to agree with you. But I think in the future, lemmy may become large enough that having a public tagging system would be useful.

    Ideally, the two preferences can coexist. The multireddit equivalent would just be a private tag, exclusive to your account. But you could make it public, either anonymously or posted to your account, e.g. tag@pyrojoe@lemmy.world.

    Then, all the public tags can be merged at will, so if I make a new account and want to see all communities about birds, I can select the bird tag. If I want to make edits to the tag list without affecting the public tag, I would even have the ability to copy the public tag to my own private tag and prune the communities I don’t like without decreasing their public rankings.
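
    To make the idea concrete, here is a rough sketch of how the private/public split and merging could work. This is purely illustrative; none of these names or structures exist in lemmy today.

    ```python
    # Hypothetical sketch of private vs. public tags - not a real lemmy
    # API. A public tag carries community rankings that subscribers can
    # influence; a private tag is just your own local copy.
    from dataclasses import dataclass, field

    @dataclass
    class Tag:
        name: str
        owner: str | None = None   # None = anonymous public tag
        public: bool = False
        # community id -> ranking score within this tag
        communities: dict[str, int] = field(default_factory=dict)

    def copy_to_private(public_tag: Tag, me: str) -> Tag:
        """Fork a public tag so my edits don't touch public rankings."""
        return Tag(name=public_tag.name, owner=me, public=False,
                   communities=dict(public_tag.communities))

    def merge(*tags: Tag) -> dict[str, int]:
        """Combine several tags into one ranked view of communities."""
        combined: dict[str, int] = {}
        for tag in tags:
            for community, score in tag.communities.items():
                combined[community] = combined.get(community, 0) + score
        return combined

    # e.g. a fresh account merges every public "birds" tag it can find
    birds = merge(
        Tag("birds", public=True, communities={"birding@lemmy.world": 42}),
        Tag("birds", public=True, communities={"parrots@lemmy.ml": 17}),
    )
    ```

    The key property is that copying to a private tag severs the link: pruning your own copy can’t lower anyone’s public ranking.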

    I think this would provide flexible levels of functionality to those who want it, but there may also be hidden consequences of this method that I’m currently missing.


  • I think that makes a lot of sense and it’s exactly the kind of stuff we should be considering at this stage. I also agree that humans are the ideal source of empathy and the best way to get around systems of secret code words and other methods that are used to circumvent algorithmic control.

    But I also think AI-generated algorithms have their place. By design, content moderation is an unpaid task. Many volunteers are very good at moderation, but the work takes up a lot of their time and some of the best minds may decide to step away from moderation if it becomes too burdensome. On reddit, I saw a lot of examples of moderators who, as flawed humans, made choices that were not empathetic, but rather driven by a desire for power and control. Of course, if we make mistakes during the algorithm training process and allow our AI to be trained on the lowest common denominator of moderators, the algorithm may end up being just as power hungry - or even worse, considering that bots never tire or log off.

    But I do think there are ways to get past that, if we’re careful about how we implement such systems. Depending on your definition, bots may not be capable of empathy, but based on some conversations with AI chatbots, I think AI can be trained to simulate empathy very closely. As you mentioned about secret messages, though, bots will likely always be behind the curve when it comes to recognizing dog whistles and otherwise obfuscated hate speech. But as long as we always have dedicated, empathetic humans taking part, the AI should be able to catch up quickly whenever a new pattern emerges. We may even be able to tackle these issues by sending our own bots into enemy territory to learn the dog whistles as they’re being developed, though there could be negative side effects to this strategy as well.

    I think my primary concern when pushing for these kinds of algorithms is to make sure we don’t overburden moderation teams. I’ve worked too long in jobs where too much was expected for too little pay, and all the best and brightest left for greener pastures. I think the best way to make moderation rewarding is to automate the most obvious choices. If someone is blasting hate speech, a bot can be very certain that the comment should be hidden and a moderator can review the bot’s decision at a later time if they wish. I just want to get the most boring repetitive tasks off of moderators’ plates so they can focus on decisions that actually require nuance.
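
    To illustrate what “automate the most obvious choices” could look like, here is a minimal triage sketch. The thresholds and the separate audit log are my assumptions, not an existing moderation tool:

    ```python
    # Minimal triage sketch (hypothetical): a model scores each comment,
    # near-certain cases are handled automatically, and only ambiguous
    # ones land on the human moderators' plates.
    AUTO_HIDE = 0.95   # model is near-certain the comment violates rules
    AUTO_PASS = 0.05   # model is near-certain the comment is fine

    review_queue = []  # the only part humans must look at
    audit_log = []     # auto-hidden comments, spot-checkable later

    def triage(comment: str, score: float) -> str:
        """score = model's estimated probability of a rule violation."""
        if score >= AUTO_HIDE:
            audit_log.append(comment)   # hidden now, reviewable later
            return "hidden"
        if score <= AUTO_PASS:
            return "visible"
        review_queue.append(comment)    # nuance required: humans decide
        return "pending"
    ```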

    Something I really like about what you said was the idea of promoting choice. I was on a different social media platform recently, one which has a significant userbase of minors and therefore needs fast, over-tuned moderation to limit liabilities (Campfire, the communication tool for Pokémon Go). I was chatting with a friend and a comment I thought was mundane got automatically blocked because it contained the word “trash.” Now, I think this indicates they are using a low-quality AI, because context clues would have shown a better AI that the comment was fine. In any case, I was immediately frustrated because I thought my friend would get the impression that I said something really bad, because my comment was blocked. Except I soon found out that you can choose to see hidden comments by clicking on them. Without the choice of seeing the comment, I felt hate towards the algorithm. But when presented with the choice of seeing censored comments, my opinion immediately flipped and I actually appreciated the algorithm because it provides a safe platform where distasteful comments are immediately blocked so the young and impressionable can’t see them, but adults are able to remove the block to see the comments if they desire.

    I think we can take this a step further and have automatically blocked comments show categories of reasons why they were blocked. For example, I might never want to click on comments that were blocked due to containing racial slurs. But when I see comments blocked because of spoilers, maybe I do want to take a peek at select comments. And maybe for general curse words, I want to remove the filter entirely so that on my device, those comments are never hidden from me in the first place. This would allow for some curating of the user experience before moderators even have a chance to arrive on the scene.
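
    As a sketch of how that per-category choice might look from the client side (the category names and display modes here are invented for illustration):

    ```python
    # Per-user preferences for blocked-comment display (hypothetical).
    # Each blocked comment carries the categories that triggered the
    # block; the user decides, per category, what happens on their device.
    SHOW = "show"          # never filter these for me at all
    COLLAPSE = "collapse"  # hide behind a click, with the reason shown
    HIDE = "hide"          # never show, even on click

    my_prefs = {
        "slur": HIDE,           # I never want to see these
        "spoiler": COLLAPSE,    # let me peek when I choose
        "profanity": SHOW,      # don't hide these from me
    }

    def display_mode(block_categories: list[str]) -> str:
        """The strictest preference among a comment's categories wins."""
        modes = [my_prefs.get(c, COLLAPSE) for c in block_categories]
        if HIDE in modes:
            return HIDE
        if COLLAPSE in modes:
            return COLLAPSE
        return SHOW

    # a comment blocked as both "spoiler" and "profanity" collapses;
    # anything tagged "slur" would stay hidden outright
    assert display_mode(["spoiler", "profanity"]) == COLLAPSE
    ```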

    On the whole, I agree with you that humans are the ideal. But I am fearful of a future where bots are so advanced, we have no way to tell what is a human account and what is not. Whether we like it or not, moderators may eventually be bots - not because the system is designed that way but because many accounts will be bots and admins picking their moderation staff won’t be able to reliably tell the difference.

    The most worrisome aspect of this future, in my mind, will be the idea of voting. A message may be hidden because of identified hate speech, and we may eventually have an option for users to vote whether the comment was correctly hidden or if the block should be removed. But if a majority of users are bots, a bad actor could have their bot swarm vote on removing blocks from comments that were correctly hidden due to containing hate speech. Whether it happens at the user level or at the moderator level, this is a risk. So, in my mind, one of the most important tasks we will need AI to perform is identifying other AI. At first, humans will be able to identify AI by the way they talk. But chatbots will become so realistic that eventually, we will need to rely on clues that humans are bad at detecting, such as when a swarm of bots perform similar actions in tandem, coordinating in a way that humans do not.
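
    One naive version of that “similar actions in tandem” signal, just to show the shape of it (the window and threshold are made-up numbers, and real detection would need many more signals):

    ```python
    # Flag pairs of accounts whose votes repeatedly land on the same
    # comments within seconds of each other - a pattern humans are bad
    # at noticing but machines can count directly. Illustrative only.
    from collections import defaultdict
    from itertools import combinations

    WINDOW = 5.0        # seconds; humans rarely sync this tightly
    MIN_OVERLAP = 20    # shared near-simultaneous votes before we care

    def suspicious_pairs(votes):
        """votes: list of (account, comment_id, unix_timestamp)."""
        by_comment = defaultdict(list)
        for account, comment_id, ts in votes:
            by_comment[comment_id].append((account, ts))

        overlap = defaultdict(int)  # (account_a, account_b) -> count
        for voters in by_comment.values():
            for (a, ta), (b, tb) in combinations(voters, 2):
                if a != b and abs(ta - tb) <= WINDOW:
                    overlap[tuple(sorted((a, b)))] += 1

        return {pair for pair, n in overlap.items() if n >= MIN_OVERLAP}
    ```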

    And I think it’s important we start this work now, because if the bots controlled by the opposition get good enough before we are able to reliably detect them, our detection abilities will always be behind the curve. In a worst case scenario, we would have a bot that thinks the most realistic swarms of bots are all human and the most fake-sounding groups of humans are all bots. This is the future I’m most concerned about heading off to make sure it doesn’t happen. I know the scenario is not palatable, and at this stage it may feel better to avoid AI entirely, but I think bots taking over this platform is a very real possibility and we should do our best to prevent it.



  • Mod work in general is going to be a tough issue for everyone to solve. Different places will have different norms they want to enforce, and a limited volunteer staff to push that agenda. But there’s nothing that can’t be automated. Automate the creation of AI mods, automate the selection of user mods, automate the banning of objectionable comments and users using a combination of both humans and AI to both handle the workload and adhere to community regulations. If these tools can be developed as part of lemmy, automated moderation can become an available option for all instances, which hopefully will mean that moderation here will be better quality and lower cost than moderation on that other social media site, I’m forgetting the name.




  • AI is going to mess with that process so fast I’d be surprised if that hasn’t happened already. While it seems unavoidable, still probably a good idea to have the personal question text box for now. But it seems like only a stopgap. We’ll need something better.

    But how do you proceduralize moderation? Even though it will raise operation costs, it might be necessary to host our own AI on the back end of each opted-in instance, and provide the tools to train it on content that the admins of that instance find objectionable.
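
    As a back-of-the-envelope sketch of that training loop, assuming nothing fancier than an off-the-shelf scikit-learn text classifier (a real deployment would more likely fine-tune a language model):

    ```python
    # Train a per-instance filter on comments the admins have flagged.
    # The data here is a stand-in; each instance would supply its own.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Labeled by this instance's admins: 1 = objectionable here, 0 = fine.
    comments = ["example spam comment", "a normal reply", "another fine reply"]
    labels = [1, 0, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(comments, labels)

    def objectionable_probability(comment: str) -> float:
        """Probability this comment violates *this* instance's norms."""
        return model.predict_proba([comment])[0][1]
    ```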

    There would be growing pains of course, where some of our comments are held for review by participating moderators, who are themselves selected by an AI trained on content the admins of the instance find to be exceptional. And it would help to label and share the model weights we produce from this, so a new instance could gain access to a common model and quickly select a few things they don’t want in their instance, even giving them the ability to automatically generate a set of rules based on the options they selected when building the AI for their instance.

    It would take some time for all the instances to figure out which groups they do and don’t want to connect with, both in terms of internal users and external instances. I think you’d end up with two distinct clumps of communities that openly communicate within their own clump, with a bigger, blurrier clump of centrists between them, with whom most communities communicate. But on either side there would almost certainly be tiny factions clumped together, who don’t communicate with most of the centrist groups, on the basis that those groups communicate with the other side. And there will always be private groups as well, some of which may choose their privacy on the basis that they refuse to communicate with any group that communicates with the centrist cloud.

    And in most of our minds, the two groups in question are probably political, but I think a similar pattern will play out in any sufficiently large network of loosely federated instances, even if the spectrum is what side of a sports rivalry you’re on. If we get to the point where there’s an instance or more in almost every household, we may be able to see these kinds of networks form in realtime.

    But the question I can’t seem to answer: Is it good? Or rather, is it good enough?

    People always think of what they would do if they had a time machine and could go back and “change things.” But in terms of federated social media, we already are back, almost at the start. So, if we’re going to think of a better way, now would be a good time.

    If we start to see a high degree of polarization among the instances of lemmy, what is the right thing to do about that? To all turn our backs, take our content and go home, make sure they have to have accounts on our side to see it, and if they ever make a subversive comment on our side of the fence, it’s removed before a human can ever see it, only spot-checked occasionally to make sure the bot is not being too harsh? Because that is one way of doing it, and maybe it’s the right way. If we train the AI well enough. Which depends on many of us doing that well enough across many instances. Maybe that is how you defeat Nazis, to make sure they can only talk about Nazi things in a boring wasteland of their own design.

    But I worry. Once instances are better networked, becoming more about quantity than size, and billionaires are able to set up “instance farms” where AI bots try to influence the rest of the fediverse en masse, will we be ready to head it off? Or, similar to how we can’t see the Nazis crawling out from their wasteland to get higher quality memes, will we end up palling around with the bots designed to make our society trend toward slavery while their energy consumption raises the cost of the electricity we have to work for? Of course, if the bots do end up more convincingly human than humans can ever be, who am I to say they don’t deserve a larger cut of our power?




  • This splintering of communities can be a drawback, but it can also be a blessing. Instead of having one account where I do all my social media things, I’ve been categorizing the types of social media I enjoy and creating an account for each category, on the instance that feels closest to that type of media. It’s kind of nice because I know exactly what kind of content subscriptions I’m going to see when I switch to each account. It’s also nice to be able to comment on things and know that people who look at my history will see comments on similar topics. Someone’s opinion on my comments about politics, for example, won’t be colored by my recent comments about extraterrestrials in a different community.

    There is some risk of being part of a community that might disappear someday, or become something you don’t like, but that’s a risk present in all social media. As another commenter mentioned, the advantage here is that you can set up your own instance where you can control your own data. It’s actually going to be beneficial that a lot of people do this, so that the fediverse as a whole can handle everyone’s traffic without operation costs ballooning beyond control for any individual instance.

    But a consequence of this is the creation of many small communities about the same topics, spread across many instances. I think we will need to create some method of federating many communities across many instances in a categorical way. For example, if I want to see all communities about cooking across all instances, there would need to be some decentralized method of tagging communities by topic. That way you don’t have to decide which community is most representative of what you want to see. And there could be many tags for each community, so if I want to see only cooking videos featuring vegan food, there may be a community that ranks high in all of those tags.

    Instead of subscribing to the community itself, you would just subscribe to the tag, creating a virtual subscription to all the contained communities. You’d be able to see all the communities for your selected topic(s) across the whole lemmyverse. And if you see a community that you think does not belong to something that it’s been tagged with, you can unsubscribe it from the tag so it doesn’t show in that list for you. If more people do the same, that community would fall in ranking on that tag list until eventually it is taken off. But if people upvote content from that community more than communities higher in the ranking, that community would rise in the tag list.
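
    Mechanically, the “virtual subscription” could be as simple as resolving the tag’s current rankings at read time, with votes and untagging nudging scores up or down. A toy sketch (all names hypothetical):

    ```python
    # Toy model of tag-based virtual subscriptions (not lemmy's API).
    # Your feed is whatever communities currently rank high enough on
    # the tags you follow; votes move the rankings over time.
    tag_scores = {  # tag -> {community: ranking score}
        "cooking": {"cooking@lemmy.world": 120, "veganrecipes@lemmy.ml": 85},
    }

    DELIST_THRESHOLD = 0  # communities at or below this drop off the list

    def feed_for(tag: str) -> list[str]:
        """Communities in the tag, best-ranked first, delisted ones dropped."""
        scores = tag_scores.get(tag, {})
        return sorted((c for c, s in scores.items() if s > DELIST_THRESHOLD),
                      key=lambda c: -scores[c])

    def untag(tag: str, community: str) -> None:
        """A subscriber says this community doesn't belong; its rank drops."""
        tag_scores[tag][community] -= 1

    def upvote_content(tag: str, community: str) -> None:
        """Upvoted posts pull their community up the tag's ranking."""
        tag_scores[tag][community] += 1
    ```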

    I’m not sure if others would be interested in a system like this, but in my mind, it is the kind of thing we need to have rich curated content at low cost. Okay, I’m done now.