• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • AIs don’t judge, don’t remember and don’t hold anything against me, so I’d rather have an AI screening my stuff than a human - especially my superiors.

    And yes, I trust an AI I run myself. I know they don’t phone home (because they literally can’t) and don’t remember anything unless I go through the effort to connect something like a Chroma or Weaviate vector database, which I then also host and manage myself. The beauty of open source. I would certainly never accept using GPT-4 or Bard or some other 3rd party cloud solution for something this sensitive.
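    The "memory" part mentioned above is conceptually simple: a local vector store just embeds text and does nearest-neighbour lookups on your own machine. A toy stdlib-only sketch of that idea, using bag-of-words cosine similarity as a stand-in for a real embedding model (the class and names are my own, not Chroma's or Weaviate's actual API):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    """Minimal stand-in for a self-hosted vector DB like Chroma or Weaviate."""
    def __init__(self):
        self.docs = []

    def add(self, text: str):
        self.docs.append((text, embed(text)))

    def query(self, text: str, n_results: int = 1):
        q = embed(text)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:n_results]]

store = ToyVectorStore()
store.add("the build failed on the CI runner")
store.add("my favorite editor theme is gruvbox")
print(store.query("why did the CI build fail?"))
```

    Everything here stays in-process, which is the point: nothing leaves the machine unless you explicitly wire it up to.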


  • The idea is to monitor internal communications and do sentiment analysis to check if developers are toxic, too stressed or burned out. While the tech in general could of course be abused, the general idea sounds pretty good, as long as the AI is on-prem for privacy reasons and the employer is transparent and honest about it. Making sure employees are healthy, happy and productive sounds like a worthwhile goal. I wouldn’t want a human therapist monitoring communications to look for negative signs, but the AI can screen stuff, focus exclusively on what it was told to, and forget everything on command.
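    In practice that kind of screening would run a local language model or classifier, but the flagging logic can be sketched with a trivial keyword scorer (word lists and function names entirely made up, for illustration only):

```python
# Toy sentiment scorer; a real on-prem setup would use a local model,
# but the "flag for follow-up" logic looks roughly like this.
NEGATIVE = {"burned", "exhausted", "hate", "stressed", "toxic", "useless"}
POSITIVE = {"great", "thanks", "happy", "nice", "love"}

def sentiment_score(message: str) -> int:
    """Positive minus negative keyword hits; below zero suggests a negative tone."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def flag_if_negative(messages: list[str], threshold: int = 0) -> list[str]:
    """Return messages scoring below the threshold for a (human) follow-up."""
    return [m for m in messages if sentiment_score(m) < threshold]

msgs = [
    "Thanks, the review was great!",
    "I'm completely exhausted and stressed by this sprint.",
]
print(flag_if_negative(msgs))
```

    Note the design: the scorer only surfaces messages, it doesn't store them, which matches the "screen, then forget on command" property described above.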


  • SDXL 0.9 seems absolutely amazing so far. It’s so much better at following instructions than any other SD foundation model it’s not even funny, and it can do tons of stuff out-of-the-box that would require at least an embedding with SD1.5. One thing I immediately noticed is that it handles color instructions properly most of the time. You can define tons of object colors, and it’ll usually restrict each color to the objects you specified, or at most to ones whose color you left undefined. I also tried things like “character in a dirty environment”: SD1.5 and its finetunes would often make the character dirty, while SDXL follows the instruction properly. Incredible potential.

    When it comes to the refiner, I found that the recommended(?) 0.25 strength works well for environments and such, but for characters it should be dialed way down. I still use it, at around 0.05, and that seems to do the trick. Even at such a low strength it still does what it’s supposed to, with a profound effect on fine detail like hair, but it no longer overrides the base generation nearly as much.
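    A rough intuition for why 0.05 still has an effect: in typical img2img-style refinement, the strength roughly controls what fraction of the denoising schedule gets re-run on the base image, so even a tiny strength touches the final steps, which is where fine detail like hair gets resolved. A sketch of that relationship (my own simplification, not SDXL’s actual code):

```python
def effective_refiner_steps(num_inference_steps: int, strength: float) -> int:
    """Approximate number of denoising steps an img2img-style refiner pass
    re-runs: the last `strength` fraction of the schedule (minimum 1, so
    even tiny strengths still touch the final, detail-heavy steps)."""
    return max(1, int(num_inference_steps * strength))

for s in (0.25, 0.05):
    print(s, effective_refiner_steps(30, s))
```

    At 0.25 the refiner redoes a sizeable chunk of the schedule and can visibly reshape the image; at 0.05 it only revisits the very end, which is why faces survive but hair still sharpens.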



  • “Indeed, when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works—something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works,” the complaint reads.

    Or, hear me out for a minute, if critiques, summaries or discussions about the works were in the training data. Unless the authors want to claim nobody ever talks about their works on the internet…

    That’s the thing with AI: Unless the model creator provides a complete breakdown of the training material, as Llama, RedPajama or Stable Diffusion do for example, it’s basically impossible to prove what exactly is or isn’t in the training dataset.




  • There are also Servo and WebKit. Servo was kinda dead for a while, but the project was recently transferred to the Linux Foundation and revived by Igalia, with funding from Futurewei. Not suitable for daily use yet, but worth keeping an eye on. WebKit is of course used by Safari (which I guess makes it the second most used browser engine after Chromium’s), but also Epiphany on Linux. I’m not aware of any Windows browsers using WebKit. Fun fact: Chromium’s Blink engine was forked from WebKit, which in turn was forked from KDE’s KHTML and KJS engines.