In an essay on the justifications offered by authorities in the EU and around the globe for seeking to break end-to-end encryption in order to fight child sexual abuse and exploitation, researcher Susan Landau discusses the issue in its historical context and explains why breaking encryption leads us in the wrong direction.

“Think differently. Think long term. Think about protecting the privacy and security of all members of society—children and adults alike. By failing to consider the big picture, the U.K. Online Safety Act has taken a dangerous, short-term approach to a complex societal problem. The EU and U.S. have the chance to avoid the U.K.’s folly; they should do so. The EU proposal and the U.S. bills are not sensible ways to approach the public policy concerns of online abetting of CSAE [Child Sexual Abuse and Exploitation]. Nor are these reasonable approaches in view of the cyber threats our society faces. The bills should be abandoned, and we should pursue other ways of protecting both children and adults.”


    • jet@hackertalks.com · 8 months ago

      Cool! Can you point out the falsifiable hypothesis and the experiment conducted in this article?

      • Natanael · 8 months ago

        False positives (images that look nothing alike but have very similar or even the same perceptual hashes) are also possible. This leaves an opening for mischief, and worse. It is unfortunately too easy to arrange for, say, a candidate for elective office to receive a photo that looks innocuous, store it, and only later learn that the photo triggered a law enforcement alert because its perceptual hash was the same as that of known CSAM. Damage would be high and may not go away (recall Pizzagate).

        Would such an “attack” be feasible? Yes. Shortly after a researcher published the code used in Apple’s NeuralHash, an Intel researcher produced a hash “collision”: two images that look nothing alike but have the same perceptual hash. Such capabilities are available not just to researchers but to others, especially those with an incentive to cause problems. As computer scientists Carmela Troncoso and Bart Preneel observed, “In the arms race to develop such detection technologies, the bad guys will win: scientists have repeatedly shown that it is easy to evade detection and frame innocent citizens.”
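        As a toy illustration, here is a minimal average-hash (“aHash”) style perceptual hash in Python. It is nothing like Apple’s actual NeuralHash, and the two sample “images” are made up, but it shows the underlying reason collisions are always possible: quantizing an image down to a short bit pattern maps many distinct images onto the same hash.

        ```python
        # Toy average-hash ("aHash") sketch: each pixel becomes one bit,
        # 1 if brighter than the image's mean, else 0. Real perceptual
        # hashes (PhotoDNA, NeuralHash) are far more sophisticated, but
        # they share this many-to-one structure, which is what makes
        # collisions possible in principle.

        def average_hash(pixels):
            """Hash a grayscale image given as a list of rows of 0-255 ints."""
            flat = [p for row in pixels for p in row]
            mean = sum(flat) / len(flat)
            return tuple(1 if p > mean else 0 for p in flat)

        def hamming(h1, h2):
            """Count differing bits; a small distance means 'same image'."""
            return sum(a != b for a, b in zip(h1, h2))

        # Two 4x4 "images" with very different pixel values...
        img_a = [[200, 10, 10, 200],
                 [10, 200, 200, 10],
                 [10, 200, 200, 10],
                 [200, 10, 10, 200]]

        # ...that hash identically, because only the brighter-than-average
        # pattern survives the quantization step.
        img_b = [[130, 90, 90, 130],
                 [90, 130, 130, 90],
                 [90, 130, 130, 90],
                 [130, 90, 90, 130]]

        assert average_hash(img_a) == average_hash(img_b)          # a collision
        print(hamming(average_hash(img_a), average_hash(img_b)))   # prints 0
        ```

        Finding a collision against a real perceptual hash takes more effort than this, but the NeuralHash collision mentioned above shows it is practical.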

        Other proposed techniques for recognizing CSAE, including previously unseen material, rely on machine learning. But as my co-authors and I discussed in “Bugs in our Pockets,” false positives and false negatives are a problem here too.

        This is grounded in information theory. Without a perfect, context-aware classifier (i.e., one that will not report you for sending your family doctor a photo of your child’s medical condition) with perfect integrity protection, the problem is impossible to solve. Any other solution means that it can either be evaded or be abused to spy on innocent people. There is no circumventing this basic fact.
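        To make the false-positive problem concrete, here is a quick back-of-the-envelope Bayes calculation. All the numbers are hypothetical; the point is only that when the base rate of abusive content is tiny, even a very accurate classifier produces flags that are overwhelmingly false alarms.

        ```python
        # Hypothetical base-rate arithmetic for client-side scanning.
        # None of these numbers come from a real deployment; they only
        # illustrate how a low base rate swamps classifier accuracy.

        daily_messages = 10_000_000_000  # assumed messages scanned per day
        abuse_rate = 1e-7                # assumed fraction that are CSAE
        tpr = 0.99                       # assumed true-positive rate
        fpr = 0.001                      # assumed false-positive rate (99.9% specificity)

        true_hits = daily_messages * abuse_rate * tpr
        false_hits = daily_messages * (1 - abuse_rate) * fpr
        precision = true_hits / (true_hits + false_hits)

        print(f"flags per day: {true_hits + false_hits:,.0f}")  # ~10 million
        print(f"precision:     {precision:.3%}")                # ~0.010%
        ```

        Under these assumptions roughly ten million messages get flagged every day, and about 99.99% of them are innocent, which is exactly the evade-or-abuse dilemma described above.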

        • jet@hackertalks.com · 8 months ago

          Pointing out opinions, and flaws in a plan according to those opinions, is useful, and it’s a good exercise. But it’s not science.

          Getting access to the world’s data isn’t about protecting anybody; it’s about getting access to the data. The excuse is just an excuse. But that’s just my opinion.