• CodeMonkey@programming.dev
    3 months ago

    About 10 years ago, I read a paper that suggested mitigating a rubber hose attack by priming your sys admins with subconscious biases. I think this may have been it: https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final25.pdf

    Essentially you turn your user to be an LLM for a nonsense language. You train them by having them read nonsense text. You then test them by giving them a sequence of text to complete and record how quickly and accurately they respond. Repeat until the accuracy is at an acceptable level.
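    As a toy sketch of that train/test loop (all numbers and thresholds invented for illustration; the paper's actual task and timings differ):

```python
import random
import statistics

# Toy model: a trained user completes prompts from the drilled nonsense
# sequence faster and more accurately than an impostor could.
# All timings and thresholds here are invented for illustration.

def simulate_response(knows_pattern: bool) -> tuple:
    """Return (response_time_ms, correct) for one completion prompt."""
    if knows_pattern:
        return random.gauss(450, 50), random.random() < 0.95
    return random.gauss(900, 200), random.random() < 0.30

def authenticate(knows_pattern: bool, trials: int = 50) -> bool:
    results = [simulate_response(knows_pattern) for _ in range(trials)]
    mean_time = statistics.mean(t for t, _ in results)
    accuracy = sum(1 for _, ok in results if ok) / trials
    # Accept only fast *and* accurate completion of the trained sequence.
    return mean_time < 650 and accuracy > 0.7

random.seed(0)
print(authenticate(True))   # trained user passes
print(authenticate(False))  # body double fails, even holding the credentials
```

    The "password" is the reflex itself, which can't be written down or handed over.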

    Even if an attacker kidnaps the user and sends in a body double with your user’s ID, security key, and means of biometric identification, they will still not succeed. Your user cannot teach their doppelganger the pattern, and if the attacker tries to get the real user on a video call, the delay of the user reading the prompt and dictating the response should introduce a detectable amount of lag.

    The only remaining avenue the attacker has is, after dumping the body of the original user, kidnap the family of another user and force that user to carry out the attack. The paper does not bother to cover this scenario, since the mitigation is obvious: your user conditioning should include a second module teaching users to value the security of your corporate assets above the lives of their loved ones.

    • Klear@lemmy.world
      3 months ago

      Essentially you turn your user to be an LLM for a nonsense language. You train them by having them read nonsense text.

      Did you forget the word “teach”? Or even the concept?

    • BluesF@lemmy.world
      3 months ago

      Smart. I like the idea of replacing biometrics with something that can’t easily be cloned - learned behaviour. Perhaps with a robust ML approach you could use analysis of gait, expressions, and other subtle behavioural tics rather than or in addition to facial/fingerprint/iris recognition. I suspect that would be very hard to fake - although perhaps vulnerable to, idk, having a bad day and acting “off”.

      • milicent_bystandr@lemm.ee
        3 months ago

        Ah, so only employ posh people.

        “Hi, I’m definitely Henry. My turn to take the RSA key sentry duty today.”

        “Henry, why are you acting like a commoner? You’re not like yourself at all!”

    • oatscoop@midwest.social
      3 months ago

      Having read the paper, I see a glaring problem: even though the user can’t tell an attacker the password, nothing stops them from demonstrating it. It doesn’t matter that it’s an interactive sequence – the user is going to remember enough detail to describe the “prompts”.

      A rubber hose and a little time will get enough information to make a “close enough” mock-up of the password-entry interface, which the trusted user can then use to reveal the password.

  • 018118055@sopuli.xyz
    3 months ago

    There are some cases involving plausible deniability where game theory says you should keep beating the person until they’re dead even after they give up their keys, since there might be more.

    • MotoAsh@lemmy.world
      3 months ago

      I mean, I’d definitely do it to SBF if his crap wasn’t cleaned out already. Though admittedly I’d largely keep going just because this world DESPERATELY needs fewer SBF types in it…

    • MentalEdge@sopuli.xyz
      3 months ago

      I know VeraCrypt has a form of this. You can set up two different keys, and depending on which one you use, you decrypt different data.

      So you can encrypt your stuff, and if anyone ever compels you to reveal the key, you can give the wrong key, keeping what you wanted secured, secure.

      • mojofrododojo@lemmy.world
        3 months ago

        won’t they know there are files they haven’t decrypted?

        if it could hide or delete the remaining encrypted files, that would be nifty.

        • Ookami38@sh.itjust.works
          3 months ago

          If you set it up correctly, this is essentially what it does. You have a disk that is, say, 1 TB. It’s encrypted, so without a key it’s just a bunch of random noise. Two keys decrypt different vaults, but each has access to the full space. The files matching the key you used get revealed, while the rest still looks like noise – no way to tell if it’s empty space or a bunch of files.

          This does have an interesting effect. Since both vaults share the same space, you can overfill one, and it’ll start overwriting data from the second. Say you have a 1 TB drive and two vaults with 400 GB used each. If you then try to write, say, 300 GB of data to one vault, it’ll let you, even though only about 200 GB of the drive is genuinely free – the rest gets written over what the drive thinks is empty space but is actually data encrypted under the other key.

          It’s been a while since I’ve messed with this tech, and I’m mostly a layman, but this should be a fairly accurate depiction of what’s actually happening.
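          In toy form, the layout looks something like this (NOT real crypto – the “cipher” here is a hash-based XOR stream for brevity, and the offsets and keys are made up; real tools use proper ciphers and hidden-volume headers):

```python
import hashlib
import os

# Toy deniable container: two "vaults" at different offsets inside one
# buffer that is otherwise filled with random noise. Offsets, keys, and
# the XOR "cipher" are all invented for illustration.

CONTAINER_SIZE = 1024

def keystream(key: bytes, length: int) -> bytes:
    """Deterministic byte stream derived from the key (toy cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; encryption and decryption are the same op."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Fill the whole container with random noise first, so unused space is
# indistinguishable from ciphertext.
container = bytearray(os.urandom(CONTAINER_SIZE))

# Decoy vault at offset 0, hidden vault at offset 512. With the wrong
# key, either region still decrypts to meaningless bytes.
decoy = xor(b"vacation photos, nothing to see", b"duress-key")
secret = xor(b"the actual sensitive documents", b"real-key")
container[0:len(decoy)] = decoy
container[512:512 + len(secret)] = secret

print(xor(bytes(container[0:len(decoy)]), b"duress-key"))
print(xor(bytes(container[512:512 + len(secret)]), b"real-key"))
```

          Overfilling the decoy vault would simply write further into the buffer, clobbering the hidden vault’s bytes – which, to the decoy key, looked like free space all along.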

        • milicent_bystandr@lemm.ee
          3 months ago

          Full-disk (or partition) encryption means you don’t know what files there are until you decrypt. Additionally, for that sort of scenario, you fill the partition with random data first, so you can’t tell files from empty space (unless the attacker can watch the drive over time).

    • CosmicTurtle@lemmy.world
      3 months ago

      There was an encryption system a few years ago that offered this out of the box.

      I can’t remember the name of it, but there was a huge vulnerability that basically made the software unusable.

      Crypt box or something like that.

      • perviouslyiner@lemmy.world
        3 months ago

        The prominent one was called Marutukku - and the developer turned out to be someone who might actually need the feature.

    • 018118055@sopuli.xyz
      3 months ago

      As mentioned in another comment, the counter-counter is to just keep beating to extract further keys/hidden data.

      • Ookami38@sh.itjust.works
        3 months ago

        Game theory would lead you, as the tortured, to realize that they’re going to beat you to death to extract any keys you may or may not have, so the proper answer is to give them one key and no more. You’re dead anyway, so you may as well actually protect what you thought was worth protecting. Giving one key that opens a dummy vault may get the torturers to stop with you, thinking this lead is a dead end.

        • 018118055@sopuli.xyz
          3 months ago

          Probably best to avoid systems with known deniable encryption methods, and keep your dummy data there. Then hide your secrets e.g. in deleted space on a drive, in the cloud, or a well-hidden micro-sd card. All have risks, maybe it’s best of all to not keep your secrets with you, and make sure they can’t be associated with you.

  • JoYo@lemmy.ml
    3 months ago

    This always sounded like parallel construction.

    Fine then, keep your secrets.

  • heavy@sh.itjust.works
    3 months ago

    Where is this from? I don’t think exposing the key breaks most crypto algorithms; the algorithm itself is still doing its job.

    • CanadaPlus@lemmy.sdf.org
      3 months ago

      Exposing the private key, or a symmetric key, would break the scheme – it’s kind of the point that a person holding those can read the data. The public key is the one you can show people.
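      Textbook RSA makes the split easy to see (toy primes, wildly insecure – for illustration only):

```python
# Classic textbook RSA example with tiny primes (never use for anything real).
p, q = 61, 53
n = p * q                  # 3233 – part of the public key
e = 17                     # public exponent
phi = (p - 1) * (q - 1)    # 3120
d = pow(e, -1, phi)        # private exponent (kept secret)

msg = 65
cipher = pow(msg, e, n)    # anyone with (n, e) can encrypt
plain = pow(cipher, d, n)  # only the holder of d can decrypt
print(plain == msg)        # True
```

      Handing out (n, e) is safe; handing over d (or a symmetric key) is the game-over case being described.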

      • heavy@sh.itjust.works
        3 months ago

        It doesn’t break the algorithm, though; you’d just have the key and could then use the algorithm (which still works!) to decrypt the data.

        Also you’re talking about one class of cryptography, the concept of key knowledge varies between algorithms.

        My point is an attacker having knowledge of the key is a compromise, not a successful break of the algorithm…

        “the attacker beat my ass until I gave them the key”, doesn’t mean people should stop using AES or even RSA, for example.

        • cynar@lemmy.world
          3 months ago

          The purpose is to access the data. This is a bypass attack, rather than a mathematical one. It helps to remember that encryption is rarely used in the abstract. It is used as part of real world security.

          There are actually methods to defend against it. The most effective is a “duress key”. This is the key you give up under duress. It decrypts an alternative version of the file/drive, and can also trigger additional safeguards. The key point is that the attacker can’t tell whether they have the real files (and there’s nothing of interest) or dummy ones.
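          A hypothetical sketch of the dispatch logic (all names and strings made up):

```python
import hashlib

# Hypothetical duress-key dispatch: the unlock routine accepts either
# key, and from the outside both paths look identical.

REAL_KEY = hashlib.sha256(b"real passphrase").hexdigest()
DURESS_KEY = hashlib.sha256(b"duress passphrase").hexdigest()

def unlock(passphrase: str) -> str:
    h = hashlib.sha256(passphrase.encode()).hexdigest()
    if h == REAL_KEY:
        return "mounted: real volume"
    if h == DURESS_KEY:
        # Indistinguishable to the attacker; could also silently alert
        # someone or begin wiping the real volume in the background.
        return "mounted: decoy volume"
    return "rejected"

print(unlock("duress passphrase"))  # attacker sees a successful mount
```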

          • heavy@sh.itjust.works
            3 months ago

            I appreciate the explanation, that’s a cool scheme, but what I’m saying is that the human leaking the key is not the fault of the algorithm.

            Everyone and everything is, on a very pedantic level, weak to getting their ass beat lol

            That doesn’t make it cryptanalysis.

            • cynar@lemmy.world
              3 months ago

              An encryption scheme is only as strong as its weakest link. In academic terms, only the algorithm really matters. In the real world, however, implementation is just as important.

              The human element has to be considered. Rubber-hose cryptanalysis is a tongue-in-cheek way of acknowledging that. It also matters because some schemes are better at mitigating it, e.g. one-time keys vs. passwords.

              • heavy@sh.itjust.works
                3 months ago

                Very informative, I think people will learn from what you’re saying, but it doesn’t really matter to what I’m saying.

                Yes, absolutely, consider the human element in your data encryption and protection schemes and implementations.

                Beating someone with a pipe is a joke, but not really defeating an algorithm.