Knapsacks

The knapsack problem—sometimes called the subset-sum problem—is to determine whether or not (or how) you can sum a subset of a set of integers (called “weights”) to reach some target s exactly. The idea is that s represents the size or capacity of the knapsack, and you're trying to figure out if it's possible to exactly fill the knapsack with various objects that have the given weights.

For example, suppose you have these parameters:

weights: [1, 2, 3, 4, 5]
s: 10

This particular problem has solutions, and they are:

[1, 2, 3, 4]
[1, 4, 5]
[2, 3, 5] 

All of these subsets sum to 10; that's what makes them solutions. This is called the 0-1 knapsack problem (really a special case of it), because each weight is either left out (0) or used once (1). Other versions of the problem allow reusing weights; forget about those for now.

Another way of thinking about this problem is that instead of looking for some “subset” of the weights that sum to the target, we're looking for a list of 0s and 1s to multiply by the weights—that is, we can “choose” a weight by multiplying by 1 and “leave out” a weight by multiplying by 0. In the above example, one of our solutions was [1, 4, 5]. Here's the other way of viewing that solution:

weights:  [1, 2, 3, 4, 5]
solution: [1, 0, 0, 1, 1]

result: (1*1) + (2*0) + (3*0) + (4*1) + (5*1) = 10 yippee

It's just another way of expressing the solution, but it will come in handy later. Also, I should mention: the weights aren't constrained to be in any particular order; I've sorted the example weights for clarity. The similar problem of weights: [1, 3, 5, 2, 4], s: 10 has solution [1, 0, 1, 0, 1], for example.
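If it helps to see that multiply-and-sum view in code, here's a tiny sketch (Python; the function name is made up) that checks a proposed list of 0s and 1s against the weights:

# Multiply each weight by its 0/1 choice, sum the results, and compare to the target.
def is_solution(weights, bits, s):
    return sum(w * b for w, b in zip(weights, bits)) == s

print(is_solution([1, 2, 3, 4, 5], [1, 0, 0, 1, 1], 10))  # True
print(is_solution([1, 3, 5, 2, 4], [1, 0, 1, 0, 1], 10))  # True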

The knapsack problem was considered Very Difficult™ in the 70s, and was proven to be NP-complete (i.e. “easy to confirm whether a proposed solution is valid, but may be prohibitively difficult to determine in the first place whether any solution exists”). For a while the general method of solving it was to try every combination of the weights to see if one combination sums to s (that's on the order of 2^n, where n is the number of weights). A ton of really clever algorithms have been thought up since then, but generally it's safe to say that the problem is still Pretty Darn Difficult™ when the number of weights and s are both big.
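To make the try-every-combination idea concrete, here's a minimal brute-force sketch in Python (my own naming; it enumerates all 2^n inclusion patterns, so it's only workable for tiny inputs):

from itertools import product

# Try every 0/1 pattern over the weights and keep the ones that sum to s.
def brute_force(weights, s):
    solutions = []
    for bits in product([0, 1], repeat=len(weights)):
        if sum(w * b for w, b in zip(weights, bits)) == s:
            solutions.append(list(bits))
    return solutions

print(brute_force([1, 2, 3, 4, 5], 10))
# [[0, 1, 1, 0, 1], [1, 0, 0, 1, 1], [1, 1, 1, 1, 0]]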

Solutions as Messages

There's a really interesting application (or attempted application) of the knapsack problem: cryptography! Remember above when we represented the solution as a list of 0s and 1s? 0s and 1s make up any binary data, so consider that the “solution” could actually be a secret message!

Let's unpack that a little. We start with a message: “x” will work (keep it short so the math doesn't get overwhelming). “x” in binary is one byte (8 bits): 01111000. This is just like a solution to some knapsack problem. Now imagine we choose some set of “weights”, like [59, 24, 96, 67, 79, 93, 21, 51]. We can do the knapsack problem backwards:

weights:  [59, 24, 96, 67, 79, 93, 21, 51] <-- our made up weights
solution: [0,  1,  1,  1,  1,  0,  0,  0] <-- "x" in binary

result: (59*0) + (24*1) + (96*1) + (67*1) + (79*1) + (93*0) + (21*0) + (51*0) = 266
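Here's that computation as a little Python sketch (the helper name is mine, and the weights are just the made-up ones above):

# Turn a character into 8 bits, then sum the weights sitting under the 1 bits.
def encode_char(ch, weights):
    bits = [int(b) for b in format(ord(ch), '08b')]  # 'x' -> [0, 1, 1, 1, 1, 0, 0, 0]
    return sum(w * b for w, b in zip(weights, bits))

print(encode_char('x', [59, 24, 96, 67, 79, 93, 21, 51]))  # 266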

The idea is that if Alice generates these weights and publicizes them, Bob could encode a message (“x”) using the above method, and send 266 to Alice. An eavesdropper would see 266 and have to solve the knapsack problem to find the message/solution: 01111000 (which is “x”). Assuming the knapsack problem is hard, the eavesdropper should be stuck and Bob's message to Alice should remain secret.

But, obviously, this means it would also be hard for Alice to decrypt the message :( The knapsack problem doesn't magically become easier for Alice to solve just because she is the intended recipient. So this scheme by itself is a non-starter. Cryptosystems have to be easy to use (for Alice), but hard to break (by eavesdroppers).

As it turns out, some knapsack problems are Pretty Easy™ to solve. If the weights form a superincreasing sequence—that is, each number of the sequence is larger than the sum of all the preceding elements—then it's actually really easy to solve.

Here is the classic example of a “superincreasing sequence”: powers of two.

[1, 2, 4, 8, 16, 32, 64]

(Sanity check: it's a superincreasing sequence because 2 > 1, 4 > (1 + 2), 8 > (1 + 2 + 4), etc.)
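That sanity check is easy to automate; here's a small Python sketch (function name is mine) that walks the sequence while keeping a running sum:

# A sequence is superincreasing if every element is larger than the sum
# of all the elements before it.
def is_superincreasing(seq):
    total = 0
    for x in seq:
        if x <= total:
            return False
        total += x
    return True

print(is_superincreasing([1, 2, 4, 8, 16, 32, 64]))          # True
print(is_superincreasing([59, 24, 96, 67, 79, 93, 21, 51]))  # False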

If our weights look like that, you can work out whether the last weight is included or dropped by asking “is the target greater than the sum of all the preceding weights?” If yes, the last weight must be included. If no, the last weight must not be included. Since you know the last weight's inclusion/exclusion for sure, you can ask the same question about the second-to-last weight, and so on, with a smaller and smaller target. Confusing in English... here's an example:

weights: [1, 2, 4, 8]
s: 13

1. Is 13 > 1+2+4 ? Yes, so 8 must be included. New target is 13-8.
  Solution so far: [?, ?, ?, 1]
2. Is 13-8 > 1+2 ? Yes, so 4 must be included. New target is 13-8-4.
  Solution so far: [?, ?, 1, 1]
3. Is 13-8-4 > 1 ? No, so 2 must not be included. Target doesn't change.
  Solution so far: [?, 0, 1, 1]
4. Is 13-8-4 > 0 ? Yes, so 1 must be included. New target is 13-8-4-1.
  Solution so far: [1, 0, 1, 1]

The last target (13-8-4-1) is 0, which means we're done. 
solution: [1, 0, 1, 1]
result: (1*1) + (2*0) + (4*1) + (8*1) = 13

(Eagle eyes will have noticed that when the weights are powers of two, the solution is simply the binary representation of the target with its bits reversed: reading [1, 0, 1, 1] from the largest weight down gives 1101, which is 13 in base 2.)
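Here's the same walk as a Python sketch (the function name is mine; it assumes the weights really are superincreasing and returns None if the target turns out to be unreachable):

# Walk the weights from largest to smallest, including a weight exactly when
# the remaining target exceeds the sum of everything before it, and shrinking
# the target each time a weight is included.
def solve_superincreasing(weights, s):
    bits = [0] * len(weights)
    remaining = s
    for i in range(len(weights) - 1, -1, -1):
        if remaining > sum(weights[:i]):
            bits[i] = 1
            remaining -= weights[i]
    return bits if remaining == 0 else None

print(solve_superincreasing([1, 2, 4, 8], 13))  # [1, 0, 1, 1]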

If the knapsack used to encrypt information had superincreasing weights, Alice would be able to decrypt the message easily. Of course that means the eavesdropper would be similarly empowered. So the idea of knapsack cryptosystems is to generate a private, easily-solved set of weights (superincreasing), and then apply some transformations to the weights, resulting in a hard-to-solve (not superincreasing) public set. People that want to send you a message encode it using the public set of weights; you then “reverse” the transformations and solve the easy knapsack problem to get the message.

A famous one (also a famously broken one) that's easy-ish to learn is the Merkle/Hellman knapsack described in their Hiding Information and Signatures in Trapdoor Knapsacks. Here is how the transformations of the weights work:

1. Alice generates a superincreasing sequence of pretty huge numbers (b1, b2, ..., bn)
2. She chooses big numbers M and W, where M is bigger than the sum of all the b's and the GCD of M and W is 1
3. She computes a1, a2, ..., an, where ai = bi * W (mod M)
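Here's a rough sketch of that key generation in Python (toy sizes and my own names; real parameters would be much larger):

from math import gcd
import random

# Generate a private superincreasing sequence b, a modulus M bigger than sum(b),
# a multiplier W coprime to M, and the public weights ai = bi * W mod M.
def generate_keys(n=8):
    b, total = [], 0
    for _ in range(n):
        x = random.randint(total + 1, 2 * total + 10)  # each term beats the running sum
        b.append(x)
        total += x
    M = random.randint(total + 1, 2 * total)  # M > sum(b), so mod M won't disturb decryption
    W = random.randint(2, M - 1)
    while gcd(W, M) != 1:
        W = random.randint(2, M - 1)
    a = [(bi * W) % M for bi in b]  # the public, no-longer-superincreasing weights
    return a, (b, M, W)  # (public weights, private trapdoor)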

And that's it. Bob can now encrypt the bits of some message using weights a1, a2, ..., an, and it will result in a hard-to-solve knapsack problem: the sum s, and the public set of weights [a1, a2, ..., an]. When Alice receives this, she can apply a reverse transformation on the sum s (s' = s * W^-1 (mod M)), and solve an easy knapsack problem: s' and [b1, b2, ..., bn] (weights that we already know to be superincreasing). This works because multiplying by W^-1 turns each chosen ai back into the corresponding bi, and since M is bigger than the sum of all the b's, reducing mod M doesn't scramble anything.
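And here's a round trip with small made-up numbers (Python; it reuses the solve_superincreasing sketch from earlier, and b, M, and W are toy values chosen only to keep the arithmetic small):

# Alice's private trapdoor: superincreasing b, modulus M > sum(b), W with gcd(W, M) = 1.
b = [2, 3, 7, 14, 30, 57, 120, 251]
M, W = 491, 41

# The public weights everyone sees (no longer superincreasing).
a = [(bi * W) % M for bi in b]  # [82, 123, 287, 83, 248, 373, 10, 471]

# Bob encrypts "x" (01111000) against the public weights.
bits = [0, 1, 1, 1, 1, 0, 0, 0]
s = sum(ai * x for ai, x in zip(a, bits))  # 741

# Alice reverses the transformation and solves the easy knapsack instead.
s_prime = (s * pow(W, -1, M)) % M  # s' = s * W^-1 (mod M) = 54
print(solve_superincreasing(b, s_prime))  # [0, 1, 1, 1, 1, 0, 0, 0] -> "x"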

This just blows my mind. It sucks that Merkle and Hellman's transformations have been demonstrably compromised; they recommended a couple of further things, like mixing up the order of the weights and adding some multiple of M to each weight, but that doesn't help. Others have followed up with more secure transformations, and I think there are a couple of schemes out there that have held up well enough to scrutiny. Regardless, it's just still so cool to hide a message inside what's basically a software developer whiteboard interview problem. (Also, that's too many nouns in a row, and 8 9 10 9 7 is interesting).

I made a small tool implementing Merkle and Hellman's transformations here: stripedpajamas/knapsack. As a follow-up, I might dig into the method that trounced Merkle and Hellman's cryptosystem from A Polynomial Time Algorithm for Breaking the Basic Merkle-Hellman Cryptosystem.

✌️

credit to Odlyzko for The Rise and Fall of Knapsack Cryptosystems; this post is basically the first couple pages of that paper