There are 100 prisoners in solitary cells. There's a central living room with one light bulb; this bulb is initially off. No prisoner can see the light bulb from his or her own cell. Every day, the warden picks a prisoner equally at random, and that prisoner visits the living room. While there, the prisoner can toggle the bulb if he or she wishes. Also, the prisoner has the option of asserting that all 100 prisoners have been to the living room by now. If this assertion is false, all 100 prisoners are shot. However, if it is indeed true, all prisoners are set free and inducted into MENSA, since the world could always use more smart people. Thus, the assertion should only be made if the prisoner is 100% certain of its validity. The prisoners are allowed to get together one night in the courtyard, to discuss a plan. What plan should they agree on, so that eventually, someone will make a correct assertion?
Would you have chosen Python, like this guy here, as the language of choice for playing with this puzzle?
Let's switch this problem into computer terms:
Given 100 data items in memory and a 1-bit register, create a process that chooses each of the data items at random from memory. This process can only toggle the register; it has no other storage. When all 100 data items have been chosen, call the exit routine.
Or perhaps, we can make this a little bit simpler:
Count to 100 using a 1-bit register.
I would have to be a pretty dedicated Python fan to think it could help me with this one... ;-)
(Addendum: having looked at his solution, the trick seems to be that there is hidden storage, the prisoners' memories, to make it work. But I think that is cheating. ;-) )
The prisoners, obviously, have their own memory, and have reasoning ability.
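(For anyone who hasn't seen it: the solution being hinted at is, I believe, the standard "designated counter" protocol, which leans on exactly that per-prisoner memory. Here is a rough sketch in Haskell; all the names are mine, and the warden's random choices are abstracted into an explicit schedule so the sketch stays deterministic and dependency-free:)

```haskell
-- My reconstruction of the usual "designated counter" protocol:
-- prisoner 0 is the counter; everyone else turns the bulb on exactly
-- once in their life, and only when they find it off. The counter
-- turns it off whenever he finds it on, and announces on the 99th time.

data St = St
  { bulbOn     :: Bool
  , tally      :: Int     -- "on"s the counter has cleared so far
  , hasFlipped :: [Bool]  -- per prisoner: have I used my one flip?
  }

-- One visit by prisoner i: (announce?, new state).
visit :: Int -> St -> (Bool, St)
visit 0 s
  | bulbOn s  = let t = tally s + 1
                in (t == 99, s { bulbOn = False, tally = t })
  | otherwise = (False, s)
visit i s
  | not (bulbOn s) && not (hasFlipped s !! i) =
      (False, s { bulbOn = True
                , hasFlipped = [ f || j == i
                               | (f, j) <- zip (hasFlipped s) [0 ..] ] })
  | otherwise = (False, s)

-- Play out a visiting schedule (the warden's choices) until someone
-- announces; returns (visits made, had all 100 really been in the room?).
simulate :: [Int] -> (Int, Bool)
simulate schedule = go schedule (St False 0 (replicate 100 False))
                       (replicate 100 False) 0
  where
    go []       _ _    _    = error "schedule ended before any announcement"
    go (i:rest) s seen days =
      let seen'      = [ v || j == i | (v, j) <- zip seen [0 ..] ]
          (done, s') = visit i s
      in if done then (days + 1, and seen') else go rest s' seen' (days + 1)
```

The second component of the result checks the "all 100 have visited" claim independently of the strategy's own bookkeeping, so a buggy strategy would be caught announcing early.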
Is this obvious? Compare this with a similar problem such as the Dining Philosophers.
If we assume the philosophers have normal communicative abilities, the problem goes away, because they can simply negotiate a solution with their neighbour.
But the point of the problem is to model a computational situation where this solution is not possible or adds further complexity of its own.
A personal pet peeve: I think these kinds of problems should clearly state all of their assumptions. Relying on the hearer to decide what other facts are allowable or not brings in "human factors". ;-)
I would have chosen a constraint language like Oz or SICStus Prolog.
See, e.g., the "Self-Referential Aptitude Test" in section 8.3 of the Mozart Finite Domain Tutorial for a good example of the kinds of weird problems that constraints can solve easily.
the problem is that you read the question and assume that is all the state available. so immediately, you (i) think - this is crazy! impossible with so little state!
but of course the other processes have state. nothing in the problem says otherwise. it is only the communication channel that is restricted.
abstractions like the idea of state make life so much simpler for us, in so many cases, that we take it for granted that we can always apply the abstraction (in its obvious form). sometimes things are more complex.
someone who knew nothing of computer science would not immediately discard this problem as insoluble. this is why i felt so stupid :o(
have you heard the one about the surgeon that is so upset they cannot operate on a boy to save his life? the boy is the surgeon's son. yet the boy was injured in the same accident that killed his father...
if you'd only been born 25 years earlier, you could have solved the frame problem, prevented the ai winter, and we would all be happily using lisp!
in a situation like that, it's quite likely that prisoners will become depressed, confused and possibly suicidal. an incorrect switching on - either accidental or not - would be disastrous.
is there a more robust solution? or does a single bit of communication imply no possibility of error correction?
Non-counters might get suicidal and decide to end it all, for themselves and all their comrades, by saying that everyone has been in the room. But I don't think that is calculable in the context of the problem.
Boy, if I let that worry me, I'd never post in public. ;-)
Even when I think I have a solid case, there is always someone who thinks it is dumb... ;-)
Well, I don't know about the Lisp part, but it has seemed rather obvious to me for a long time that if the AI gang had paid a little more attention to the great logicians of the early 20th century (in both their successes and failures), they wouldn't have been so sanguine about HAL 9000 being just around the corner. ;-)
The problem doesn't mention using a computer or a programming language.
Michael (also takes apart Rubik's cubes and puts them together again)
type Bulb = Bool
type Step = Int
type Prisoner = Step -> Bulb -> Action
data Action = Announce | Continue Bulb Prisoner
Prisoner behaviours can be specified concisely and cleanly, and when prisoners change roles, they just modulate to a different prisoner function.
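To make that concrete, here is how the standard counter protocol (my own reconstruction, not necessarily the poster's code) drops into those types; note how the follower literally modulates into a passive prisoner after spending its one flip. The declarations are repeated so the fragment stands alone:

```haskell
type Bulb = Bool
type Step = Int
type Prisoner = Step -> Bulb -> Action
data Action = Announce | Continue Bulb Prisoner

-- Designated counter: the tally lives in the closure. He switches the
-- bulb off each time he finds it on, and announces on the 99th time.
counter :: Int -> Prisoner
counter n _ bulb
  | bulb && n + 1 == 99 = Announce
  | bulb                = Continue False (counter (n + 1))
  | otherwise           = Continue bulb (counter n)

-- Follower: switches the bulb on exactly once, then modulates into
-- the passive role.
follower :: Prisoner
follower _ bulb
  | not bulb  = Continue True passive
  | otherwise = Continue bulb follower

-- Passive prisoner: never touches anything again.
passive :: Prisoner
passive _ bulb = Continue bulb passive
```

Driven by any fair schedule, the counter announces only after 99 distinct followers have each burned their single flip, so the assertion is safe by construction.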
In Haskell, you get the benefit that it's impossible to write a solution which accidentally cheats -- the type system enforces the rules of the prison (modulo unsafePerformIO).
# Return true iff the prisoners built by makePrisoner successfully
# solve the puzzle.
def puzzle(makePrisoner :Frozen, rng) :boolean {
    # A list of 100 prisoners
    def prisoners := map(def _(_) :any { makePrisoner() }, 1..100)
    # The light bulb
    var bulbOn := false
    def bulb {
        to isOn() :boolean { bulbOn }
        to toggle() :void { bulbOn := !bulbOn }
    }
    # Set of prisoners not yet taken to the room
    var unselected := prisoners.asSet()
    while (true) {
        # Pick a random prisoner.
        def prisoner := rng.pick(prisoners)
        unselected := unselected.remove(prisoner)
        # Have them visit the lightbulb and tell whether they think we're done.
        if (callWithOneShot(prisoner, bulb)) {
            return unselected.isEmpty()
        }
    }
}

# Return fn(argument), but supplying a one-use proxy instead of the
# argument directly. Revoking the proxy corresponds to taking the
# prisoner back to their cell.
def callWithOneShot(fn, argument) :any {
    def [proxy, revoker] := makeRevoker(argument)
    def result := fn(proxy)
    revoker.revoke("Attempt to reuse one-shot argument")
    result
}
I ended up considering a similar approach to extend my Haskell framework. While implementing some experimental prisoners, I started using (functional) arrays for internal state. It occurred to me that it would be more efficient to allow prisoners to keep their state in the ST monad (for fast array update), but then the ST type 'leaks' out into the type of the prisoner, and if used naively could allow the prisoners to share a covert channel.
It occurred to me to use rank-2 polymorphism, just as in runST. If the run_prison function requires a list of polymorphic (in the state token) make_prisoner functions, then the generated prisoners cannot communicate, even if they are all threaded into the same ST monad.
There have been some interesting recent papers exploring the relationships between cryptographic/capability security and parametric polymorphism -- it's neat to see it come up again in a concrete context.
I'd be interested in the Haskell with rank-2 polymorphism -- sounds fancier than anything I've done in that language yet.