
09 January 2009

Meta-disclaimers

From Michael Filaseta's home page:
DISCLAIMER: "The views and opinions expressed in this page are strictly those of the page author. The contents of the page have not been reviewed or approved by the University of South Carolina."

DISCLAIMER OF DISCLAIMER: "The views and opinions expressed in the disclaimer above are strictly those of another entity different from the author of this page. Its presence on this page does not imply approval of its contents by the page author. The page author has however provided a link to a review of the above disclaimer."
Thanks, Kate!

03 December 2008

Logic as machine language

Gil Kalai mentions a metaphor I hadn't heard of before about the foundations of mathematics:
To borrow notions from computers, mathematical logic can be regarded as the “machine language” for mathematicians who usually use much higher languages and who do not worry about “compilation.”
Of course there would also be an analogue of the fact that certain computer languages are higher-level than others. To take an example dear to me, the theory of generating functions might be at a higher level than the various ad hoc combinatorial arguments it's often introduced to students as a replacement for. I don't want to press this metaphor too hard because it'll break -- I don't think there are analogues to particular computer languages. But feel free to disagree!

02 December 2008

You can't say you're a liar

From Family Guy:
"Chris, everything I say is a lie. Except that. And that. And that. And that. And that. And that. And that. And that."

11 April 2008

Two quotes from Principia Mathematica (Russell and Whitehead)

"From this proposition it will follow, when arithmetical addition has been defined, that 1+1=2." -- a comment to proposition 54.43.
"The above proposition is occasionally useful." -- a comment to proposition 110.643, which proves that 1+1=2.

Quite the understatement, don't you think?

24 January 2008

A list of logical fallacies

A list of logical fallacies. (Via Terence Tao's homepage.)

Most of these are extra-mathematical fallacies. For example, in mathematics we don't often see a relation "X causes Y", although it is quite common in ordinary discourse. Even in probability and statistics, we don't see causation nearly as often as correlation. However, they may be familiar to mathematicians as "false methods of proof". For example, what this list calls appeal to force is the well-known "proof by intimidation" -- the Important Person at the front of the room says it's true, so it must be!

A lot of these fallacies are essentially statistical in nature, as any reasoning about the real world must be. In mathematics we either know things or we do not; we don't attach probabilities to our knowledge. (However, we can attach probabilities to other people's knowledge -- or to our own extra-mathematical knowledge -- and then reason about those probabilities. This is the basis of Bayesian reasoning.) Many others are fallacies that exploit the ambiguity of natural language. Perhaps the power of mathematics is that it allows us to know things surely, which can never happen in the Real World. But on the flip side, mathematicians know fewer things than other people, because we insist on such certainty in our knowledge.

19 October 2007

The axiom of choice and the wisdom of crowds

(The two things in the title aren't related, but this post is about both of them.)

I just came across this essay on the Axiom of Choice by Eric Schechter. I'm not a logician, so I don't have any deep commentary. It contains two amusing quotes, though, one by Bertrand Russell:
"To choose one sock from each of infinitely many pairs of socks requires the Axiom of Choice, but for shoes the Axiom is not needed."

(the idea here is that you can distinguish between left shoes and right shoes, but not left and right socks), and

The Axiom of Choice is obviously true; the Well Ordering Principle is obviously false; and who can tell about Zorn's Lemma?

where all three of those statements are equivalent.

A classmate of mine, though, just expressed the opinion that logicians "should" be showing that certain problems are "too hard" for current mathematics, by, say, showing that they're equivalent to problems we already know we have no hope of solving with the current state of affairs.

I wonder to what extent it would be possible to use the "wisdom of crowds" to quantify how difficult a given problem is. If one could poll people in a given area and have them say "I think Problem X is harder than Problem Y", you could probably use that information to rank open problems within a given subfield; people's intuitions wouldn't be perfect but if you had enough of them you'd come up with some interesting results. The hard part would be collating the results from various parts of mathematics; if I know Algebra Problem X is harder than Algebra Problem Y, and Analysis Problem Z is harder than Analysis Problem W, then how do you compare X and Z? You'd need links between the various areas to know how to put all that information together, and to get all areas on the same scale.

Perhaps what we need is more people offering financial bounties for problems, because hard currency is a scale that everyone can agree on the value of. (Too bad Erdős is dead, and the Clay Foundation only has million-dollar prizes. If some entity were giving out a prize of 1000ドル for problem A and 2000ドル for problem B, then that would let me know that the community as a whole considers problem B to be harder than problem A. I don't know about "twice as hard", though; what does that mean?)

The usual "prediction market" methods wouldn't immediately seem to work; people could bet on which problems they expect will be solved first, but just because a problem is solved later doesn't mean it's harder.
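One standard statistical way to aggregate pairwise "harder than" votes like the poll above is a Bradley-Terry-style model: give each problem a latent hardness score and fit the scores so the observed votes are likely. Here's a minimal sketch (the function name, problem names, and vote data are all made up for illustration):

```python
import math
import random

def fit_hardness(comparisons, iters=5000, lr=0.05, seed=0):
    """Fit log-hardness scores s so that a vote (x, y), meaning
    'x is harder than y', has probability sigmoid(s[x] - s[y]).
    Simple stochastic gradient ascent on the log-likelihood."""
    rng = random.Random(seed)
    s = {p: 0.0 for pair in comparisons for p in pair}
    for _ in range(iters):
        harder, easier = rng.choice(comparisons)
        p = 1.0 / (1.0 + math.exp(s[easier] - s[harder]))
        s[harder] += lr * (1.0 - p)   # push the pair apart by how
        s[easier] -= lr * (1.0 - p)   # surprising the vote was
    return s

# Hypothetical poll: X usually judged harder than Y, Y harder than Z.
votes = ([("X", "Y")] * 9 + [("Y", "X")] * 1 +
         [("Y", "Z")] * 8 + [("Z", "Y")] * 2)
scores = fit_hardness(votes)
print(sorted(scores, key=scores.get, reverse=True))  # hardest first: X, Y, Z
```

This sidesteps the "same scale" problem only within one pool of voters; linking subfields would still need votes that cross between them.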

Some other interesting things on Schechter's web page include Common Errors in College Math (which I probably should point my students to, come to think of it) and an explicit statement of the "cubic formula", i.e. a solution by radicals to the equation ax^3 + bx^2 + cx + d = 0. (This formula is a couple lines long -- significantly longer than the quadratic formula, of course -- which is why you don't have it memorized. The "quartic formula" is, if I remember correctly, a couple pages long, although it contains a large number of repeated subexpressions; in both cases there are more compact ways to present the solution than the fully expanded formula.)
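For reference, here is the standard reduction (this is generic textbook material, not necessarily the form on Schechter's page): dividing by a and substituting x = t - b/(3a) turns the general cubic into a depressed cubic t^3 + pt + q = 0, and Cardano's formula then gives one root:

```latex
p = \frac{3ac - b^2}{3a^2}, \qquad
q = \frac{2b^3 - 9abc + 27a^2 d}{27a^3},
```

```latex
t = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}
  + \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}
```

(with appropriate choices of cube roots; the other two roots come from the complex cube roots of unity).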

10 September 2007

How many well-formed formulas are there?

The following is exercise 1.1.2 of "A Mathematical Introduction to Logic", by Herbert Enderton:
Show that there are no [well-formed formulas] of length 2, 3, or 6, but that any other positive length is possible.


Enderton defines a wff as follows: every sentence symbol A_n is a wff; if α and β are wffs, then so are (¬ α), (α ∧ β), (α ∨ β), (α → β), and (α ↔ β); and those are all the wffs. (The parentheses count as symbols. The sentence symbol A_n counts as a single symbol.)

The problem is pretty straightforward. A, (A ∧ B), and (A ∧ (B ∧ C)) are wffs of length 1, 5, and 9 respectively. If α is a wff of length n, then (¬ α) is a wff of length n+3; thus we can generate wffs with lengths in the following sequences: {1, 4, 7, ...}, {5, 8, 11, ...}, {9, 12, 15, ...}. This is all the positive integers except 2, 3, and 6.

Now, each of the ways of forming new wffs from old takes one or two wffs, adds them together, and adds three new symbols. So we can't get a wff of length 2 or 3 because it would have to come from a wff of length -1 or 0. Furthermore, there are no wffs of length 6, because they'd either have to be of the form (¬ α) where α has length 3, or (α * β) where * is ∧, ∨, →, or ↔ and α and β have total length 3, which would mean that one of α and β has length 2.
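As a sanity check, the set of achievable lengths can be computed mechanically: start with length 1 and close under the two formation rules, each of which adds three symbols. A quick sketch (variable names are mine):

```python
# Which wff lengths (up to a bound) are achievable? Start from length 1
# (a bare sentence symbol) and close under the formation rules:
#   negation:  n -> n + 3        (two parentheses plus the negation sign)
#   binary:    a, b -> a + b + 3 (two parentheses plus a connective)
LIMIT = 50
reachable = {1}
changed = True
while changed:
    changed = False
    for n in sorted(reachable):
        for m in [n + 3] + [n + b + 3 for b in sorted(reachable)]:
            if m <= LIMIT and m not in reachable:
                reachable.add(m)
                changed = True

unreachable = [n for n in range(1, LIMIT + 1) if n not in reachable]
print(unreachable)  # -> [2, 3, 6]
```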

The natural follow-up question, to me at least, is "how many wffs are there?" This is silly to ask because if we have infinitely many sentence symbols, there are of course infinitely many wffs of any length for which there is at least one wff. So let's restrict ourselves to one sentence symbol, A. Let f_n be the number of wffs of length n, and let f(z) = Σ_n f_n z^n be the generating function of this sequence. Then we have

f(z) = z + z^3 f(z) + 4z^3 f(z)^2

where the first term on the right-hand side corresponds to the formula which consists of just the sentence symbol, the second term to all those formulas of type (¬ α), and the third term to all those formulas of type (α * β). (The coefficient 4 comes up because there are four things that "*" could be.)

Solving this for f, we get

f(z) = (1 - z^3 - √(1 - 2z^3 + z^6 - 16z^4)) / (8z^3)

and so we get

f(z) = z + z^4 + 4z^5 + z^7 + 12z^8 + 32z^9 + z^10 + 24z^11 + 160z^12 + 321z^13 + 40z^14 + 480z^15 + ...

which is a bit surprising; the coefficients don't grow "smoothly". This is because, for example, wffs of length 9 are of the form (A * (A * A)) or ((A * A) * A) where * is as above (and yes, these two forms are different), so there are 32 of these; the only wff of length 10 is (¬ (¬ (¬ A))), since any other such formula would come from two formulas with total length 7, and one of those would have to be of length 2, 3, or 6 which is impossible. But I'm more interested in the asymptotics.
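The coefficients above can be checked directly from the functional equation, which translates into the recurrence f_n = f_{n-3} + 4 Σ_{i+j=n-3} f_i f_j with f_1 = 1. A quick sketch:

```python
# Count wffs of length n over a single sentence symbol A, using the
# recurrence implied by f(z) = z + z^3 f(z) + 4 z^3 f(z)^2:
#   f_1 = 1,   f_n = f_{n-3} + 4 * sum_{i+j = n-3} f_i f_j
N = 15
f = [0] * (N + 1)
f[1] = 1
for n in range(4, N + 1):
    f[n] = f[n - 3]                      # formulas of type (negation)
    f[n] += 4 * sum(f[i] * f[n - 3 - i]  # formulas of type (binary),
                    for i in range(1, n - 3))  # 4 possible connectives
print(f[1:])  # -> [1, 0, 0, 1, 4, 0, 1, 12, 32, 1, 24, 160, 321, 40, 480]
```

The output matches the series above, including the bumpy coefficients.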

It's a general principle that the coefficients of a generating function grow roughly like f_n ~ (1/ρ)^n, where ρ is the smallest absolute value of a singularity of f(z) = Σ_n f_n z^n; this is true up to a "subexponential factor" which has to do with the nature of that singularity (polar, algebraic, etc.). The singularities of f(z) in this case arise from the square root vanishing; thus they are located at the roots of the polynomial 1 - 2z^3 + z^6 - 16z^4. The smallest root is about ρ = 0.4728339090, so the number of wffs of length n grows like (1/ρ)^n, or about 2.11^n, times some subexponential factor. And indeed, ratios of consecutive coefficients far enough out in the series approach about 2.11.
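Both claims are easy to check numerically: find the smallest positive root of the polynomial by bisection, then compare 1/ρ with the growth of the coefficients far out in the series. A sketch:

```python
# Dominant singularity: smallest positive root of
# P(z) = 1 - 2z^3 - 16z^4 + z^6, found by bisection.
def P(z):
    return 1 - 2*z**3 - 16*z**4 + z**6

lo, hi = 0.0, 1.0          # P(0) = 1 > 0 and P(1) = -16 < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if P(mid) > 0:
        lo = mid
    else:
        hi = mid
rho = (lo + hi) / 2
print(rho, 1 / rho)        # -> about 0.4728 and about 2.115

# Compare with the growth of the coefficients themselves.
N = 400
f = [0] * (N + 1)
f[1] = 1
for n in range(4, N + 1):
    f[n] = f[n - 3] + 4 * sum(f[i] * f[n - 3 - i] for i in range(1, n - 3))
print(f[N] / f[N - 1])     # -> close to 1/rho (about 2.11)
```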

The subexponential factor is probably something like n^{-3/2} -- that is, the number of wffs of length n is probably about 2.11^n n^{-3/2} -- because that particular subexponential factor often arises when one analyzes the asymptotics of tree-like structures. Don't ask me to explain that.

The interpretation this brings to mind is that if you write a wff by just writing symbols at random, at any given step there are in some sense "on average" 2.11 symbols that you could write down which would keep the wff grammatical. There are eight possible symbols, so this suggests that the best special-purpose compression algorithm for this formal language would shorten formulas to log(2.11)/log(8), or about 36%, of their original length.

25 July 2007

Propp's self-referential aptitude test

Check out the Self-referential aptitude test by Jim Propp. The test consists of twenty questions, each with five possible answers; the answer to each question depends in various ways on the other questions! There are various routes to deriving the answers. I'd say try solving it yourself -- but be warned, it takes quite some time!

I tried to come up with the answers by first assuming that the answers were a random string of letters from A to E and then changing the answers to various questions in order to agree with each other; unfortunately this didn't work, because the way in which the answers depend on each other is too complicated. I expected this process to converge to a solution fairly quickly, but it didn't.

There is a probabilistic solution, though, via genetic algorithms; Mark Van Dine writes about it. The idea is that we can tell how many of the questions are correct, and then "mutate" the answers a bit hoping that we gradually find the right answer. This took thousands of generations, though, which is why I couldn't do it by hand.

I give a (deterministic) solution below. First, I want to give a couple thoughts on it. Presenting a solution like this is tricky; one wants to minimize the number of "intermediate results" where we know, say, that a question can have one of two or three answers but we don't know which it is. These sorts of results are hard to hold in one's head. (My original path to the solution had more such intermediate results; I don't claim that this way of writing the solution is optimal.) Another thing one wants to reduce is contradictions that take a long time to derive; there are a couple points in the solution where I know that a question has two possible answers, so I assume one of them and seek a contradiction. If it takes a long time to find the contradiction, then one forgets which assumptions need to be "undone" when the contradiction is finally reached. These two principles that I've given here are probably good principles for the writing of mathematical proofs, as well. It's a good idea to minimize the number of things your reader has to remember at any given time, because remembering things is hard.

It looks to me like there's no trick to finding a solution, from looking at the other solutions given at Propp's web site; the other solutions given there seem about as complicated as mine.

Anyway, here we go.

First, the answer to #12 has to be (A): the question is "the number of questions whose answer is a consonant is: even, odd, a perfect square, prime, divisible by 5". So I assume that there is only one "right" answer to this question; therefore we need a number between 0 and 20 which is exactly one of those things. Clearly any number is even or odd, but not both, so we need a number which isn't a square, prime, or divisible by 5. Thus the number of questions whose answer is a consonant is 6, 8, 12, 14, or 18, and the answer is (A), for "even".

I also let knowledge from the outside world enter; the answer to #20 is (E). #12 and #15 have the same answer, so #15 is (A). This means that #13 must be (D). And the answer to #5 must be (E), by the principle that questions have only one correct answer and (E) is always a correct answer to that question.

Next, #6 and #17 only refer to each other, so we can hope to work out the answers to them without reference to the other questions. First, neither one can have the answer (E), since "all of the above" doesn't make sense in this context. Thus neither can have answer (C), because if either 6 or 17 has answer (C) then the other must have answer (E). Reasoning similarly, neither can have answer (A). So both #6 and #17 have either (B) or (D) as the answer; they must be one of each but we can't determine the order.

Similarly, #10 and #16 only refer to each other; they're (A) and (D), respectively.

So far, then, we have the answer string

____E *___D _AD_A D*__E

where the two *s are one B and one D. Furthermore, we know that 6, 8, 12, 14 or 18 answers are consonants. So we move on to problem 8, which asks how many answers are vowels; this must be 2, 6, 8, 12, or 14. The only ones of these which are among the choices are (C) 6 or (E) 8. Looking at #7, then, we must have DC, DE, or EE for the answers to #7 and #8.

Next, we consider #3, #4, and #8. The numerical answers to the first two must sum to the numerical answer to the third. Furthermore, we've already fixed two answers of (E), so the answer to #3 is (C), (D), or (E). From #4 we know there are at least four A's. The answer to #8 is C or E, as we've already shown. Since there are at least two E's and at most eight vowels, there are at most six A's, so #4 is A, B, or C. The possible combinations of answers for these three questions are: CAC, CCE, DBE, EAE. If we take EAE here, then there are four E answers (#3, #5, #8, #20) already and no other E answers are possible; in particular #7 is D. Thus there are only six possibilities for the answers to #3, #4, #7, #8: CADC, CCDE, CCEE, DBDE, DBEE, EADE.

#9 can't be C (because #12 isn't C). And it can't be (E), because #13 is (D) and so the "next question with the same answer as this one" can't be #14.

#1 asks: "The first question whose answer is B, is question: (A) 1 (B) 2 (C) 3 (D) 4 (E) 5". It can't be (A) (because then the question would contradict itself) or (E) (because the answer to #5 isn't (B).) It can't be (C), because we just showed the answer to #3 isn't (B). So it must be either (B) or (D). Now, it seems that we have to make an arbitrary choice. We say #1 is (B). Thus #2 is also (B). So #7 and #8 must be the same; we see that they both have to be (E). Thus #5, #7, #8, and #20 are all (E); this is four (E)s, so #3 must be (E). But now there are five (E)'s, which isn't possible by examining the choices to #3. So our assumption that #1 had answer (B) is wrong; #1 must be (D).

Thus #2 isn't (B), meaning that #7 and #8 aren't the same. And #4 is (B). Thus the answers to #3, #4, #7, and #8 must be (D), (B), (D), (E) respectively, by looking at our previous list of possible answers to those four questions. The answers so far are

D_DBE *DE_A _AD_A D*__E

and we can at least see the solution taking shape.

Next, we look at #2 again. The answer to #10 is (A). Now, #11 can't be (A), because one of the questions preceding it has answer (B). Thus the answer to #2 isn't (E). Next, #2 can't be (C), because then #8 and #9 would be the same; but #8 is (E) and #9 can't be (E). So #2 is (A) or (D). But #2 can't be (D), because #2 refers to "the only two consecutive questions with identical answers" and #1 is already (D). So #2 is (A). This means #6 and #7 have the same answer, (D). Thus #17 is (B). The answer string so far is

DADBE DDE_A _AD_A DB__E.

There are five answers (A); we know this from #4. But there are fewer than five answers (C), since there are only five blanks left and #9 is not (C). So #18 isn't (B). Similarly, there are more than five answers (D) from the choices to #14, and exactly three answers (E) from #3. So #18 isn't (C) or (D) either; thus #18 is (A) or (E). But we've already accounted for the three answers (E) -- they're #5, #8, and #20. So #18 is (A). We now know there are the same number of answers (A) and (B), namely five of each. We've identified the five (A) answers, and two of the answers (B). The answer string so far is

DADBE DDE_A _AD_A DBA_E

and there are only two answers (B) and four blanks left. Three of the blanks have the answer (B).

Now, #11 must be (B) or (C), since there are 1 or 2 (B) among the first ten answers. Say it's (C); this means that there are two (B)s among the first ten answers, so #9 must be (B). But this means that #9 and #11 have the same answer, and we just said they didn't. So #11 is (B). This means that there is one (B) among the first ten answers, so #9 is not (B). (A) and (E) have already been ruled out for #9 (as the answers to the adjacent questions), and (C) was previously ruled out. So the answer to #9 is (D). The answer string is now

DADBE DDEDA BAD_A DBA_E

with three (B)s and two blanks; there are five answers (B), so the blanks must be (B), and we have

DADBE DDEDA BADBA DBABE

which is the answer. We know this is right because it spells the sentence "Dad bedded a bad bad babe".
