Friday, November 29, 2013
The Third Edition of the Machinery of Freedom
I have just finished drafts of the chapters for the new edition and webbed them for comment.
Wednesday, November 27, 2013
The Rush to Judgement
A friend or acquaintance comes to you with a story of how badly he has been mistreated by someone: his employer, his girlfriend, a store, an airline. He expects you to agree with his complaint and take his side, despite the fact that you have not heard the other side of the argument and so, unless you happen to have some other source of information, have no way of knowing whether his account is correct. Your honest response would be to point that out, at which point he will get mad at you too.
Seen from a sufficiently cynical point of view, the pattern makes sense. Agreeing with him makes him your ally, allies are useful, and the target of his attack is far away and, with any luck, will never know you have sided against him, her or it. Agreeing is stupid if it does not occur to you that you have heard only one side of the story, or if you have not yet learned how dangerous it is to reach conclusions on that basis; it is dishonest if you have.
I was reminded of this particular recurrent irritation by recent news stories about a waitress who claimed to have been stiffed by a couple she served, given a note criticizing her (lesbian) lifestyle in lieu of a tip. Her original account did not identify the couple, but it provided enough information for them to identify themselves, at which point they offered what looks like convincing evidence that she was lying, including the Visa charge for their dinner, tip included. The most recent story I have seen includes comments by friends and former colleagues of the waitress reporting a history of minor lies designed to provoke sympathy on the basis of invented stories.
What struck me was not the behavior of the waitress but the behavior of the large number of people who took her side, including reporters who took her initial story as gospel, reporting it as something that happened rather than something someone claimed had happened, with no evidence beyond a digital image of what purported to be the check with note and without tip. Judging at least by reports, thousands of people on Facebook condemned the supposed behavior of the couple, again with no evidence beyond the news stories, and many sent donations to the purported victim.
Their behavior was stupid and unjust. The reporters' behavior was also professionally incompetent.
One question about the story that nobody else seems to have commented on occurred to me. All of the reports describe the waitress as an ex-Marine. She is also described as 22 years old, and the most recent story mentions "a day care center where she once worked." The minimum age of enlistment for the Marines is 17. The usual terms of enlistment are for three to five years of active service, and Marine Corps training requires an additional three months. It is not impossible that someone could have enlisted at 17 on the shortest terms, left the Corps at 20, and by 22 have worked first at a day care center and then at a restaurant, but the timing is tight enough to be at least mildly suspicious, especially when combined with evidence that the person in question is a habitual liar.
It would be nice to know whether any of the reporters checked with the Marine Corps to make sure that "ex-Marine" was not another fabrication.
Sunday, November 24, 2013
Obama, Silicon Valley, and Learning by Testing
There were PhDs working as low paid data managers during Obama’s ’08 campaign and top product managers developing interactive during ’12 campaign. There are many talented developers/product managers/data modelers who would take a pay cut to work on something they believe in. Especially for those with enough life experience to know how important the Affordable Care Act is, even if it’s not an ideal solution.
The quote is from a comment on a very interesting essay about the failure of the Healthcare.gov website. Part of the essay's point is the danger, in IT projects and elsewhere, of a particular approach to doing large projects:
The preferred method for implementing large technology projects in Washington is to write the plans up front, break them into increasingly detailed specifications, then build what the specifications call for. It’s often called the waterfall method, because on a timeline the project cascades from planning, at the top left of the chart, down to implementation, on the bottom right.
Like all organizational models, waterfall is mainly a theory of collaboration. By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work.
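To make the contrast concrete, here is a toy illustration of my own devising, not the essay's: two procedures try to hit a target number, but only the second uses anything it learns along the way. The names and the example are invented purely for illustration.

    # Toy contrast: "waterfall" fixes its entire plan before any feedback;
    # "iterative" revises the plan after every test.

    def waterfall(target, lo=0, hi=100):
        plan = list(range(lo, hi + 1, 10))   # every step specified in advance
        product = plan[-1]                   # build exactly what the plan says
        print("tested only at the end:", product == target)
        return product

    def iterative(target, lo=0, hi=100):
        while lo < hi:
            guess = (lo + hi) // 2           # build a little
            if guess < target:               # test against reality
                lo = guess + 1               # revise the plan accordingly
            else:
                hi = guess
        return lo                            # converges on what was needed

    print(waterfall(37))   # 100: the plan never adapted, the product misses
    print(iterative(37))   # 37: feedback steered every step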
It occurred to me that the comment, combined with that point, raised an issue that had probably not occurred to the commenter. The Silicon Valley people who worked to reelect Obama were acting on their view of Obama and his policies. The arguments of the essay imply that they ought to be willing to revise that view and alter their political activities accordingly as further evidence comes in.
If, as many sources seem to suggest, Obama did not realize that healthcare.gov was not going to work, and if the reason he did not realize it was that he had created a culture around him in which people did not feel free to pass on bad news to their boss, then he is not, and was not, competent to be President. If, as Obama himself implied in contrasting the failure of healthcare.gov to the success of the IT efforts of his reelection campaign, government is very bad at doing this sort of thing, that is at least some evidence that the ACA was a mistake, likely to make health care worse rather than better.
I wonder how willing his supporters in Silicon Valley will be to apply the "test and revise accordingly" approach to their own political views.
Saturday, November 23, 2013
The Second Amendment in the 21st Century
A recent Facebook post pointed me at an entertaining video in favor of gun control. The point of the video, surely correct, is that mass shootings were a lot less practical with 18th century firearms than with modern firearms. Its conclusion: "Guns have changed. Shouldn't our gun laws?"
There are two problems with the argument. The first is that gun laws have changed quite a lot over the past two hundred plus years. The second is that, while mass shootings get a lot of publicity, they represent only a tiny fraction of all killings.
There is, I think, a better argument to be made for the effect of technological change on the argument for the right to bear arms. As I interpret the Second Amendment, it was intended as a solution to a problem that worried eighteenth century political thinkers, the problem of the professional army. As had been demonstrated in the previous century, a professional army could beat an army of amateurs. As was also demonstrated, a professional army could seize power. Oliver Cromwell and the New Model Army won the first English Civil War for parliament and then won the second English Civil War for itself, with the result that Cromwell spent the rest of his life as the military dictator of England.
The Second Amendment, as I interpret it, was intended to solve that problem by combining a small professional army with an enormous amateur militia. In time of war, the size of the militia would make up for its limited competence. In time of peace, if the military tried to seize power or if the government supported by the military became too oppressive, the professionals would be outnumbered a thousand to one by the amateurs. It was an ingenious kludge.
It depended, however, on a world where the weapons possessed by ordinary people for their own purposes, mostly hunting, were as effective as the weapons possessed by the military. We are no longer in such a world. The gap between military weapons and civilian weapons is very much larger now than then. One result is that the disorganized militia, the population in general, no longer plays any role in military defense. Another is that, if there were ever a military coup in the U.S., ordinary civilians would be much less able to oppose it with force than they would have been two hundred years ago.
Civil conflict in a modern developed society is much more likely to be carried on with information than with guns: a government that wants to oppress its population does it by controlling what people say and know. It follows, in my view, that the modern equivalent of the Second Amendment, the legal rule needed to make it possible for the population to resist the government, has nothing to do with firearms. The 21st century version would be a rule forbidding government regulation of encryption. A government that has no way of knowing who is saying what to whom lacks the most powerful weapons for winning an information war.
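For concreteness: with off-the-shelf tools, two citizens can already make their correspondence unreadable to anyone who lacks the key. A minimal sketch using the third-party Python cryptography library; any comparable tool would serve as well, and the message is of course invented.

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # shared secret known only to the two parties
    cipher = Fernet(key)

    token = cipher.encrypt(b"meet at the usual place")
    print(token)                    # what an eavesdropper sees: opaque bytes
    print(cipher.decrypt(token))    # what the key holder sees: the message

Without the key, the intercepted token is just noise, which is the whole point.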
There remains a strong argument for the right to bear arms, different from but related to its original function. People who are unable to protect themselves are dependent for protection on the police. The more dependent people are on the police, the more willing they are to tolerate, even support, increased police power. Hence disarming the population makes possible increased levels of government power and the misuse thereof, although for a somewhat different reason than in the 18th century.
Which is an argument against restrictions on the private ownership of firearms.
Wednesday, November 20, 2013
The Killer App for Google Glass
I can remember large amounts of poetry, but people's names, faces and the information associated with them are a different matter. For the most part, I successfully conceal my handicap by a policy of never using names if I can help it, but once in a while the tactic fails. I still remember, as perhaps my most embarrassing moment, recommending Larry White's work on free banking to someone who looked vaguely familiar—and turned out to be Larry White.
Help, however, is on the way. I first encountered the solution to my problem in Double Star, a very good novel by Robert Heinlein. It will be made possible, in a higher-tech version, by Google Glass. The solution is the Farley file, named after James Farley, FDR's campaign manager.
A politician such as Roosevelt meets lots of people over the course of his career. For each of them the meeting is an event to be remembered and retold. It is much less memorable to the politician, who cannot possibly remember the details of ten thousand meetings. He can, however, create the illusion of doing so by maintaining a card file with information on everyone he has ever met: The name of the man's wife, how many children he has, his dog, the joke he told, all the things the politician would have remembered if the meeting had been equally important to him. It is the job of one of the politician's assistants to make sure that, any time anyone comes to see him, he gets thirty seconds to look over the card.
My version will use more advanced technology, courtesy of Google Glass or one of its future competitors. When I subvocalize the key word "Farley," the software identifies the person I am looking at, shows me his name (that alone would be worth the price) and, next to it, whatever facts about him I have in my personal database. A second trigger, if invoked, runs a quick search of the web for additional information.
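Sketched in code, the flow is simple. Everything below is hypothetical: the function names, the device API, and the database format are my stand-ins for whatever the eventual glassware would actually expose.

    # A minimal sketch of the Farley-file flow, all names my own invention.

    # Personal database: face ID -> remembered facts, as on Farley's cards.
    FARLEY_FILE = {
        "face_0042": {"name": "Larry White", "notes": "economist; free banking"},
    }

    def recognize_face(image):
        """Stand-in for the device's face recognizer (hypothetical)."""
        return "face_0042"

    def display(text):
        """Stand-in for drawing text on the heads-up display."""
        print(text)

    def on_farley_trigger(image):
        """What happens when the wearer subvocalizes the key word."""
        face_id = recognize_face(image)
        entry = FARLEY_FILE.get(face_id)
        if entry is None:
            display("No record: you really haven't met this person.")
        else:
            display(f"{entry['name']} ({entry['notes']})")

    on_farley_trigger(image=None)   # demo call with a dummy image

The second trigger would simply hand the name to a web search.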
I am told that Google itself has a rule against building face recognition into glassware, so my Farley file software may not appear in the immediate future. But it is the killer app, and someone will build it.
Monday, November 18, 2013
What Should Replace Obamacare
A recent post on the Forbes site offers a convincing explanation of what was wrong with the system of health insurance as it existed before Obama, hence what both it and Obamacare ought to be replaced by. Its central point is that what we call medical insurance is in part actual insurance, protection against low-probability, high-cost risks, and in part prepayment of ordinary medical expenditures. The reason insurance policies take that form, and the reason most of them are provided by the employer and so are not portable, is that employer-provided health insurance is bought with pre-tax dollars, ordinary medical care with after-tax dollars.
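The size of the resulting wedge is easy to work out. The 30% marginal rate below is my assumption, picked only for round numbers:

    t = 0.30  # assumed combined marginal tax rate

    # $1 of employer-provided insurance is untaxed compensation, so it costs
    # the employee only $(1 - t) in forgone take-home pay.
    print(f"$1 of insurance costs ${1 - t:.2f} in net wages")

    # $1 of out-of-pocket care is paid from after-tax income, so it takes
    # $1/(1 - t) of pre-tax earnings.
    print(f"$1 of direct care costs ${1 / (1 - t):.2f} in gross earnings")

At that rate, a dollar of care bought through the employer's policy costs seventy cents of net wages; the same dollar bought directly costs about a dollar forty-three of gross earnings. No surprise that the insurance policy swallowed routine care.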
One result is that individual consumers have little incentive to be careful shoppers for health care services, since for the most part they are not the ones paying for them. A second is that insurance companies, in order to substitute for careful shopping by customers, require a lot of paperwork from providers, driving up their costs. Costs are also driven up by state regulations that require insurance companies to cover things that customers might prefer not to pay to have covered, the same problem that Obamacare produces on a national scale. In my state, California, for example, health insurance must cover acupuncture; in Connecticut it must cover hair prostheses.
One implication is that tax law should be changed to put employer-provided insurance, privately purchased insurance, and payments for uninsured medical expenditures on the same footing. To get the economics right, all should be treated as ordinary consumption expenditures. From the standpoint of the relevant politics, however, what the Republicans ought to propose is to make all three tax deductible, at least up to the level of what most people now pay. It's a lot easier to sell a tax cut than a tax increase.
A second implication is that insurance companies should be allowed to sell policies interstate. That would eventually eliminate inefficient regulatory requirements, since state insurance regulators would have to compete with each other to provide regulations that generated the policies consumers wanted to buy. In this case as in many others, competition is a good thing.
A well written and informative article by someone I am pretty sure I interacted with online many years ago. It's a small world.
Sunday, November 17, 2013
Multitasking or Parallel vs Serial Thinking
It is useful to know what one is good at, but also what one is bad at.
The example I am thinking of is multitasking, doing and thinking about several things at once. The first clear evidence of my inability to do it well appeared decades ago in the context of my medieval hobby, which included combat with medieval weapons done as a sport. I was much worse at melee combat—one group of fighters against another—than at single combat. In single combat I only had to focus on the opponent I was fighting. In melee, I had to be, or at least should have been, simultaneously keeping track of everyone else near me. And I wasn't.
The same problem showed up much later in the context of World of Warcraft. Group combat there, a raid with a group of five to forty people, requires the player to keep track of what he is doing, what other people in the group are saying, in the form of typed messages on the screen, and other things going on around him. I focused on what I was doing and frequently missed important things other people were saying. Interestingly enough, that was less of a problem if the group was using software that permitted voice communication, so that one kind of information was coming in mostly through my ears, another through my eyes.
It is not just that paying attention to multiple things is hard. My daughter, playing the same game, can not only pay attention to everything in the game, she can also conduct one or two independent conversations, in typed text, while doing so. Pretty clearly, it is a real difference in abilities, whether innate or learned I do not know.
Thinking about it, it occurred to me that I had observed the same pattern in an entirely different context, the difference between how I think and how Richard Epstein, a friend and past colleague, thinks. I usually describe the difference as my thinking in series, Richard in parallel. It shows up when he is sketching the argument for some conclusion.
A implies B. B implies C. C ...
At which point I demonstrate that B doesn't really imply C, that there is a hole in the argument. That is no problem for Richard, who promptly points out that A also implies B', a somewhat different proposition than B, which implies C', from which he can eventually work his way back to D, or perhaps E or F, and so to the conclusion that the original line of argument was intended to establish. Pretty clearly, he is running a network of multiple lines of argument in his head and only has to find some set of links in the network that gets him where he is going. I am focusing on running a single line of argument. Hence parallel vs series.
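My own way of picturing the difference: treat the argument as a directed graph of implications. The serial thinker commits to one chain and is stuck when a link breaks; the parallel thinker searches the whole network for any intact path. A toy version, with the graph and the broken link invented for the example:

    from collections import deque

    # Hypothetical argument network: A implies B and B', and so on.
    # Assumed acyclic, so the search below needs no visited set.
    IMPLIES = {
        "A": ["B", "B'"],
        "B": ["C"],          # the link that will be challenged
        "B'": ["C'"],
        "C": ["Conclusion"],
        "C'": ["Conclusion"],
    }

    def serial(start, goal, broken):
        """Follow a single chain, taking the first implication at each step."""
        node = start
        while node != goal:
            steps = [n for n in IMPLIES.get(node, []) if (node, n) != broken]
            if not steps:
                return None               # the one chain is blocked: stuck
            node = steps[0]
        return goal

    def parallel(start, goal, broken):
        """Breadth-first search: any intact path through the network will do."""
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for n in IMPLIES.get(path[-1], []):
                if (path[-1], n) != broken:
                    queue.append(path + [n])
        return None

    broken_link = ("B", "C")   # my objection: B does not really imply C
    print(serial("A", "Conclusion", broken_link))    # None: the chain fails
    print(parallel("A", "Conclusion", broken_link))  # ['A', "B'", "C'", 'Conclusion']

The serial search dies at the broken link; the breadth-first search simply routes around it, which is a fair cartoon of what Richard does in conversation.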