Showing posts with label caltech.

Thursday, April 04, 2024

Casey Handmer: Terraform Industries and a Carbon-Neutral Future — Manifold #57

[Embedded video]

Casey Handmer (PhD Caltech, general relativity) is the founder of Terraform Industries. He is one of the most capable and ambitious geo-engineers on planet Earth!

Terraform Industries is scaling technology to produce cheap natural gas with sunlight and air. Using solar energy, they extract carbon from the air and synthesize natural gas, all at the same site.

March 2024: "Terraform completes the end to end demo, successfully producing fossil carbon free pipeline grade natural gas from sunlight and air. We also achieved green hydrogen at <2ドル.50/kg-H2 and DAC CO2 at <250ドル/T-CO2, two incredible milestones."

Links:

Casey Handmer’s website: https://www.caseyhandmer.com/

Terraform Industries: https://terraformindustries.com/

Nerds on Patrol [Episode 3] - Terraform Industries:

Steve and Casey discuss:

0:00 Introduction
00:31 Casey's early life and background, from Australia to Caltech
07:55 The academic path and transition to tech entrepreneurship
10:40 Terraform Industries
15:21 Solar costs, efficiency, and global impact
24:25 A world powered by Terraform methane
31:27 The entrepreneurial journey: challenges and insights
35:01 Investor dynamics and strategic decisions for Terraform
41:28 The hard reality of manufacturing and innovation
44:11 Navigating intellectual property and strategic partnerships
45:49 The moral and technical challenges of carbon neutrality
55:48 Looking ahead: Terraform's next milestones and the solar revolution

Transcript and Audio-only version:

Thursday, October 21, 2021

PRC Hypersonic Missiles, FOBS, and Qian Xuesen

[Image]
There are deep connections between the images above and below. Qian Xuesen proposed the boost-glide trajectory while still at Caltech.

[Image]

Background on recent PRC test of FOBS/glider hypersonic missile/vehicle.
More from Air Force Secretary Frank Kendall.
Detailed report on PRC hypersonic systems development.
Reuters: Rocket failure mars U.S. hypersonic weapon test (10/21/21)

The situation today is radically different from when Qian first returned to China. In a decade or two China may have ~10x as many highly able scientists and engineers as the US, comparable to the entire world (ex-China) combined [1]. Already the depth of human capital in PRC is apparent to anyone closely watching their rate of progress (first derivative) in space (Mars/lunar lander, space station, LEO), advanced weapons systems (stealth jets, radar, missiles, jet engines), AI/ML, alternative energy, materials science, nuclear energy, fundamental and applied physics, consumer electronics, drones, advanced manufacturing, robotics, etc. etc. The development of a broad infrastructure base for advanced manufacturing and R&D also contributes to this progress, of course.

[1] It is trivial to obtain this ~10x estimate: PRC population is ~4x US population, a larger fraction of PRC students pursue STEM degrees, and a larger proportion of PRC students reach elite levels of math proficiency, e.g., PISA Level 6.
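A back-of-envelope version of the arithmetic (the last two multipliers are illustrative placeholders of mine, not figures from the post):

$$ \underbrace{4}_{\text{population ratio}} \;\times\; \underbrace{\sim 1.5}_{\text{STEM fraction}} \;\times\; \underbrace{\sim 1.7}_{\text{elite proficiency rate}} \;\approx\; 10. $$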



"It was the stupidest thing this country ever did," former Navy Secretary Dan Kimball later said, according to Aviation Week. "He was no more a Communist than I was, and we forced him to go." ...
Qian Xuesen, a former Caltech rocket scientist who helped establish the Jet Propulsion Laboratory before being deported in 1955 on suspicion of being a Communist and who became known as the father of China's space and missile programs, has died. He was 98. ...
Qian, a Chinese-born aeronautical engineer educated at Caltech and the Massachusetts Institute of Technology, was a protege of Caltech's eminent professor Theodore von Karman, who recognized him as an outstanding mathematician and "undisputed genius."

Below, a documentary on Qian and a movie-length biopic (English subtitles).

Saturday, May 22, 2021

Feynman Lectures on the Strong Interactions (Jim Cline notes)


Professor James Cline (McGill University) recently posted a set of lecture notes from Feynman's last Caltech course, on quantum chromodynamics. Cline, then a graduate student, was one of the course TAs and the notes were meant to be assembled into a monograph. Thanks to Tim Raben for pointing these out to me.

The content seems a bit more elementary than in John Preskill's Ph230abc, a special topics course on QCD taught in 1983-84. I still consider John's notes to be one of the best overviews of nonperturbative aspects of QCD, a rather deep subject. However, as Cline remarks, there is unsurprisingly something special about these lectures: Feynman was an inspiring teacher, presenting everything in an incisive and fascinating way that bore his own unmistakable mark.

The material on QFT in non-integer spacetime dimensions is, as far as I know, original to Feynman. Dimensional regularization of gauge theory was popularized by 't Hooft and Veltman, but the analytic continuation to d = 4 - ε is specific to the loop integrals (i.e., concrete mathematical expressions) that appear in perturbation theory. Here Feynman is, more ambitiously, exploring whether the quantum gauge theory itself can be meaningfully extended to a non-integer number of spacetime dimensions.
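For context, the continuation 't Hooft and Veltman popularized acts on concrete integrals; the standard textbook identity (Euclidean signature) is

$$
\int \frac{d^d k}{(2\pi)^d}\,\frac{1}{(k^2+\Delta)^2}
\;=\; \frac{\Gamma\!\left(2-\tfrac{d}{2}\right)}{(4\pi)^{d/2}}\,\Delta^{d/2-2}
\;\xrightarrow{\;d\,=\,4-\epsilon\;}\;
\frac{1}{(4\pi)^2}\left(\frac{2}{\epsilon}-\gamma_E+\ln\frac{4\pi}{\Delta}+O(\epsilon)\right),
$$

in which d enters only through Gamma functions and powers of Δ, not through any definition of the theory itself in non-integer dimensions.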
Feynman Lectures on the Strong Interactions
Richard P. Feynman, James M. Cline
These twenty-two lectures, with exercises, comprise the extent of what was meant to be a full-year graduate-level course on the strong interactions and QCD, given at Caltech in 1987-88. The course was cut short by the illness that led to Feynman's death. Several of the lectures were finalized in collaboration with Feynman for an anticipated monograph based on the course. The others, while retaining Feynman's idiosyncrasies, are revised similarly to those he was able to check. His distinctive approach and manner of presentation are manifest throughout. Near the end he suggests a novel, nonperturbative formulation of quantum field theory in D dimensions. Supplementary material is provided in appendices and ancillary files, including verbatim transcriptions of three lectures and the corresponding audiotaped recordings.
The image below is from some of Feynman's handwritten notes (in this case, about the Gribov ambiguity in Faddeev-Popov gauge fixing) that Cline included in the manuscript. There are also links to audio from some of the lectures. As in some earlier notebooks, Feynman sometimes writes "guage" instead of gauge.

Sunday, May 02, 2021

40 Years of Quantum Computation and Quantum Information


This is a great article on the 1981 conference which one could say gave birth to quantum computing / quantum information.
Technology Review: Quantum computing as we know it got its start 40 years ago this spring at the first Physics of Computation Conference, organized at MIT’s Endicott House by MIT and IBM and attended by nearly 50 researchers from computing and physics—two groups that rarely rubbed shoulders.
Twenty years earlier, in 1961, an IBM researcher named Rolf Landauer had found a fundamental link between the two fields: he proved that every time a computer erases a bit of information, a tiny bit of heat is produced, corresponding to the entropy increase in the system. In 1972 Landauer hired the theoretical computer scientist Charlie Bennett, who showed that the increase in entropy can be avoided by a computer that performs its computations in a reversible manner. Curiously, Ed Fredkin, the MIT professor who cosponsored the Endicott Conference with Landauer, had arrived at this same conclusion independently, despite never having earned even an undergraduate degree. Indeed, most retellings of quantum computing’s origin story overlook Fredkin’s pivotal role.
Fredkin’s unusual career began when he enrolled at the California Institute of Technology in 1951. Although brilliant on his entrance exams, he wasn’t interested in homework—and had to work two jobs to pay tuition. Doing poorly in school and running out of money, he withdrew in 1952 and enlisted in the Air Force to avoid being drafted for the Korean War.
A few years later, the Air Force sent Fredkin to MIT Lincoln Laboratory to help test the nascent SAGE air defense system. He learned computer programming and soon became one of the best programmers in the world—a group that probably numbered only around 500 at the time.
Upon leaving the Air Force in 1958, Fredkin worked at Bolt, Beranek, and Newman (BBN), which he convinced to purchase its first two computers and where he got to know MIT professors Marvin Minsky and John McCarthy, who together had pretty much established the field of artificial intelligence. In 1962 he accompanied them to Caltech, where McCarthy was giving a talk. There Minsky and Fredkin met with Richard Feynman ’39, who would win the 1965 Nobel Prize in physics for his work on quantum electrodynamics. Feynman showed them a handwritten notebook filled with computations and challenged them to develop software that could perform symbolic mathematical computations. ...
... in 1974 he headed back to Caltech to spend a year with Feynman. The deal was that Fredkin would teach Feynman computing, and Feynman would teach Fredkin quantum physics. Fredkin came to understand quantum physics, but he didn’t believe it. He thought the fabric of reality couldn’t be based on something that could be described by a continuous measurement. Quantum mechanics holds that quantities like charge and mass are quantized—made up of discrete, countable units that cannot be subdivided—but that things like space, time, and wave equations are fundamentally continuous. Fredkin, in contrast, believed (and still believes) with almost religious conviction that space and time must be quantized as well, and that the fundamental building block of reality is thus computation. Reality must be a computer! In 1978 Fredkin taught a graduate course at MIT called Digital Physics, which explored ways of reworking modern physics along such digital principles.
Feynman, however, remained unconvinced that there were meaningful connections between computing and physics beyond using computers to compute algorithms. So when Fredkin asked his friend to deliver the keynote address at the 1981 conference, he initially refused. When promised that he could speak about whatever he wanted, though, Feynman changed his mind—and laid out his ideas for how to link the two fields in a detailed talk that proposed a way to perform computations using quantum effects themselves.
Feynman explained that computers are poorly equipped to help simulate, and thereby predict, the outcome of experiments in particle physics—something that’s still true today. Modern computers, after all, are deterministic: give them the same problem, and they come up with the same solution. Physics, on the other hand, is probabilistic. So as the number of particles in a simulation increases, it takes exponentially longer to perform the necessary computations on possible outputs. The way to move forward, Feynman asserted, was to build a computer that performed its probabilistic computations using quantum mechanics.
[ Note to reader: the discussion in the last sentences above is a bit garbled. The exponential difficulty that classical computers have with quantum calculations has to do with entangled states which live in Hilbert spaces of exponentially large dimension. Probability is not really the issue; the issue is the huge size of the space of possible states. Indeed quantum computations are strictly deterministic unitary operations acting in this Hilbert space. ]
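A minimal sketch of why the size of the state space is the real obstacle (my own illustration): storing the amplitudes of an n-qubit state classically requires memory exponential in n.

```python
# Storing an n-qubit state vector classically takes 2**n complex amplitudes.
# This exponential blowup of Hilbert space dimension, not probability,
# is what defeats classical simulation of entangled quantum systems.
for n in (20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30  # complex128 = 16 bytes per amplitude
    print(f"{n} qubits: {amplitudes:.2e} amplitudes, ~{gib:,.2f} GiB")
```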

Feynman hadn’t prepared a formal paper for the conference, but with the help of Norm Margolus, PhD ’87, a graduate student in Fredkin’s group who recorded and transcribed what he said there, his talk was published in the International Journal of Theoretical Physics under the title “Simulating Physics with Computers.” ...

Feynman's 1981 lecture Simulating Physics With Computers.

Fredkin was correct about the (effective) discreteness of spacetime, although he probably did not realize this is a consequence of gravitational effects: see, e.g., Minimum Length From First Principles. In fact, Hilbert Space (the state space of quantum mechanics) itself may be discrete.



Related:


My paper on the Margolus-Levitin Theorem in light of gravity:

We derive a fundamental upper bound on the rate at which a device can process information (i.e., the number of logical operations per unit time), arising from quantum mechanics and general relativity. In Planck units a device of volume V can execute no more than the cube root of V operations per unit time. We compare this to the rate of information processing performed by nature in the evolution of physical systems, and find a connection to black hole entropy and the holographic principle.
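In Planck units the bound stated in the abstract reads

$$ \frac{dN_{\rm ops}}{dt} \;\lesssim\; V^{1/3}, $$

i.e., the operation rate scales with the linear size of the device rather than with its volume.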

Participants in the 1981 meeting:

Physics of Computation Conference, Endicott House, MIT, May 6–8, 1981. 1 Freeman Dyson, 2 Gregory Chaitin, 3 James Crutchfield, 4 Norman Packard, 5 Panos Ligomenides, 6 Jerome Rothstein, 7 Carl Hewitt, 8 Norman Hardy, 9 Edward Fredkin, 10 Tom Toffoli, 11 Rolf Landauer, 12 John Wheeler, 13 Frederick Kantor, 14 David Leinweber, 15 Konrad Zuse, 16 Bernard Zeigler, 17 Carl Adam Petri, 18 Anatol Holt, 19 Roland Vollmar, 20 Hans Bremerman, 21 Donald Greenspan, 22 Markus Buettiker, 23 Otto Floberth, 24 Robert Lewis, 25 Robert Suaya, 26 Stand Kugell, 27 Bill Gosper, 28 Lutz Priese, 29 Madhu Gupta, 30 Paul Benioff, 31 Hans Moravec, 32 Ian Richards, 33 Marian Pour-El, 34 Danny Hillis, 35 Arthur Burks, 36 John Cocke, 37 George Michaels, 38 Richard Feynman, 39 Laurie Lingham, 40 P. S. Thiagarajan, 41 Marin Hassner, 42 Gerald Vichnaic, 43 Leonid Levin, 44 Lev Levitin, 45 Peter Gacs, 46 Dan Greenberger. (Photo courtesy Charles Bennett)

Monday, September 28, 2020

Feynman on AI

Thanks to a reader for sending the video to me. The first clip is of Feynman discussing AI, taken from the longer 1985 lecture in the second video.

There is not much to disagree with in his remarks on AI. He was remarkably well calibrated and would not have been very surprised by what has happened in the following 35 years, except that he did not anticipate (at least, does not explicitly predict) the success that neural nets and deep learning would have for the problem that he describes several times as "pattern recognition" (face recognition, fingerprint recognition, gait recognition). Feynman was well aware of early work on neural nets, through his colleague John Hopfield. [1] [2] [3]

I was at Caltech in 1985 and this is Feynman as I remember him. To me, still a teenager, he seemed ancient. But his mind was marvelously active! As you can see from the talk, he was following the fields of AI and computation rather closely.

Of course, he and other Manhattan Project physicists were present at the creation. They had to use crude early contraptions for mechanical calculation in bomb design computations. Thus, the habit of reducing a complex problem (whether in physics or machine learning) to primitive operations was second nature. For kids of my generation it was already not second nature -- we grew up with early "home computers" like the Apple II and Commodore, so there was a black-box magic aspect even to programming in high-level languages. Machine language was useful for speeding up video games, but not everyone learned it. The problem is even worse today: children first encounter computers as phones or tablets that already seem like magic. The highly advanced nature of these devices discourages them from trying to grasp the underlying first principles.

If I am not mistaken the t-shirt he is wearing is from the startup Thinking Machines, which built early parallel supercomputers.

Just three years later he was gone. The finely tuned neural connections in his brain -- which allowed him to reason with such acuity and communicate with such clarity still in 1985 -- were lost forever.


[Embedded video]


[Embedded video]

Thursday, April 23, 2020

Vineer Bhansali: Physics, Tail Risk Hedging, and 900% Coronavirus Returns - Manifold Episode #43

[Embedded video]

Steve and Corey talk with theoretical physicist turned hedge fund investor Vineer Bhansali. Bhansali describes his transition from physics to finance, his firm LongTail Alpha, and his recent outsize returns from the coronavirus financial crisis. Also discussed: derivatives pricing, random walks, helicopter money, and Modern Monetary Theory.

Transcript

LongTail Alpha

LongTail Alpha’s OneTail Hedgehog Fund II had 929% Return (Bloomberg)

A New Anomaly Matching Condition? (1992)
https://arxiv.org/abs/hep-ph/9211299

Added: Background on derivatives history here. AFAIK high energy physicist M.F.M. Osborne was the first to suggest the log-normal random walk model for securities prices, in the 1950s. Bachelier suggested an additive model which does not even make logical sense. See my articles in Physics World: 1 , 2
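A minimal sketch of the distinction (illustrative parameters of my own, not from the post): Bachelier's additive walk moves the price itself by Gaussian increments and can go negative, while Osborne's log-normal walk moves the log price and stays positive.

```python
import numpy as np

rng = np.random.default_rng(0)
shocks = rng.normal(0.0, 0.02, size=250)  # daily shocks, sigma = 2%

additive = 100.0 + np.cumsum(100.0 * shocks)   # Bachelier: price increments
lognormal = 100.0 * np.exp(np.cumsum(shocks))  # Osborne: log-price increments

# The additive walk can wander below zero -- the logical flaw for a
# limited-liability security. The log-normal price is always positive.
print("additive min:", round(additive.min(), 2))
print("log-normal min:", round(lognormal.min(), 2))
```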



Monday, January 27, 2020

Robert Christy and Nuclear Electromagnetic Pulses (EMP)


I always wondered who first worked out the theory of Electromagnetic Pulses (EMP) produced by nuclear weapons. That an EMP would result from a nuclear explosion was known from the beginning:
During the first United States nuclear test on 16 July 1945, electronic equipment was shielded because Enrico Fermi expected the electromagnetic pulse. The official technical history for that first nuclear test states, "All signal lines were completely shielded, in many cases doubly shielded. In spite of this many records were lost because of spurious pickup at the time of the explosion that paralyzed the recording equipment."[2] During British nuclear testing in 1952–1953, instrumentation failures were attributed to "radioflash", which was their term for EMP.
But it's far from obvious that prompt gamma rays from the nuclear explosion produce Compton-effect ionization, and that the resulting Compton current, interacting with the Earth's magnetic field, produces coherent synchrotron radiation forming a dangerous EM pulse.
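A rough scale check (my own numbers, standard EMP physics rather than anything from the post): a ~1 MeV Compton electron in the geomagnetic field B ≈ 0.5 gauss has gyroradius

$$ r \;=\; \frac{p}{eB} \;\approx\; \frac{1.4\ \mathrm{MeV}/c}{e\,(5\times 10^{-5}\ \mathrm{T})} \;\approx\; 10^{2}\ \mathrm{m}, $$

so the field turns the forward Compton current into a transverse current on a ~100 m scale, and it is this coherent transverse current that radiates the pulse.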



From Achieving the Rare: Robert F. Christy's Journey in Physics and Beyond:
During Cold War years in the 1950’s, a number of mysterious communication disruptions occurred. It was feared that the communications had been sabotaged in some way by the Soviet Union. Robert was at Caltech at the time, but was also a consultant for the Rand Corporation, and became aware of this phenomenon.

For years Robert had been outspoken in his opposition to atmospheric testing of nuclear weapons, and had put a good deal of effort into understanding the effects. At that time the U.S. was still performing atmospheric tests of nuclear weapons. One test involved exploding an atomic bomb at a very high altitude, roughly 20 miles.

It had been known that atomic bombs could sometimes cause problems with electronics in the vicinity, but it was Robert who single-handedly worked out the physics by which atomic explosions in the upper atmosphere would produce an electromagnetic pulse (EMP) that could have catastrophic effects on circuits on the ground at very great distances, and could thereby disrupt communications. He was thus the first to connect the disruption of communications with the high-altitude nuclear explosions. He wrote this up as a classified report. It should be noted, however, that the warning in this report did not prevent the U.S. from carrying out the very-high-altitude “Starfish Prime” test of 1962. In this test a 1.4 megaton bomb was exploded over the Pacific Ocean at an altitude of 250 miles, causing electrical damage in Hawaii (about 900 miles away). The Soviets conducted similar high-altitude tests over Kazakhstan in the same year. These caused even more extensive damage since they were above an inhabited area rather than over the ocean.

The EMP effect of high-altitude atomic explosions is now widely known, but it was Robert Christy who first brought this phenomenon to the attention of the U.S. government. ...

[ See also articles by Longmire and Pfeffer. Perhaps the Soviets were ahead of Christy? Kompaneets, A. S., Radio Emission from an Atomic Explosion, Institute for Chemical Physics, Academy of Sciences, USSR, Journal of Experimental and Theoretical Physics (USSR), English translation in volume 35, 1538-1544 (December 1958); Original article in Russian in JETP, 8, 1076-1080 (1954). ]
The first implosion atomic bomb (Fat Man) was known as the Christy Gadget:
... the Los Alamos team discovered that the interface between the detonating explosives and the hollow sphere could become unstable and ruin the crushing power of the blast wave.

Dr. Christy, while studying implosion tests, realized that a solid core could be compressed far more uniformly, and he worked hard in the days that followed to convince his colleagues of its superiority. He succeeded, and the hollow core was replaced with one made of solid plutonium metal.

... Robert Frederick Christy was born May 14, 1916, in Vancouver and studied physics at the University of British Columbia. He was a graduate student at the University of California, Berkeley, under J. Robert Oppenheimer, a leading theoretical physicist who became known as the father of the atomic bomb.

After completing his studies in 1941, Dr. Christy worked at the University of Chicago before being recruited to join the Los Alamos team when Oppenheimer became its scientific director.

After the war, Dr. Christy joined Caltech in theoretical physics and stayed at the university for the rest of his academic career, serving as a faculty chairman, vice president, provost (from 1970 to 1980) and acting president (1977-78). He was elected to the National Academy of Sciences.

Saturday, March 02, 2019

Kip Thorne on Caltech and Black Holes

[Embedded video]

See LIGO Detects Gravity Waves and The Christy Gadget.
Techno-pessimists should note that detecting gravity waves is much, much harder than landing on the moon. LIGO measured a displacement 1/1000 of a neutron radius, in a noisy terrestrial background, accounting even for quantum noise.

https://www.ligo.caltech.edu/: 9/14/15 detection of BH-BH (~ 30 solar masses) merger at a distance of ~1.3 billion light years. The energy in the gravitational wave signal was ~3 solar masses!

Here is the paper http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.061102

When I was an undergraduate, I toured the early LIGO prototype, which was using little car-shaped rubber erasers as shock absorbers. Technology has improved since then, and the real device is much bigger.
As Kip makes clear in his talk, the detection of gravity waves was a ~50 year project involving large numbers of very smart physicists and engineers, with the sustained support of some of the most impressive scientific institutions in the world (Caltech, MIT, NSF, Moscow State University). Entirely new technologies and areas of theoretical and experimental physics had to be developed to bring this dream to fruition.

I learned general relativity from Kip when I was at Caltech. The photo below was taken in Eugene, Oregon. Physics as a Strange Attractor.

Thursday, January 24, 2019

On with the Show


Our YouTube / podcast show is live!

Show Page

YouTube Channel

Podcast version available on iTunes and Spotify.

Our plan is to record a new one every 1-2 weeks. We're in the process of scheduling now, so if you have contacted me to be on the show, or have suggested a guest, please bear with us as we get going.
Manifold man·i·fold /ˈmanəˌfōld/ many and various

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve and Corey have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

[Embedded video]

[Embedded video]

Monday, December 18, 2017

Quantum Computing near a Tipping Point?

I received an email from a physicist colleague suggesting that we might be near a "tipping point" in quantum computation. I've sort of followed quantum computation and quantum information as an outsider for about 20 years now, but haven't been paying close attention recently because it seems that practical general purpose quantum computers are still quite distant. Furthermore, I am turned off by the constant hype in the technology press...

But perhaps my opinion is due for an update? I know some real quantum computing people read this blog, so I welcome comments.

Here's part of what I wrote back:
I'm not sure what is meant by "tipping point" -- I don't think we know yet what qubit technology can be scaled to the point of making Shor's Algorithm feasible. The threat to classical cryptography is still very far off -- you need millions* of qubits and the adversary can always just increase the key length; the tradeoffs are likely to be in favor of the classical method for a long time.

Noisy quantum simulators of the type Preskill talks about might be almost possible (first envisioned by Feynman in the Caltech class he gave in the 1980s: Limits to Computation). These are scientifically very interesting but I am not sure that there will be practical applications for some time.

* This is from distant memory so might not be quite right. The number of ideal qubits needed would be a lot less, but with imperfect qubits/gates and quantum error-correction, etc., I seem to remember a result like this. Perhaps millions is the number of gates, not qubits? (See here.)
These are the Preskill slides I mentioned -- highly recommended. John Preskill is the Feynman Professor of Theoretical Physics at Caltech :-)



Here's a summary of current and near-term hardware capability:

Sunday, July 30, 2017

Like little monkeys: How the brain does face recognition

[Embedded video]

This is a Caltech TEDx talk from 2013, in which Doris Tsao discusses her work on the neuroscience of human face recognition. Recently I blogged about her breakthrough in identifying the face recognition algorithm used by monkey (and presumably human) brains. The algorithm seems similar to those used in machine face recognition: individual neurons perform feature detection just as in neural nets. This is not surprising from a purely information-theoretic perspective, if we just think about the space of facial variation and the optimal encoding. But it is amazing to be able to demonstrate it by monitoring specific neurons in a monkey brain.

An earlier research claim, that certain neurons are sensitive only to specific faces, seems not to be true (she recapitulates it at 8:50 in the video above, from four years ago). I always found it implausible.

On her faculty web page Tsao talks about her decision to attend Caltech as an undergraduate:
One day, my father went on a trip to California and took a tour of Caltech with a friend. He came back and told me about a monastery for science, located under the mountains amidst flowers and orange trees, where all the students looked very skinny and super smart, like little monkeys. I was intrigued. I went to a presentation about Caltech by a visiting admissions officer, who showed slides of students taking tests under olive trees, swimming in the Pacific, huddled in a dorm room working on a problem set... I decided: this is where I want to go to college! I dreamed every day about being accepted to Caltech. After I got my acceptance letter, I began to worry that I would fall behind in the first year, since I had heard about how hard the course load is. So I went to the library and started reading the Feynman Lectures. This was another world…where one could see beneath the surface of things, ask why, why, why, why? And the results of one’s mental deliberations actually could be tested by experiments and reveal completely unexpected yet real phenomena, like magnetism as a consequence of the invariance of the speed of light.
See also Feynman Lectures: Epilogue and Where Men are Men, and Giants Walk the Earth.

Thursday, February 16, 2017

Management by the Unusually Competent



How did we get ICBMs? How did we get to the moon? What are systems engineering and systems management? Why do some large organizations make rapid progress, while others spin their wheels for decades at a time? Dominic Cummings addresses these questions in his latest essay.

Photo above of Schriever and Ramo. More Dom.
... In 1953, a relatively lowly US military officer Bernie Schriever heard von Neumann sketch how by 1960 the United States would be able to build a hydrogen bomb weighing less than a ton and exploding with the force of a megaton, about 80 times more powerful than Hiroshima. Schriever made an appointment to see von Neumann at the IAS in Princeton on 8 May 1953. As he waited in reception, he saw Einstein potter past. He talked for hours with von Neumann who convinced him that the hydrogen bomb would be progressively shrunk until it could fit on a missile. Schriever told Gardner about the discussion and 12 days later Gardner went to Princeton and had the same conversation with von Neumann. Gardner fixed the bureaucracy and created the Strategic Missiles Evaluation Committee. He persuaded von Neumann to chair it and it became known as ‘the Teapot committee’ or ‘the von Neumann committee’. The newly formed Ramo-Wooldridge company, which became Thompson-Ramo-Wooldridge (I’ll refer to it as TRW), was hired as the secretariat.

The Committee concluded (February 1954) that it would be possible to produce intercontinental ballistic missiles (ICBMs) by 1960 and deploy enough to deter the Soviets by 1962, that there should be a major crash programme to develop them, and that there was an urgent need for a new type of agency with a different management approach to control the project. Although intelligence was thin and patchy, von Neumann confidently predicted on technical and political grounds that the Soviet Union would engage in the same race. It was discovered years later that the race had already been underway partly driven by successful KGB operations. Von Neumann’s work on computer-aided air defence systems also meant he was aware of the possibilities for the Soviets to build effective defences against US bombers.

‘The nature of the task for this new agency requires that over-all technical direction be in the hands of an unusually competent group of scientists and engineers capable of making systems analyses, supervising the research phases, and completely controlling experimental and hardware phases of the program… It is clear that the operation of this new group must be relieved of excessive detailed regulation by existing government agencies.’ (vN Committee, emphasis added.)

A new committee, the ICBM Scientific Advisory Committee, was created and chaired by von Neumann so that eminent scientists could remain involved. One of the driving military characters, General Schriever, realised that people like von Neumann were an extremely unusual asset. He said later that ‘I became really a disciple of the scientists… I felt strongly that the scientists had a broader view and had more capabilities.’ Schriever moved to California and started setting up the new operation but had to deal with huge amounts of internal politics as the bureaucracy naturally resisted new ideas. The Defense Secretary, Wilson, himself opposed making ICBMs a crash priority.

... Almost everybody hated the arrangement. Even the Secretary of the Air Force (Talbott) tried to overrule Schriever and Ramo. It displaced the normal ‘prime contractor’ system in which one company, often an established airplane manufacturer, would direct the whole programme. Established businesses were naturally hostile. Traditional airplane manufacturers were run very much on Taylor’s principles with rigid routines. TRW employed top engineers who would not be organised on Taylor’s principles. Ramo, also a virtuoso violinist, had learned at Caltech the value of a firm grounding in physics and an interdisciplinary approach in engineering. He and his partner Wooldridge had developed their ideas on systems engineering before starting their own company. The approach was vindicated quickly when TRW showed how to make the proposed Atlas missile much smaller and simpler, therefore cheaper and faster to develop.

... According to Johnson, almost all the proponents of systems engineering had connections with either Caltech (where von Karman taught and JPL was born) or MIT (which was involved with the Radiation Lab and other military projects during World War 2). Bell Labs, which did R&D for AT&T, was also a very influential centre of thinking. The Jet Propulsion Laboratory (JPL) managed by Caltech also, under the pressure of repeated failure, independently developed systems management and configuration control. They became technical leaders in space vehicles. NASA, however, did not initially learn from JPL.

... Philip Morse, an MIT physicist who headed the Pentagon’s Weapons Systems Evaluation Group after the war, reflected on this resistance:
‘Administrators in general, even the high brass, have resigned themselves to letting the physical scientist putter around with odd ideas and carry out impractical experiments, as long as things experimented with are solutions or alloys or neutrons or cosmic rays. But when one or more start prying into the workings of his own smoothly running organization, asking him and others embarrassing questions not related to the problems he wants them to solve, then there’s hell to pay.’ (Morse, ‘Operations Research, What is It?’, Proceedings of the First Seminar in Operations Research, November 8–10, 1951.)



The Secret of Apollo: Systems Management in American and European Space Programs, Stephen B. Johnson.

Sunday, December 25, 2016

Time and Memory

Over the holiday I started digging through my mom's old albums and boxes of photos. I found some pictures I didn't know existed!

Richard Feynman and the 19-year-old me at my Caltech graduation:



With my mom that morning -- hung-over, but very happy! I think those are some crazy old school Ray Bans :-)



Memories of Feynman: "Hey SHOE!", "Gee, you're a BIG GUY. Do you ever go to those HEALTH clubs?"

This is me at ~200 pounds, playing LB and RB back when Caltech still had a football team. Plenty of baby fat! I ran sprints for football but never longer distances. I dropped 10 or 15 pounds just by jogging a few times per week between senior year and grad school.




Here I am in graduate school. Note the Miami Vice look -- no socks!



Ten years after college graduation, as a Yale professor, competing in Judo and BJJ in the 80 kg (176 lbs) weight category. The jiujitsu guys thought it was pretty funny to have a professor on the mat! This photo was taken on the Kona coast of the big island in Hawaii. I had been training with Egan Inoue at Grappling Unlimited in Honolulu.



Baby me:

Friday, October 07, 2016

Where Nobel winners get their start (Nature)

Nature covers some work by Jonathan Wai and myself. See here for a broader ranking of US schools, which includes Nobel, Turing, Fields awards and National Academies membership.
Where Nobel winners get their start (Nature)

Undergraduates from small, elite institutions have the best chance of winning a Nobel prize.

There are many ways to rank universities, but one that’s rarely considered is how many of their graduates make extraordinary contributions to society. A new analysis does just that, ranking institutions by the proportion of their undergraduates that go on to win a Nobel prize. [ Note: includes Literature, Economics, Peace, as well as science prizes. ]

Two schools dominate the rankings: École Normale Supérieure (ENS) in Paris and the California Institute of Technology (Caltech) in Pasadena. These small, elite institutions each admit fewer than 250 undergraduate students per year, yet their per capita production of Nobelists outstrips some larger world-class universities by factors of hundreds.

“This is a way to identify colleges that have a history of producing major impact,” says Jonathan Wai, a psychologist at Duke University in Durham, North Carolina, and a co-author of the unpublished study. “It gives us a new way of thinking about and evaluating what makes an undergraduate institution great.”

Wai and Stephen Hsu, a physicist at Michigan State University in East Lansing, examined the 81 institutions worldwide with at least three alumni who have received Nobel prizes in chemistry, physiology or medicine, physics and economics between 1901 and 2015. To meaningfully compare schools, which have widely varying alumni populations, the team divided the number of Nobel laureates at a school by its estimated number of undergraduate alumni.

Top Nobel-producing undergraduate institutions

Rank | School | Country | Nobelists per capita (UG alumni)
1 | École Normale Supérieure | France | 0.00135
2 | Caltech | US | 0.00067
3 | Harvard University | US | 0.00032
4 | Swarthmore College | US | 0.00027
5 | Cambridge University | UK | 0.00025
6 | École Polytechnique | France | 0.00025
7 | MIT | US | 0.00025
8 | Columbia University | US | 0.00021
9 | Amherst College | US | 0.00019
10 | University of Chicago | US | 0.00017

Small but mighty

Many of the top Nobel-producing schools are private, and have significant financial resources. Among the more surprising high performers were several very small US liberal-arts colleges, such as Swarthmore College in Pennsylvania (ranked at number 4) and Amherst College in Massachusetts (number 9).

“What these smaller schools are doing might serve as important undergraduate models to follow in terms of selection and training,” says Wai, who adds that, although admission to one of the colleges on the list is no guarantee of important achievements later in life, the probability is much higher for these select matriculants.

To gauge trends over time, Wai cut the sample of 870 laureates into 20-year bands. US universities, which now make up almost half of the top 50 list, began to dominate after the Second World War. Whereas French representation in the Nobel ranks has declined over time, top-ranked ENS has remained steady in its output.

Hsu and Wai had previously performed two similar, but broader, analyses of the rate at which US universities produce winners of the Nobel prize, Fields Medal (in mathematics) or Turing Award (in computer science), as well as members of the US National Academies of Sciences, Engineering, and Medicine. These studies produced rankings of US institutions that are similar to the new, global Nobel rankings.

Lessons learned
Santo Fortunato, a theoretical physicist at Indiana University Bloomington who has researched trends in Nobel prizewinners, deems the analyses “quite interesting”, but cautions that the methodology cannot produce a highly accurate or predictive ranking. “There is a high margin of error due to the low numbers of prominent scholars,” says Fortunato. [ See here for a broader ranking of US schools, which includes Nobel, Turing, Fields awards and National Academies membership. ]

Wai and Hsu agree that there are statistical uncertainties in their rankings, owing to the small number of prizes awarded each year. The two are confident that the ENS and Caltech lead the pack, but statistical fluctuations could change the order of schools placed from third to ninth, Hsu says.

The researchers say that their findings suggest that more attention should be paid to the role that undergraduate institutions have in their graduates’ outstanding accomplishments. They also argue that quantifiable achievements are a better gauge of the quality of universities than factors such as reputation, graduation rate, faculty and financial resources and alumni donations.

Says Wai, “Our findings identify colleges that excel at producing impact.”
Regarding statistical fluctuations, if one takes the data as an estimator of a school-related probability for each graduate to win a Nobel, then at 95 percent confidence level ENS and Caltech are the top two schools, but fluctuations could (for example) change the order among #3 (Harvard) through #9 (Amherst). In other words, we can't be >95 percent confident that Harvard grads have a higher probability than Amherst grads, although the central value of the estimated probability is higher for Harvard.
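To make the fluctuation argument concrete, here is a hedged sketch (my own, using the Nobel+ counts and 1966-2013 degree totals quoted in the 2015 post further down this page) that treats each school's prize count as Poisson:

```python
import math

# Crude 95% interval on the per-graduate rate: normal approximation
# to the Poisson error on the count, k +/- 1.96*sqrt(k).
def rate_interval(k, n):
    half = 1.96 * math.sqrt(k)
    return max(k - half, 0.0) / n, (k + half) / n

for name, k, n in [("Harvard", 34, 81553),
                   ("Amherst", 4, 18716),
                   ("Caltech", 11, 9348)]:
    lo, hi = rate_interval(k, n)
    print(f"{name}: {k/n:.2e} per graduate, 95% CI ({lo:.2e}, {hi:.2e})")

# The Harvard and Amherst intervals overlap heavily, so their relative
# order is not secure at 95% confidence -- the point made above.
```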


Regarding ENS and their elitist method of selecting students, see below. Two years of preparation for the entrance exam! Also: Les Grandes Ecoles Chinoises and The Normaliens.
Wikipedia: The school, like its sister grandes écoles the École Polytechnique and the École Nationale d'Administration, is very small in size: its core of students, who are called normaliens, are selected via either a highly competitive exam called a concours (Baccalauréat + 2 years) ... Preparation for the "concours" takes place in preparatory classes which last two years ... Most students come from the prépas at the Lycée Louis-le-Grand, the Lycée Henri-IV, and a few other elite establishments in France. Two hundred normaliens are thus recruited every year ...
Lycee Henri-IV is in the Latin Quarter on the left bank, one of my favorite parts of Paris. The book shops and cafes are filled with serious looking young students. Vive la France! :-)

Monday, July 25, 2016

The Mendel of Cancer Genetics


See also earlier post Where Men are Men and Giants Walk the Earth.
NYTimes: Dr. Alfred G. Knudson, the ‘Mendel of Cancer Genetics,’ Dies at 93

Dr. Alfred G. Knudson, who deduced how certain cancers strike a family generation after generation, died on Sunday at his home in Philadelphia. He was 93.

... “Funny as it may sound, heritable cancer was hardly discussed in the 1960s and 1970s,” Dr. Albert de la Chapelle, a professor in the human genetics program at Ohio State University, said in an email.

Dr. Knudson, trained as a pediatrician, tackled the issue by looking at retinoblastoma, a cancer of the eye that strikes children, even newborns. Childhood cancers would be easier to understand, he reasoned, because there would be fewer confounding factors, like the random mutations that accumulate over a lifetime.

“It had been known for a long time that there were inherited forms of retinoblastoma, that it would run in families,” said Dr. Jonathan Chernoff, the chief scientific officer at Fox Chase. “And then there were, on the other hand, sporadic cases that didn’t run in families. Some child would randomly get retinoblastoma.”

Dr. Knudson analyzed the records of retinoblastoma patients and found that the inherited form struck children at a younger age and often in both eyes, while the sporadic cases usually involved older children and just one eye.

That led him to his “two-hit” hypothesis, and his insight that cancer sometimes results not from a particular cause, but rather from the disabling of something known today as the tumor suppressor gene.

... Dr. Chernoff said Dr. Knudson was in some ways “the Mendel of cancer genetics,” referring to Gregor Mendel, the 19th-century monk who demonstrated, through the crossbreeding of pea plants, how traits are passed from one generation to the next.

“He provided the conceptual framework for how we think about cancer now,” Dr. Chernoff said.

Dr. Knudson published his hypothesis in 1971. “Knudson’s hypothesis was conceived before we had a clue about the underlying molecular genetic events,” Dr. de la Chapelle said. “I believe Knudson’s work stimulated retinoblastoma researchers so strongly that this led to an early breakthrough.”

Dr. Knudson’s theory was proved in 1986, when researchers figured out the gene and the mutations that led to the disease.

... Alfred George Knudson Jr. was born on Aug. 9, 1922, in Los Angeles. He went to the California Institute of Technology thinking he would major in physics.

“I had never had any biology in high school,” he recalled in a 2013 interview. “Then, after two years of physics at Caltech, I thought: ‘Oh, they know everything in physics. Why do I want to go into physics?’”

The quantitative aspects of genetics appealed to him, he said: “It has some of the features I admire about physics, so I’ll study that.”

He finished his bachelor of science degree at Caltech in 1944 and went on to receive a medical degree from Columbia in 1947. He returned to Caltech to earn a doctorate in biochemistry and genetics in 1956. He served in the Navy during World War II and the Army during the Korean War.
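A one-line gloss of the two-hit statistics quoted above (my own summary, not from the obituary): if somatic hits arrive at a roughly constant rate μ per target cell, a child who inherits one mutated copy needs only one further hit, while a sporadic case needs two independent hits in the same cell, so cumulative incidence grows roughly as

$$ P_{\text{hereditary}}(t) \;\sim\; \mu t, \qquad P_{\text{sporadic}}(t) \;\sim\; (\mu t)^2, $$

which is qualitatively what Knudson saw: earlier onset and bilateral tumors in the hereditary form.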

Thursday, February 11, 2016

LIGO detects gravity waves

Live-blogging the LIGO announcement of detection of gravity waves. Detection of an event in 2015 (initial science run of advanced LIGO) is good news for the future use of gravity waves as an astrophysical probe -- it suggests a fairly high density of NS-NS, NS-BH, and BH-BH binaries in the universe. Each time astronomers have developed a new probe (radio waves, x-rays, etc.) they have discovered new cosmic phenomena. The future is promising!

Techno-pessimists should note that detecting gravity waves is much, much harder than landing on the moon. LIGO measured a displacement 1/1000 of a neutron radius, in a noisy terrestrial background, accounting even for quantum noise.
https://www.ligo.caltech.edu/: 9/14/15 detection of BH-BH (~ 30 solar masses) merger at a distance of ~1.3 billion light years. The energy in the gravitational wave signal was ~3 solar masses!

Here is the paper http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.061102
When I was an undergraduate, I toured the early LIGO prototype, which was using little car-shaped rubber erasers as shock absorbers. Technology has improved since then, and the real device is much bigger.



Kip Thorne (from whom I learned General Relativity) has been one of the driving forces behind the effort to detect gravity waves for over 40 years. The picture below was taken during a conference in Eugene back in 2005.


Tuesday, October 27, 2015

Where men are men, and giants walk the earth

In this earlier post I advocated for cognitive filtering via study of hard subjects
Thought experiment for physicists: imagine a professor throwing copies of Jackson's Classical Electrodynamics at a group of students with the order, "Work out the last problem in each chapter and hand in your solutions to me on Monday!" I suspect that this exercise produces a highly useful rank ordering within the group, with huge differences in number of correct solutions.
In response, a Caltech friend of mine (Page '87, MIT PhD in Physics) sent this old article from the Caltech News. It describes Professor William Smythe and his infamous course on electromagnetism, which was designed to "weed out weaklings"! The article lists six students who survived Smythe's course and went on to win the Nobel prize in Physics.

Vernon Smith, a "weakling" who deliberately avoided the course, went on to win a Nobel prize in Economics. Smith wrote
The first thing to which one has to adapt is the fact that no matter how high people might sample in the right tail of the distribution for "intelligence," ... that sample is still normally distributed in performing on the materials in the Caltech curriculum.
I remind the reader of the Page House motto: Where men are men, and giants walk the earth :-)

See also Colleges ranked by Nobel, Fields, Turing and National Academies output.


Note added: The article mentions George Trilling, a professor at Berkeley I knew in graduate school. I once wrote an electrodynamics solution set for him, and was surprised that he had the temerity to complain about one of my solutions 8-)

Thursday, September 10, 2015

Colleges ranked by Nobel, Fields, Turing and National Academies output

This Quartz article describes Jonathan Wai's research on the rate at which different universities produce alumni who make great contributions to science, technology, medicine, and mathematics. I think the most striking result is the range of outcomes: the top school outperforms good state flagships (R1 universities) by as much as a thousand times. In my opinion the main causative factor is simply filtering by cognitive ability and other personality traits like drive. Psychometrics works!
Quartz: Few individuals will be remembered in history for discovering a new law of nature, revolutionizing a new technology or captivating the world with their ideas. But perhaps these contributions say more about the impact of a university or college than test scores and future earnings. Which universities are most likely to produce individuals with lasting effect on our world?

The US News college rankings emphasize subjective reputation, student retention, selectivity, graduation rate, faculty and financial resources and alumni giving. Recently, other rankings have proliferated, including some based on objective long-term metrics such as individual earning potential. Yet, we know of no evaluations of colleges based on lasting contributions to society. Of course, such contributions are difficult to judge. In the analysis below, we focus primarily on STEM (science, technology, engineering and medicine/mathematics) contributions, which are arguably the least subjective to evaluate, and increasingly more valued in today’s workforce.

We examined six groups of exceptional achievers divided into two tiers, looking only at winners who attended college in the US. Our goal is to create a ranking among US colleges, but of course one could broaden the analysis if desired. The first level included all winners of the Nobel Prize (physics, chemistry, medicine, economics, literature, and peace), Fields Medal (mathematics) and the Turing Award (computer science). The second level included individuals elected to the National Academy of Sciences (NAS), National Academy of Engineering (NAE) or Institute of Medicine (IOM). The National Academies are representative of the top few thousand individuals in all of STEM.

We then traced each of these individuals back to their undergraduate days, creating two lists to examine whether the same or different schools rose to the top. We wanted to compare results across these two lists to see if findings in the first tier of achievement replicated in the second tier of achievement and to increase sample size to avoid the problem of statistical flukes.

Simply counting up the number of awards likely favors larger schools and alumni populations. We corrected for this by computing a per capita rate of production, dividing the number of winners from a given university by an estimate of the relative size of the alumni population. Specifically, we used the total number of graduates over the period 1966-2013 (an alternative method of estimating base population over 100 to 150 years led to very similar lists). This allowed us to objectively compare newer and smaller schools with older and larger schools.

In order to reduce statistical noise, we eliminated schools with only one or two winners of the Nobel, Fields or Turing prize. This resulted in only 25 schools remaining, which are shown below ...
The vast majority of schools have never produced a winner. #114 Ohio State and #115 Penn State, which have highly ranked research programs in many disciplines, have each produced one winner. Despite being top tier research universities, their per capita rate of production is over 400 times lower than that of the highest ranked school, Caltech. Of course, our ranking doesn’t capture all the ways individuals can impact the world. However, achievements in the Nobel categories, plus math and computer science, are of great importance and have helped shape the modern world.

As a replication check with a larger sample, we move to the second category of achievement: National Academy of Science, Engineering, or Medicine membership. The National Academies originated in an Act of Congress, signed by President Abraham Lincoln in 1863. Lifetime membership is conferred through a rigorous election process and is considered one of the highest honors a researcher can receive.
The results are strikingly similar across the two lists. If we had included schools with two winners in the Nobel/Fields/Turing list, Haverford, Oberlin, Rice, and Johns Hopkins would have been in the top 25 on both. For comparison, very good research universities such as #394 Arizona State, #396 Florida State and #411 University of Georgia are outperformed by the top school (Caltech) by 600 to 900 times. To give a sense of the full range: the per capita rate of production of top school to bottom school was about 449 to one for the Nobel/Fields/Turing list and 1788 to one for the National Academies list. These lists include only schools that produced at least one winner—the majority of colleges have produced zero.

What causes these drastically different odds ratios across a wide variety of leading schools? The top schools on our lists tend to be private, with significant financial resources. However, the top public university, UC Berkeley, is ranked highly on both lists: #13 on the Nobel/Fields/Turing and #31 on the National Academies. Perhaps surprisingly, many elite liberal arts colleges, even those not focused on STEM education, such as Swarthmore and Amherst, rose to the top. One could argue that the playing field here is fairly even: accomplished students at Ohio State, Penn State, Arizona State, Florida State and University of Georgia, which lag the leaders by factors of hundreds or almost a thousand, are likely to end up at the same highly ranked graduate programs as individuals who attended top schools on our list. It seems reasonable to conclude that large differences in concentration or density of highly able students are at least partly responsible for these differences in outcome.

Sports fans are unlikely to be surprised by our results. Among all college athletes only a few will win professional or world championships. Some collegiate programs undoubtedly produce champions at a rate far in excess of others. It would be uncontroversial to attribute this differential rate of production both to differences in ability of recruited athletes as well as the impact of coaching and preparation during college. Just as Harvard has a far higher percentage of students scoring 1600 on the SAT than most schools and provides advanced courses suited to those individuals, Alabama may have more freshman defensive ends who can run the forty yard dash in under 4.6 seconds, and the coaches who can prepare them for the NFL.

One intriguing result is the strong correlation (r ~ 0.5) between our ranking (over all universities) and the average SAT score of each student population, which suggests that cognitive ability, as measured by standardized tests, likely has something to do with great contributions later in life. By selecting heavily on measurable characteristics such as cognitive ability, an institution obtains a student body with a much higher likelihood of achievement. The identification of ability here is probably not primarily due to “holistic review” by admissions committees: Caltech is famously numbers-driven in its selection (it has the highest SAT/ACT scores), and outperforms the other top schools by a sizeable margin. While admission to one of the colleges on the lists above is no guarantee of important achievements later in life, the probability is much higher for these select matriculants.

We cannot say whether outstanding achievement should be attributed to the personal traits of the individual which unlocked the door to admission, the education and experiences obtained at the school, or benefits from alumni networks and reputation. These are questions worthy of continued investigation. Our findings identify schools that excel at producing impact, and our method introduces a new way of thinking about and evaluating what makes a college or university great. Perhaps college rankings should be less subjective and more focused on objective real world achievements of graduates.
For analogous results in college football, see here, here and here. Four and Five star recruits almost always end up at the powerhouse programs, and they are 100x to 1000x more likely to make it as pros than lightly recruited athletes who are nevertheless offered college scholarships.

Saturday, August 08, 2015

Caltech crushes Harvard, MIT, and all the rest

[ See updated version. ]

A few years ago I posted a list of number of Nobel prizes aggregated by undergraduate institution of the winner. A social science researcher who reads this blog got interested in the topic and has compiled much more complete information, which he is preparing to publish.

He reports that the school with the most Nobel + Fields + Turing prizes, normalized to size of (undergraduate) alumni population, is Caltech, which leads both Harvard and MIT (the next highest ranked schools) by a factor of 3 or 4. Caltech beats Michigan by a factor of ~50, and Ohio State (typical of good public flagships) by a factor of ~500!

To obtain a higher-statistics measurement of exceptional achievement, he aggregated living members of the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, and normalized to the size of the alumni population over the last 100 years or so. Caltech again comes out first, beating both Harvard and MIT by a factor of about 1.5. Caltech beats Yale and Princeton by a factor of ~4, and Stanford by a factor of ~5. Swarthmore and Amherst are the leading liberal arts colleges. (See list below.) Caltech beats very good public universities by factors of ~100 and more typical public universities by factors of ~1000.

Berkeley is the best public university in both the Nobel+ and National Academies rankings. Berkeley is roughly tied with Stanford in Nobels+ per alum, but behind in academicians per capita.

As you might expect, correlation of rank order in these lists with average SAT score is pretty high. Likelihood ratios of ~500 or 1000 for high end achievement suggest that 1. psychometric scores used in college admissions have significant validity and 2. high end achievement is correlated to unusually high ability: two schools with very different mean SAT have very different population fractions above some threshold, such as +3 SD. For example at Caltech perhaps half the students are above +3 SD in ability, whereas at an average university only 1 in ~500 are at that level, leading to ratios as large as 100 or 1000!
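A sketch of the tail arithmetic (the +3 SD cutoff and the "half of Caltech" figure are from the paragraph above; note the general-population rate is ~1 in 740, so the post's "1 in ~500" for an average university presumably reflects mild selection):

```python
from math import erf, sqrt

def tail_fraction(z):
    # Fraction of a standard normal population above z standard deviations
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

p = tail_fraction(3.0)
print(f"P(ability > +3 SD) = {p:.5f} (~1 in {1 / p:.0f})")

# If ~half of one school's students are above +3 SD while a typical school
# mirrors the general population, the concentration ratio is roughly:
print(f"concentration ratio ~ {0.5 / p:.0f}x")
```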
Colleges ranked by per capita production of National Academy (Science, Engineering, Medicine) members:

California Institute of Technology
Massachusetts Institute of Technology
Harvard University
Swarthmore College
Yale University
Princeton University
Amherst College
Stanford University
Oberlin College
Columbia University
Haverford College
Cooper Union
Dartmouth College
See also Annals of Psychometry: IQs of eminent scientists, and Vernon Smith at Caltech.


##########################


Correction! The original post quoted results using an estimate of alumni population derived from recent US News data. However, some schools have changed over time in enrollment, so more precise estimates are required. The lists below use graduation numbers reported to IPEDS from 1966-2013 and probably yield more accurate rankings than what was reported above. The main difference on the Nobel+ list is that the University of Chicago jumps to #3 and MIT falls several notches. On the NAS/NAE/IOM list MIT is #2 and Harvard #3.


Undergraduate Institution | Nobel+ | Bachelor's degrees awarded (1966-2013) | Prize per capita ratio

California Institute of Technology | 11 | 9348 | 0.001176722
Harvard University | 34 | 81553 | 0.000416907
University of Chicago | 15 | 37171 | 0.000403540
Swarthmore College | 5 | 15825 | 0.000315956
Columbia University | 20 | 68982 | 0.000289931
Massachusetts Institute of Technology | 14 | 52891 | 0.000264695
Yale University | 13 | 60107 | 0.000216281
Amherst College | 4 | 18716 | 0.000213721

[ For comparison: Penn State and Ohio State ~ 0.0000028 and 0.0000026 ; many schools have zero Nobel+ winners. ]



Undergraduate Institution | NAS+NAE+IOM | Bachelor's degrees awarded (1966-2013) | ratio

California Institute of Technology | 78 | 9348 | 0.0083440308
Massachusetts Institute of Technology | 255 | 52891 | 0.0048212361
Harvard University | 326 | 81553 | 0.0039974005
Swarthmore College | 49 | 15825 | 0.0030963665
Princeton University | 109 | 50633 | 0.0021527462
Amherst College | 35 | 18716 | 0.0018700577
Yale University | 112 | 60107 | 0.0018633437
University of Chicago | 56 | 37171 | 0.0015065508
Stanford University | 117 | 79683 | 0.0014683182

[ For comparison, Arizona State and Florida State ~ 0.000013 ; University of Georgia ~ 0.000008 ]
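For completeness, a minimal sketch of the normalization used in both tables (numbers copied from the Nobel+ table above):

```python
# Per capita rate = prizes / bachelor's degrees awarded 1966-2013,
# using the Nobel+ figures from the table above.
nobel_plus = {
    "Caltech": (11, 9348),
    "Harvard": (34, 81553),
    "Chicago": (15, 37171),
    "Swarthmore": (5, 15825),
}

for school, (prizes, degrees) in nobel_plus.items():
    print(f"{school}: {prizes / degrees:.9f} prizes per graduate")

# Caltech vs Harvard: 0.00118 / 0.00042 ~ 2.8, consistent with the
# "factor of 3 or 4" quoted in the original post before the correction.
```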
