Locklin on science

Things that should be considered essential vitamins but aren’t

Posted in health by Scott Locklin on November 8, 2025

If you look at the history of vitamins, they were all discovered between 1910 and 1948. Casimir Funk invented the idea and called them “vital amines.” The thing is they ain’t all amines: Vitamin C isn’t, nor is B5 or B7 (I think they’re amides; I never took organic chemistry, sorry not sorry). We’ll stick with the “your body needs it” definition. Vitamins were originally discovered to prevent disease, but also to optimize health. I don’t think there is any acute disease associated with Vitamin-D deficiency (which most indoors people have), though it makes you pretty unhealthy if you don’t have enough of it. Or, conversely, helps you to remain pretty healthy if you have a good amount of it. The fact that there exist such powerful vitamins without an acute deficiency syndrome rather indicates that there may be many such cases. Here are a few inexpert suggestions for consideration by people who get paid to think about such things. These are all well-known substances with well-documented effects; I’m not suggesting anything esoteric. Some of ’em I take myself, because spending a few bucks at the supplement shop is cheaper than massive medical intervention later in life, and there are no obvious downsides other than purity/adulterant concerns (which are ubiquitous with anything you put in your mouth anyway).

Spermidine: yes, shut up about the name, Beavis. This is a substance naturally occurring in many healthy foods: wheat germ, meats, cheeses, mushrooms, various kinds of beans, oats. People who get a lot of it live longer: high levels of it have an antiaging effect in animals and humans, especially in heart and brain tissue. You can do pretty well here without supplements; my diet contains large amounts of it (I like beans, fruits and oats). On the other hand I’m willing to bet people who survive on candy and pizza rolls are lacking in it. It wouldn’t be a bad thing to have an RDA for it. The Swedes are ahead of the US here, and actually do have an RDA of around 25-30mg, which is considerably more than most people get.

CoQ10. This is a substance with strong healthspan effects; it does useful stuff for your mitochondria, in particular for those in your heart. Like many non-essential amino acids and other nutrients, the body can synthesize this itself, and it’s in lots of foods (beef, peanuts, organ meats), but as people get older some supplementation may be helpful. Funk might be disappointed, as it’s a quinone rather than an amine. It appears to be acutely cardioprotective, has antioxidant effects, is anti-inflammatory, increases sperm motility, might have an anti-cancer effect, and has strong anti-aging properties. Considering most people drop dead of heart disease, it’s a no-brainer addition to your supplement stack as you get older: ubiquinol is more easily absorbed than ubiquinone, so make sure you get that one. If your quack has you on statins, you should take this stuff as statins prevent your body from making its own.

Menaquinone-7: yes, it’s part of the K vitamin family and other K vitamins might get turned into this in your gut, but it’s another one that has acute health benefits. K vitamins are good for the hormones, longevity, cancer prevention, fat loss, bone health and overall vitality. MK-7 is particularly exciting as it appears to reverse atherosclerosis, and probably has all the other benefits observed in other K variants. Japanese people live a long time, particularly if they eat this very strange food called natto (rotten soybeans) which is extremely high in it. Nattokinase is an unrelated enzyme from natto which may also have cardioprotective effects. I take ’em both because they’re cheap and easily found.

Ergothioneine: this is one I found searching the scholarly research. It’s an oddball amino acid related to histidine. It isn’t officially classed as essential, but it can’t be made in the body, and your tissues seem to require it. You can find it in oats, beans, mushrooms and organ meats. It appears to have anti-aging properties: good for skin, joints, circulatory system, aerobic capacity and brain. It might even be good for your testicles. The best source of it is mushrooms; a daily mushroom or two will give you plenty of the stuff. There are also some oddball Asian fermented foods with significant quantities. Good review here.

Boron: there is no USRDA on this mineral, but there probably should be, at least for older people. Try it and see; you will almost certainly notice if you’re older and short of it. This paper lists some of the benefits. Good for the brain, the hormones, antioxidant effects, magnesium absorption (everyone who sweats on occasion from exercise or sauna should probably get some extra magnesium), bone health, wound healing. It’s present in some fruits, but topping up with a pill or two on occasion is a good idea.

Sunlight: since they created a cheap vitamin-D blood test, pretty much everyone gets scolded for being short of it, generally from being out of the sun. In high latitudes it can’t be helped in wintertime, so people should supplement, but there are other things about the sun which make us healthy. At one point in my sojourn in New Hampshire I realized I wasn’t getting much bright light in my day to day, so I bought myself a “corn cob light,” a sort of artificial sun. I assume the body’s melatonin system needs some bright light from time to time to regulate properly; the effect was dramatic. A few minutes in the morning wakes me up properly. Probably other effects as well.

L-Carnosine: this is a simple dipeptide which is present in meats. So, this is mostly something for the vegetarians to consider. It is also made in the human body, but rate limited by beta-alanine, which is relatively rare in food (there’s 3.2g of beta-alanine in a kilo of beef; it’s higher in organ meats). This is a very cool dipeptide; it has a sort of buffering effect on muscular exertion the way creatine does. It also has anti-inflammatory, anti-oxidant, cardio-protective, anti-diabetic, anti-cancer, heavy-metal chelating, neuroprotective and overall anti-aging effects. One of the interesting effects is anti-glycation: glycation is protein damage from excessive sugar hanging around. It’s why fat people often have terrible skin, but it is a problem which is more than skin-deep: other organs are also negatively affected by glycation. They’re developing an eyedrop form of this substance for the treatment of cataracts and other eye diseases, which is amazing to me. L-carnosine can also be used as a food preservative, which raises the question of “why isn’t it being used as a food preservative?”

Not vitamins, but cool anyway:

Astaxanthin: pretty difficult to get from your diet unless you like salmon or flamingo meat, but very good for you. This is the red coloring in salmon and crab and other seafoods; it comes from algae eaten by sea beasties. Like many of the things listed here, it is an antioxidant; it seems to have decent healthspan effects on skin and the cardiovascular system, has anti-inflammatory and anti-cancer effects, and may help immune response, neural health, eye health and exercise recovery. I don’t know why they don’t just make this the new FD&C red food coloring; there seem to be few downsides. As a nice side effect, it’s a decent 5-alpha reductase inhibitor, and it’s a lot better for you than finasteride. I suppose supplementation with it could be an issue if they made it chemically (since racemic mixtures don’t occur in nature), but it mostly seems to be extracted from yeast. This one is kind of a pain in the ass to source where I live, but it’s one I really like for the broad spectrum effects and I started taking it daily this year.

Curcumin: again, unless you like Indian food, you’re probably not getting much of this from a normal diet. Considering the insane things that OTC painkillers do to people, folks should consider this as an alternative. It apparently works just as well, and doesn’t upset your stomach. Unlike most OTC painkillers, this stuff is actively good for you. Much like the litanies above: anti-oxidant, anti-diabetic, anti-inflammatory, cancer protective, cardio protective, neuro-protective, and general anti-aging properties. The downside to the stuff is it isn’t very bioavailable (as you can tell the next day after a late night curry). Eating it with pepper helps; most supplements include piperine from pepper. I don’t take this regularly, but I do take it for aches and pains and it has the required effects.

Betaine is an old school supplement mostly touted as a digestive aid in HCl form. It’s a methyl donor, like B-vitamins and creatine (sort of). Some people have a genetic condition where they lack methylation capacity. As it turns out this supplement by itself may be anabolic and increase testosterone. It might also help burn fat. Something I occasionally take after a heavy meat meal the way old time bodybuilders took it. If nothing else it seems to aid digestion.


Stochastic computing

Posted in non-standard computer architectures by Scott Locklin on October 31, 2025

I’ve wanted to write about this topic since I started this blerg, but various things have kept me from it. Stochastic computing is a subject I’ve been somewhat aware of since I went through the “Advances in Computers” series in the LBNL library while procrastinating over finishing my PhD thesis. It’s an old idea, and now that sci-hub exists I can look up some of these old papers for a little review. The fact that there is a startup reviving the idea (their origin paper here) gives a little more impetus to do so.

Von Neumann had a vague idea along the lines of stochastic computing in the 1950s, but the first actual stochastic computing architectures were done more or less independently by Brian Gaines and Wolfgang (Ted) Poppelbaum in the mid-1960s. As I said, I first read about this in “Advances in Computers,” and while I didn’t take any notes, it always remained stuck in my noggin. The once-yearly journal started in 1960 and I at least skimmed the abstracts all the way to 2002 or whenever it was I was procrastinating (physics libraries are way better than the internet/wikipedia for this sort of thing). My takeaway was that Tukey’s FFT was godawful important, there were a few cool alternatives to normal computer architectures, and climatological calculations have always been more or less the same thing (aka lame), just with bigger voxels in the old days.

Stochastic computing was one of the cool alternatives; it was competitive with supercomputers back in the 1960s, and it was something a small research group could physically build. As a reminder, a 25ドル million (current year dollars) CDC 6600 supercomputer of 1964 was two megaflops with space for 970 kilobytes of RAM. They made about 100 of them total for the world. An NVIDIA Titan RTX has 24GB and runs at something like 130 teraflops. Your phone is probably 2 teraflops. Something like stochastic gradient descent was, therefore, not worth thinking about even on a supercomputer in those days. Fitting a perceptron or a Hidden Markov model was a considerable technical achievement. People would therefore use various kinds of analog computer for early machine learning research. That includes stuff involving matrices, which were modeled by a matrix of resistors. Analog computers can do matrix math in O(1); it only takes as long as the voltage measurement takes. Analog used to be considered parallel computation for this reason.

One can think of stochastic computing as a way of digitally doing a sort of analog computing. Instead of continuous voltage levels, stochastic computing uses streams of random (known distribution) bits and counts them. This has many advantages; counting bits is pretty easy to do reliably, whereas measuring precise voltages and reliably converting them back into bits requires much more hardware. It’s also more noise immune, in that a noise event is an extra bit or two which doesn’t affect the calculation as much as some unknown voltage would affect an analog machine. Thinking about this a different way, if you’re dealing with machine learning, the numbers being piped around represent probabilities. A bit of noise hitting a stochastic stream of bits might add a couple of bits and is unlikely to make the overall probability greater than 1. If your probability is analog voltage levels, any old noise is very likely to botch your probability and turn it into some ad-hoc Dempster-Shafer thing (which hadn’t been invented yet) where probabilities sum to greater than 1. While I’m talking about probabilities here, let it be known that standard mathematical operations are defined in these computers: addition, multiplication (including for matrices), even calculus, just like with analog computers.
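
To make the bit-stream idea concrete, here’s a minimal simulation sketch in Python with numpy (my own toy, not any historical machine’s design), using the standard unipolar encoding where a number in [0, 1] is the fraction of 1s in a stream: multiplication falls out of an AND gate, and scaled addition out of a multiplexer.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, n):
    """Unipolar encoding: a value p in [0, 1] becomes a length-n stream
    of independent bits with P(bit = 1) = p."""
    return rng.random(n) < p

def from_stream(bits):
    """Decoding is just counting: the estimate of p is the mean."""
    return bits.mean()

n = 100_000            # stream length sets precision (error ~ 1/sqrt(n))
a, b = 0.6, 0.3

# Multiplication of two independent streams is a single AND gate per bit.
product = from_stream(to_stream(a, n) & to_stream(b, n))

# Scaled addition (a + b)/2 is a multiplexer driven by a fair coin:
# for each bit, pick from one stream or the other with probability 1/2.
coin = to_stream(0.5, n)
half_sum = from_stream(np.where(coin, to_stream(a, n), to_stream(b, n)))

print(f"a*b     ~ {product:.3f}   (exact {a * b:.3f})")
print(f"(a+b)/2 ~ {half_sum:.3f}   (exact {(a + b) / 2:.3f})")
```

In hardware the multiplier really is one AND gate, and the scaled adder is a multiplexer plus a random select line; that’s where the economy relative to a conventional binary multiplier comes from.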

Noise immunity is quite a useful property for 1960s era computers, especially in noisy “embedded” environments. TTL was available, but noisy and not particularly reliable for digital calculations. It was contemplated that such machines would be useful in aircraft avionics, and the approach was actually used in a small early autonomous neural net robot, and tested in a British version of LORAN, in a radar tracker, and in a PID system.

The early stochastic computers were generally parts of some kind of early neural net thing. This was a reasonable thing to do, as back then it was known that animal neural nets were pulse-rate encoded. The idea lives on in the liquid state machine and other spiking neural nets, a little-loved neural net design which works sort of like actual neurons, but which is quite difficult to simulate on an ordinary computer that has to calculate serially. Piece of cake for stochastic computers, more or less, as stochastic computers include all the parts you need to build one, rather than simulating pulses flying around. Similarly, probabilistic graphical models are easier to do on this sort of architecture, as you’re modeling probabilities in a straightforward way, rather than the indirect random sampling of data used in PGM packages. Brian Gaines puts the general machine learning problem extremely well: “the processing of a large amount of data with fairly low accuracy in a variable, experience-dependent way.” This is something stochastic machines are well equipped to deal with. It’s also a maxim repeatedly forgotten and remembered in the machine learning community: drop-out, ReLU, regularization, stuff like random forests are all rediscoveries of the concept.

This isn’t a review article, so I’m not mentioning a lot of the gritty details, but there are a lot of nice properties for these things. The bit streams you’re operating on don’t actually need to be that random: just uncorrelated, and there is standard circuitry for dealing with this. Lots of the circuits are extremely simple compared to their analogs on a normal sequential machine; memory is just a delay line, for example. And you can fiddle with precision simply by counting fewer bits.
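
That precision-for-time trade is easy to see in the same sort of toy numpy setup (again, a sketch rather than anyone’s actual hardware): the spread of the decoded value shrinks roughly as 1/sqrt(n), so counting a quarter as many bits costs you about one bit of precision.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.6   # the value carried by the bit stream

# Decode the same value from streams of different lengths, many times over,
# and look at the spread of the estimates: error ~ sqrt(p * (1 - p) / n).
for n in (16, 256, 4096, 16384):
    estimates = (rng.random((200, n)) < p).mean(axis=1)
    print(f"n = {n:5d} bits   std of estimate ~ {estimates.std():.4f}")
```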

Remember, all this 1960s stuff was done with individual transistors and early TTL ICs with a couple of devices on a chip. It was considered useful back then, and competitive with and a lot cheaper than ordinary computers. Now we can put billions of far more reliable transistors down on a single chip; or probably even more unreliable transistors, which is what stochastic computers were designed around. Modern chips also have absurdly fast and reliable serial channels, like the things that AMD chiplets use to talk to each other. You can squirt a lot of bits around, very quickly. Much more quickly than in the 1960s. Oh yeah, and it’s pretty straightforward to map deep neural nets onto this technology, and it was postulated back in 2017 to be much more energy efficient than what we’re doing now. You can even reduce the voltage and deal with the noise byproduct much more gracefully than a regular digital computer.
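
Mapping a neural-net-style inner product onto the primitives above really is straightforward; here’s a toy sketch of my own (sticking to the unipolar [0, 1] encoding, so no negative weights, and not taken from any of the papers): AND gates for the products, and a uniformly driven multiplexer for the scaled sum.

```python
import numpy as np

rng = np.random.default_rng(2)

def to_stream(p, n):
    """Unipolar encoding: bit stream with P(bit = 1) = p."""
    return rng.random(n) < p

def stochastic_neuron(x, w, n=100_000):
    """(1/k) * sum_i w_i * x_i on bit streams: AND gates for the products,
    a uniform-random k-way MUX for the scaled sum."""
    products = np.array([to_stream(xi, n) & to_stream(wi, n)
                         for xi, wi in zip(x, w)])        # shape (k, n)
    select = rng.integers(0, len(x), size=n)              # MUX select lines
    return products[select, np.arange(n)].mean()

x = [0.9, 0.2, 0.7, 0.4]
w = [0.5, 0.8, 0.1, 0.6]
print("stochastic:", round(stochastic_neuron(x, w), 3))
print("exact     :", round(sum(a * b for a, b in zip(x, w)) / len(x), 3))
```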

I assume this is what the Extropic boys are up to. I confess I looked at their website and one of the videos and I had no idea WTF they were talking about. Based Beff Jezos showing us some Josephson gate prototype that has nothing to do with anything (other than being a prototype: no idea why it had to be superconducting), babbling about quantum whatevers did nothing for me other than rustling my jimmies. Anyway, if their “thermodynamic computing” is stochastic computing, it is a very good idea. If it is something else, stochastic computing is still a very good idea and I finally wrote about it. They’re saying impressive things like 10,000x energy efficiency, which of course is aspirational at the moment, but for the types of things neural nets are doing, it seems doable to me, and very much worth doing. In my ideal world, all the capital that has been allocated to horse shit quasi-scams like OpenAI and all the dumb nuclear power startups around it (to say nothing of the quantum computard startups which are all scams) would be dumped into stuff like this. At least we have a decent probability of getting better computards. It’s different enough to be a real breakthrough if it works properly, and there are no obviously impossible lacunae as there are with quantum computards.

Brian Gaines publications:

https://gaines.library.uvic.ca/pubs/

Good recent book on the topic:

Extropic patents:
https://patents.google.com/?assignee=extropic&oq=extropic


Pre-Dreadnoughts: an aesthetic appreciation

Posted in big machines by Scott Locklin on October 3, 2025

Continuing my fascination with transitional designs, I’ve been looking at Pre-Dreadnought battleships. Battleships are ridiculous, but also awesome. The things that came before the classical battleship were even more ridiculous. The Dreadnought, what we think of as a battleship, was the culmination of years of design thought on the topic of sticking a metal boat with lots of cannons on it on the ocean. The basic change of the Dreadnoughts/modern battleships from the earlier idea was using the new wonder technology of steam turbines, and making the armament “all big guns.” Pre-Dreadnought battleships can be thought of as “battleships” through their use of turrets and steel armor. But they often had a bewildering array of different caliber cannons; some heavies, some rapid fire and closer to the waterline. The idea being that small torpedo boats were a real threat, better dealt with using smaller rapid fire guns rather than the big guns. Quite a few militaries thought capital ships were obsolete thanks to the invention of the torpedo; remember the Caio Duilio torpedo boat carrier? The French even stopped battleship production altogether in favor of torpedo boats and other small vessels. The Jeune Ecole group of military intellectuals came to this conclusion. They were early innovators in submarines and destroyers as a result of this, though they later went back to battleships as seen below. Jeune Ecole thinking is probably relevant again today: drones, aka autonomous torpedoes, pose a threat to large ships similar to the one unguided torpedoes posed in the past. One can build fairly long range autonomous torpedoes which would be difficult to detect or defend against. I’m not sure that idea has fully percolated through the navies of the world.

The Dreadnought ships were faster than previous generations and had fewer kinds of guns. Faster is an obvious advantage. Fewer kinds of guns is less obvious, but also important: you only need one fire control system for one kind of gun. Might as well make it a big gun. HMS Dreadnought was the first class of ship which had basically one kind of big gun, giving its name to the idea, but it could easily have been called the IJN Satsuma or USS Michigan type of battleship, as the Japanese and Americans had the idea at around the same time. They arguably all cribbed the concept from an Italian idea published in Jane’s Fighting Ships. All subsequent battleships were more or less of the Dreadnought type, with various incremental improvements in artillery, engines and armor.

The idea for the Dreadnought came about as the result of recent sea battles, mostly involving the Japanese (against China and especially Russia). Gun battles happened at surprisingly long range, making the smaller, shorter range quick firing guns the previous generations of battleships were festooned with, and the arbitrary mid-size guns, extraneous. It also made ballistics calculations easier if you only had one kind of heavy gun. This is a big deal without microchips; fire control systems were cobbled together using machine tools and hand calculators. Turbines made everything happen faster; 21-28 knots instead of the 16-18 or so from reciprocating steam engines. Oil firing eventually became the standard, so stokers could do something else, like work on the fire control system.

What I like best about the pre-Dreadnoughts is the way they look. They mostly had short lifespans and didn’t achieve much in battle, but they looked cool. It looks like you took a post-Dreadnought battleship and festooned it with a bunch of steampunk nonsense that belongs to the horse and buggy era. Casemate guns, bird’s nests, small turrets, portholes, ventilation tubes (I think obviated by the invention of the small electric motor), giant square masts, davits: all the kind of stuff that had been on boats for decades or centuries before, but arguably no longer relevant to the era of steam.

They’re all weird to modern eyes, but one of the weirdest was the Charles Martel class. It used a tumblehome hull which was absurdly fatter at the waterline than anything that makes sense to the modern eye.

French Battleship Charles Martel

I’m not going to list the armaments the thing had; you can see most of them for yourself; a seemingly haphazard festooning with random-sized turrets and gizmos. Why does it have multiple bird’s nests on two masts? The thing is also covered in holes, which makes no sense to me. Maybe it didn’t have electric lighting? Seems risky putting holes in your armor though. Notice the gloriously retro ventilation tubes aft of the second funnel.

USS Texas

The USS Texas (launched 1892) kept the main batteries amidships; something the Dreadnought itself preserved for some of its guns before everyone realized this was ridiculous. Festooned with smaller casemate cannon, it was designed to fight… South American battleships, which were considered a threat back in those days when the Souf Americans still had well functioning economic and legal systems. It was successful in the Spanish American war, despite being obsolete by that time. I like the cheerful decoration at the prow and the steampunk robotech vibe of the rest of the hull.

French battleship Danton

French battleship Danton has a different set of excesses from the Charles Martel; five funnels, all different sizes, exhausting 26 boilers which drove… four turbine engines they got from the British. A semi-dreadnought for the turbines, but it retained the older casemate cannons and oddball calibers. The giant masts give it some of the character of old sailing ships. It was sunk by a U-boat in WW-1.

USS Indiana

This is a late photo of the USS Indiana, around WW-1 times, well past its heyday. I find the added radio mast (the giant tube-like tower aft) and numerous ventilation funnels to be festive.

HMS Jupiter

This thing looks like a contemporary cargo boat and a previous generation two-masted ship of the line collided and got encrusted with mechanical barnacles. One of its most noteworthy feats was a tour as an icebreaker for the Rooskies in WW-1.

Russian battleship Peresvet (Пересвет)

Russian battleship Peresvet was relatively lightly armored and was sunk by the Japanese at Port Arthur. The Japanese salvaged it, drove it around for a decade or so as IJN Sagami, then sold it back to the Russians, now allied with them, for WW-1. It was sunk by a German mine off the Egyptian coast shortly after this. It’s recognizably of its time, with the bird’s nests, ram prow, casemate guns and so on. But it looks like it’s 4-5 stories above the water; massive freeboard. Why? To present a larger broadside target to the Japanese?

Tsesarevich

With the Russian battleship Tsesarevich we’re back to wacky bulbous tumblehome hull designs; it was actually a relative of the Charles Martel above and was made in France. So we know the Russians weren’t fetishists for having 5 story buildings above the water for Japanese target practice. It was also attacked in Port Arthur, and its main contributions to WW-1 were malingering communist outbreaks among the crew.

Cool pre-Dreadnought autism playlist by Drachinifel which inspired this:


Further examples of group madness in technology

Posted in Progress by Scott Locklin on September 26, 2025

First set of examples here:

Examples of group madness in technology


Many of these are taken from the comments on the last one; thanks bros. Again, one of the worst arguments I hear is that “thing X is inevitable because the smart people are doing it.” There are tons of examples of smart people doing and working on stupid things because everyone else is doing it. Everyone conveniently forgets about these stupid things the “smart people” did in the past, blinded by modern marketing techniques trumpeting The Latest Thing. It’s one of my fundamental theorems that “smart people” are even more susceptible to crazes and tulip bulb nonsense than primitive people, mostly because of how they become “smart.” Current year “smart people” achieve their initial successes by book learning. This is fine and dandy as long as someone who is actually smart selected good and true books to read. The problem is, current year “smart people” take marketing baloney as valid inputs. Worse, they also take “smart people” social cues as important inputs: they fear standing out from the herd, even when it is obvious the herd is insane. It’s how stupid political ideas spread as well.

That’s how we have very smart people working on very obvious nonsense like battery powered airplanes. Just to remind everyone: electric motors are heavy, and batteries are something like 100x heavier than guzzoline for the equivalent energy content. To say nothing of the fact that batteries take a lot longer to charge than filling up gas tanks. If you want to make air travel greener, cheapen the FAA testing requirements on certifying small engines for airworthiness. We still use designs requiring leaded gasoline from the 1950s, back when certs were cheaper because of the lack of this bureaucratic overhead. You can get 2-4x more fuel efficiency this way. Better than idiocy like hoping a battery operated airplane will work. You can also just make it 10x more expensive so there are fewer scumbags from America (and everywhere else: I don’t like you either unless you’re Japanese) filling up my favorite places.
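
For a rough sense of the scale of that gap, here is a back-of-envelope using ballpark figures I’m supplying myself (roughly 46 MJ/kg for gasoline, ~160 Wh/kg for a decent battery pack; treat them as assumptions, not measurements):

```python
# Back-of-envelope on specific energy, with ballpark figures (assumptions):
gasoline_mj_per_kg = 46.0        # gasoline is roughly 46 MJ/kg
liion_pack_wh_per_kg = 160.0     # a decent Li-ion battery pack, ~160 Wh/kg
liion_mj_per_kg = liion_pack_wh_per_kg * 3600 / 1e6   # ~0.58 MJ/kg

raw = gasoline_mj_per_kg / liion_mj_per_kg
print(f"raw specific-energy ratio: ~{raw:.0f}x")              # roughly 80x

# Electric drivetrains are more efficient, which claws some of it back,
# but the gap is still enormous for anything that has to stay airborne.
ice_eff, electric_eff = 0.30, 0.90
print(f"after drivetrain efficiency: ~{raw * ice_eff / electric_eff:.0f}x")
```

However you slice it, the mass penalty is an order of magnitude or more, which matters for aircraft in a way it doesn’t for cars.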

Distributed manufacturing. This is a recent one where solid printers are indeed useful and have become important tools in various applications, but people attributed magical properties to additive manufacture. Lots of hobbyists now use plastic solid printers to make plastic telescope rings or Warhammer40k figurines. They get used in big boy machine shops to make enclosures for prototypes, molds and on occasion, metal sintered objects which would be extremely difficult to cast. Believe it or not, making enclosures was a very time consuming part of prototyping in my lifetime. I remember the first solid printed enclosure I got, I thought it was wonderful. Molds; you can now email the pattern to people rather than mailing hand-carved styrofoam molds. Metal sintering solid printing is extremely time and energy intensive (and by nature probably always will be), but it’s totally worth it for something thin with lots of internal structure, like a rocket nozzle. All this is great and welcome progress, but it’s not how it was sold to people maybe 15-20 years ago. Back then, people were overtly saying it would be the end of centralized manufacturing. Every neighborhood would have a star trek replicator which would make them stuff from plans emailed over the internet. This was very obviously the sheerest nonsense to anyone familiar with objects made out of matter and how they are made, yet it was uncritically repeated and amplified by millions of people who should know better. In fact it’s still uncritically repeated, though I guess it is less of a craze than it was 20 years ago as more people have experience with these things. It’s tough to get excited about the “materials savings” for solid printing things when you’re paying 50 bucks for the feedstock needed to make a 2 inch tall Yoda figurine. Back in 2012, there was a mass hysteria about solid printers making guns. As I pointed out, you could make guns of similar quality whittling pieces of wood, or using pipes you get from the hardware store. The only reason this is viable is the legal “gun” part in the US is the lower receiver, which is not a part that needs to be made well to function on an AR-15. Magic star trek replicators for AR-15s, alas, are not going to be a thing in our lifetimes. You can make them on machine tools though, and machine tools don’t cost much. Nobody would have worried about solid printed “guns” if they didn’t think solid printers were magic star trek replicators, which, back in 2012, they did. FWIIW the lizard men at WEF still are trying to sell this, at least when combined with IoT, AI, 5G and … gene editing. Absolute proof nobody involved with WEF has ever manufactured any object of worth to humanity.

The Paperless Office was a past craze. Lots of companies were based around this idea. I never bought into it; I had access to the best screens available at the time, and still sent all the physics papers to the laserjet printer, or walked to the library and photocopied journal articles. Even if you make giant thin PDF readers for portability, it’s a lot easier to scribble notes on a piece of stapled paper, which is also easier to read, lighter, lasts longer, and doesn’t need a charge on the batteries. Some of the ideas in the byte articles above ended up being used in search engines. Stuff like collaborative documents was a useful idea. The OCR approaches in use at the time made scanned documents pretty worthless; not sure that’s improved any. Still lots of paper around every office and barring some giant breakthrough in e-ink, always will be. The vision of paperless sounded real good; you could search a bunch of papers in your pre-cloud document fog, but the reality was you’d do a shitload of work and spend a shitload of money and having a filing cabinet with paper organized by paper subject tabs still worked a lot better and was a zillion times cheaper. BTW I still maintain having a couple of nice nerdy ladies with filing cabinets, telephones and fax machines is more economically efficient than running a database system for your business for like 98% of situations. That’s why the Solow “paradox” is still in play. Pretty sure that’s why places like Japan and Germany still use such systems instead of generating GDP by firing the nice nerdy ladies and hiring a bunch of H1b computard programmers and buying a lot of expensive IT hardware.

Lisp unt Prolog: yes, my favorite programming language is a lisp. Prolog nonsense was from the same era; both were intertwined with the 5th generation computing project, but as Lisp is still a cult language, and insane people still use Prolog, it’s worth a few words. Prolog is easily disposed of: it is trivial to code up constraints which have NP-hard solutions in Prolog. That’s kind of inherent to how constraint programming works. This probably looked absolutely amazing on 1988 tier technology. 1989 tier technology was 32 bit, hard fixed to 2GB; in real life fixed to 64MB, because that’s how many chips you can stuff into a SPARCstation 1. That was a giant, super advanced machine of its day; running a Prolog compiled by Lisp (which took up a good chunk of this memory), you actually could solve NP-hard problems, because there isn’t much to them when you’re constrained to such small problems. This was even more true on the 24-bit, 1MB Lisp machines. The idea that adding a couple of bits to the result you were interested in would explode the computation didn’t occur to people back then. We should all know this by now and avoid doing any Prolog without thinking about how the compiler works (in which case, why not use something else where it is more obvious), or waiting for super-Turing architectures which think NP=P (or using it as a front end for some kind of solver).
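
The “couple of extra bits blows up the computation” point is easy to demonstrate with a toy. Here’s a sketch in Python rather than Prolog, a stand-in of my own for the blind generate-and-test search a declarative constraint program can quietly turn into, brute-forcing subset-sum; each added item roughly doubles the runtime.

```python
import itertools, random, time

random.seed(0)

def subset_sum_bruteforce(weights, target):
    """Naive exhaustive search over all 2^n subsets, like a blind
    generate-and-test constraint solver would do."""
    for r in range(len(weights) + 1):
        for combo in itertools.combinations(weights, r):
            if sum(combo) == target:
                return combo
    return None

# An unsatisfiable target forces the full 2^n search; watch the runtime
# explode as the problem grows by a handful of "bits."
for n in (12, 16, 20):
    weights = [random.randint(1, 10**6) for _ in range(n)]
    t0 = time.perf_counter()
    subset_sum_bruteforce(weights, -1)
    print(f"n = {n:2d} items: {time.perf_counter() - t0:.3f} s")
```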
Lisp has a different, more fundamental basket of problems. You can easily write a Prolog in Lisp, which one might notice is a serious problem considering the above. This is given as a student exercise by Lisp’s strongest soldier, who, despite being a rich VC, doesn’t seem to have gotten rich investing in any successful Lisp companies. He claims he got his first pile of loot writing a bunch of web shop nonsense in Lisp (which later got translated into shitty C++). I dunno, I wasn’t there back in ’95; we used Fortran in those days. Even by his own admission, it was a creation of N=2 programmers, very possibly mostly N=1, and it was not modifiable or fixable by anybody else. I think that’s what is cool, and what is wrong with Lisp. You can write macros in it, and modify the language to potentially be quite productive in a small project: but how are others supposed to deal with this? Did you write extensive documentation of your language innovations and make everyone on the team read it? What happens when N>3 people do this? Is it an N^N problem of communication when N people write macros? In R (an infix version of Scheme), people deal with this by ignoring the packages which use certain kinds of alterations of the language (aka the Hadleyverse, which I personally ignore religiously), or just embracing that one kind of macro and only doing that thing. Maybe it’s better to keep your language super bureaucratic and spend 400 lines of code every time you send some data to a function to make sure the function knows what’s up. That’s how almost everything that has successfully made money has done it. They all use retard languages that are at least OK at solving problems, not mentat languages that self-modify as part of the language specification. Maybe Paul Graham got lucky back in 1995 because generating valid HTML which holds state was something one or two dudes could do in Lisp. It wasn’t like they had very many choices; most languages sucked at that sort of thing; in fact, in the year of our Lord 1995 a lot of people developed programming languages designed to emit stuff like a valid HTML webstore: JavaScript, Java, Ruby, PHP are examples we all remember, and which went on to create trillions in value. That is greater value than anything Lisp has ever done, basically by being kind of limited and “squirting stateful HTML over wires” domain-specific retarded, and not giving users superpowers to easily modify the language. One of the fun things about Paul Graham’s big claims about Lisp is we know for a fact it all could have as easily been done in Perl: because, actually, it was done in Perl, multiple times. Perl was not only more productive in terms of value created, it was more legible too, and amenable to collaboration. Lisp of course had the ability to mutate HTML with state: it was a sort of specification language for other languages. That’s what first-gen AI was inherently; custom interpreters. Maybe if they just solidified the macros and made everyone use them, or, like, wrote a library, it would still be used somehow. Anyway, fuck Lisp, even if I am overly fond of one of its dialects.

CORBA was a minor craze in the mid-90s. I remember building some data acquisition/control gizmo using RPCGEN; it took like a day of reading the manual despite never having done anything like that before. As far as I know my smooth brain thing still functions to this day. An architecture astronaut two beamlines over wondered why I didn’t use CORBA. As I recall, his thing didn’t quite work right, and never actually did, but as he was senior to me I just told him I didn’t know C++ very well (plus it didn’t work on VxWorks and lacked a LabVIEW hook). I never learned about this thing, but I think its selling point was its “advanced suite of features” and its object orientation. It was a bit of a craze; if you go look at old byte magazines you’ll find software vendors bragging about using it in the mid 90s. Java Domino, Lotus Notes; are you not impressed? Did these CORBA things not set the world on fire? If you look at what it actually was, it looks like a student project to make teacher happy with fashionable object orientation rather than something used to solve real problems.

Come to think of it, whatever happened to Object Oriented Everything? I remember in the early 1990s when I was still using Fortran, people were always gabbling on about this stuff. People selling the idea would have these weird diagrams of boxes with wires connecting to other boxes; you can tell it was designed to appeal to pointy headed managers. I couldn’t make much of it, thinking perhaps it might make sense for something which has physical box-like things, such as a GUI. Later on I realized what people were really looking for was namespaces; something you could get a ghetto version of using naming conventions, or stuff like structs with function pointers in C if you want to get fancy. The other things, polymorphism, operator overloading and inheritance, usually were not so helpful or useful for anything. People came up with elaborate mysticisms and patterns about objects: remember things like “factory objects” and “decorator patterns” and “iterator patterns?” You could make these nice block diagrams in UML so retards could understand it! All this gorp was supposed to help with “code reuse,” but it absolutely didn’t: mostly it just added a layer of bureaucratic complexity to whatever you were doing, making it the opposite of code reuse: you had to write MOAR CODE to get it to do anything. You could probably trace a history of objecty dead ends looking at C++ “innovations” over the years: objects, generics/templates, eventually getting some functional features allowing one to do some programming that looks a lot like what we were doing using C macros, while maintaining backward compatibility with all 50 of the previous generations of C++ paradigms.

Related: this 5 minute video is worth the time if you’re still an LLM vibe coding respecter. The man has a simple argument; if LLMs are so great at writing code it’s going to replace people googling stack overflow, where’s all the code? He brings receipts from a few of the efforts to measure programmer productivity with LLM code assistants. Comments are funny too!

You can also transport yourself to 1991 and read a post-mortem of the first AI winter here.

