I’ve spent much of this week revising the Tinderbox Hyperbolic View, which shows a map of links in a non-Euclidean space. Why do you need non-Euclidean geometry? Here’s a link map of part of my novel, Those Trojan Girls.
This is focused on one of the middle chapters, and we can see that some long roads lead here, and also that the outbound path to On The Road leads to repercussions.
The existing code was five years old, and my coding style has changed a bit. I never really trusted the geometry, because my grasp of the Poincaré disk was not really satisfactory. Once before, I’d tried a big refactoring to sort things out, but it had ended in a tangle.
This time, though, I had Claude and Gemini. The point was not to vibe code or to race; the point was that Claude does understand hyperbolic geometry better than I do. So, I’d show Claude some dodgy code and ask, "Is this mixing up coordinate systems?" Too often, the answer was "Yes!" I’ve complained previously that Claude is sycophantic, but Sonnet 4.5 is more direct and will stand up to you when it is right.
So, in time we built separate classes for DiskPoint and ScreenPoint, and sorted out when to use which. Lots of methods got moved to the Point classes, and the class that calculates the curved lines between notes had to be completely revamped. (In the Poincaré disk model, the shortest path between two points is, in fact, a circular arc.)
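Tinderbox itself is C++ and Objective-C++, but here is a minimal Swift sketch of the idea; only the two type names come from the work described above, everything else is invented for illustration. Once disk coordinates and screen coordinates have distinct types, the compiler catches the mix-ups that Claude kept spotting.

```swift
import CoreGraphics

// Hypothetical sketch: distinct types for the two coordinate systems.
// DiskPoint lives inside the unit disk of the Poincaré model; ScreenPoint
// lives in ordinary view coordinates.
struct DiskPoint {
    var x: Double
    var y: Double

    /// Project into view coordinates, given the viewport's center and radius in pixels.
    func toScreen(center: CGPoint, radius: Double) -> ScreenPoint {
        ScreenPoint(x: center.x + CGFloat(x * radius),
                    y: center.y + CGFloat(y * radius))
    }
}

struct ScreenPoint {
    var x: CGFloat
    var y: CGFloat

    /// The inverse mapping, back into the unit disk.
    func toDisk(center: CGPoint, radius: Double) -> DiskPoint {
        DiskPoint(x: Double(x - center.x) / radius,
                  y: Double(y - center.y) / radius)
    }
}
```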
I’m not happy with the typography and I suspect the damping is too high, but it's really behaving far better than before.
Emergent Introspective Awareness in Large Language Models (Jack Lindsey, Anthropic)
I understand the case for pair programming, but it’s my idea of Hell. At school, they made us have lab partners in Science Lab because that meant you needed only 1⁄2 or 1⁄4 of the equipment and materials, and the school was convinced that Science was not important, anyway, not in the way Art and Shop were important. Besides, choosing lab partners was almost as bad as choosing sides for basketball, and if you enjoy tangles of privacy and drama, try spending 13 years with the same group of 40 kids.
I’m not eager to have the AI write code for me. The sample code it’s presented in conversations has been reasonable as an illustration, but I’m not wild about the underlying architectural ideas.
Still: I’ve been working on an experimental Readwise importer for Tinderbox, and spent yesterday finishing a thoroughgoing refactoring. So I fed the core of the refactored approach to Claude and asked it for comment—essentially, a bit of code review. A lot of what it proposed was the sort of thing I expect from code review: obsessive attention to pitfalls that won’t actually cause trouble and minuscule performance improvements.
Still, it suggested that I retain a single NSURLSession instead of creating a new one when needed. "Is NSURLSession really that costly to construct?" I asked. Claude answered that yes, it actually is pretty costly to construct, and also it caches all sorts of DNS information for repeated calls to the Readwise servers, so requests that take 500ms for the first page can take 50ms for subsequent pages. And we have lots of subsequent pages.
In code, I’m a fan of building things right when you need them. Here, my intuition was wrong. (I suspect, too, that just a few weeks ago Claude would not have had the good sense to stand up to my criticism. Things are moving fast.)
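The change itself is tiny. Here is a Swift sketch of its shape; the class name, method, and header are placeholders, not the actual importer code. The point is simply to construct one session, keep it, and reuse it for every page.

```swift
import Foundation

// Sketch only: one retained session for the whole import, rather than a
// fresh NSURLSession per request.
final class ReadwiseClient {
    // Created once; connection reuse and cached DNS lookups make the
    // second and later page fetches much cheaper than the first.
    private let session: URLSession = {
        let configuration = URLSessionConfiguration.default
        configuration.httpAdditionalHeaders = ["Authorization": "Token <access token>"]
        return URLSession(configuration: configuration)
    }()

    func fetchPage(at url: URL) async throws -> Data {
        let (data, _) = try await session.data(from: url)
        return data
    }
}
```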
by Naomi Novik
The second volume of the Scholomance trilogy follows Galadriel "El" Higgins through the senior year of Magic Boarding School. The school is a terrible place, filled with magic monsters hoping to snack on adolescent wizards and completely devoid of any adult presence. About 1⁄4 of the students survive to graduate, and in some years no one makes it out. The rich kids have many advantages, of course, but El’s mother lives in a Welsh commune. Her father died during graduation, and El has always expected she’ll go the same way. Wonderfully plotted, this is a fine tale.
When I started working on artificial intelligence, experts debated whether or not, someday, a computer could play chess at a high level. Could a computer possibly be the world champion? Today, a computer has been chess champion for ages, and for 5ドル a month you can play as many games as you like against a computer that has a chess rating anywhere you please, from rank beginner to grandmaster.
When it plays chess, AI frequently seems inhuman. AIs make moves that people would be unlikely to find, both because they search more deeply than people, and because they don’t spend much time thinking about what they had been planning to do before you moved your rook. It’s often fairly easy for good chess players to discover cheaters — online players who are using a second computer to ask an AI for the best move.
Grandmaster Daniel Naroditsky, 29, recently died in circumstances that suggest suicide. He had been dogged for some time by accusations from former world champion Vladimir Kramnik that Naroditsky was cheating.
Everyone now simply assumes that the computer is far more capable than the very best human chess players.
by Naomi Novik
Almost all of my reading these days is research for the new book, but I thought it was time to reread this superb school story. Like much else, this is ultimately a response to Hogwarts, an effort to cast Magic School in a realist mode and to engage seriously with the class issues that surround high school.
A fun and fascinating school story set in a college for magicians. In Novik’s world, young magicians are in terrible danger from a host of supernatural beings that want to feast on their magical power. Babies and mundanes are safe, because they’re not very nutritious. Grownups are fairly safe because they’re tough and leathery. But college students have plenty of nutrition, and taste like they’re coated in crunchy sugar shells. Despite lots of wards and precautions, roughly half of each class gets eaten before they graduate.
If the death toll recalls The Hunger Games, this book’s atmosphere is different because its superbly-drawn protagonist is very different. Galadriel "El" Higgins doesn’t want to get eaten by unspeakable monsters, even if that would mean she could blow off her term papers. She has a hard time making friends in college. There’s a reason for that: each magician has a special aptitude for some kind of magic, and her aptitude is for spells of mass destruction. She doesn’t enslave multitudes, but everyone can see in her face that she could. This doesn’t encourage people to hang out. In addition, El was raised in a commune and her mother has no use for money, but magic school is intensely class-conscious: rich kids have good equipment and a head start, and so they’re less likely to be eaten by unspeakable horrors.
The plan for today was to write about Turing and intelligence. People misunderstand Turing and his test because they assume that Turing, being a mathematician and engineer, knew nothing about "intelligence". That’s wrong. But it’s for another day.
Among other tasks today, I had a user report of poor performance in the Tinderbox attribute browser. The user’s document is about 3,500 notes, which used to be a really big document, and so performance issues aren’t a surprise. In the presence of attributes and rules, the Attribute Browser must consider every note, perhaps every three seconds. If there are lots of notes, that might be a problem.
I used the profiler to verify that this was, in fact, a bottleneck. A key step is that, before you can decide the categories of the Attribute Browser, you’ve got to know what values the attribute takes on. That means looking at every note. This took perhaps 25% of the available time and blocked the main and the agent threads. Could it be faster?
I asked Claude, "here’s how we do things now. Could we do this in parallel?" Claude replied that "Yes! It certainly would be faster in parallel. But remember: I have a note about a complete bollix we had with lazy evaluation in dispatch_apply in another chat last week. We don’t want to do that again!" This was the first concrete success for Claude’s note taking.
So, Claude proposed a fairly standard approach using a `dispatch_group`: set a bunch of processes working on separate parts of the document, wait for them all to finish, then merge the results. It wasn’t a pretty method, but OK: it worked, and cleaned out about 20% of the workload.
I’d tweaked Claude’s proposal en passant because it really wasn’t written in my style, but even so, this was ugly. So I refactored to a method object, and then did a bunch of extract methods and extract instance variable to clean things up. At this point, I sent it all back to Claude, hoping that Claude would get some style hints. Instead it pointed out a race condition in my code. (It was also a race condition in Claude’s code, but this isn’t a competition.) So, we got rid of that.
At this point, we’re fast. I’ve made the point to Claude (I hope) that casts are a code smell and that storing naked pointers in NSValue wrappers is really unsightly. The result is reasonable code, code that is very much mine, but Claude really did help out.
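For the curious, the fan-out, wait, and merge shape looks roughly like this. This is a Swift sketch with a hypothetical Note type standing in for the real one; the actual Tinderbox code is Objective-C++ and, after the refactoring above, lives in a method object.

```swift
import Foundation

struct Note {                                   // stand-in for a Tinderbox note
    let attributes: [String: String]
    func value(for key: String) -> String? { attributes[key] }
}

// Fan out over slices of the document, wait for every slice, merge the results.
func collectValues(of attribute: String, from notes: [Note]) -> Set<String> {
    let group = DispatchGroup()
    let work = DispatchQueue.global(qos: .userInitiated)
    let merge = DispatchQueue(label: "collectValues.merge")  // serializes the merge
    var result = Set<String>()

    let chunkSize = 500
    for start in stride(from: 0, to: notes.count, by: chunkSize) {
        let slice = notes[start ..< min(start + chunkSize, notes.count)]
        work.async(group: group) {
            let partial = Set(slice.compactMap { $0.value(for: attribute) })
            merge.async(group: group) { result.formUnion(partial) }  // one queue, no races
        }
    }

    group.wait()            // block until every scan and every merge has finished
    return result
}
```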
I receive multiple crank calls every day, mostly trying to get information about my Medicare coverage. It makes me dread (and avoid) the telephone. It surprises me that the phone companies are not more active in defeating this.
Prof. Roman K______: for some reason, your university’s IT setup blocks our email. So, I cannot reply to your kind message.
Smashing Magazine: I’ve had an article in the queue for five or six months. I’ve queried twice. Do we have an email problem? If we do, give me a call! (Oh, wait....)
Donald Trump is literally demolishing a chunk of the White House.
Jakob Nielsen’s operatic take on Direct Manipulation. Thanks, Ben Shneiderman’s AI Newsletter.
by Ian McEwan
In 2014, a famous English poet celebrated his wife’s birthday with a dinner, which later became a famous literary event. In 2119, a literary historian in the post-apocalyptic British Archipelago sets out to recover every discoverable detail of that party, and to locate the birthday present: a lost cycle of 15 sonnets thought to have been a masterpiece. I personally found this fascinating exploration of the meaning and mechanism of History nearly impossible to read because one plot thread involves a vivid rendition of a character’s decline into dementia. I learned too much about that from my mother’s last 18 years, and I avoid Alzheimer books. Oh well.
New at A List Apart: Design for Amiability: Lessons from Vienna.
by Peter Godfrey-Smith
The capstone book in a series that began with Godfrey-Smith’s landmark speculation on the octopus, Other Minds. Godfrey-Smith does a fine job of explaining what we know, and don’t know, about how plants and animals came to be. What is their felt experience? What is consciousness, anyway? Only on this last question does Godfrey-Smith falter, finding himself willing to countenance mollusk consciousness but unwilling to situate consciousness in a brain made from neurons. We have always known that this is in fact the case—we are dust—yet learned people still avert their eyes.
Modern chatbots imitate intelligent human communication so well that it is often very difficult to understand who you are talking to: a robot or a person. However, they are just imitating, not having a rational conversation. Therefore, to claim that a chatbot has AI is, of course, nonsense. (Kalyaev, I. A. 2024. "How to Measure Artificial Intelligence?". Sci. Tech. Inf. Process. 51 (5), 422-428)
Yesterday, I asked Claude to review for me the Turing-Church thesis, which proposes an equivalence class of machines and formalisms, any of which are capable of solving any problem that one of the others can solve. For example, the Turing machine, with its infinite one-bit paper tape, can (in time) do anything the Macintosh on which I am typing now can do. Some machines cannot do everything a Turing machine can do: for example, regular expressions can express lots of things, but you can’t use them to count.
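A standard illustration of that last point (mine, not Claude’s): recognizing arbitrarily nested balanced parentheses requires a counter, which no regular expression can supply, though a few lines of ordinary code can.

```swift
// Balanced-parenthesis checking needs an unbounded counter, so it lies
// beyond regular expressions; an explicit counter handles it trivially.
func isBalanced(_ s: String) -> Bool {
    var depth = 0
    for character in s {
        if character == "(" { depth += 1 }
        if character == ")" {
            depth -= 1
            if depth < 0 { return false }       // a closer with no matching opener
        }
    }
    return depth == 0                           // every opener was closed
}

isBalanced("((a)(b))")   // true
isBalanced("(a))(")      // false
```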
I asked Claude about super-Turing machines, machines that could do things that computers (as we know them) cannot. Claude responded with a reasonable objection:
This claim faces a fundamental problem: it is unfalsifiable.
Suppose someone produces a device claiming to be a super-Turing system. How could we verify this claim? We would need to demonstrate that it solves a problem provably non-computable for Turing machines—such as the halting problem or computing Chaitin's Omega. But even if the device provides an answer, how do we verify its correctness? We cannot check it algorithmically, because if we could verify the answer computationally, the problem would be computable after all.
Conversely, we cannot easily refute such a claim either. We might not understand the mechanism; it might be Turing-complete but very fast; and any finite number of test cases cannot prove equivalence.
This matters, because quite a few people want to believe that something in the human mind — consciousness, or ensouledness, or conversability — cannot be computed.
Later, picking up on the title of an old Allen Newell paper, I challenged Claude to a game of twenty questions. It got my desk fan, though it needed all twenty to do it. Of course, it has no experience of desk fans, or any need of one on a hot day, and desk fans are not common topics of conversation.
I also asked it to jog my memory, because in a previous chat it had recommended a specific Iain Banks novel (Excession) that I ought to read, and I’d lost the note.
Finally, I asked Claude to add notes to my book project to record the discussion of Turing-Church and to add a few books (oh dear) to my reading list. Claude did all that, though at first it got the containers wrong because display expressions in the chapter titles confused it. I explained the issue, and we got everything sorted away.
This feels like a rational conversation. I know lots of people who would not be able, at the drop of a hat, to discuss the Turing-Church thesis. Even fewer, having spent 45 minutes or so with Church and Turing, would be up for a discussion of science fiction. Every member of my hometown city council is an actual human, but I doubt any of them would be up to this task. If this is an imitation of rational conversation, it’s a very good one indeed.
A day-long symposium on the 75th Anniversary of the Turing Test. You can watch the recording on YouTube.
by Donna Tartt
The Secret History reminds me of Hanya Yanagihara’s A Little Life. Both are large novels about a compact group of unlikable but nonetheless sympathetic young people. Donna Tartt’s novel is set in a New England college, and I’m a sucker for school stories. It’s very well written, but I’m happy to reach the end.
Another encounter between a user and Claude:
I just completed an incredible exchange with Claude on a new test document for holding a quotation collection.
I added a folder under /Hints/AI called "My Prompts to Claude." In this folder, I added a note with an extensive prompt on how to process a series of quotations in my "Quotations Input Folder". For this test, I had seven quotations as notes. The note name was the source of the quotation, and the $text area was the quotation itself. I had a series of attributes for each quotation note in a prototype I made for the notes.
I first asked Claude to verify finding my prompt, which it did, and we went from there.
In this chat with Claude, Claude gave me a lot of advice on formatting prompts and workflows, and how to handle this whole personal knowledge system thingy.
So to say the least, I am incredibly impressed and excited.
A significant comment from Tinderbox backstage:
I don’t know if this is relevant or not, but my own case of adopting Tinderbox as one of the three main apps I now use for my reference-heavy and interlinked writing on the history of medicine started with dissatisfaction with mind-maps that had crap connected note possibilities. Even now Simplemind (apparently coming back into fashion) does not allow any choice at all of the placement of the ‘text pane’ associated with each map node. And in Obsidian (and the new Octarine) it is still not possible to scroll through a notes list in the left pane and see the text in the right pane. Just this kind of ease has made Tinderbox invaluable to me: a flexible (map) view and text option.
After more than a decade now I’m still happy being a trivial user (quite proud of that badge actually) and my regular use hasn’t really got much more complex: still - every day, I go to Tinderbox to keep my intellectual body and soul (and project sprawls) together and I’d be utterly lost across centuries and texts and disciplines (nothing is more interdisciplinary than the history of medicine) and central concepts (and their rectifications) etc without it.
My learned lesson is that the complex functionality of Tinderbox is, for some like me, quite irrelevant because it does the basic crucial organisational tasks so very well, and so elegantly: so there are uses like mine that have grown very few functional tendrils and do not need to. I know that were I to have my Tinderbox use ‘audited’ by an expert I’m sure they’d find a thousand ways in which what I do could ‘easily’ be automated or made more efficient - but I don’t care, and I don’t have the time or the learning bandwidth to do so at the moment.
When there are posters on the L&L forum who complain that Scrivener has a steep learning curve, perhaps there needs to be a slightly better signposted account of this basic Tinderbox excellence? — E. P. James
Metacognitive Reuse: Turning Recurring LLM Reasoning Into Concise Behaviors. (September 2025). Another effort to help an LLM take notes on its reasoning.
Lots of thoughtful people are incredibly hostile to AI right now, not least because the most prominent AI spokespeople are billionaire snake-oil confidence men and gangsters. The tenor of discussion on Reddit and in specialized fora is very low.
Nevertheless, LLMs are amazing. You should not let the hucksters blind you.
What I have learned from building bridges between AI and Tinderbox is the importance of asking the right questions. Too often, I see AI applied to tasks it should not do. For example:
- I need a 3-5 page paper comparing Player of Games to Ender’s Game for my college literature course. The references should use ALA format. I really need at least an A-.
- Take these 500 emails from people who want to cancel their subscription, and write a customized response to each. Your first goal is customer retention, but a secondary goal is to delay beyond the start of next month.
Here, the AI is being asked to do things it cannot do well, and in the process it is actively harming the user. The first user doesn’t need an A-; that user needs to learn how to write. The second user doesn’t need to slow-walk cancellations; they need a better magazine.
Some places where Claude+Tinderbox excels include:
- Locating the best books on nearly any topic, however obscure.
- Finding a specific technical paper from an approximate description.
- Google-style web queries that are infeasible because a homonym gets in the way, such as a young European computer scientist who shares a name with an Olympic athlete.
- Sanity checking a book you like but fear may have been superseded or refuted. (Claude strikes me as far more even handed in this role than Wikipedia, which frequently treats these questions as a political football.)
- Thinking through algorithm selection. For example, Tinderbox uses force-directed graph layouts in several places — "dancing" in map view, Gaudí view, hyperbolic view. I’ve written force-directed layouts several different times over the years. Claude has the alternative algorithms, their history, and their performance (both in terms of big-O and in terms of runtime on today’s machines) at its fingertips. (A sketch of the basic relaxation step appears at the end of this post.)
These tasks share important characteristics. If the AI is right, they are helpful both in terms of immediate results and also in terms of process. If the AI is mistaken or badly informed, that is likely to be obvious right away. You’re using the AI’s erudition and breadth, and not depending on it for insight or novelty. Those are what you supply.
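For what it’s worth, the core of a force-directed layout is small. This is a generic, Fruchterman–Reingold-style relaxation step sketched in Swift, not Tinderbox’s actual implementation; the pairwise repulsion loop is exactly the O(n²) cost that the algorithm discussions with Claude turn on.

```swift
import Foundation
import CoreGraphics

// One relaxation step: every pair of nodes repels, every linked pair
// attracts, and each node then takes a damped step along its net force.
struct LayoutNode {
    var position: CGPoint
    var force = CGVector(dx: 0, dy: 0)
}

func relax(_ nodes: inout [LayoutNode], links: [(Int, Int)],
           idealLength: CGFloat = 80, damping: CGFloat = 0.85) {
    // Repulsion between every pair: k² / d. This loop is the O(n²) part.
    for i in nodes.indices {
        nodes[i].force = CGVector(dx: 0, dy: 0)
        for j in nodes.indices where j != i {
            let dx = nodes[i].position.x - nodes[j].position.x
            let dy = nodes[i].position.y - nodes[j].position.y
            let d = max((dx * dx + dy * dy).squareRoot(), 0.01)
            let push = idealLength * idealLength / d
            nodes[i].force.dx += push * dx / d
            nodes[i].force.dy += push * dy / d
        }
    }
    // Attraction along each link: d² / k, pulling both endpoints together.
    for (a, b) in links {
        let dx = nodes[b].position.x - nodes[a].position.x
        let dy = nodes[b].position.y - nodes[a].position.y
        let d = max((dx * dx + dy * dy).squareRoot(), 0.01)
        let pull = d * d / idealLength
        nodes[a].force.dx += pull * dx / d
        nodes[a].force.dy += pull * dy / d
        nodes[b].force.dx -= pull * dx / d
        nodes[b].force.dy -= pull * dy / d
    }
    // Damped update; too much damping and the map barely moves.
    for i in nodes.indices {
        nodes[i].position.x += damping * nodes[i].force.dx * 0.01
        nodes[i].position.y += damping * nodes[i].force.dy * 0.01
    }
}
```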