The LLM flywheel effect
How to manage a team of AI assistants in a virtuous cycle of improvement.
Strategies for Internet citizens
Tucson’s Museum of Miniatures features hundreds of exhibits like this one.
People have been making these for hundreds of years, but in recent decades practitioners have become more precise about measurement and scale. Many of the exhibits use a 1:12 (inch:foot) ratio.
The fine detail is mind boggling. See that page on the desk above? You can actually read it.
There are rooms full of these installations, many of which date from the 1980s and 1990s when an American community of practice coalesced around the style.
I would guess that the whole collection represents millions of hours of effort. It’s almost overwhelming to contemplate.
This guy, Salavat Fidai, sculpts pencil tips. His medium is not quite as insane as that of Willard Wigan, whose work I saw at The Museum of Jurassic Technology. But it pushes the envelope.
As amazing as these miniatures are, I might not have made the visit just to see them. The tractor beam that pulled me in was the special exhibit of Ray Harryhausen’s original animatronic models and drawings. Here’s the Kraken from Clash of the Titans.
According to the Harryhausen Foundation’s podcast, he took creative liberties when bringing the legends to life. For example, this scene is a mashup of Jason and the Argonauts and the Labors of Hercules. It was actually Hercules who fought the Hydra. This bothered some classicists but Harryhausen was a pragmatist: “We have to manipulate certain aspects in order to make a movie that will flow.”
Who doesn’t love Bubo the mechanical owl?
American censors, however, did not love bare-breasted Medusa, though they were perfectly fine with her violent and bloody decapitation. Europeans, unsurprisingly, had the inverse reaction.
The skeletons from the iconic swordfighting scene were smaller than I imagined.
This model is from a film I’d never heard of.
The sign says:
The Story of the Tortoise and the Hare
Ray Harryhausen
c. 1952
This is the original model, rediscovered in 2008. An identical replica was made in 2002 to complete this unfinished film, 50 years later.
In 2002, Seamus Walsh and Mark Caballero of Screen Novelties, the award-winning American stop-motion animation studio, worked with Ray Harryhausen to complete his final fairy tale film, The Story of the Tortoise and the Hare, which Ray began in 1953 and never finished. Ray was delighted and grateful for their assistance and greatly admired how Mark and Seamus were able to seamlessly blend the new and original footage.
You can see the remarkable collection of miniatures anytime. But the Harryhausen exhibit, which arrived in Tucson in September and leaves next May, is a rare U.S. appearance of artifacts that normally reside in Scotland. (Why? Ray’s wife, Diana, had very strong links to Scotland, being the great-great-granddaughter of explorer David Livingstone.) So visit soon if you can!
Exactly one hundred and fifty years ago John Muir walked around in the same grove of giant sequoia trees that I walked around in today, and stood next to the same two-thousand-ton behemoth that had been growing for two and a half millennia.
It has only been known as the General Sherman tree for a tiny fraction of its immense lifespan. I imagine it standing there blissfully unaware of its association with a cruel and destructive human being, indeed unaware of any human activity at all.
But we are making our presence known.
“Death of large sequoias (over 4 ft in diameter) in wildfires prior to 2015 was very rare”
This was my first trip to Sequoia National Park. I explored the tiny section shown on this 1927 USGS topographic map.
It’s worth clicking through to the high-res version, zooming in, and imagining what it was like to reach that place in 1875, before there were roads and cars, never mind GPS-connected handheld computers.
On the Congress trail in this densest of Sequoiadendron giganteum groves, other magnificent specimens suffer comparison to notable Americans, most painfully this cluster called The House. (There’s a Senate too.)
I live among coast redwoods and was delighted to finally meet their shorter and stouter cousins. If you’ve been thinking about a visit, know that the park is open but unstaffed. I saw only one ranger, and he was on latrine duty; nobody is collecting the entrance fee, yet another bit of economic fallout from the shutdown.
After walking the Congress trail I headed down to the museum (which is closed), hiked over to Moro Rock, and walked up the steps to take in the view.
(Wikipedia)
Someday I hope to ascend Half Dome using the cable hand rails but this was an easy way to enjoy the view from a big granite dome. Whitney is only a dozen miles away but “the Great Western Divide rises high enough to block it”.
My day started in Three Rivers and ended in Tehachapi after a long and rewarding detour into another section of the park.
The road up to Lake Isabella winds gradually through Sierra foothills that seemed mellower and more mesmerizing than the ones I’ve seen farther north. The road down follows the Kern River as it flows over endless pillows of granite. There’s nothing like a big dose of the majesty of California, a friend likes to say. It sure was powerful medicine today.
The Volts podcast continues to be my favorite listen. Climate change will wreak ever more havoc on the world, that’s just baked in. But the transition to clean energy is also now baked in. David Roberts delivers a steady stream of hopeful news on that front: plummeting prices for solar panels and batteries, “reconductoring” to grow the capacity of the existing grid, agrivoltaics, new geothermal techniques, and much more.
Cars are a big part of the story. Switching to EVs is great but if we only do that we are still stuck with too many large heavy vehicles that clog roads when moving, waste vast amounts of space when parked, and harm people who move through the world on foot or on bicycles. We don’t just want cleaner cars, we also want far fewer of them. This episode, with the authors of Life After Cars, explores the “tyranny of the automobile”.
American car culture always seemed wrong to me, for many reasons. On this show David Roberts crystallized one of them.
When you ride a bike through Amsterdam, you are a dozen times every minute making small adjustments to other people, and you are accommodating yourself and coordinating with other people in these micro ways over and over and over again as you ride through Amsterdam.
And it just has an effect. You realize you’re living among other people and you’re involved in a common project and you live in a common place and you’re together in the place.
I have long been fascinated by a video called A trip down Market Street. Filmed in San Francisco in 1906, shortly before the great quake, it’s a long shot that moves down Market Street toward the Ferry Building. You see a free-for-all of trolleys, pedestrians, bicycles, horse-drawn carriages, and cars. Clearly the cars are going to win, but in this moment they are not yet hermetically sealed shells; they have open tops, so drivers see one another and make the same kinds of micro-adjustments to cyclists and pedestrians.
In a San Francisco with fewer, more autonomous cars, can we imagine a way to recapture that kind of sociality?
I’m visiting with American friends who are staying in a rural farmhouse in France’s Dordogne valley. The house, which might be several hundred years old, provides faster internet access than my fiberoptic setup at home. The cars we are piloting along these ancient byways have touchscreens that control Bluetooth and satellite connections. It feels like the perfect juxtaposition of the old and the new. But the illusion cracked yesterday when we headed out to visit the medieval town of Sarlat-la-Canéda. I punched “Sarlat” into the satnav and off we went, choosing the slowest but most scenic of the offered routes. As we approached the destination my friend said: “Something is wrong, Sarlat is small but it’s not this small.” You can probably guess what happened. The maps app had found a tiny hamlet 50 miles to the north instead of the populous town 30 miles to the west. Although I know better I fell for the illusion: I’m on vacation, let the machine take care of the details, we’ll just enjoy the view. Oops.
It wasn’t really a problem. We had plenty of time, we’ve been taking back roads in order to see the countryside, we just ended up seeing more and different countryside than planned. But unlike the last time I toured France, almost 25 years ago when connected phones and map apps weren’t yet a thing, I didn’t have a conventional map and neither did my friends. Had I looked at one we would never have made this error. The map on your phone isn’t really a map, it’s a tiny viewport that can see the whole planet at any resolution but never provides the context your brain needs to reason about spatial relationships. It’ll get you from point A to point B but struggles to convey where B is in relation to C.
I’m not blaming the tech, it is a miracle I will never take for granted. The fault is entirely mine for not having a real map, spreading it out on the kitchen table before we left, enjoying a beautiful and information-dense work of cartographic art, and planning the trip with the big picture in view. That would have been another nice juxtaposition of old and new. On my next GPS-guided trip to town I’ll pick up a real map: another miracle I should never take for granted.
Update: Look what we found in a drawer. Made by Institut Géographique National in 1972.
I reckon you’d need a 16K x 12K screen to view it at print resolution.
Although autonomous LLMs are inherently unreliable, there’s a long software tradition of building reliable layers on top of unreliable layers. That applies here too. We can’t guarantee that you’ll never be led astray when building an XMLUI app with the help of agents that use the XMLUI MCP server to extract patterns from docs, sources, how-tos, and samples. But it’s a lot more likely now that you and your AI team will stay anchored to ground truth. At this point, I would define context engineering as whatever it takes to make that happen.
In the mid-1990s you could create useful software without being an ace coder. You had Visual Basic, you had a rich ecosystem of components, you could wire them together to create apps, standing on the shoulders of the coders who built those components. If you’re younger than 45 you may not know what that was like, nor realize web components have never worked the same way. The project we’re announcing today, XMLUI, brings the VB model to the modern web and its React-based component ecosystem. XMLUI wraps React and CSS and provides a suite of components that you compose with XML markup. Here’s a little app to check the status of London tube lines.
<App>
<Select id="lines" initialValue="bakerloo">
<Items data="https://api.tfl.gov.uk/line/mode/tube/status">
<Option value="{$item.id}" label="{$item.name}" />
</Items>
</Select>
<DataSource
id="tubeStations"
url="https://api.tfl.gov.uk/Line/{lines.value}/Route/Sequence/inbound"
resultSelector="stations"/>
<Table data="{tubeStations}" height="280px">
<Column bindTo="name" />
<Column bindTo="modes" />
</Table>
</App>
A dozen lines of XML is enough to:
Fetch the list of tube lines and their statuses from the TfL API
Populate a dropdown, with the Bakerloo line initially selected
React to a selection by fetching the chosen line’s station sequence
Display the resulting stations, with their names and modes, in a table
This is a clean, modern, component-based app that’s reactive and themed, without requiring any knowledge of React or CSS. That’s powerful leverage. And it’s code you can read and maintain, whether it was written by you or by an LLM assistant. I’m consulting for the project so you should judge for yourself, but to me this feels like an alternative to the JavaScript industrial complex that ticks all the right boxes.
My most-cited BYTE article was a 1994 cover story called Componentware. Many of us had assumed that the engine of widespread software reuse would be libraries of low-level objects linked into programs written by skilled coders. What actually gained traction were components built by professional developers and used by business developers.
There were Visual Basic components for charting, network communication, data access, audio/video playback, and image scanning/editing. UI controls included buttons, dialog boxes, sliders, grids for displaying and editing tabular data, text editors, tree and list and tab views. People used these controls to build point-of-sale systems, scheduling and project management tools, systems for medical and legal practice management, sales and inventory reporting, and much more.
That ecosystem of component producers and consumers didn’t carry forward to the web. I’m a fan of web components, but it’s the React flavor that dominates, and React components are not accessible to the kind of developer who could productively use Visual Basic components back in the day. You have to be a skilled coder not only to create a React component but also to use one. XMLUI wraps React components so solution builders can use them.
XMLUI provides a deep catalog of components including all the interactive ones you’d expect as well as behind-the-scenes ones like DataSource, APICall, and Queue. You can easily define your own components that interop with the native set and with one another. Here’s the markup for a TubeStops component.
<Component name="TubeStops">
<DataSource
id="stops"
url="https://api.tfl.gov.uk/Line/{$props.line}/StopPoints"
transformResult="{window.transformStops}"
/>
<Text variant="strong">{$props.line}</Text>
<Table data="{stops}">
<Column width="3*" bindTo="name" />
<Column bindTo="zone" />
<Column bindTo="wifi">
<Fragment when="{$item.wifi === 'yes'}">
<Icon name="checkmark"/>
</Fragment>
</Column>
<Column bindTo="toilets">
<Fragment when="{$item.toilets === 'yes'}">
<Icon name="checkmark"/>
</Fragment>
</Column>
</Table>
</Component>
Here’s markup that uses the component twice in a side-by-side layout.
<HStack>
<Stack width="50%">
<TubeStops line="victoria" />
</Stack>
<Stack width="50%">
<TubeStops line="waterloo-city" />
</Stack>
</HStack>
It’s easy to read and maintain short snippets of XMLUI markup. When the markup grows to a hundred lines or more, not so much. But I never need to look at that much code; when components grow too large I refactor them. In any programming environment that maneuver entails overhead: you have to create and name files, identify which things to pass as properties from one place, and unpack them in another. But the rising LLM tide lifts all boats. Because I can delegate the refactoring to my team of AI assistants I’m able to do it fluidly and continuously. LLMs don’t “know” about XMLUI out of the box but they do know about XML, and with the help of MCP (see below) they can “know” a lot about XMLUI specifically.
If you’ve never been a React programmer, as I have not, the biggest challenge with XMLUI-style reactivity isn’t what you need to learn but rather what you need to unlearn. Let’s take another look at the code for the app shown at the top of this post.
<App>
<Select id="lines" initialValue="bakerloo">
<Items data="https://api.tfl.gov.uk/line/mode/tube/status">
<Option value="{$item.id}" label="{$item.name}" />
</Items>
</Select>
<DataSource
id="tubeStations"
url="https://api.tfl.gov.uk/Line/{lines.value}/Route/Sequence/inbound"
resultSelector="stations"/>
<Table data="{tubeStations}" height="280px">
<Column bindTo="name" />
<Column bindTo="modes" />
</Table>
</App>
Note how the Select declares the property id="lines". That makes lines a reactive variable.
Now look at the url property of the DataSource. It embeds a reference to lines.value. Changing the selection changes lines.value. The DataSource reacts by fetching a new batch of details. Likewise the Table‘s data property refers to tubeStations (the DataSource) so it automatically displays the new data.
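To make the spreadsheet analogy concrete, here is a toy sketch of that propagation in plain JavaScript. This is only an illustration of the pattern, not XMLUI’s actual machinery; the cell helper and variable names are invented.

```javascript
// Toy illustration of spreadsheet-style propagation; NOT XMLUI's machinery.
// A "cell" holds a value and notifies dependents when it changes, the way
// the DataSource re-fetches when the Select's value changes.
function cell(initial) {
  let value = initial;
  const subscribers = [];
  return {
    get: () => value,
    set(next) {
      value = next;
      subscribers.forEach(fn => fn(next)); // propagate the change
    },
    subscribe(fn) { subscribers.push(fn); }
  };
}

const lines = cell("bakerloo"); // like <Select id="lines" initialValue="bakerloo">
let lastFetchedUrl = null;

// Like the DataSource whose url embeds {lines.value}:
lines.subscribe(value => {
  lastFetchedUrl = `https://api.tfl.gov.uk/Line/${value}/Route/Sequence/inbound`;
});

lines.set("victoria"); // changing the selection triggers a "re-fetch"
console.log(lastFetchedUrl);
// https://api.tfl.gov.uk/Line/victoria/Route/Sequence/inbound
```

In XMLUI you never write this plumbing; the point is only that a change to one reactive value ripples outward to everything that refers to it.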
There’s a name for this pattern: reactive data binding. It’s what spreadsheets do when a change in one cell propagates to others that refer to it. And it’s what React enables for web apps. React is a complex beast that only expert programmers can tame. Fortunately the expert programmers who build XMLUI have done that for you. As an XMLUI developer you may need to unlearn imperative habits in order to go with the declarative flow. It’s a different mindset but if you keep the spreadsheet analogy in mind you’ll soon get the hang of it. Along the way you’ll likely discover happy surprises. For example, here’s the search feature in our demo app, XMLUI Invoice.
Initially I wrote it in a conventional way, with a search button. Then I realized there was no need for a button. The DataSource URL that drives the query can react to keystrokes in the TextBox, and the Table can in turn react when the DataSource refreshes.
<Component name="SearchEverything">
<VStack paddingTop="$space-4">
<TextBox
placeholder="Enter search term..."
width="25rem"
id="searchTerm"
/>
<Card when="{searchTerm.value}">
<DataSource
id="search"
url="/api/search/{searchTerm.value}"
/>
<Text>Found {search.value ? search.value.length : 0} results for
"{searchTerm.value}":</Text>
<Table data="{search}">
<Column bindTo="table_name" header="Type" width="100px" />
<Column bindTo="title" header="Title" width="*" />
<Column bindTo="snippet" header="Match Details" width="3*" />
</Table>
</Card>
</VStack>
</Component>
When the team first showed me the XMLUI theme system I wasn’t too excited. I am not a designer, so I appreciate a nice default theme that doesn’t require me to make color choices I’m not qualified to make. The ability to switch themes has never felt that important to me, and I’ve never quite understood why developers are so obsessed with dark mode. I have wrestled with CSS, though, to achieve both style and layout effects, and the results have not been impressive. XMLUI aims to make everything you build look good, and behave gracefully, without requiring you to write any CSS or CSS-like style and layout directives.
You can apply inline styles, but for the most part you won’t need them and shouldn’t use them. For me this was another unlearning exercise. I know enough CSS to be dangerous, and in the early going I abused inline styles. That was partly my fault and partly because LLMs think inline styles are catnip and will abuse them on your behalf. If you look at the code snippets here, though, you’ll see almost no explicit style or layout directives. Each component provides an extensive set of theme variables that influence its text color and font, background color, margins, borders, paddings, and more. They follow a naming convention that enables a setting to control appearance globally or in progressively more granular ways. For example, here are the variables that can control the background color of a solid button using the primary color when the mouse hovers over it.
color-primary
backgroundColor-Button
backgroundColor-Button-solid
backgroundColor-Button-primary
backgroundColor-Button-primary-solid
backgroundColor-Button-primary-solid--hover
When it renders a button, XMLUI works up the chain from the most specific setting to the most general. This arrangement gives designers many degrees of freedom to craft exquisitely detailed themes. But almost all the settings are optional, and those that are defined by default use logical names instead of hardcoded values. So, for example, the default setting for backgroundColor-Button-primary is $color-primary-500. That’s the midpoint in a range of colors that play a primary role in the UI. There’s a set of such semantic roles, each associated with a color palette. The key roles are:
Surface: creates neutral backgrounds and containers.
Primary: draws attention to important elements and actions.
Secondary: provides visual support without competing with primary elements.
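The most-specific-wins resolution described above is easy to sketch. This is a toy illustration, not XMLUI’s implementation; the function name and sample theme values are invented.

```javascript
// Toy illustration of most-specific-wins theme variable resolution.
// NOT XMLUI's implementation: the function name and sample values are invented.
function resolveThemeVar(theme, chain) {
  // chain is ordered from most specific to most general;
  // the first variable the theme actually defines wins
  for (const name of chain) {
    if (name in theme) return theme[name];
  }
  return undefined;
}

const theme = {
  "color-primary": "hsl(30, 50%, 30%)",
  "backgroundColor-Button-primary-solid--hover": "hsl(30, 50%, 40%)"
};

const hoverChain = [
  "backgroundColor-Button-primary-solid--hover",
  "backgroundColor-Button-primary-solid",
  "backgroundColor-Button-primary",
  "backgroundColor-Button-solid",
  "backgroundColor-Button",
  "color-primary"
];

console.log(resolveThemeVar(theme, hoverChain)); // "hsl(30, 50%, 40%)"
```

Remove the hover-specific setting and the same lookup falls all the way back to the general color-primary, which is why a theme can define as few or as many of these variables as it likes.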
What’s more, you can generate complete palettes from a single midpoint value for each.
name: Earthtone
id: earthtone
themeVars:
  color-primary: "hsl(30, 50%, 30%)"
  color-secondary: "hsl(120, 40%, 25%)"
  color-surface: "hsl(39, 43%, 97%)"
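How might a full palette come from one midpoint? One hypothetical sketch: hold hue and saturation fixed and step the lightness around the declared midpoint to produce the 100–900 shades. XMLUI’s actual palette algorithm may differ; the function and the 8%-per-step delta are invented.

```javascript
// Hypothetical sketch of deriving a palette from one midpoint color.
// XMLUI's actual algorithm may differ; the 8%-per-step lightness delta is invented.
function makePalette(hue, saturation, midLightness) {
  const steps = [100, 200, 300, 400, 500, 600, 700, 800, 900];
  const palette = {};
  for (const step of steps) {
    // 500 keeps the declared midpoint; lower steps lighten, higher steps darken
    const raw = midLightness + ((500 - step) / 100) * 8;
    const lightness = Math.max(0, Math.min(100, raw));
    palette[step] = `hsl(${hue}, ${saturation}%, ${lightness}%)`;
  }
  return palette;
}

// From color-primary: "hsl(30, 50%, 30%)"
const primary = makePalette(30, 50, 30);
console.log(primary[500]); // "hsl(30, 50%, 30%)" -- the declared midpoint
```

However the shades are actually computed, the contract is the same: declare one color per role and the theme engine fills in the rest.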
Themes aren’t just about colors, though. XMLUI components work hard to provide default layout settings that yield good spacing, padding, and margins both within individual components and across a canvas that composes sets of them. I am, again, not a designer, so not really qualified to make a professional judgement about how it all works. But the effects I can achieve look pretty good to me.
As a Visual Basic developer you weren’t expected to be an ace coder but were expected to be able to handle a bit of scripting. It’s the same with XMLUI. The language is JavaScript and you can go a long way with tiny snippets like this one in TubeStops.
<Fragment when="{$item.wifi === 'yes'}"></Fragment>
TubeStops also uses the transformResult property of its DataSource to invoke a more ambitious chunk of code.
function transformStops(stops) {
return stops.map(stop => {
// Helper to extract a value from additionalProperties by key
const getProp = (key) => {
const prop = stop.additionalProperties && stop.additionalProperties.find(p => p.key === key);
return prop ? prop.value : '';
};
return {
name: stop.commonName,
zone: getProp('Zone'),
wifi: getProp('WiFi'),
toilets: getProp('Toilets'),
// A comma-separated list of line names that serve this stop
lines: stop.lines ? stop.lines.map(line => line.name).join(', ') : ''
};
});
}
This is not trivial, but it’s not rocket science either. And of course you don’t need to write stuff like this nowadays, you can have an LLM assistant do it for you. So we can’t claim that XMLUI is 100% declarative. But I think it’s fair to say that the imperative parts are well-scoped and accessible to a solution builder who doesn’t know, or want to know, anything about the JavaScript industrial complex.
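To see what that transform yields, here it is applied to a sample stop. The sample data below is invented, but mirrors the shape of TfL’s StopPoints response.

```javascript
// transformStops as shown above, applied to invented sample data that
// mirrors the shape of TfL's StopPoints response.
function transformStops(stops) {
  return stops.map(stop => {
    // Helper to extract a value from additionalProperties by key
    const getProp = (key) => {
      const prop = stop.additionalProperties && stop.additionalProperties.find(p => p.key === key);
      return prop ? prop.value : '';
    };
    return {
      name: stop.commonName,
      zone: getProp('Zone'),
      wifi: getProp('WiFi'),
      toilets: getProp('Toilets'),
      lines: stop.lines ? stop.lines.map(line => line.name).join(', ') : ''
    };
  });
}

const sampleStops = [{
  commonName: "Oxford Circus Underground Station",
  additionalProperties: [
    { key: "Zone", value: "1" },
    { key: "WiFi", value: "yes" },
    { key: "Toilets", value: "no" }
  ],
  lines: [{ name: "Victoria" }, { name: "Bakerloo" }, { name: "Central" }]
}];

console.log(transformStops(sampleStops));
// [{ name: "Oxford Circus Underground Station", zone: "1",
//    wifi: "yes", toilets: "no", lines: "Victoria, Bakerloo, Central" }]
```

The raw API shape, with its key/value additionalProperties array, never leaks into the markup; the Table just binds to flat fields like zone and wifi.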
In the age of AI, who needs XMLUI when you can just have LLMs write React apps for you? It’s a valid question and I think I have a pretty good answer. The first version of XMLUI Invoice was a React app that Claude wrote in 30 seconds. It was shockingly complete and functional. But I wasn’t an equal partner in the process. I’m aware that React has things like useEffect and useContext but I don’t really know what they are or how to use them properly, and am not competent to review or maintain JavaScript code that uses these patterns. The same disadvantage applies to the CSS that Claude wrote. If you’re a happy vibe coder who never expects to look at or work with the code that LLMs generate, then maybe XMLUI isn’t for you.
If you need to be able to review and maintain your app, though, XMLUI levels the playing field. I can read, evaluate, and competently adjust the XMLUI code that LLMs write. In a recent talk Andrej Karpathy argues that the sweet spot for LLMs is a collaborative partnership in which we can dynamically adjust how much control we give them. The “autonomy slider” he envisions requires that we and our assistants operate in the same conceptual/semantic space. That isn’t true for me, nor for the developers XMLUI aims to empower, if the space is React+CSS. It can be true if the space is XMLUI.
To enhance the collaboration we provide an MCP server that helps you direct agents’ attention as you work with them on XMLUI apps. In MCP is RSS for AI I described the kinds of questions that agents like Claude and Cursor can use xmlui-mcp to ask and answer:
Is there a component that does [X]?
What do the docs for [X] say about topic [Y]?
How does the source code implement [X]?
How is [X] used in other apps?
You place the xmlui-mcp server alongside the xmlui repo, which includes docs and source code; the repo in which you are developing an XMLUI app; and, ideally, other repos that contain reference apps like XMLUI Invoice.
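One plausible workspace layout for that arrangement looks like this (the directory names are illustrative, not prescribed):

```
workspace/
├── xmlui/            # framework repo: component docs and source
├── xmlui-mcp/        # the MCP server
├── my-xmlui-app/     # the app you're developing
└── xmlui-invoice/    # a reference app with best-practice patterns
```

With everything side by side, the server can search docs, source, and working examples in one sweep.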
This arrangement has mostly exceeded my expectations. As I build out a suite of apps that exemplify best practices and patterns, the agentic collaboration improves. This flywheel effect is, of course, still subject to the peculiar habits of LLM assistants who constantly need to be reminded of the rules.
1. Don’t write any code without my permission; always preview proposed changes, discuss, and only proceed with approval.
2. Don’t add any XMLUI styling; let the theme and layout engine do its job.
3. Proceed in small increments; write the absolute minimum amount of XMLUI markup necessary, and no script if possible.
4. Do not invent any XMLUI syntax. Only use constructs for which you can find examples in the docs and sample apps. Cite your sources.
5. Never touch the DOM. We only use XMLUI abstractions inside the App realm, with help from vars and functions defined on the window variable in index.html.
6. Keep complex functions and expressions out of XMLUI; they can live in index.html or (if scoping requires) in code-behind.
7. Use the xmlui-mcp server to list and show component docs, but also to search XMLUI source, docs, and examples.
8. Always do the simplest thing possible.
It’s like working with 2-year-old savants. Crazy, but it can be effective!
To increase the odds that you’ll collaborate effectively, we added a How To section to the docs site. The MCP server makes these articles visible to agents by providing tools that list and search them. This was inspired by a friend who asked: “For a Select, suppose you don’t have a static default first item but you want to fetch data and choose the first item from data as the default selected, how’d you do that in xmlui?” It took me a few minutes to put together an example. Then I realized that’s the kind of question LLMs should be able to ask and answer autonomously. When an agent uses one of these tools it is anchored to ground truth: an article found this way has a citable URL that points to a working example.
It’s way easier for me to do things with XMLUI than with React and CSS, but I’ve also climbed a learning curve and absorbed a lot of tacit knowledge. Will the LLM-friendly documentation flatten the learning curve for newcomers and their AI assistants? I’m eager to find out.
We say XMLUI is for building apps, but what are apps really? Nowadays websites are often apps too, built on frameworks like Vercel’s Next.js. I’ve used publishing systems built that way and I am not a fan. You shouldn’t need a React-savvy front-end developer to help you make routine changes to your site. And with XMLUI you don’t. Our demo site, docs site, and landing page are all XMLUI apps that are much easier for me to write and maintain than the Next.js sites I’ve worked on.
“Eating the dogfood” is an ugly name for a beautiful idea: Builders should use and depend on the things they build. We do, but there’s more to the story of XMLUI as a CMS. When you build an app with XMLUI you are going to want to document it. There’s a nice synergy available: the app and its documentation can be made of the same stuff. You can even showcase live demos of your app in your docs as we do in component documentation, tutorials, and How To articles.
I was an early proponent of screencasts for software demos, and it can certainly be better to show than tell, but it’s infuriating to search for the way to do something and find only a video. Ideally you show and tell. Documenting software with a mix of code, narrative, and live interaction brings all the modalities together.
Out of the box, XMLUI wraps a bunch of React components. What happens when the one you need isn’t included? This isn’t my first rodeo. In a previous effort I leaned heavily on LLMs to dig through layers of React code but was still unable to achieve the wrapping I was aiming for.
For XMLUI the component I most wanted to include was the Tiptap editor which is itself a wrapper around the foundational ProseMirror toolkit. Accomplishing that was a stretch goal that I honestly didn’t expect to achieve before release. But I was pleasantly surprised, and here is the proof.
This XMLUI TableEditor is the subject of our guide for developers who want to understand how to create an XMLUI component that wraps a React component. And it isn’t just a toy example. When you use XMLUI for publishing, the foundation is Markdown, which is wonderful for writing and editing headings, paragraphs, lists, and code blocks, but awful for writing and editing tables. In that situation I always resort to a visual editor to produce Markdown table syntax. Now I have that visual editor as an XMLUI component that I can embed anywhere.
The React idioms that appear in that guide were produced by LLMs, not by me, and I can’t fully explain how they work, but I am now confident it will be straightforward for React-savvy developers to extend XMLUI. What’s more, I can now see the boundary between component builders and solution builders begin to blur. I am mainly a solution builder who has always depended on component builders to accomplish anything useful at that level. The fact that I was able to accomplish this useful thing myself feels significant.
Here’s the minimal XMLUI deployment footprint for the TableEditor.
TableEditor
├── Main.xmlui
├── index.html
└── xmlui
    └── 0.9.67.js
The index.html just sources the latest standalone build of XMLUI.
<script src="xmlui/0.9.67.js"></script>
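For context, a minimal index.html along these lines might look like the following sketch. It assumes the standalone build bootstraps the app from Main.xmlui in the same directory; only the script tag above is taken from the source.

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>TableEditor</title>
    <!-- Standalone XMLUI build; the version matches the deployment footprint -->
    <script src="xmlui/0.9.67.js"></script>
  </head>
  <body>
    <!-- XMLUI finds Main.xmlui alongside this file and renders the App.
         Window-level helper functions, if any, would also be defined here. -->
  </body>
</html>
```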
Here’s Main.xmlui.
<App var.markdown="">
<Card>
<TableEditor
id="tableEditor"
size="xs"
onDidChange="{(e) => { markdown = e.markdown }}"
/>
</Card>
<Card>
<HStack>
<Text variant="codefence" preserveLinebreaks="{true}">
{ markdown }
</Text>
<SpaceFiller />
<Button
icon="copy"
variant="ghost"
size="xs"
onClick="navigator.clipboard.writeText(markdown)"
/>
</HStack>
</Card>
</App>
You can use any static webserver to host the app. You can even run it from an Amazon S3 bucket.
For XMLUI Invoice we provide a test server that includes a localhost-only static server, embeds SQLite, and adds a CORS proxy for apps that need one when talking to APIs (like HubSpot’s) that require CORS. You may need to wrap similar capabilities around your XMLUI apps, but the minimal deployment is dead simple.
XMLUI was conceived by Gent Hito who founded /n software and CData. The mission of /n software: make network communication easy for developers. For CData: make data access easy for developers. And now for XMLUI: make UI easy for developers.
“We are backend people,” Gent says. “All our components are invisible, and when we tried to build simple business UIs we were surprised to find how hard and frustrating that was.”
Those of us who remember the Visual Basic era know it wasn’t always that way. But the web platform has never been friendly to solution builders who need to create user interfaces. That’s become a game for specialists who can wrap their heads around an ongoing explosion of complexity.
It shouldn’t be that way. Some apps do require special expertise. But many shouldn’t. If you are /n software, and you need to give your customers an interface to monitor and control the CoreSSH Server, you shouldn’t need to hire React and CSS pros to make that happen. Your team should be able to do it for themselves and now they can.
I’m having a blast creating interfaces that would otherwise be out of my reach. Will you have the same experience? Give it a try and let us know how it goes!
We mostly don’t want to read the docs, but we do want to converse with them. When we build search interfaces for our docs, we have always tried to anticipate search intentions. People aren’t just looking for words; they need to use the material to solve problems and get things done. When you create an MCP server, you are forced to make those search intentions explicit. That will be as useful for us as it is for the robots, and will help us work with them more effectively.
The great adventure of my birth family was the fifteen months we lived in New Delhi, from June of 1961, on a USAID-sponsored educational mission. So the destruction of USAID feels personal. I’m only now realizing that we were there at the very beginning of USAID, during what Jackie Kennedy later mythologized as the Camelot era. On a tour of India, at a meet-and-greet in New Delhi, she appears in this family photo.
We must have been at the embassy; she’s surrounded by Americans. You can see a few South Asian faces in the background. The young boy at the center of the photo, gazing up at the queen of Camelot, is five-year-old me.
It could have been a Life Magazine cover: “A vision in white, Jackie represents America’s commitment to be of service to the world.” As corny as that sounds, though, the commitment was real. Our nation upheld it for sixty years and then, a few months ago, fed it to the wood chipper and set in motion a Holocaust-scale massacre.
We suggest the number of lives saved per year may range between 2.3 and 5.6 million, with our preferred number resting on gross estimates of 3.3 million.
The shutdown likely won’t kill 3.3 million people annually; say it’s “only” a million. Per year. For six years. It adds up.
Atul Gawande led global public health at USAID. On a recent podcast he runs some more numbers.
On USAID “waste”:
“It’s 0.35% of the federal budget, but that doesn’t help you, right? Try this. The average American paid 14,600ドル in taxes in 2024. The amount that went to USAID is under 50ドル. For that we got control of an HIV epidemic that is at minuscule levels compared to what it was before. We had control of measles and TB. And it goes beyond public health. You also have agricultural programs that helped move India from being chronically food-aid-dependent to being an agricultural exporter. Many of our top trading partners once received USAID assistance that helped them achieve economic development.”
On USAID “fraud”:
“When Russia invaded Ukraine they cut off its access to medicine, bombed the factories that made oxygen, ran cyberattacks. The global health team moved the entire country’s electronic health record system to the cloud, and got a supply chain up and running for every HIV and TB patient in the country.”
On USAID “abuse”:
“The countries where we worked had at least 1.2 million lives saved. In addition, there was a vaccine campaign for measles and for HPV. For every 70 girls in low income countries who are vaccinated against cervical cancer from HPV, one life is saved. It’s one of the most life-saving things in our portfolio. Our vaccine programs would have saved an additional 8 million lives over the next five years.”
America has never been a shining city on the hill, but USAID represented our best aspirations. In the throes of the Maoist cultural revolution that tore it down, there are many other horrors to confront, but for me this one hits hardest.
This Fresh Air interview with Hanif Kureishi had me riveted from the beginning, for one reason, and then at the end for a different reason. Kureishi is best known as the author of the 1985 British rom-com My Beautiful Laundrette. During an illness in 2022 he fainted, fell on his face, broke his neck, and woke up paraplegic. His account of what that’s like resonated deeply.
Soon after we moved to Santa Rosa a decade ago I became close friends with someone who had suffered the same fate. Until the age of 30 Stan Gow was a rodeo rider, mountain climber, and ski patrol hotshot.
Then he dove into a shallow pool, broke his neck, and spent the next 40 years in a motorized wheelchair.
Before an accident like that you’re an autonomous person, then suddenly and forever after you’re as helpless as an infant, wholly dependent on others who feed you, clean you, dress you, hoist you into the chair in the morning, put you to bed at night, and turn you over in bed during the night.
“You feel like a helpless baby,” Kureishi says, “and a tyrant too.” I saw this happen with Stan. When you have to ask caregivers for everything it feels shameful and embarrassing. Those feelings can convert polite requests into angry demands.
The only escape from that condition, for those lucky enough to be able to own and use one, is the motorized wheelchair. Kureishi has just enough use of an arm to be able to drive himself around the neighborhood. Stan did too, and over the years we walked just about everywhere his wheels could go. Tagging along I gained a deep appreciation for that miracle of mobility, and for the consequences when it’s thwarted by stairs that lack ramps and curbs that lack cuts.
The interview brought back powerful memories of my time with Stan, who died a few years ago after outliving expectations for an injury like his by decades. And then it took a turn when Terry Gross asked about the ethnicity of Kureishi’s caregivers. He was in Italy when the accident happened, and nearly everyone in the hospital was white. When he returned to England it was a different story.
The whole of our huge NHS is run by people from all over the world, and it’s just incredible to lie in bed to be changed and washed by someone and you have these incredible conversations with somebody from Africa, from the Philippines, from India or Pakistan. One of the things you become aware of in these British hospitals is our dependence on immigration.
It’s not quite like that in the US, but much more so than in Italy. During my mother’s final illness one of her caretakers was a Haitian nurse. Mom was a linguist who spoke and taught French, Spanish, and Italian. She’d been unresponsive for a few days, but when the nurse spoke to her in French she perked up like one of the patients in Awakenings.
Paraplegia is rare but helplessness is universal. We all begin that way, we all end that way. Demonizing immigrants is wrong for so many reasons. Among them: who else will take care of you in your time of ultimate need?
A few years ago I abandoned Twitter in favor of Mastodon. Recent events validate that choice and underscore the strategic importance of a decentralized fediverse that can’t be owned by a single corporate or state actor. But while Mastodon meets my needs, much of the Twitter diaspora has gone to Bluesky. That’s fine for now but might not always be. In an article titled "Science Must Step Away From Nationally Managed Infrastructure," Dan Goodman writes:
Many scientists put huge efforts into building networks to communicate with colleagues and the general public. But all that work and the value in those networks was lost when many scientists felt compelled to leave following Elon Musk’s takeover of the platform (now X). The process of rebuilding on Bluesky is underway, but it will take years and may never reach the same critical mass. Even if the transition is successful, the same thing may happen to Bluesky in a few years.
How can we prepare for a future migration from Bluesky to Mastodon? Bridgy Fed — a service that enables you to connect together your website, fediverse account and Bluesky account — will help. But Bridgy Fed needs to be easier to use. So I recruited Claude’s new 3.7 Sonnet model to do that.
The JavaScript industrial complex won’t crumble anytime soon. But the stage is set for a return to an ecosystem of reusable components accessible to business developers, only this time based on the universal web platform and its core standards.
Perhaps, even though they are not themselves explainable, AIs can help us engineer explainable systems. But I’m not optimistic. It feels like we’re on a path to keep making systems harder for humans to configure, and we keep expanding our reliance on superhuman intelligence to do that for us.
The first time I heard a critique of mediated experience, the critic was my dad. He was an avid photographer who, during our family’s year in India, when I was a young child, used his 35mm Exakta to capture thousands of photos that became carousels of color slides we viewed for many years thereafter. It was a remarkable documentary effort that solidified our memories of that year. But dad was aware of the tradeoff. A favorite joke became: “Q: How was your trip?” “A: I won’t know until the film is developed!” He realized that interposing a camera between himself and the people he encountered had altered the direct experience he and they would otherwise have had.
This weekend I heard Christine Rosen’s modern version of that critique in a discussion of her new book The Extinction of Experience: Being Human in a Disembodied World. I listened to the podcast on a hike, my noise-canceling AirPods insulating me from the sounds of the creek trail and from the people walking along it.
It’s complicated. When hiking alone I greatly value the ability to listen to interesting people and ideas while exercising, breathing fresh air, and moving through the natural world. The experience is embodied in one sense, disembodied in another. Reading the same material while lying on the couch would be a different, and arguably more extreme, form of disembodiment. But when I passed a family of four, all walking along looking at their phones, that felt wrong. When people are together they should actually be together, right? You’ve doubtless felt the same when seeing people in this together-but-not-together state.
Lately Pete Buttigieg has been urging us to spend less time online, more time IRL having face-to-face conversations. I think that’s right. There’s no doubt that the decline of social capital described in Robert Putnam’s Bowling Alone has accelerated in the 30 years since he wrote that book. America’s tragic polarization is a predictable outcome. Without the institutions and cultural traditions that once brought us together, face-to-face, in non-political ways, we’re all too vulnerable to being herded into competing online echo chambers that magnify our differences and erase our common humanity.
I won’t be abandoning my mediated and disembodied life online, but I do need to participate in it less and more critically, and prioritize my unmediated and embodied life IRL. The pendulum has swung too far away from the direct experience of shared reality, and that hasn’t been good for me or for my country.
Earlier efforts to diagram software with LLM assistance weren’t fruitful, but this time around things went really well. I ended up with exactly what I needed to explain the architecture of a browser extension, and along the way I learned a lot about a couple of formats — Mermaid and Graphviz — as well as their tool ecosystems.
“If you work with these cloud platforms every day, you have doubtless forgotten that you ever had questions like these. But every newcomer does. And on a continuing basis, we are all newcomers to various aspects of applications and services. In so many ways, the experience boils down to: I am here, what do I do now?
It’s nice if you can share your screen with someone who has walked that path before you, but that’s often impossible or infeasible. LLMs synthesize what others have learned walking the path. We typically use words to search that body of hard-won knowledge. Searching with images can be a powerful complementary mode.”
What ChatGPT and Claude can see on your screen
Part of the LLM series at The New Stack.
There are plenty of ways to use LLMs ineffectively. For best results, lean into your own intelligence, experience, and creativity. Delegate the boring and routine stuff to closely supervised assistants whose work you can easily check.
Mix Human Expertise With LLM Assistance for Easier Coding
Part of the LLM series at The New Stack.
I was aware of The Geysers, a geothermal field about 35 miles north of my home in Santa Rosa, but I never gave it much thought until my first bike ride through the area. Then I learned a number of interesting things.
It’s the world’s largest geothermal field, producing more than 700 megawatts.
It accounts for 20% of California’s renewable energy.
The naturally-occurring steam was used up almost 30 years ago, and steam is now recharged by pumping in 11 million gallons of sewage effluent daily, through a 42-mile pipeline, from the Santa Rosa plain.
That daily recharge is implicated in the region’s frequent small earthquakes. (But nobody seems too worried about that, and maybe it’s a good thing? Many small better than one big?)
An article in today’s paper reports that AB-1359, signed last week by Governor Gavin Newsom, paves the way for new geothermal development in the region that could add 600 megawatts of production.
How much electric power is that? I like to use WolframAlpha for quick and rough comparisons.
So, 2/3 of a nuke plant. 4/5 of a coal-fired power plant. These kinds of comparisons help me contextualize so many quantitative aspects of our lives. They’re the primary reason I visit WolframAlpha. I wish journalists would use it for that purpose.
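WolframAlpha aside, the same sanity check is one line of arithmetic. The plant capacities below are rough typical figures I’m assuming for illustration (about 900 MW for a large nuclear reactor, 750 MW for a large coal plant); they are not numbers from the article.

```javascript
// Back-of-envelope version of the comparison, using assumed typical capacities.
const newGeothermalMW = 600; // proposed new capacity (from the article)
const typicalNuclearMW = 900; // assumed: one large nuclear reactor
const typicalCoalMW = 750; // assumed: one large coal-fired plant

const vsNuclear = newGeothermalMW / typicalNuclearMW; // ≈ 2/3
const vsCoal = newGeothermalMW / typicalCoalMW; // ≈ 4/5

console.log(`~${Math.round(vsNuclear * 100)}% of a nuclear plant`);
console.log(`~${Math.round(vsCoal * 100)}% of a coal plant`);
```

Swap in different reference capacities and the ratios update; the point is having any familiar yardstick at all.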
In How and why to write letters to voters I discussed Vote Forward, my favorite way for those of us who aren’t in swing states to reach out to voters in swing states. The site works really well for adopting batches of voters, and downloading packets of form letters. As I close in on 1000 letters, though, I’m finding it isn’t great for tracking progress at scale. Here’s how my dashboard page looks.
With 50 bundles in play, many of which are farmed out to friends and neighbors who are helping with the project, it’s become cumbersome to keep track of which bundles are prepped (ready to mail) or not. Here is the checklist I needed to see.
VoteForward Dashboard Report
mmorg: 1-UNPREPPED
r23Pp: 2-UNPREPPED
v9Kbo: 3-UNPREPPED
wLMPw: 4-UNPREPPED
24L4o: 5-PREPPED
4nNnj: 6-PREPPED
5rQmV: 7-PREPPED
...
YV4dL: 48-PREPPED
zKjne: 49-PREPPED
ZrKJz: 50-PREPPED
If you’re in the same boat, here’s a piece of code you can use to make your own checklist. It’s gnarly; if you aren’t a programmer I advise you not even to look at it. Just copy it, paste it into your browser’s console, and it will open a new window with your report.
javascript:(function(){
 // First part: adjust the height of divs with inline styles so the
 // full bundle list renders
 document.querySelectorAll('div[style]').forEach(div => {
 let inlineStyle = div.getAttribute('style');
 if (inlineStyle.includes('position: relative')) {
 div.style.height = '20000px'; // set the height to 20000px
 }
 });
 // Introduce a delay before processing the list of items
 setTimeout(() => {
 const items = document.querySelectorAll('li.bundle-list-item.individual');
 let dataList = [];
 // Iterate over the items to capture status and ID
 items.forEach(item => {
 let dataTestId = item.getAttribute('data-testid');
 // Use the id attribute of the input element to extract the bundle ID
 const toggleInput = item.querySelector('input.slide-out-toggle');
 const toggleId = toggleInput ? toggleInput.getAttribute('id') : '';
 // Extract the ID part from the toggleId pattern "toggle-24L4o-PREPPED"
 const id = toggleId ? toggleId.split('-')[1] : 'ID not found';
 // Keep only the "PREPPED" or "UNPREPPED" part of dataTestId
 dataTestId = dataTestId.split('-').pop();
 // Push the data into the array
 dataList.push({ dataTestId, id });
 });
 // Sort UNPREPPED before PREPPED, then by ID within each group.
 // Compare statuses exactly: "UNPREPPED".includes("PREPPED") is true,
 // so substring tests can't tell the two statuses apart.
 dataList.sort((a, b) => {
 const aUnprepped = a.dataTestId === 'UNPREPPED';
 const bUnprepped = b.dataTestId === 'UNPREPPED';
 if (aUnprepped !== bUnprepped) {
 return aUnprepped ? -1 : 1; // UNPREPPED comes before PREPPED
 }
 // Sort by ID if they belong to the same category
 return a.id.localeCompare(b.id);
 });
 // Prepare the output string
 let output = '';
 dataList.forEach((item, index) => {
 output += `${item.id}: ${index + 1}-${item.dataTestId}\n`;
 });
 // Open a new window with the output in a <pre> block for easy copying
 let newWindow = window.open('', '', 'width=500,height=500');
 newWindow.document.write('<html><body><h2>VoteForward Dashboard Report</h2><pre>' + output + '</pre></body></html>');
 newWindow.document.close();
 }, 2000); // adjust delay as needed
})();
Here are instructions for Chrome/Edge, Firefox, and Safari. You might need to tell your browser to allow the popup window in which it writes the report.
Chrome/Edge: press Ctrl + Shift + J (Windows) or Cmd + Option + J (Mac) to open the console, paste the code, and press Enter to run it.
Firefox: press Ctrl + Shift + K (Windows) or Cmd + Option + K (Mac), paste the code, and press Enter to run it.
Safari: press Cmd + Option + C, paste the code, and press Enter to run it.
It would be nice to have this as a built-in feature of the site but, as we come down to the wire, this may be a helpful workaround.
Thanks, again, to the Vote Forward team for all you do! It’s a great way to encourage voter turnout.
On a recent trip I saw this pair of Latin phrases tattooed on the back of a flight attendant’s arms:
Left: Deo absente. Right: Deum culpa.
I took Latin in middle school, and could guess what the combination might mean. It’s not a common construction, and a search seems to confirm my guess. Both Google and Bing take you to a couple of Reddit posts in r/Latin.
Would this be the correct translation?
A song I like, Deus in absentia by Ghost, has that line in it intending to mean “In the absence of God”, so I was looking into alternate translations/syntax of the phrase intending to mean “In the absence of God; Blame/Fault God”. Would this make sense: “Deum in absente; Culpa Deus” or “Deus Culpa”?
Does the phrase “Deus In Absentia, Deus Culpa” make sense?
I’m using this for a tattoo and want to be absolutely sure it works in the sense of ‘In the absence of God, blame God’. All help appreciated!
Is that the same person I saw? If so, the responses in r/Latin seem to have guided them to the final text inked on their arms. And if so, the message is essentially what I had guessed. The intent of the message, though, is open to interpretation. I’m not quite sure how to take it. What do you think it means? Would it have been rude to ask?
Powerpipe dashboards can now connect not only to Steampipe but also to SQLite and DuckDB. This creates a combinatorial explosion of possibilities, including dashboards that use SQL to visualize large datasets read from Parquet files by DuckDB.
SQL Translation From Postgres to SQLite and DuckDB
Part of the LLM series at The New Stack.
“Communities that want to build comprehensive public calendars will be able to do so using a hybrid approach that blends existing iCalendar feeds with feeds synthesized from web calendars. It’s not a perfect solution, but with LLM assistance it’s a workable one. And who knows, maybe if people see what’s possible when information silos converge, the common tools that can ease convergence will seem more attractive.” — An LLM-Turbocharged Community Calendar Reboot
Part of the LLM series at The New Stack.
“Users of the WordPress API may enjoy the abstraction — and standardization — that a SQL interface provides. If you need to query multiple WordPress sites, Steampipe’s connection aggregator will be really handy. And if you want to integrate data from WordPress with data from other APIs wrapped by other plugins in the Steampipe hub, performing literal SQL JOINs across disparate APIs is a heady experience.” — Building a Steampipe Plugin — and Powerpipe Dashboards — for WordPress
Part of the LLM series at The New Stack.
“Some argue that by aggregating knowledge drawn from human experience, LLMs aren’t sources of creativity, as the moniker "generative" implies, but rather purveyors of mediocrity. Yes and no. There really are very few genuinely novel ideas and methods, and I don’t expect LLMs to produce them. Most creative acts, though, entail novel recombinations of known ideas and methods. Because LLMs radically boost our ability to do that, they are amplifiers of — not threats to — human creativity.” – How LLMs Guide Us to a Happy Path for Configuration and Coding
Part of the LLM series at The New Stack.
Here’s the latest installment in the series on working with LLMs: https://thenewstack.io/choosing-when-to-use-or-not-use-llms-as-a-developer/
For certain things, the LLM is a clear win. If I’m looking at an invalid blob of JSON that won’t even parse, there’s no reason to avoid augmentation. My brain isn’t a fuzzy parser — I’m just not wired to see that kind of problem, and that isn’t likely to change with effort and practice. But if there are structural problems with code, I need to think about them before reaching for assistance.
The rest of the series:
1 When the rubber duck talks back
2 Radical just-in-time learning
3 Why LLM-assisted table transformation is a big deal
4 Using LLM-Assisted Coding to Write a Custom Template Function
5 Elevating the Conversation with LLM Assistants
6 How Large Language Models Assisted a Website Makeover
7 Should LLMs Write Marketing Copy?
8 Test-Driven Development with LLMs: Never Trust, Always Verify
9 Learning While Coding: How LLMs Teach You Implicitly
10 How LLMs Helped Me Build an ODBC Plugin for Steampipe
11 How to Use LLMs for Dynamic Documentation
12 Let’s talk: conversational software development
13 Using LLMs to Improve SQL Queries
14 Puzzling over the Postgres Query Planner with LLMs
15 7 Guiding Principles for Working with LLMs
16 Learn by Doing: How LLMs Should Reshape Education
17 How to Learn Unfamiliar Software Tools with ChatGPT
18 Creating a GPT Assistant That Writes Pipeline Tests
19 Using AI to Improve Bad Business Writing
20 Code in Context: How AI Can Help Improve Our Documentation
21 The Future of SQL: Conversational Hands-on Problem Solving
22 Pairing With AI: A Senior Developer’s Journey Building a Plugin
23 How LLMs Can Unite Analog Event Promotion and Digital Calendars
24 Using LLMs to Help Write a Postgres Function
25 Human Insight + LLM Grunt Work = Creative Publishing Solution
If you don’t live in a swing state, but would like to do more than just send money to help encourage voter turnout in those places, what are your options? For me the best one is Vote Forward, which orchestrates letter-writing to registered voters. I sent hundreds of such letters in 2020 and am aiming to do lots more, with help from friends, this time around.
Even if I lived in a swing state, I’m not someone who’d be comfortable knocking on doors. And the last thing I want to do is pester people in those places with yet another unwanted phone call or text message. The Vote Forward method is perfect for me personally, and I also think it’s the most clever and sensible way to encourage voters in other states. Here’s how it works.
You “adopt” voters in batches of 5 or 20. I just adopted my first 100: 20 in each of Ohio, Pennsylvania, Michigan, New Hampshire, and North Carolina. You download each batch as a PDF that prints 21 pages. Page one has the instructions and the list of registered voters’ names and addresses.
The fact that you write the letters (and address the envelopes) by hand is a great idea. We receive very few hand-addressed letters nowadays, so I think they have a pretty good chance of being opened. And once opened, the hand-written message is again unusual. The fact that somebody made the effort to do that signals a rare kind of authenticity.
Likewise, I think the nonpartisan tone of the message is unusual and conveys authenticity. I wish voting were mandatory in the US, as it is in Australia and elsewhere. However the chips fall in November, I would like to know that the result truly reflects what everyone thinks. My message last time was something like:
“… because it’s not really a democracy unless everyone’s voice is heard.”
Pages 2-21 are the letter templates. They look like this:
The hardest part for me was the handwriting. I famously struggled with cursive writing in fifth grade. By the time I reached high school I had reverted to printing. Then, in college, I realized that cursive is more efficient and relearned how to do it. I had to relearn all over again in 2020 because cursive was the fastest way to write all those letters. And I’ll probably have to relearn again this time. I suspect many in younger generations never learned cursive at all, in which case writing the letters by hand will be even harder. So: keep the message short!
If you’ve received a link to this post directly from me, it’ll come with an invitation to drop by our house, hang out on the porch, and help me complete batches of these letters. Otherwise, I hope you might try this method yourself, and/or share it with others. In the past week I’ve switched from doomscrolling to hopescrolling and that’s a huge relief. But I also want to do something tangible (again, beyond donations) and this will be my focus. It feels good to do the work, and will feel really good when I visit the post office sometime in October and drop off a big stack of hand-addressed envelopes.
But is it effective? That’s another thing I like about Vote Forward. They’ve made a sincere effort to measure the impact. And they are honest about the findings: the measurable effect is small. I’ll give them the last word here.
Why should we get excited about small differences?
Because getting people who don’t vote to show up at the polls (or mail in a ballot) is actually pretty hard. Most of the factors that affect whether people vote are tied to big, structural issues (like voter ID laws or polling place accessibility) or deep-seated attitudes (e.g., a lack of faith that elections matter). Given these obstacles, boosting turnout by even a small amount is a real achievement! And, when it comes to politics, we know that many races are decided by tight margins, so a small boost in turnout can translate into a meaningful difference in electoral outcomes.
My family, on my dad’s side, were Jews from Poland and Ukraine. His parents came to America before the shit hit the fan, but I grew up knowing two people who weren’t so lucky. Seymour Mayer lived across the street during my teens. And Annie Braunschweig, who we knew as Brownie, had taken care of my sister and me as four- and five-year-old kids when our mom – unusually at that time – went back to work full-time teaching at a university. Both Seymour and Brownie were survivors of Nazi concentration camps, with tattooed numbers on their arms.
I never heard Seymour talk about it. Brownie rarely did, though I remember one story about a mother who tossed her swaddled baby to a stranger as the train was leaving to take her to the gas chambers.
Very few survivors remain. And there are not many of us who have known survivors. I’ve thought a lot, over the years, about what happens when that kind of personal connection ends, and living memories fall off the continental shelf into the deep ocean of history. I suspect the Holocaust may seem no more real, to many born in this century, than the Spanish Inquisition.
I don’t know if Seymour and Brownie ever read “It Can’t Happen Here” but I am pretty sure they’d have thought it absolutely can, they’d be even more horrified in this moment than many of us are, and they’d reject the fatalism that I see taking root among friends and acquaintances.
“It hasn’t happened yet,” they’d say, “you can still prevent it, do not despair prematurely, there is still time, but you must find a way to focus your efforts and unite all whose votes can matter.”
For a long time there were only two essential things that I carried everywhere: keys and wallet. Two was a manageable number of objects that I had to remember to put into pockets, and two was a manageable number of pockets to put them in.
Then my first phone bumped that number to three. When reading glasses became the fourth must-carry item, it started to feel like there were too many objects to always remember and too few pockets to put them in. When the seasons changed, or when traveling, it got harder to reset the canonical locations for all four things.
Although I listen to tons of podcasts, headphones never made the list of always-carry items. But when I emptied my pockets the other day I realized that my magic number is now five. AirPods are the new take-everywhere item.
For a while I resisted the recommendation to upgrade from a wired headset to AirPods. Did I really need another small, rechargeable, easy-to-lose object (actually, three of them)? I’ve learned not to expect that yet another electronic gadget will improve my life. But this one has. Dave Winer, you were right.
Obviously this trend can’t continue indefinitely. Will that thing we anachronistically call a “phone” absorb the wallet, and maybe even the keys? I’m not sure how I feel about that!
Meanwhile, there’s my trusty belt pack. It’s dorky but there’s a pocket for everything, and it works consistently across seasons and continents.
Here’s the latest installment in the series on working with LLMs: https://thenewstack.io/human-insight-llm-grunt-work-creative-publishing-solution/
Although streamlined publishing of screenshots is nice, the biggest win comes from reviewing and revising in Google Docs, which, for better or worse, has become the de facto collaboration standard for many of us.
I am ridiculously pleased with this solution.
The rest of the series:
1 When the rubber duck talks back
2 Radical just-in-time learning
3 Why LLM-assisted table transformation is a big deal
4 Using LLM-Assisted Coding to Write a Custom Template Function
5 Elevating the Conversation with LLM Assistants
6 How Large Language Models Assisted a Website Makeover
7 Should LLMs Write Marketing Copy?
8 Test-Driven Development with LLMs: Never Trust, Always Verify
9 Learning While Coding: How LLMs Teach You Implicitly
10 How LLMs Helped Me Build an ODBC Plugin for Steampipe
11 How to Use LLMs for Dynamic Documentation
12 Let’s talk: conversational software development
13 Using LLMs to Improve SQL Queries
14 Puzzling over the Postgres Query Planner with LLMs
15 7 Guiding Principles for Working with LLMs
16 Learn by Doing: How LLMs Should Reshape Education
17 How to Learn Unfamiliar Software Tools with ChatGPT
18 Creating a GPT Assistant That Writes Pipeline Tests
19 Using AI to Improve Bad Business Writing
20 Code in Context: How AI Can Help Improve Our Documentation
21 The Future of SQL: Conversational Hands-on Problem Solving
22 Pairing With AI: A Senior Developer’s Journey Building a Plugin
23 How LLMs Can Unite Analog Event Promotion and Digital Calendars
Here’s the latest installment in the series on working with LLMs: https://thenewstack.io/techniques-for-using-llms-to-help-write-a-postgres-function/
It was only after I deployed the function and used it in three different dashboards that the penny finally dropped. This had begun as an exercise to wrap a SQL idiom that made use of the lag() function. But when I looked at the final version of the function, I realized it wasn’t using lag(). Nor was it using dense_rank(). Just row_number() with partition logic was sufficient. Why, I asked ChatGPT, wasn’t that initially obvious?
ChatGPT: The final solution using ROW_NUMBER() and partition logic was not immediately apparent due to a combination of reliance on familiar patterns, the evolving understanding of requirements, and the iterative nature of problem-solving. This process highlights the importance of flexibility and openness to re-evaluating initial approaches to find the simplest and most effective solution.
It’s a good answer that synthesizes wisdom drawn from the human minds represented in the corpus of SQL queries and related conversations that ChatGPT feeds on.
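To see why ROW_NUMBER() with partition logic sufficed, here’s the idiom re-expressed in plain JavaScript. This is an illustration of the general window-function pattern, not the actual Postgres function from the post; the function and field names are mine.

```javascript
// Illustrative sketch of ROW_NUMBER() OVER (PARTITION BY key ORDER BY ts):
// group rows by a partition key, order within each group, and number them.
function rowNumber(rows, partitionKey, orderKey) {
 // Group rows by the partition key, preserving first-seen group order.
 const groups = new Map();
 for (const row of rows) {
 const k = row[partitionKey];
 if (!groups.has(k)) groups.set(k, []);
 groups.get(k).push(row);
 }
 // Within each group, sort by the order key and assign 1-based row numbers.
 const out = [];
 for (const group of groups.values()) {
 group.sort((a, b) => (a[orderKey] < b[orderKey] ? -1 : 1));
 group.forEach((row, i) => out.push({ ...row, row_number: i + 1 }));
 }
 return out;
}

// Example: number each user's events in time order.
const events = [
 { user: "a", ts: 2 }, { user: "b", ts: 1 },
 { user: "a", ts: 1 }, { user: "b", ts: 3 }
];
const numbered = rowNumber(events, "user", "ts");
```

No lag(), no dense_rank(): once the rows are partitioned and ordered, a simple counter is all the idiom requires.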
The rest of the series:
1 When the rubber duck talks back
2 Radical just-in-time learning
3 Why LLM-assisted table transformation is a big deal
4 Using LLM-Assisted Coding to Write a Custom Template Function
5 Elevating the Conversation with LLM Assistants
6 How Large Language Models Assisted a Website Makeover
7 Should LLMs Write Marketing Copy?
8 Test-Driven Development with LLMs: Never Trust, Always Verify
9 Learning While Coding: How LLMs Teach You Implicitly
10 How LLMs Helped Me Build an ODBC Plugin for Steampipe
11 How to Use LLMs for Dynamic Documentation
12 Let’s talk: conversational software development
13 Using LLMs to Improve SQL Queries
14 Puzzling over the Postgres Query Planner with LLMs
15 7 Guiding Principles for Working with LLMs
16 Learn by Doing: How LLMs Should Reshape Education
17 How to Learn Unfamiliar Software Tools with ChatGPT
18 Using AI to Improve Bad Business Writing
19 Code in Context: How AI Can Help Improve Our Documentation
20 The Future of SQL: Conversational Hands-on Problem Solving
21 Pairing With AI: A Senior Developer’s Journey Building a Plugin
22 How LLMs Can Unite Analog Event Promotion and Digital Calendars