Wandering Thoughts

2025-11-16

People are sending HTTP requests with X-Forwarded-For across the Internet

Over on the Fediverse, I shared a discovery that came from turning over some rocks here on Wandering Thoughts:

This is my face when some people out there on the Internet send out HTTP requests with X-Forwarded-For headers, and maybe even not maliciously or lying. Take a bow, ZScaler.

The HTTP X-Forwarded-For header is something that I normally expect to see only on something behind a reverse proxy, where the reverse proxy frontend uses it to tell the backend the real originating IP (which the backend otherwise can't see, since all it gets is an HTTP connection from the frontend). As a corollary of this usage, if you're operating a reverse proxy frontend you want to remove or rename any X-Forwarded-For headers that you receive from the HTTP client, because the client may be trying to fool your backend about who it is. You can use another X- header name for this purpose if you want, but using X-Forwarded-For has the advantage that it's a de facto standard, so random reverse-proxy-aware software is likely to have an option to look at X-Forwarded-For.
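
To illustrate the backend side of this, here's a minimal sketch (as a small WSGI-style helper) of only believing X-Forwarded-For when the request actually came through your own frontend. The frontend address and the assumption that the frontend appends the real client IP as the last entry are hypothetical details for illustration, not a description of any particular setup here.

    # A minimal sketch, not anyone's production code: only believe
    # X-Forwarded-For when the direct peer is our own reverse proxy
    # frontend (the address here is a made-up assumption).
    TRUSTED_FRONTEND = "127.0.0.1"

    def real_client_ip(environ):
        remote = environ.get("REMOTE_ADDR", "")
        xff = environ.get("HTTP_X_FORWARDED_FOR", "")
        if remote == TRUSTED_FRONTEND and xff:
            # Our frontend appends the real client IP as the last element,
            # so that's the only entry we can rely on.
            return xff.split(",")[-1].strip()
        # Direct connections (and untrusted senders) are judged by their own IP.
        return remote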

(See, for example, the security and privacy concerns section of the MDN page.)

Wandering Thoughts doesn't run behind a reverse proxy, and so I assumed that I wouldn't see X-Forwarded-For headers if I looked for them. More exactly, I assumed that I could take the presence of an X-Forwarded-For header as an indication of a bad request. As I found out, this doesn't seem to be the case; one source of apparently legitimate traffic to Wandering Thoughts appears to attach what are probably legitimate X-Forwarded-For headers to requests going through it. I believe this particular place operates partly as a (forward) HTTP proxy; if they aren't making up the X-Forwarded-For IP addresses, they're willing to leak the origin IPs of people using them to third parties.

All of this makes me more curious than usual to know what HTTP headers and header values show up on requests to Wandering Thoughts. But not curious enough to stick in logging, because that would be quite verbose unless I could narrow things down to only some requests. Possibly I should stick in logging that can be quickly turned on and off, so I can dump header information only briefly.
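
If I do add that, it might look something like this minimal sketch: a wrapper that dumps request headers only while a flag file exists, so logging can be switched on and off just by touching or removing the file. The flag file path and the WSGI middleware shape are assumptions for illustration; DWiki's actual internals are different.

    # A hedged sketch of quickly toggleable header logging; the flag file
    # path is made up and DWiki doesn't actually work this way.
    import logging
    import os

    FLAG_FILE = "/tmp/dwiki-dump-headers"   # touch/remove to turn logging on/off

    def header_dump_middleware(app):
        def wrapped(environ, start_response):
            if os.path.exists(FLAG_FILE):
                headers = {k[5:].replace("_", "-").title(): v
                           for k, v in environ.items() if k.startswith("HTTP_")}
                logging.warning("headers for %s: %r",
                                environ.get("PATH_INFO", "?"), headers)
            return app(environ, start_response)
        return wrapped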

(These days I've periodically wound up in a mood to hack on DWiki, the underlying engine behind Wandering Thoughts. It reminds me that I enjoy programming.)

web/XForwardedForOutThere written at 22:49:26; Add Comment

2025-11-15

We haven't seen ZFS checksum failures for a couple of years

Over on the Fediverse I mentioned something about our regular ZFS scrubs:

Another weekend, another set of ZFS scrubs of work's multiple terabytes of data sitting on a collection of consumer 4 TB SSDs (mirrored, we aren't crazy, and also we have backups). As usual there is not a checksum error to be seen. I think it's been years since any came up.

I accept that SSDs decay (we've had some die, of course) and random read errors happen, but our ZFS-based experience across both HDDs and SSDs has been that the rate is really low for us. Probably we're not big enough.

We regularly scrub our pools through automation, currently once every few weeks. Back in 2022 I wrote about us seeing only a few errors since we moved to SSDs in 2018, and since then I'd had the impression that everything had stayed quiet. Hand-checking our records tells me that I'm slightly wrong about this: we had some errors on our fileservers in 2023, but none since.

  • Starting in January of 2023, one particular SSD began experiencing infrequent read and checksum errors that persisted (off and on) through early March of 2023, when we gave in and replaced it. This was a relatively new 4 TB SSD that had only been in service for a few months at the time.

  • In late March of 2023 we saw a checksum error on a disk that later in the year (in November) experienced some read errors, and then in late February of 2024 had read and write errors. We replaced the disk at that point.

I believe these two SSDs are the only ones that we've replaced since 2022, although I'm not certain and we've gone through a significant amount of SSD shuffling since then for reasons outside the scope of this entry. That shuffling means that I'm not going to try to give any number for what percentage of our fileserver SSDs have had problems.

In the first case, the checksum errors were effectively a lesser form of the read errors we saw at the same time, so it was obvious the SSD had problems. In the second case the checksum error may have been a very early warning sign of what later became an obvious slow SSD failure. Or it could be coincidence.

(It also could be that modern SSDs have so much internal error checking and correction that if there is some sort of data rot or mis-read it's most likely to be noticed inside the SSD and create a read failure at the protocol level (SAS, SATA, NVMe, etc).)

I definitely believe that disk read errors and slow disk failures happen from time to time, and if you have a large enough population of disks (SSDs or HDDs or both) you definitely need to worry about these problems. We get all sorts of benefits from ZFS checksums and ZFS scrubs, and the peace of mind about this is one of them. But it looks like we're not big enough to have run into this across our fileserver population.

(At the moment we have 114 4 TB SSDs in use across our production fileservers.)

solaris/ZFSOurRareChecksumFailuresII written at 23:04:17; Add Comment

2025-11-14

OIDC, Identity Providers, and avoiding some obvious security exposures

OIDC (and OAuth2) has some frustrating elements that make it harder for programs to support arbitrary identity providers (as discussed in my entry on the problems facing MFA-enabled IMAP in early 2025). However, my view is that these elements exist for good reason, and the ultimate reason is that an OIDC-like environment is by default an obvious security exposure (or several of them). I'm not sure there's any easy way around the entire set of problems that push towards these elements or something quite like them.

Let's imagine a platonically ideal OIDC-like identity provider for clients to use, something that's probably much like the original vision of OpenID. In this version, people (with accounts) can authenticate to the identity provider from all over the Internet, and it will provide them with a signed identity token. The first problem is that we've just asked identity providers to set up an Internet-exposed account and password guessing system. Anyone can show up and try it out, and best of all (from an attacker's perspective), if it works they don't just get current access to something, they get an identity token.

(Within a trusted network, such as an organization's intranet, this exposed authentication endpoint is less of a concern.)

The second problem is that identity token itself: the IdP doesn't actually provide the identity token to the person, it provides the token to something that asked for it on the person's behalf. One of the uses of that identity token is to present it to other things to demonstrate that you're acting on the person's behalf; for example, your IMAP client presents it to your IMAP server. If what the identity token is valid for isn't restricted in some way, a malicious party could get you to 'sign up with your <X> ID' for their website, take the identity token it got from the IdP, and reuse it with your IMAP server.

To avoid these issues, this identity token must have a limited scope (and everything that uses identity tokens needs to check that the token is actually for them). This implies that you can't just ask for an identity token in general; you have to ask for one for use with something specific. As a further safety measure, the identity provider doesn't want to give such a scoped token to anything except the thing that's supposed to get it. You (an attacker) should not be able to tell the identity provider 'please create a token for webserver X, and give it to me, not webserver X' (this is part of the restrictions on OIDC redirect URIs).
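
As a concrete illustration of 'check that the token is actually for them', here's a minimal sketch of the sort of audience check a token consumer might do, here using PyJWT; the key, signing algorithm, and client ID are all assumed, hypothetical values.

    # A minimal sketch of the consumer-side check, assuming an RS256-signed
    # ID token and that we already have the IdP's public key; the names and
    # values here are hypothetical.
    import jwt  # PyJWT

    def validate_for_us(id_token, idp_public_key, our_client_id):
        try:
            claims = jwt.decode(
                id_token,
                key=idp_public_key,
                algorithms=["RS256"],
                audience=our_client_id,  # reject tokens issued for someone else
            )
        except jwt.InvalidAudienceError:
            return None  # a perfectly valid token, just not one meant for us
        except jwt.PyJWTError:
            return None  # expired, bad signature, and so on
        return claims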

In OIDC, what deals with much of this risk is client IDs, optionally client secrets, and redirect URIs. Client IDs are used to limit what an identity token can be used for and where it can be sent (in combination with redirect URIs), and a client secret can be used by something getting a token to prove that it really is the client it claims to be. If you don't have the right information, the OIDC IdP won't even talk to you. However, this means that all of this information has to be given to the client, or at least obtained by the client and stored by it.

(These days OIDC has a specification for Dynamic Client Registration and can support 'open' dynamic registration of clients, if desired (although it's apparently not widely implemented). But clients do have to register to get the risk-mitigating information for the main IdP endpoint, and I don't know how this is supposed to handle the IMAP situation if the IMAP server wants to verify that the OIDC token it receives was intended for it, since each dynamic client will have a different client ID.)

tech/OIDCIdentityProviderAndSecurity written at 23:40:57; Add Comment

2025-11-13

My script to 'activate' Python virtual environments

After I wrote about Python virtual environments and source code trees, I impulsively decided to set up the development tree of our Django application to use a Django venv instead of a 'pip install --user' version of Django. Once I started doing this, I quickly decided that I wanted a general script that would switch me into a venv. This sounds a little bit peculiar if you know Python virtual environments, so let me explain.

Activating a Python virtual environment mostly means making sure that its 'bin' directory is first on your $PATH, so that 'python3' and 'pip' and so on come from it. Venvs come with files that can be sourced into common shells in order to do this (with the one for Bourne shells called 'activate'), but for me this has three limitations: you have to use the full path to the script, it changes your current shell environment instead of giving you a new one that you can just exit to discard the 'activation', and I use a non-standard shell that these scripts don't work in.

My 'venv' script is designed to work around all three of those limitations. As a script, it starts a new shell (or runs a command) instead of changing my current shell environment, and I set it up so that it knows my standard place to keep virtual environments (and then I made it so that I can use symbolic links to create 'django' as the name of 'whatever my current Django venv is').

(One of the reasons I want my 'venv' command to default to running a shell for me is that I'm putting the Python LSP server into my Django venvs, so I want to start GNU Emacs from an environment with $PATH set properly to get the right LSP server.)

My initial version only looked for venvs in my standard location for development related venvs. But almost immediately after starting to use it, I found that I wanted to be able to activate pipx venvs too, so I added ~/.local/pipx/venvs to what I really should consider to be a 'venv search path' and formalize into an environment variable with a default value.
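
To make the general shape of this concrete, here's a minimal sketch of such a script. It's not my actual script; the VENVPATH environment variable name and the default search path locations are made-up examples.

    #!/usr/bin/env python3
    # A minimal sketch of a 'venv' wrapper, not my real script; the VENVPATH
    # variable name and the default locations are made-up examples.
    import os
    import sys

    DEFAULT_PATH = (os.path.expanduser("~/lib/venvs") + ":" +
                    os.path.expanduser("~/.local/pipx/venvs"))
    SEARCH_PATH = os.environ.get("VENVPATH", DEFAULT_PATH).split(":")

    def find_venv(name):
        # Return the first thing on the search path that looks like a venv,
        # resolving symlinks so 'django' can point at the current Django venv.
        for d in SEARCH_PATH:
            cand = os.path.join(d, name)
            if os.path.exists(os.path.join(cand, "bin", "python3")):
                return os.path.realpath(cand)
        sys.exit("venv: no virtual environment called '%s' found" % name)

    def main():
        if len(sys.argv) < 2:
            sys.exit("usage: venv <name> [command ...]")
        venv = find_venv(sys.argv[1])
        os.environ["PATH"] = os.path.join(venv, "bin") + os.pathsep + os.environ["PATH"]
        os.environ["VIRTUAL_ENV"] = venv
        # With no command, start a shell; exiting it discards the 'activation'.
        cmd = sys.argv[2:] or [os.environ.get("SHELL", "/bin/sh")]
        os.execvp(cmd[0], cmd)

    if __name__ == "__main__":
        main()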

I've stuffed a few other features into the venv script. It will print out the full path to the venv if I ask it to (in addition to running a command, which can be just 'true'), or something to set $PATH. I also found I sometimes wanted it to change directory to the root of the venv. Right now I'm still experimenting with how I want to build other scripts on top of this one, so some of this will probably change in time.

One of my surprises about writing the script is how much nicer it's made working with venvs (or working with things in venvs). There's nothing it does that wasn't possible before, but the script has removed friction (more friction than I realized was there, which is traditional for me).

PS: This feels like a sufficiently obvious idea that I suspect that a lot of people have written 'activate a venv somewhere along a venv search path' scripts. There's unlikely to be anything special about mine, but it works with my specific shell.

python/MyVenvActivationScript written at 22:27:02; Add Comment

2025-11-12

Getting feedback as a small web crawler operator

Suppose, hypothetically, that you're trying to set up a small web crawler for a good purpose. These days you might be focused on web search for text-focused sites, or small human-written sites, or similar things, and certainly, given the bad things that are happening with the major crawlers, we could use more such crawlers. As a small crawler operator, you might want to get feedback and problem reports from web site operators about what your crawler is doing (or not doing). As it happens, I have some advice and views on this.

  • Above all, remember that you are not Google or even Bing. Web site operators need Google to crawl them, and they have no choice but to bend over backward for Google and to send out plaintive signals into the void if Googlebot is doing something undesirable. Since you're not Google and you need websites much more than they need you, the simplest thing for website operators to do with and about your crawler is to ignore the issue, potentially block you if you're causing problems, and move on.

    You cannot expect people to routinely reach out to you. Anyone who does reach out to you is axiomatically doing you a favour, at the expense of some amount of their limited time and at some risk to themselves.

  • Website operators have no reason to trust you or trust that problem reports will be well received. This is a lesson plenty of people have painfully learned from reporting spam (email or otherwise) and other abuse; a lot of the time your reports can wind up in the hands of people who aren't well intentioned toward you (either going directly to them or 'helpfully' being passed on by the ISP). At best you confirm that your email address is alive and get added to more spam address lists; at worst you get abused in various ways.

    The consequence of this is that if you want to get feedback, you should make it as low-risk as possible for people. The lowest risk way (to website operators) is for you to have a feedback form on your site that doesn't require email or other contact methods. If you require that website operators reveal their email addresses, social media handles, or whatever, you will get much less feedback (this includes VCS forge handles if you force them to make issue reports on some VCS forge).

    (This feedback form should be easy to find, for example being directly linked from the web crawler information URL in your User-Agent.)

  • As far as feedback goes, both your intentions and your views on the reasonableness of what your web crawler is doing (and how someone's website behaves) are irrelevant. What matters are the views of website operators, who are generally doing you a favour by not simply blocking or ignoring your crawler and moving on. If you disagree with their feedback, the best thing to do is be quiet (and maybe say something neutral if they ask for a reply). This is probably most important if your feedback happens through a public VCS forge issue tracker, where future people who are thinking about filing an issue the way you asked may skim over past issues to see how they went.

    (You may or may not ignore website operator feedback that you disagree with depending on how much you want to crawl (all of) their site.)

At the moment, most website operators who notice a previously unknown crawler will likely assume that it's an (abusive) LLM crawler. One way to lower the chances of this is to follow social conventions around crawlers for things like crawler User-Agents and not setting the Referer header. I don't think you have to completely imitate how Googlebot, bingbot, Applebot, the archive.org bot and so on format their User-Agent strings, but it's going to help to generally look like them and clearly put the same sort of information into yours. Similarly, if you can, it will help to crawl from clearly identified IPs with reverse DNS. The more that people think you're legitimate and honest, the more likely they are to spend the time and take the risk to give you feedback; the more sketchy or even uncertain you look, the less likely you are to get feedback.

(In general, any time you make website operators uncertain about an aspect of your web crawler, some number of them will not be charitable in their guess. The more explicit and unambiguous you are in the more places, the better.)
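
As an illustration of the User-Agent conventions mentioned above, a small crawler's User-Agent might look roughly like this (the crawler name, URL, and contact address are obviously made up):

    ExampleCrawler/1.2 (+https://crawler.example.org/about.html; crawler@example.org)

The important parts are a distinct name, a version, and an easy to find URL that explains what the crawler is, who runs it, and how to give feedback.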

Building and running a web crawler is not an easy thing on today's web. It requires both technical knowledge of various details of HTTP and how you're supposed to react to things (eg), and current social knowledge of what is customary and expected of web crawlers, as well as what you may need to avoid (for example, you may not want to start your User-Agent with 'Mozilla/5.0' any more, and in general the whole anti-crawling area is rapidly changing and evolving right now). Many website operators revisit blocks and other reactions to 'bad' web crawlers only infrequently, so you may only get one chance to get things right. This expertise can't be outsourced to a random web crawling library because many of them don't have it either.

(While this entry was sparked by a conversation I had on the Fediverse, I want to be explicit that it is in no way intended as a subtoot of that conversation. I just realized that I had some general views that didn't fit within the margins of Fediverse posts.)

web/SmallCrawlersGettingFeedback written at 23:17:17; Add Comment

2025-11-11

Firefox's sudden weird font choice and fixing it

Today, while I was in the middle of using my normal browser instance, it decided to switch from DejaVu Sans to Noto Sans as my default font:

Dear Firefox: why are you using Noto Sans all of a sudden? I have you set to DejaVu Sans (and DejaVu everything), and fc-match 'sans' and fc-match serif both say they're DejaVu (and give the DejaVu TTF files). This is my angry face.

This is a quite noticeable change for me because it changes the font I see on Wandering Thoughts, my start page, and other things that don't set any sort of explicit font. I don't like how Noto Sans looks and I want DejaVu Sans.

(I found out that it was specifically Noto Sans that Firefox was using all of a sudden through the Web Developer tools' 'Fonts' information, and confirmed through Settings that Firefox should still be using DejaVu.)

After some flailing around, it appears that what I needed to do to fix this was to explicitly set about:config's font.name.serif.x-western, font.name.sans-serif.x-western, and font.name.monospace.x-western to specific values instead of leaving them set to nothing; the empty settings seem to have caused Firefox to arrive at Noto Sans through some mysterious process (since the generic system font name 'sans' was still mapping to DejaVu Sans). I don't know if these preferences are exposed through the Fonts advanced options in Settings → General, which are (still) confusing in general. It's possible that they're what's used for 'Latin'.

(I used to be using the default 'sans', 'serif', and 'monospace' font names that cascaded through to the DejaVu family. Now I've specifically set everything to the DejaVu set, because if something in Fedora or Firefox decides that the default mapping should be different, I don't want Firefox to follow it, I want it to stay with DejaVu.)
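
Concretely, the change amounts to giving all three preferences explicit DejaVu values, roughly along these lines (shown as an illustration of the idea, not a copy of my exact settings):

    font.name.serif.x-western        DejaVu Serif
    font.name.sans-serif.x-western   DejaVu Sans
    font.name.monospace.x-western    DejaVu Sans Mono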

I don't know why Firefox would suddenly decide these pages are 'western' instead of 'unicode'; all of them are served as or labeled as UTF-8, and nothing about that has changed recently. Unfortunately, as far as I know there's no way to get Firefox to tell you which font.name preference it used to pick the (default) fonts for an HTML document. When it sends HTTP 304 Not Modified responses, Wandering Thoughts doesn't include a Content-Type header (with the UTF-8 character set), but as far as I know that's standard behavior and browsers presumably cope with it.

(Firefox does see 'Noto Sans' as a system UI font, which it uses on things like HTML form buttons, so it didn't come from nowhere.)

It makes me sad that Firefox continues to have no global default font choice. You can set 'Unicode' but as I've just seen, this doesn't make what you set there the default for unset font preferences, and the only way to find out what unset font preferences you have is to inspect about:config.

PS: For people who aren't aware of this, it's possible for Firefox to forget some of your about:config preferences. Working around this probably requires using Firefox policies (via), which can force-set arbitrary about:config preferences (among other things).

web/FirefoxSuddenWeirdFontChoice written at 23:03:46; Add Comment

2025-11-10

Discovering orphaned binaries in /usr/sbin on Fedora 42

Over on the Fediverse, I shared a somewhat unwelcome discovery I made after upgrading to Fedora 42:

This is my face when I have quite a few binaries in /usr/sbin on my office Fedora desktop that aren't owned by any package. Presumably they were once owned by packages, but the packages got removed without the files being removed with them, which isn't supposed to happen.

(My office Fedora install has been around for almost 20 years now without being reinstalled, so things have had time to happen. But some of these binaries date from 2021.)
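
Finding these is straightforward with 'rpm -qf', which fails for files that no package owns. A minimal sketch (a shell loop would do just as well):

    # A minimal sketch: list /usr/sbin binaries not owned by any RPM package.
    import os
    import subprocess

    for name in sorted(os.listdir("/usr/sbin")):
        path = os.path.join("/usr/sbin", name)
        res = subprocess.run(["rpm", "-qf", path],
                             capture_output=True, text=True)
        if res.returncode != 0:   # 'rpm -qf' fails for unowned files
            print(path)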

There seem to be two sorts of these lingering, unowned /usr/sbin programs. One sort, such as /usr/sbin/getcaps, seems to have been left behind when its package moved things to /usr/bin, possibly due to this RPM bug (via). The other sort is genuinely unowned programs dating to anywhere from 2007 (at the oldest) to 2021 (at the newest), which have nothing else left of them sitting around. The newest programs are what I believe are wireless management programs: iwconfig, iwevent, iwgetid, iwlist, iwpriv, and iwspy, and also "ifrename" (which I believe was also part of a 'wireless-tools' package). I had the wireless-tools package installed on my office desktop until recently, but I removed it some time during Fedora 40, probably sparked by the /sbin to /usr/sbin migration, and it's possible that binaries didn't get cleaned up properly due to that migration.

The most interesting orphan is /usr/sbin/sln, dating from 2018, when apparently various people discovered it as an orphan on their system. Unlike all the other orphan programs, the sln manual page is still shipped as part of the standard 'man-pages' package and so you can read sln(8) online. Based on the manual page, it sounds like it may have been part of glibc at one point.

(Another orphaned program from 2018 is pam_tally, although it's coupled to pam_tally2.so, which did get removed.)

I don't know if there's any good way to get mappings from files to RPM packages for old Fedora versions. If there is, I'd certainly pick through it to try to find where various of these files came from originally. Unfortunately I suspect that for sufficiently old Fedora versions, much of this information is either offline or can't be processed by modern versions of things like dnf.

(The basic information is used by eg 'dnf provides' and can be built by hand from the raw RPMs, but I have no desire to download all of the RPMs for decade-old Fedora versions even if they're still available somewhere. I'm curious but not that curious.)

PS: At the moment I'm inclined to leave everything as it is until at least Fedora 43, since RPM bugs are still being sorted out here. I'll have to clean up genuinely orphaned files at some point but I don't think there's any rush. And I'm not removing any more old packages that use '/sbin/<whatever>', since that seems like it has some bugs.

linux/Fedora42OrphanUsrSbinBinaries written at 23:10:34; Add Comment

2025-11-09

Python virtual environments and source code trees

Python virtual environments are mostly great for actually deploying software. Provided that you're using the same version of Python (3) everywhere (including CPU architecture), you can make a single directory tree (a venv) and then copy and move it around freely as a self-contained artifact. It's also relatively easy to use venvs to switch the version of packages or programs you're using, for example Django. However, venvs have their frictions, at least for me, and often I prefer to do Python development outside of them, especially for our Django web application.

(This means using 'pip install --user' to install things like Django, to the extent that it's still possible.)

One point of friction is in their interaction with working on the source code of our Django web application. As is probably common, this source code lives in its own version control system controlled directory tree (we use Mercurial for this for reasons). If Django is installed as a user package, the native 'python3' will properly see it and be able to import Django modules, so I can directly or indirectly run Django commands with the standard Python and my standard $PATH.

If Django is installed in a venv, I have two options. The manual way is to always make sure that this Django venv is first on my $PATH before the system Python, so that 'python3' is always from the venv and not from the system. This poses a bit of a challenge for Python scripts, and is one of the few places where '#!/usr/bin/env python3' makes sense. In my particular environment it requires extra work because I don't use a standard Unix shell, and so I can't use any of the venv bin/activate things to do all the work for me.

The automatic way is to make all of the convenience scripts that I use to interact with Django explicitly specify the venv python3 (including for things like running a test HTTP server and invoking local management commands), which works fine since a program can be outside the venv it uses. This leaves me with the question of where the Django venv should be, and especially if it should be outside the source tree or in a non-VCS-controlled path inside the tree. Outside the source tree is the pure option but leaves me with a naming problem that has various solutions. Inside the source tree (but not VCS controlled) is appealingly simple but puts a big blob of otherwise unrelated data into the source tree.

(Of course I could do both at once by having a 'venv' symlink in the source tree, ignored by Mercurial, that points to wherever the Django venv is today.)
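
As a small illustration of the 'automatic way', a convenience script only needs to know where the venv's python3 is and doesn't itself have to run from inside the venv. The paths here are hypothetical:

    #!/usr/bin/env python3
    # A hedged sketch of a convenience wrapper that runs Django management
    # commands with a specific venv's python3; both paths are hypothetical.
    import os
    import sys

    VENV_PYTHON = os.path.expanduser("~/lib/venvs/django/bin/python3")
    MANAGE_PY = os.path.expanduser("~/src/djapp/manage.py")

    os.execv(VENV_PYTHON, [VENV_PYTHON, MANAGE_PY] + sys.argv[1:])

Since the wrapper doesn't care what python3 it runs under itself, it works the same whether it lives inside or outside the venv.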

Since 'pip install --user' seems more and more deprecated as time goes by, I should probably move to developing with a Django venv sooner or later. I will probably use a venv outside the source tree, and I haven't decided about an in-tree symlink.

(I'll still have the LSP server problem but I have that today. Probably I'll install the LSP server into the Django venv.)

PS: Since this isn't a new problem, the Python community has probably come up with some best practices for dealing with it. But in today's Internet search environment I have no idea how to find reliable sources.

python/VenvsVsSourceTrees written at 23:22:12; Add Comment

2025-11-08

A HTTP User-Agent that claims to be Googlebot is now a bad idea

Once upon a time, people seem to have had a little thing for mentioning Googlebot in their HTTP User-Agent header, much like browsers threw in claims to make them look like Firefox or whatever (the ultimate source of the now-ritual 'Mozilla/5.0' at the start of almost every browser's User-Agent). People might put in 'allow like Googlebot' or just say 'Googlebot' in their User-Agent. Some people are still doing this today, for example:

Gwene/1.0 (The gwene.org rss-to-news gateway) Googlebot

This is now an increasingly bad idea on the web, and if you're doing it, you should stop. The problem is that there are various malicious crawlers out there claiming to be Googlebot, and Google publishes their crawler IP address ranges. Anything claiming to be Googlebot that is not from a listed Google IP is extremely suspicious, and in this day and age of increasing anti-crawler defenses, blocking all 'Googlebot' activity that isn't from one of their listed IP ranges is an obvious thing to do. Web sites may go even further and immediately taint the IP address or IP address range involved in impersonating Googlebot, blocking or degrading further requests regardless of the User-Agent.

(Gwene is not exactly claiming to be Googlebot but they're trying to get simple Googlebot-recognizers to match them against Googlebot allowances. This is questionable at best. These days such attempts may do more harm than good as they get swept up in precautions against Googlebot forgery, or rules that block Googlebot from things it shouldn't be fetching, like syndication feeds.)

A similar thing applies to bingbot and the User-Agents of other prominent web search engines, and Bing does publish their IP address ranges. However, I don't think I've ever seen someone impersonate bingbot (which probably doesn't surprise anyone). I don't know if anyone ever impersonates Archive.org (no one has in the past week here), but it's possible that crawler operators will fish to see if people give special allowances to them that can be exploited.

(The corollary of this is that if you have a website, an extremely good signal of bad traffic is someone impersonating Googlebot, and you could probably block that easily. I think this would be fairly simple to do with an Apache <If> clause that then allows Googlebot's listed IP addresses and denies everything else, but I haven't actually tested it.)
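
For what it's worth, an untested sketch of that Apache approach using the modern 'Require' syntax might look like the following; the IP range shown is only illustrative and you'd want Google's full current published list:

    # An untested sketch, not a tested recommendation: anything claiming to
    # be Googlebot must come from a listed Google crawler address range.
    <If "%{HTTP_USER_AGENT} =~ /Googlebot/">
        # Illustrative range only; use Google's current published list.
        Require ip 66.249.64.0/19
    </If>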

web/GooglebotClaimsBadIdea written at 23:04:04; Add Comment

2025-11-07

Containers and giving up on expecting good software installation practices

Over on the Fediverse, I mentioned a grump I have about containers:

As a sysadmin, containers irritate me because they amount to abandoning the idea of well done, well organized, well understood, etc installation of software. Can't make your software install in a sensible way that people can control and limit? Throw it into a container, who cares what it sprays where across the filesystem and how much it wants to be the exclusive owner and controller of everything in sight.

(This is a somewhat irrational grump.)

To be specific, it's by and large abandoning the idea of well done installs of software on shared servers. If you're only installing software inside a container, your software can spray itself all over the (container) filesystem, put itself in hard-coded paths wherever it feels like, and so on, even if you have completely automated instructions for how to get it to do that inside a container image that's being built. Some software doesn't do this and is well mannered when installed outside a container, but some software does, and you'll find notes to the effect that the only supported way of installing it is 'here is this container image', or 'here are the automated instructions for building a container image'.

To be fair to containers, some of this is due to missing Unix APIs (or APIs that theoretically exist but aren't standardized). Do you want multiple Unix logins for your software so that it can isolate different pieces of itself? There's no automated way to do that. Do you run on specific ports? There's generally no machine-readable way to advertise that, and people may want you to build in mechanisms to vary those ports and then specify the new ports to other pieces of your software (that would all be bundled into a container image). And so on. A container allows you to put yourself in an isolated space of Unix UIDs, network ports, and so on, one where you won't conflict with anyone else and won't have to try to get the people who want to use your software to create and manage the various details (because you've supplied either a pre-built image or reliable image building instructions).

But I don't have to be happy that software doesn't necessarily even try, that we seem to be increasingly abandoning much of the idea of running services in shared environments. Shared environments are convenient. A shared Unix environment gives you a lot of power and avoids a lot of complexity that containers create. Fortunately there's still plenty of software that is willing to be installed on shared systems.

(Then there is the related grump that the modern Linux software distribution model seems to be moving toward container-like things, which has a whole collection of issues associated with it.)

sysadmin/ContainersAbandonInstalling written at 22:58:19; Add Comment

