Thursday, November 06, 2025

The Complexity Argument for Capitalism

We're seeing an attack on capitalism from both ends of the political spectrum, with the election of Democratic Socialist Zohran Mamdani as mayor of New York and Donald Trump trying to direct the economy through tariffs, reduced independence of the Federal Reserve, and partial government control of companies, such as a nearly 10% ownership stake in Intel and the "golden share" in U.S. Steel.

Now I'm not an economist and there are many arguments for and against socialism and capitalism, especially in their purest forms. I want to focus on the complexity of the economy. Let's start by considering rent control.

It starts with the worthy goal of preventing people from having to leave their apartments because of rising rents due to market demand. So you limit rent increases and the ability to evict current residents. But this could cause landlords to cut back on improvements, deprioritize service and repairs, or stop building new properties that would increase the supply. So you have to give landlords incentives to do these things, leading to other issues, which leads down a rabbit hole. I saw extreme rent control in Berkeley in the 1980s, and a lack of available housing forced me to live 13 miles away.

That's just rent control in a single city. Other attempts to control the economy will lead to unexpected inefficiencies, and you can't predict markets well enough to catch them all.

The 1920 essay Economic Calculation in the Socialist Commonwealth by Ludwig von Mises argued that market prices give critical information for companies and individuals to make decisions. I think of it like a circuit, where the prices are the values of intermediate gates, so that we can have simple operations at every level leading to very complex behavior in the aggregate, much like a neural net today. Friedrich Hayek went further in his 1945 essay The Use of Knowledge in Society, arguing that a central planner couldn't adapt with the speed of change in the markets.

Could today's neural nets replace the need for this intermediate pricing and adapt quickly? It's really a combination of the two: many companies are using sophisticated machine learning models that lead to more accurate pricing, adding to the efficiency of markets.

Capitalism has its challenges: companies can shrink or close, people can lose their jobs, and we need good safety nets. Government has a role to protect safety, enforce contracts and provide services, and some control where competition doesn't scale, like utilities and public transportation. Government also has a role in promoting public goods including education, scientific research and national defense.

While neither Mamdani nor Trump wants anything close to complete government control of the economy, I still worry that even their limited influence will hurt innovation and, in the longer run, hurt the very people these policies are meant to help.

Monday, November 03, 2025

Did Euclid exist? Is it okay to quote people who did not exist?


The following excerpt from Abrahim Ladha's comment on Lance's post about
AI and intro theory caught my attention:


---------------------------
BEGIN EXCERPT

Not just with AI, but in theory and math courses, there have always been Bart Simpson
level questions by students of the form

"Why do I need to know this?

like

Why do I need to know my times tables when the calculator does this for me?

There has never been a good answer because these students already do not have an open mind.
Euclid was asked such a question once and said something like

give him a penny so he may feel like he earned something for what he learned.

END EXCERPT
-----------------------------------

He gives a quote by Euclid.

1) I have heard that Euclid might not have really existed.

2) I have never heard that quote.

I asked Abrahim Ladha if he had a source for the Euclid quote.

I never got a response.

I asked chatty:

What is the historical evidence that Euclid existed?

and also

Did Euclid say give him a penny so he may feel like he earned something for what he learned?

The answers to both questions are below. The bottom line is that

1) Euclid might not have existed.

2) The idea underlying the quote is attributed to him. The exact quote is a modern rendering.


Should Abrahim have consulted chatty before making his comment?

1) YES, it's important to make sure your facts are correct.

2) NO, his point is still valid. (That is a slippery slope.)

----------------------------------------
BILL: What is the historical evidence that Euclid existed?

That’s a great question — and a surprisingly tricky one.

Short answer

There is no direct historical evidence that Euclid (often called Euclid of Alexandria) existed
as an individual in the way we can verify the existence of, say, Archimedes. What we “know”
about Euclid comes almost entirely from later sources, not from contemporary records.

The evidence we do have

1)
Type: Ancient testimonies

Source: Proclus, Commentary on the First Book of Euclid’s Elements

Date: ca. 450 CE

Note: Written about 700 years after Euclid’s supposed lifetime. Proclus says Euclid lived
in the time of Ptolemy I (reigned 323–283 BCE) and taught in Alexandria. This is our main source.

2)
Type: Mentions by other ancient authors

Source: Pappus of Alexandria (4th century CE), Diogenes Laërtius (possibly), and others

Date: 200–600 years later

Note: These references repeat information similar to Proclus’s account — that Euclid compiled
the Elements and worked in Alexandria.

3)
Type: Mathematical evidence

Source: The Elements and other works attributed to Euclid (Data, Optics, Phaenomena, etc.)

Date: ca. 300 BCE?

Note: These texts show a unified style and method, suggesting a single organizing mind
(or school). But there is no personal or biographical detail inside them.

4)
Type: Absence of contemporary evidence

Note: There are no letters, inscriptions, or administrative records naming Euclid. No ancient
biography (unlike Archimedes or Apollonius).

Scholarly interpretation

Traditional view: There was a historical Euclid, a mathematician who worked in Alexandria
around 300 BCE and compiled earlier Greek geometry into the Elements.

Skeptical or revisionist view: “Euclid” may have been a collective name for a team or a
school of mathematicians in early Hellenistic Alexandria, much like “Hippocrates” in medicine.

The Elements itself is consistent in style, so even if multiple people contributed, it was
edited or systematized by someone with extraordinary coherence and rigor.

In summary

There is no direct documentary or archaeological proof that Euclid existed as an individual,
only later testimony and internal evidence from the mathematical works. Historically, most
scholars accept that there was a person — or at least a leading mathematician — behind the
name “Euclid,” but his life remains almost entirely unknown.
---------------------

BILL: Did Euclid say

`give him a penny so he may feel like he earned something for what he learned.'

ChatGPT said:

The quote you’re referring to — something like “Give him a penny so he may feel he has learned
something” — is attributed to Euclid, but not in those exact modern words.

It comes from an ancient anecdote recorded by Proclus (a 5th-century commentator on Euclid) in
his Commentary on Euclid’s Elements. The story goes like this:

A student, after learning the first theorem of geometry, asked Euclid,

“What shall I get by learning these things?”

Euclid called his slave and said,

“Give him a penny, since he must make gain out of what he learns.”

In Greek, it’s roughly:

“Δός αὐτῷ ὀβολόν, ἐπειδὴ κερδαίνειν δεῖται ἀπὸ τῶν μαθημάτων.”


So your paraphrase captures the spirit of the anecdote — Euclid was mocking the idea of
studying geometry for material reward — but the exact wording

“give him a penny so he may feel he earned something for what he learned”

is a modern retelling, not a direct quotation.



Wednesday, October 29, 2025

AI and the Power of Nonuniform Circuits

The advances in artificial intelligence have changed the way we think about computing. For today's post: how nonuniformity plays a much larger role than I previously believed.

First, some background. Circuit complexity gives a different, more combinatorial, way to look at computation. Instead of viewing algorithms as step-by-step programs, circuit complexity captures computation as fixed hardware, gates wired to compute a function of their inputs.

Every polynomial-time computable language L can be computed by a family \(C_0,C_1,\ldots\) of polynomial-size circuits, where \(C_n\) has n inputs, the number of gates of \(C_n\) is bounded by \(n^c\) for some \(c\) and \(x=x_1\ldots x_n\) is in L iff \(C_n(x_1,\ldots,x_n)\) outputs 1.

The converse isn't true: there are languages L that have polynomial-size circuits but that you can't compute in polynomial time, because each \(C_n\) need not have any relation to \(C_m\) for \(n\neq m\). We can get the equivalence by adding a uniformity condition, that there is some easily computable function \(f\) with \(f(1^n)=C_n\). Or we can keep the nonuniformity and define a larger class called P/poly.
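To make the definition concrete, here is a minimal Python sketch (a toy encoding of my own, not a standard one) of a circuit family: each \(C_n\) is a list of gates, and the family is just a table with one circuit per input length. The table is the nonuniform part: nothing ties \(C_n\) to \(C_m\).

from typing import List, Tuple

Gate = Tuple[str, List[int]]       # op in {"AND", "OR", "NOT"}; inputs given by wire index
Circuit = List[Gate]               # wires 0..n-1 are the inputs, each gate appends a new wire

def evaluate(circuit: Circuit, x: List[int]) -> int:
    values = list(x)               # wire values: the n inputs, then each gate's output
    for op, ins in circuit:
        bits = [values[i] for i in ins]
        if op == "AND":
            values.append(int(all(bits)))
        elif op == "OR":
            values.append(int(any(bits)))
        elif op == "NOT":
            values.append(1 - bits[0])
    return values[-1]              # the last gate is the output

# The "family" is a table: one circuit per input length n.  Here C_2 computes XOR and
# C_3 computes AND, chosen arbitrarily to stress that the circuits need not be related.
family = {
    2: [("OR", [0, 1]), ("AND", [0, 1]), ("NOT", [3]), ("AND", [2, 4])],
    3: [("AND", [0, 1]), ("AND", [3, 2])],
}

x = [1, 0, 1]
print(evaluate(family[len(x)], x))  # uses C_3 on a length-3 input, prints 0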

In practice, nonuniformity rarely helps. We can hide randomness in the nonuniformity, but we could likely also use pseudorandom generators to the same effect. Nonuniformity was more a technicality that we had to deal with than something that seemed to give circuits significantly more power.

Until we reached this new age of AI. A machine learning model like a neural net goes through a lengthy training process to learn its weights, with the training optimized offline. The learned weights become the nonuniform circuits; as technology improves and we create larger retrained models, the weights change as well.

We typically use the complexity class TC\(^0\), languages accepted by polynomial-size, constant-depth threshold circuits, as a rough theoretical model for neural nets. For nonuniform TC\(^0\) we cannot prove any significant lower bounds; we can't even show that NEXP, nondeterministic exponential time, cannot be computed by nonuniform TC\(^0\) circuits. And while computing NEXP or even P in TC\(^0\) would be incredibly surprising, these advances in artificial intelligence tell us these nonuniform circuits are far more powerful than we would ever have expected.
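As a toy illustration (a sketch with made-up weights, not any real model), here is a single threshold gate and a depth-2 threshold circuit in Python. In a trained neural net, the weight matrices playing this role come out of the offline training, which is exactly the nonuniform part.

import numpy as np

def threshold_gate(x, weights, theta):
    # Output 1 iff the weighted sum of the 0/1 inputs is at least theta.
    return int(np.dot(weights, x) >= theta)

# MAJORITY on 5 bits: all weights 1, threshold 3.
x = np.array([1, 0, 1, 1, 0])
print(threshold_gate(x, np.ones(5), 3))    # prints 1

# A depth-2 threshold circuit: a layer of two gates feeding one output gate.
# The weights here are the "advice"; in a neural net they come from training.
W1 = np.array([[1, 1, 1, 0, 0],
               [0, 0, 1, 1, 1]])
t1 = np.array([2, 2])
hidden = (W1 @ x >= t1).astype(int)        # first layer of threshold gates
print(int(np.dot([1, 1], hidden) >= 2))    # output gate: fires iff both hidden gates fire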

Sunday, October 26, 2025

Bill's Bad Advice

I sometimes give the following advice for research, which I label Bill's Bad Advice. We will later see who it might be good advice for. Spoiler alert: the number of people for whom it is good advice is shrinking, but it might include Lance, especially now (see his post about stepping down from admin, here).

When you come across a possible research topic or problem, or have some idea, and are wondering if you want to pursue it, here is my bad advice:

1) DON"T CARE if anyone else cares. If YOU care then that is enough to at least get started.

2) DON"T CARE if it has the potential for a published paper. FIRST do the work then, if you feel like it, look for a good venue. You might not bother if posting to arxiv or making an open problems column out of it or a (guest) blog post out of it is a good endpoint. (I run the SIGACT News Open Problems Column- feel free to contact me if you want to submit one.)

3) DON"T CARE if it has practical implications.

4) DON"T CARE if you can get a grant for it. With the current state of the NSF this advice may soon become irrelevant.

5) DON"T CARE if someone else already did it (though at a later stage you should check on this). Even if you work on it and find someone else did it, you will have LEARNED about the problem through your efforts. You might then want to do a survey for your own benefit to consolidate your knowledge.

Why should you NOT CARE about any of these things? Because they get in the way of actually DOING something.

Here are two examples of when this approach WORKED and one where it DID NOT WORK, though both classifications might depend on your definition of WORKED.

WORKED: My work on Muffins. All I wanted was to get some High School and Undergraduate Projects out of it. I ended up with a book on it which made me twenty dollars last year! More to the point, I learned a lot, as did my co-authors and I am happy with the book. The co-authors were Undergraduates so my dept put me up for a mentoring award (I have other credentials as well). I did not win, but I got a nice letter saying they had many qualified applicants. OH- it didn't say if I was one of them.

WORKED: I have had many Ramsey Projects where a High School Student codes stuff up and learns some Ramsey, some coding, and gets the experience of research. Sometimes they do a survey paper or open problems column. We both learn A LOT from this and the student gets a good letter from me. Do they do something NEW? Publishable? No, though some surveys and open problems columns have come out of this. I DO tell them ahead of time that the work is unlikely to lead to original results (and hence unlikely to be good for a science competition).

DID NOT WORK: See this blog post here about the math and here about finding out that the problem we were working on was already known and more was known than we thought. I didn't mind this, but one of the authors did.

Note that WORKED and DID NOT WORK also depend on your goals.

For whom is this bad advice? Good advice?

1) It was always bad advice for young assistant professors who need to get papers and grants to get tenure.

2) Hypothetically, once you get tenure you have job security and hence can change fields or follow my bad advice without consequences. But with grants and salary and teaching load issues, this is less the case. Perhaps I am nostalgic for a time that never was.

3) High School Students were my main audience for bad advice. It's not as important for them to get papers out as for (say) assistant professors. But even this is changing. Colleges are getting more competitive. And HS students may well want a project that can lead to Science competitions. I am not going to say things were better when I was a kid but instead pose non-rhetorical questions:

a) Are high school students getting into research earlier than they used to? I am sure the answer is yes.

b) Are we losing the safe space where a high school student can just learn things and do things and not worry so much about if it's publishable? Yes, but I am not sure how widespread that is.

c) If we are losing that safe space, is that a bad thing?

d) Points a, b, c apply more to undergraduates who want to go to graduate school than to high school students who want to go to college.

4) Full Professors may have more freedom to follow my bad advice. Lance is looking for things to do now that he is no longer a dean, and indeed, is back to being a teaching-and-research professor. So he might follow my advice. However, he actually cares if people care about his work. He does not have to follow all of my advice, but he can follow some of it.



Thursday, October 23, 2025

AI and Intro Theory

This fall, for the first time at Illinois Tech, I'm teaching Introduction to Theory of Computation. While I've taught variations of this course a couple dozen times, I last taught it in Spring 2016 at Georgia Tech. Intro Theory is a course whose main theorems don't change, but it feels different in the AI era.

Here are the big and little things for me.

NP as a Hard Problem?

Garey and Johnson's classic 1979 book on NP-completeness had this famous cartoon.

While the employee couldn't prove that a certain NP-complete problem didn't have a quick solution, at least he could tell his boss that all these famous computer scientists had also tried and failed. The implication is that once you show a problem is NP-complete, you should give up. But these days, we have great tools that can often solve, or approximate well, many practical instances of NP-complete problems such as satisfiability, mixed-integer programming and traveling salesman. NP-completeness is no longer an ending point where you give up, but a starting point for trying other approaches.
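As a small illustration, here is a hedged sketch (assuming the z3-solver Python package is installed; the three-clause formula is made up) of handing a tiny satisfiability instance to a modern solver:

from z3 import Bools, Solver, Or, Not, sat

x, y, z = Bools("x y z")
s = Solver()
s.add(Or(x, y), Or(Not(x), z), Or(Not(y), Not(z)))   # a small CNF formula

if s.check() == sat:
    print(s.model())      # a satisfying assignment, e.g. x=True, y=False, z=True
else:
    print("unsatisfiable")

Industrial SAT and mixed-integer solvers scale this idea to very large practical instances, which is why NP-completeness is a starting point rather than an ending point.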

Computation is Hard to Understand

Rice's theorem tells us that, in general, we can't decide any nontrivial property of the language a Turing machine computes. We can't even tell if a machine halts on blank tape. But for most well-designed programs, we knew how they worked, or at least how they were supposed to work if correctly implemented.

With machine learning, we now have neural nets whose computations we cannot understand. And like Turing machines, for the most part, all we can do to understand how they work is to look at their input/output behavior.

New Theorems

I first taught this course not long after Neil Immerman and Robert Szelepcsényi independently proved that nondeterministic space is closed under complement, and it's now a staple of my course. There are more theorems proven since then that I mention, but don't prove, including the PCP theorem, Primes in P, undirected graph connectivity in deterministic log space, and, very recently, deterministic time \(t(n)\) in \(\sqrt{t(n)\log t(n)}\) space.

AI in Teaching

I simply can no longer come up with problems that I can expect undergrads to solve but AI can't. So I decided to grade homework on completion instead of correctness, so students who want to try hard won't feel they need to use AI to get full credit. Exams are in-class and proctored, with no electronics. I definitely preferred take-home exams; I just can't do them anymore.

I use AI to help me format lecture notes and check over my homeworks and exams. The students mostly use AI as their first stop for help instead of office hours. For good reason: the latest LLMs know the material in this course pretty well.

I did manually grade the midterms but for fun asked AI to grade a few of them and its scores were similar to mine.

It won't be long before AI can create videos of me teaching more coherently than I do myself.

Just this week, ChatGPT Pulse out of nowhere gave me a fully formatted intuitive description of the switching lemma for me to teach in class. So I did.

At what point does the AI just take over?

Monday, October 20, 2025

Sept 16, 2025 was Pythagorean Day


Several people emailed me that September 16, 2025---written as 9-16-25 in the US---represents the squares of the integer side lengths of a right triangle: \(9=3^2\), \(16=4^2\), \(25=5^2\), and \(9+16=25\).


9-16-25 is the only such triple that is also a valid date. This kind of mathematical alignment only happens once every 100 years. The next occurrence will be September 16, 2125.

Since this is such a rare event, let's explore some more math-themed dates.

1) Pythagorean Triples That Work as Future Dates

Note that 9-16-25 is not a Pythagorean triple; however, 3-4-5 is.

Here are some future dates that are both Pythagorean triples and valid calendar dates:

March 4, 2105 is 3-4-5

May 12, 2113 is 5-12-13

June 8, 2110 is 6-8-10

July 24, 2125 is 7-24-25 (Darn---July 24, 2025 was recent and I missed it!)

August 15, 2117 is 8-15-17

I think that's it; the sketch after the poem below makes it easy to check. Recall that we need the month to be in \(\{1,\ldots,12\}\) and the day to be in \(\{1,\ldots,31\}\) with some exceptions:

Thirty days has September, April, June, and November

All the rest have thirty-one,

Excepting February, fun!

And that has twenty-eight days clear

And twenty-nine in a Leap Year

There are 24 versions of this poem at a website which is here.
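If you want to double-check a list like the one above, here is a minimal Python sketch that enumerates every month-day pair forming a Pythagorean triple with a two-digit year (read as 21YY) and a valid calendar date; it's an easy way to verify the list and catch any dates it misses.

from datetime import date
from math import isqrt

for m in range(1, 13):                 # month
    for d in range(1, 32):             # day
        c2 = m * m + d * d
        c = isqrt(c2)
        if c * c != c2 or not 1 <= c <= 99:
            continue                   # not a Pythagorean triple with a two-digit year
        try:
            date(2100 + c, m, d)       # is m-d a real calendar date?
        except ValueError:
            continue
        print(f"{m}-{d}-{c:02d}")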

2) Why Didn't Anyone Email Me About Earlier Dates?

I wonder why nobody emailed me on, say, March 4, 2005 (3-4-5). That's a Pythagorean triple, but maybe it just looked like three consecutive numbers. Oh well.

And what about May 12, 2013 (5-12-13)? That's a really cool Pythagorean triple. Oh well.

3) Other Math-Related Dates Using Month, Day, and Year.
So dates like Pi Day don't count---we want the full date to be interesting mathematically. Side note---I looked up how Pi Day is referred to and it's Pi Day, not \(\pi\) day. Probably because not all typesetting systems can easily produce \(\pi\).



a) Square days:

Dates where the full date written as MMDDYYYY, read as a number, is a perfect square.

a1) September 27, 2025 is 9-27-2025 and \(9272025=3045^2\).

Bonus: if you write it as 27-09-2025 then: \(27092025=5205^2\).

a2) Feb 2, 2084 is 2-02-2084 and \(2022084=1422^2\).
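A quick way to check these, and to hunt for the next one, is the sketch below; it reads the date as the number MMDDYYYY (any leading zero drops out, matching the examples above) and tests whether it is a perfect square.

from datetime import date, timedelta
from math import isqrt

def is_square_day(d: date) -> bool:
    n = int(f"{d.month:02d}{d.day:02d}{d.year:04d}")
    return isqrt(n) ** 2 == n

d = date(2025, 9, 27)                  # 09272025 = 3045^2
print(is_square_day(d))                # prints True

while not is_square_day(d := d + timedelta(days=1)):   # scan forward for the next one
    pass
print(d)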

b) Palindrome days

b1) March 11, 2030, written 03-11-30, might be the next one.

b2) I was hoping that Feb 2, 2022 (2-2-22) would be a Tuesday (Twosday) but alas, it was not. I asked ChatGPT what the next year ending in 22 is where Feb 2 is a Tuesday. It gave me incorrect answers four times. When I pointed this out it thanked me for checking its work and then gave me a later incorrect answer. It then gave me a python program that I could run to find out myself. (A sketch of such a check appears after item b3 below.) I found out that between the years 1622 and 9922, looking only at years ending with 22, the following pattern happens:

Feb 2, 1622 is a Wednesday
Feb 2, 1722 is a Monday
Feb 2, 1822 is a Saturday
Feb 2, 1922 is a Thursday

Feb 2, 2022 is a Wednesday
Feb 2, 2122 is a Monday
Feb 2, 2222 is a Saturday
Feb 2, 2322 is a Thursday

This pattern repeats as far as I computed, which was 9922.

I started in 1622 since the Gregorian calendar started in 1582. I stopped in 9922 because of numerical Python issues. I do not know if the pattern goes on forever, or if one of the leap year exceptions will cause that pattern to change. Recall that

Every year divisible by 4 is a leap year,
EXCEPT that if the year is divisible by 100 but not by 400 then it's not a leap year. I have not found any other exceptions on the web, but there may still be some.



b3) I was hoping Feb 22, XX22 (2-22-22) was sometimes a Tuesday. Here are all the years after 1622, ending in 22, before 10,000, where Feb 22 is a Tuesday: 2022, 2422, 2822, 3222, ..., 9622, and I assume in general they are 2022 + 400x. Again, the pattern might not hold if there are leap year exceptions I don't know about.
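Here is a minimal sketch of the check described in b2 and b3: for each year ending in 22, print the day of the week of Feb 2 and Feb 22. Python's datetime handles years 1 through 9999, which is why the scan stops at 9922.

from datetime import date

for year in range(1622, 10000, 100):   # 1622, 1722, ..., 9922
    feb2 = date(year, 2, 2).strftime("%A")
    feb22 = date(year, 2, 22).strftime("%A")
    print(year, feb2, feb22)

One sanity check on the pattern: 400 Gregorian years contain exactly 146,097 days, which is a multiple of 7, so the day-of-week pattern really does repeat with period 400.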

-------------------------------

That is the end of my post. A bonus for my readers: a mini meta post. Longtime reader and fellow SUNY Stony Brook math major David Marcus (he was class of 1979, I was class of 1980) has been proofreading some of my posts before I post them. Lance suggested I use chatty instead. Instead of using chatty instead, I used chatty and then used David. David still found mistakes. I give an example here by pointing to all three versions:

Original Version: here

After ChatGPT: here

After David: here (this is not the final version since I read it and made some minor changes)

I may at some later point look at several ORIG/AFTER CHAT/AFTER DAVID and see what errors chatty is not catching.




Wednesday, October 15, 2025

Fall Jobs Post 2025

Each fall I try to predict the coming theoretical computer science faculty job market and give suggestions to those entering it. Get set for a rocky ride, with AI's disruption of computer science, fiscal stress at universities, and new U.S. policies affecting visas and funding.

In last year's post I wrote
In two years we have gone from nearly all of our computer science graduates getting jobs in the field to many of them struggling to get internships and jobs in the top companies if at all. If the past is any guide, a weak tech job market leads to fewer majors which leads to fewer slots for computer science faculty. We'll start to see these trends this year and they will accelerate quickly if the tech jobs market doesn't recover.

In fact the tech job market for computer science graduates has become much more challenging, ironically as the tech companies have seen huge market growth from advances in artificial intelligence. Academia moves slowly; some departments are still catching up on hiring, but that will soon slow. Many computer science departments are adding AI majors and specializations, and computer science enrollments over the next few years will be hard to predict.

Computer science is not protected from larger forces. Many universities are facing long-term fiscal challenges, and cuts in research and student funding from the US government aren't helping, leading to hiring freezes at many schools. The demographic cliff, a drop in the US birth rate around the 2008 fiscal crisis, will start hitting colleges in earnest next year.

The Trump administration has made it more difficult for foreigners to get student visas, leading to a projected drop in incoming international students of between 30% and 40%. Recent changes to H-1Bs, namely the 100ドルK fee an employer would need to pay, may discourage students who would study in the US with the hope of building a career and a life here. International students, many of whom have gone into computer science, are a major source of tuition for many universities.

Universities would also need to pay the 100ドルK to hire foreigners as faculty and postdocs. Startup packages for new computer science faculty typically run several hundred thousand dollars so this is not out of the question, but schools might be reluctant to pay the 100ドルK in tight fiscal times. These rules may evolve via litigation or administrative reinterpretation.

Even in a turbulent market, visibility and adaptability matter most. Universities outside the US are trying to take advantage by going after students and faculty. More than ever you should consider positions around the world, especially, but not only, if you are not a US citizen or permanent resident. Other countries have their own challenges, so tread carefully.

On to specific advice for students on the academic job market.

CATCS maintains a Theoretical Computer Science Jobs posting site. More generally in computer science, there is the CRA Database of Job Candidates and the CRA and ACM job postings.

And in general, make sure you stand out. Areas related to data, such as Artificial Intelligence, Data Science and Cybersecurity, will draw the most interest. It's best if you can tie your research to those areas, or at least show that you are open to teaching in them. Make sure your teaching statement talks about how you will handle AI in the classroom.

Have a well-designed website with all your job materials and links to your papers. Make sure your Google Scholar, LinkedIn and GitHub (if relevant) sites are accurate and up to date. Make a short video describing your research to a general computer science crowd. Take advantage of all the links above. Network at FOCS or SODA if you can get there. Reach out directly to professors you know at schools you are interested in. And start now.

Sunday, October 12, 2025

The Most Common Name in the World is Not Charles Lin. But It Seems That Way To Me.


In 2001 I supervised Charles Lin's Master's Thesis, which was on Private Information Retrieval.

In 2025 I supervised Charles Lin's Master's Thesis, which was on Ramsey Theory.

These are different people. To celebrate the second one's thesis, I took them both out to lunch together.

A Picture of Charles, Charles, and Bill is here.

I asked ChatGPT about common names.

1) ChatGPT tells me (and I believe them) that the number of people in America named Charles Lin is between 200 and 900. The answer pointed out that Charles is common and Lin is common but the combination is not common.

2) ChatGPT tells me (and I believe them) that the number of people named John Smith is between 25,000 and 40,000. I've heard it's a common name, but I do not know anyone named that. How is that possible?

3) I predict that people with last name Smith will be reluctant to name their child John because it is so common. Hence this name will decline. However, I made that same prediction 40 years ago. Maybe it did decline TO 40,000.

4) The most common boys' names in 2025 (so far) in the US are, in order:

Liam, Noah, Oliver, Theodore, James, Henry, Mateo, Elijah, Lucas, William

According to this website here, the five most common boys' names in 1960 (the year I was born), in order, are

David, Michael, John, James, Robert.

James is the only name on both lists.

I don't have the next five, though I suspect Henry and William would be on it. I also suspect that Liam and Mateo would not be in the top 100.

5) The most common girls' names in 2025 (so far) in the US are, in order:

Olivia, Emma, Amelia, Charlotte, Mia, Sophia, Isabella, Evelyn, Ava, Sofia

(Should they have Sophia and Sofia both on the list? I would say yes but it is odd.)

According to that same website, the most common girls' names in 1960 (the year I was born), in order, are

Mary, Susan, Linda, Karen, Donna.

None of the 1960 names are on the 2025 list. I suspect that would still be true for the next 10.

6) When I was in grade school, if the teacher said Quiet down, Bill and Lisa, the class went quiet.

7) The most common first name in the world is Mohammad. The most common last name in the world is Wang. Hence the most common name in the world has to be Mohammad Wang. I've asked students to find the flaw in that logic and all 8 Mohammad Wangs in my class found the flaw. Kudos to them! (Darling says I should always tell you when I am kidding. I am kidding. Only 2 of the 8 Mohammad Wangs in my class found the flaw.)

8) (ADDED LATER) One of my readers emailed me the following two items

a) From The Big Bang Theory

Howard: It gets better. Someone has to go up with the telescope as a payload specialist, and guess who that someone is.

Sheldon: Mohammed Lee.

Howard: Who’s Mohammed Lee?

Sheldon: Mohammed is the most common first name in the world, Lee, the most common surname. As I didn’t know the answer, I thought that gave me a mathematical edge.

b) As an interesting aside, I noticed that this semester I have three students named "Luke" and three named "Ben" which made me notice that kids who grew up with the original Star Wars movies would have kids in college around now (but none of my current students are named "Leia")

9) (ADDED LATER) I HEARD that the name `Barack' was popular when he was president.

I looked it up:

5 babies got that name in 2007

52 babies got that name in 2008

69 babies got that name in 2009

It has declined since then.

BAD SCIENCE TIME: The number of babies named Barack increased 10-fold from 2007 to 2008.

TRUE but not significant.

10) I hope to supervise at least one more Charles Lin before I retire.


