Saturday, September 11, 2021
Federal judge awards Epic Games a mere consolation prize against Apple, which regrettably succeeded with its 'web apps are viable' lie
Yesterday's Epic Games v. Apple ruling by Judge Yvonne Gonzalez Rogers of the United States District Court for the Northern District of California (judgment, permanent injunction, and detailed Rule 52 order) merely defers the resolution of the real competition issues facing iOS app distribution. It's one of those situations in which either side "gets something" and could claim victory, as Apple apparently does, though the stock market initially disagreed (I personally don't think the decision should have moved the stock at all). This makes it all the more remarkable that Epic doesn't engage in spin but concedes defeat. It's not that Epic achieved nothing; but for the time being, all it got is a consolation prize, and that's why Fortnite won't return to iOS at this stage.
Let me just share a few short observations for now as an exhaustive analysis would take a lot more time:
Epic are--for the time being--losers, but Apple are liars in the web apps context. They shamelessly took advantage of the fact that courts need to rely on expert testimony--which I almost misspelled as "testimoney" as expert witnesses say a lot of things when the pay is good--when it comes to technical questions, and the biggest and most impactful lie here was that Apple portrayed HTML 5 web apps as a viable alternative to native apps for game distribution. I know exactly how the HTML 5 version of my Viral Days game feels and performs (it's just terrible); and I know how easy it would be for Apple with its death grip on iOS to always ensure that web apps would fail to meet customer expectations. I can't blame the judge for not knowing this. She's a judge, not a programmer. From my vantage point, based mostly on Epic's proposed findings of fact and conclusions of law, Epic had made the case for the insufficiency of web apps pretty well, though I raised three issues (two of them IP-related) shortly before the trial that I thought Epic should have brought up.
Epic v. Apple has been a greater success in terms of revelations (such as Tim Cook's admission that he doesn't receive reports on developer satisfaction) than in terms of remedies. An anti-anti-steering order (meaning that Epic got what AmEx was denied) may help Spotify to some degree, but doesn't solve the fundamental problems.
Epic could have succeeded--and on appeal still could succeed--even under the market definition adopted by Judge YGR: digital mobile gaming transaction. In my recollection of the late-May closing argument, that's a market definition Epic's counsel could actually live with. Without the word "mobile" in it, the definition would be too broad and Apple would be in the clear. But the aforementioned web apps lie and other smokescreens regarding alternative ways to distribute games to iOS users worked.
Originally, Judge YGR didn't want to make the factual findings herself. She would have much preferred for a jury to decide because appeals courts afford way more deference to jury verdicts than judicial findings of fact. It was already obvious to me last year that she didn't want this case to be the next FTC v. Qualcomm, where one of her colleagues in the same district found in plaintiff's favor and the appeals court threw out the whole thing. So what Judge YGR has rendered here is a decision that gives either party plenty of ammunition at the appellate stage (e.g., "Apple's slow innovation stems in part from its low investment in the App Store"), though the worst part for Epic is clearly that the judge says the Fortnite maker has "failed to prove [its Sherman Act Section 2] claim for myriad reasons." In some contexts, the ruling leaves open the possibility of certain claims being proven with more or better evidence. Some of the rationale could even be described as displaying a great deal of insecurity along the lines of "maybe I should ... but I'm not sure I can." Anything could still happen before the Ninth Circuit. It could nix Epic's consolation prize based on California Unfair Competition Law, but could also reverse key conclusions that enabled Apple's acquittal. Should the appeals court find that Apple has actually violated the antitrust laws (and not merely committed an "incipient" antitrust violation as Judge YGR put it), Epic could theoretically still prevail 100%.
What's disappointing is that the judge actually saw--as her world-class examination of Apple CEO Tim Cook on the last day of testimony showed--that Apple doesn't face sufficient competitive constraints with respect to developers, and knows that many developers are unhappy, but that this realization didn't dissuade her from a negative finding on Epic's Section 2 claim.
Apple's minimalist concessions have worked so far. The remedy imposed here bears a strong resemblance to how Apple resolved its Japanese antitrust issue relating to "reader" apps. But there's so much going on around the globe that Apple is merely delaying the inevitable because dilatory tactics are profitable. United States Senators from both sides of the aisle have responded to the decision by stating their resolve to take action. Before this case has been decided by a final court of appeal, new legislation may already address some of the issues and allow alternative payment options.
The ruling says: "Here, Epic Games has provided requests for its remedy which principally appear to eliminate app review." As I am pursuing my own complaints over Apple's app review, yesterday's decision obviously does nothing to address my primary concern. I still believe that alternative app stores are needed, and that anything less won't do.
Thursday, April 15, 2021
Discussion of Apple's alleged need to redesign iPhone to support third-party app stores continued--and expanded into why web apps don't help
This is a follow-up to yesterday's posts on Epic Games v. Apple, especially the third one (Apple expert incorrectly claims Apple would need to "redesign" iPhone hardware and software to allow alternative app stores), which Apple Insider picked up.
That article drew additional attention to the discussion. A New Zealand-based developer, @hishnash, gave his explanation of what Apple's witness meant by a need to redesign even the iPhone's hardware. Epic Games founder and CEO Tim Sweeney then pointed to the fact that Apple's Enterprise program works on current iPhones:
How does the Enterprise program work then? https://t.co/TfUN3rqHTm
— Tim Sweeney (@TimSweeneyEpic) April 15, 2021
We then discussed the technical aspects of installing and running apps on iPhones that are not installed via the App Store (but via the Enterprise Program, TestFlight, or Microsoft App Center). @hishnash noted that there are certain feature sets, such as CarPlay, that require special permission. My understanding of everything said up to that point was that it was "all just about Apple lifting restrictions (some contractual, some technical) as opposed to really having to take its architecture to a higher level."
This hinges on how one would reasonably understand the verb "to redesign" in connection with "software and hardware." There is another key term in this dispute--commission--that Apple clearly redefines in ways that no dictionary supports. In the case of "to redesign," they also mislabel something, but one needs to consider the context:
"The duty upon Apple is more than the usual duty to deal; it would include a duty to redesign its hardware and software [...]" (emphases added)
The duty-to-deal case in U.S. antitrust law is Aspen Skiing, which must nowadays be understood in light of the warning in Trinko not to go too far, but Trinko didn't affect the part that is relevant in this redesign context. The Aspen outcome was that a larger ski resort had to (again) offer multi-area tickets together with its smaller competitor, which might otherwise have been forced out of the market.
So when Apple's expert says Epic wants more than the Aspen plaintiff and points to a "redesign" of hardware and software protected by intellectual property rights, the question is: is Epic really asking for (fundamentally) more? If yes, then "redesign" would mean Apple would have to make a huge architectural effort, and that wouldn't be fatal to Epic's case, but certainly involve a higher hurdle. Unlike Apple's witness, I don't see a structural difference. Here's why:
Epic's case, just like Aspen Skiing, is at the core about lifting restrictions, not about creating something new. The ski resorts didn't have to create a new skiing area. They just had to provide a ticket that gave customers access to both companies' existing areas. Apple doesn't have to invent something new: all those arguments about APIs, certificates, credentials etc. don't change the simple fact that Apple artificially put up barriers with a "Gods may do what cattle may not" attitude.
Apple's expert makes it sound like the Aspen Skiing defendant didn't have any obligation under antitrust law beyond signing a contract. But it definitely took more to implement the court-mandated cooperation. Even if you had a duty to sell someone potatoes, you'd have to do something to make it happen. Aspen Skiing was about issuing (in that case, printing) tickets, and about validating them (manually or electronically). Interestingly, the security architecture that ensures only authorized apps can access, for example, the CarPlay APIs is also about issuing "tickets" (in that case, digital certificates) and about validating them (@hishnash mentioned "root certificate chain validation" in connection with "the Entitlement system"). Also, even those ski resort tickets involved intellectual property (copyrights, trademarks)--plus real property.
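The "tickets" analogy can be made concrete with a toy model of certificate-chain validation. The following sketch is purely illustrative: the certificate names, the entitlement labels, and the flat dictionary structure are my own inventions, not Apple's actual entitlement system, which involves real cryptographic signatures rather than name lookups.

```python
# Toy model of entitlement "ticket" validation: an app's certificate must
# chain up to a trusted root before a restricted capability is unlocked.
# All names and the data structure are illustrative, not Apple's scheme.

TRUSTED_ROOT = "platform-root"

# Each certificate records who signed it and which entitlements it grants.
CERTS = {
    "platform-root": {"signer": None, "entitlements": set()},
    "dev-program":   {"signer": "platform-root", "entitlements": set()},
    "my-app":        {"signer": "dev-program", "entitlements": {"carplay"}},
    "rogue-app":     {"signer": "unknown-ca", "entitlements": {"carplay"}},
}

def chains_to_root(name):
    """Walk the signer chain; valid only if it terminates at the trusted root."""
    seen = set()
    while name is not None:
        if name in seen or name not in CERTS:
            return False  # cycle, or issuer we know nothing about
        seen.add(name)
        if name == TRUSTED_ROOT:
            return True
        name = CERTS[name]["signer"]
    return False

def may_use(app_cert, entitlement):
    """The 'ticket check': valid chain of trust AND the right entitlement."""
    return chains_to_root(app_cert) and entitlement in CERTS[app_cert]["entitlements"]

print(may_use("my-app", "carplay"))     # True: trusted chain plus entitlement
print(may_use("rogue-app", "carplay"))  # False: issuer isn't trusted
```

The point of the sketch is that issuing and validating such tickets--like printing and checking ski passes--is routine engineering, not a redesign of the platform.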
It's a different jurisdiction, but when the European Commission obligated Microsoft to give developers of competing network servers (like the Samba open source project) fair access, Microsoft not only argued that it shouldn't have to grant a license but additionally complained about having to provide technical documentation. Then-competition commissioner Neelie Kroes "[found] it difficult to imagine that a company like Microsoft does not understand the principles of how to document protocols in order to achieve interoperability" and fined Microsoft for its (temporary) refusal.
Unlike the question of whether a "commission" is a "rebate" (it's a clear Boolean "false"), this is a question of degree. I stand by my criticism of the verb "to redesign" in this context because lifting unreasonable restrictions just means to fix a problem, not to take some technology to a higher level as "redesign" implies.
In particular, I'm unaware of Epic Games demanding access to CarPlay at this stage. The types of apps one finds on the Epic Games Store don't need APIs that are subject to specific restrictions. In light of that, I summarized my understanding as follows:
My takeaways:
— Florian Mueller (@FOSSpatents) April 15, 2021
1) Apps like Fortnite could easily be installed by 3rd-party app stores. Don't need CarPlay, don't technically depend on iTunes payment system.
2) It would be Apple's choice to give them optional access to its payment system or to enable them to use CarPlay etc.
Of course, even the second part (access to those other systems) could involve duties to deal. But it would be a first step to at least allow games like Fortnite to be installed via third-party app stores.
@hishnash acknowledged that "apps that could be replaced with PWAs (aka apps that do not use any key system apis could be installed through a third party apps store." (PWAs = HTML5-based Progressive Web Apps). In other words, the Epic Games Store could install Fortnite because Fortnite doesn't need to access APIs like CarPlay and could therefore, in principle, be a web app (as a web app it wouldn't be playable, but that's a performance issue, not an API access issue). Thereafter, the discussion was more about whether third-party app stores would be an effective remedy as developers would still need to use some of Apple's SDKs etc., and whether PWAs would be a workable or potentially even superior alternative.
Epic's proposed findings of fact and conclusions of law list a number of shortcomings that PWAs have. Mr. Sweeney described Apple's pointing to PWAs as "disingenuous" because Apple limits those APIs and doesn't allow third parties to fix the issues though they technically could. In addition, what I know from one of my own projects is that major ad networks don't even support in-app advertising in WebGL apps. So there are usability issues including but not limited to performance, and monetization issues. Mr. Sweeney noted that Apple itself tells developers to build native apps:
And don’t forget, Apple’s own advice to developers has consistently been to build native apps to get the best performance and features on iOS.
— Tim Sweeney (@TimSweeneyEpic) April 15, 2021
And in my favorite @TimSweeneyEpic tweet to date, he explained iOS is "an intermediation trap":
The whole iOS platform - the app store, the guidelines, the PWA limitations, everything - is built as an intermediation trap. All of the pieces carefully fit together to obstruct developers from competing with Apple.
— Tim Sweeney (@TimSweeneyEpic) April 15, 2021
@hishnash thought Epic wanted exposure on the App Store and would therefore not even accept a workable PWA solution. He inferred this from Epic's decision to offer Fortnite via the Google Play Store despite sideloading being possible. Let's focus on Apple here, so I'll just note that sideloading on Android doesn't really work for consumer software in its current form: my own company even experienced problems because beta testers didn't manage to install our stuff via Microsoft App Center. As for Apple, I pointed @hishnash (whom I commend for his thoughtfulness and constructive attitude) to the fact that Epic Games v. Apple is not just a Fortnite case but the central issue is third-party app stores like the Epic Games Store (even though the early stage of the dispute was very much about #FreeFortnite).
The remainder of the remedies discussion basically was about weighing the pros and cons of two remedies--PWAs and third-party app stores. @hishnash thought that if Apple--in what I consider an alternative universe--sincerely made an effort to provide a great user experience for PWAs, developers would then at least be independent, while apps distributed via alternative app stores would still depend on some kind of IP license from Apple:
I don't think Epic would be happy with an alternative app[ s]tore if they still had to pay apple 30% of all revenue for linking against the Metal apis.
— hishnash (@hishnash) April 15, 2021
In other words, we were then talking about two different dependencies:
With PWAs, @hishnash assumed there was no licensing issue (and for simplicity's sake, I don't want to digress into the related IP questions here), and if usability issues came up, the solution would be to hold Apple to a hypothetical commitment to comply with a certain technical standard.
For apps distributed via third-party app stores, @hishnash assumed there was a need to take a license from Apple (which I again won't discuss from an IP perspective, though I'd be tempted to talk about the recent Android-Java API Supreme Court decision), and the terms might be unreasonable (as he suggested in the above tweet).
It's not that @hishnash would necessarily oppose both better PWAs and competing stores. At least on videogame consoles (an important topic, but I can't discuss them here or this post will never end), he might even like the idea of going down both avenues. And he clarified he didn't mean to say that PWAs "worked well right now."
To make a long story short, the reason I don't believe PWAs could lead to a practicable and reasonably enforceable remedy is that user experience happens in users' minds. What's pretty doable (not trivial, but manageable) is to ensure compliance with a standardized protocol like 5G. What's already a lot harder is to measure purely technical performance: there are different benchmark programs for CPUs, for example. But what's absolutely impossible--except perhaps with 26th-century artificial intelligence--is to objectively quantify the user experience, especially of entertainment products like games. So even if Apple theoretically promised to do a better job on PWAs, or just made an announcement to that effect without a formal legal obligation, developers wouldn't have a reliable--i.e., justiciable--assurance of being able to compete outside Apple's App Store.
By contrast, if Apple clearly had to allow the distribution of apps via third-party stores that use the same APIs as those distributed via the App Store, the remaining area of potential dispute would come down to license fees for SDKs, for developer tools, for documentation etc.--which can be worked out. The EU solved that problem in the Microsoft network server protocol case. I follow standard-essential patent royalty disputes all the time (this blog has written more about them than about any other topic so far). Even if something went wrong, we could all develop, innovate, compete--and seek a refund later. Right now, Apple just says "no" and that's the end of the story (and of this post).
Saturday, December 19, 2020
Viral Days: inspired by the COVID-19 pandemic, this real-time strategy game for Android and iOS demonstrates the propagation of a virus and, especially, the most effective ways to stop it
Nine months ago, to the day, I woke up after about four hours of sleep. With large parts of the world in lockdown, I started thinking about how a mobile game could make a useful contribution in the current situation and any future situation, as the SARS-CoV-2 pandemic is not the first and won't be the last of its kind.
I'll tell you in a moment what happened then, but fast forward from March 2020 to this weekend, and Viral Days (product website) is available for iOS on Apple's App Store, and for Android on the Google Play Store, [Update] the Huawei App Gallery, and the Samsung Galaxy Store [/Update]. Here's a gameplay video (this post continues below the video):
Back in March, some low-quality games from traditional genres, basically cheap knockoffs of titles like Angry Birds and Super Mario Bros., had been rethemed and rebranded as COVID-19 games. But they made no sense. You don't cure a disease by throwing toilet rolls at virus-like faces, or avoid getting infected by jumping over obstacles. What I wanted to come up with instead was a game that would really make a difference. A game that would
demonstrate the problem of exponential propagation in a simplified, time-compressed form and
promote several of the most effective ways to stop (or at least slow down) the spread of the virus.
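The first of those two goals is easy to quantify. Here's a minimal sketch of the underlying math, with made-up numbers (the per-round infection rates are illustrative, not epidemiological data): even a modest reduction in the effective spread rate flattens the curve dramatically over just a few rounds.

```python
# Time-compressed illustration of exponential propagation: each round,
# the infected population grows by a factor of (1 + r). The rates and
# starting population below are invented purely for demonstration.

def spread(r, rounds, start=100):
    """Return the infected count after each round for growth rate r."""
    counts = [start]
    for _ in range(rounds):
        counts.append(round(counts[-1] * (1 + r)))
    return counts

print(spread(1.0, 8))   # unchecked spread: doubles every round
print(spread(0.25, 8))  # with countermeasures: a far flatter curve
```

The gap between the two final numbers after only eight rounds is the whole point a pandemic-themed strategy game can make viscerally rather than abstractly.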
After about an hour, I felt very good about my rough idea, and I luckily got another three or four hours of sleep. After waking up, I was (still) absolutely determined to turn this idea into reality. That same morning, before stores opened, I already had a call with Mario Heubach, whose company was doing contract development for my app development firm. We actually had been working together on another title--a highly interactive trivia game--since the summer of 2018 and were probably just a few months away from launching it. I was worried he'd declare me completely crazy to put a near-finished project on hold in order to start a new one. Let's face it: this is completely against conventional wisdom. But under the circumstances, what one would normally not even consider for a second was the right decision this year--we agreed on this much, and in early April, after some further conceptual work and research, development began. The Unity 3D engine was our obvious choice, and we also found valuable material on the Unity Asset Store.
We took our time to get it right--we really wanted to make a high-quality game--and finally submitted the app to Apple, Google, and Huawei yesterday. Apple and Google approved very quickly--they had previously taken a look at our beta versions. It's part of the history of this project that we initially had to deal with rejections, but I don't want to go into detail, at least not here and now. What matters is that the game is now available. And today we also submitted it to Samsung's Galaxy Store for review.
The initial release comes with 14 different languages.
Later this month we'll publish an HTML 5 (WebGL) game based on the same engine. I'm pretty sure that one will go absolutely viral, and when you see it, you'll see immediately why I think so. Stay tuned.
I'm aware of only one other game that "strategygamifies" the problem of a viral pandemic: Ndemic Creations' Plague Inc., which was launched in 2012 with what is now called its "main mode." That "main mode" has the objective of extinguishing humanity by means of a lethal virus. By stark contrast, my game's subtitle is "Heal - Protect - Prevent." Also, the virus in Viral Days isn't lethal. The difference between those two virus games is like day and night, not only in terms of the game objective but also the genre. Plague Inc. is a numbers-centric, abstract game where you see dots on a world map. Viral Days is about people you see--and try to take good care of. It's hands-on because players get to distribute masks, hospitalize or home-quarantine infected people, and disperse crowds; and when you impose a lockdown in my game (available once you've reached level 18), you see people running home, just like you can see how infections happen when an ill person and a healthy person spend too much time close to each other.
Viral Days highlights proximity with a frame that adjusts dynamically. I prototyped that one back in 2014, originally for a completely different purpose, and for a long time I had been looking for a way to put it to use in a game. In the early morning hours of March 19, 2020, I finally found it.
This game has the potential to reach a huge audience--and should have a positive effect on many (especially, but not only, young) people's attitude towards masks and social distancing. Apple disallows COVID-19-themed games, and Google has strict rules concerning metadata containing such keywords as COVID-19, corona(virus), and pandemic. But Viral Days is a generic virus game. In fact, what you see in the game would apply to the Spanish Flu of 1918 as well.
When I started blogging about those App Store antitrust cases in the summer, I said I was about to publish a game app myself. It took a few months longer than I thought then, but by now you know which one I meant. I'm so happy to have created a game that I'd definitely play even if I hadn't made it. And proud to have invented a new strategy game genre: real-time strategy without anything resembling military combat. It's viral real-time strategy.
Saturday, June 15, 2013
German VP8 infringement cases show Google's inability to cut through the codec patent thicket
I don't think any responsible standardization body will be able anytime soon to declare VP8 a "royalty-free" codec. Everyone in the industry knows -- though a few deny it against their better knowledge -- that there's a whole patent thicket surrounding video compression techniques, and that all codecs implement more or less the same fundamental concepts. Only a fool could believe Google when it originally claimed that there were no issues concerning third-party rights. Google's license deal with 11 MPEG LA contributors comes with terms that Google certainly wouldn't have accepted if it had not been seriously concerned about the risk of litigation brought by those right holders. And with Nokia refusing to make its patents available for VP8 on royalty-free or even FRAND terms (Nokia reserves the right to seek injunctions, or to charge as much as it wants, no matter how much above a FRAND rate it might be), there's at least one company that is actually suing over VP8.
Nokia brought its latest patent assertion against VP8 as part of an ITC complaint and its first two infringement actions over VP8 in Mannheim, Germany, and this post is mostly about the trial held on Friday over the second one of these cases. Also, since my last post on VP8 I've had the opportunity to read the full text of the order to reopen the proceedings in the first case. As I wrote two weeks ago, that order wouldn't have come down without at least a finding of a likely infringement. It relates to the possibility of staying the case pending a parallel nullity proceeding, and German courts don't reach the question of a stay if they can outright dismiss a case for lack of infringement. The order says one can conclude that devices implementing VP8 infringe EP1206881 if the asserted claim (claim 46) is interpreted "broadly", in which case there is room for invalidity contentions (which, as the order doesn't say explicitly because it's clear to patent litigators, wouldn't be relevant to a narrower claim construction). Therefore, the court now wants to take a closer look at a particular invalidity theory. While the court could still reach a different conclusion on infringement, it would have dismissed the case immediately (to conserve court and party resources) if Google and HTC had convinced the court that Nokia's claim construction is overbroad. Absent a relatively unlikely change of mind by the court, the likely outcomes are a stay or an injunction. A stay would mean that an injunction will issue if the asserted patent claim survives the nullity proceedings in its granted form, or in a form that still warrants an infringement finding. This means that Google won't have legal certainty concerning the German part of EP'881 (courts in other European countries will not be bound by the related German decisions) for years to come.
Yesterday's patent-in-suit, EP1186177 on a "method and associated device for filtering digital video images" is less likely to be deemed infringed when the Mannheim court announces a decision on August 2, but Nokia can and presumably will appeal a dismissal, and despite the inclination indicated by the court at yesterday's trial, EP'177 is indicative of VP8's patent issues because it's within striking distance of VP8: the whole case hinges on only one claim element. All other claim limitations are considered satisfied by HTC's VP8-implementing devices. That fact reduces to absurdity all those claims that VP8 is so very original that no third party patented anything that might read on it.
While video compression techniques involve advanced mathematical operations, it's actually quite easy to explain the only claim limitation that may (and, apart from the possibility of an appeal, probably will) help Google and HTC avoid an infringement finding:
Video codecs -- and this is true of H.264 as it is of VP8 -- divide an image into macroblocks (rectangular areas). Each macroblock is processed separately. End users just see the overall image and shouldn't notice the edges of macroblocks, which are only an internal thing. But they would notice those borders if certain mathematical operations resulted in artefacts (unwanted elements that appear in an image but don't exist in reality) that a codec failed to filter out. The patent relates to this filtering process. The codec looks at the overall image and identifies actual vertical and horizontal lines that were filmed (for example, a corner of a building). Only artefacts at the edges of the macroblocks are filtered out; real-world lines are preserved.
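The artefact-versus-real-edge distinction can be illustrated with a deliberately simplified one-dimensional sketch. The threshold value and the averaging rule below are my own simplifications for illustration, not the actual loop filter of VP8 or H.264 (which operate on multiple pixels with more elaborate conditions):

```python
# Minimal 1-D illustration of deblocking: a small jump in pixel values
# across a macroblock boundary is treated as a compression artefact and
# smoothed away, while a large jump is assumed to be a real edge in the
# filmed image and preserved. Threshold and smoothing rule are invented.

THRESHOLD = 8  # largest pixel-value jump still considered an artefact

def deblock_boundary(left_block, right_block):
    a, b = left_block[-1], right_block[0]  # pixels flanking the boundary
    if abs(a - b) <= THRESHOLD:
        mid = (a + b) // 2                 # smooth the blocking artefact
        left_block[-1], right_block[0] = mid, mid
    return left_block, right_block         # large jump: real edge, keep it

print(deblock_boundary([100, 101, 103], [108, 109, 110]))  # gets smoothed
print(deblock_boundary([100, 101, 103], [200, 201, 202]))  # edge preserved
```

Real codecs filter several pixel rows or columns on each side and adapt the filter strength; the sketch only captures the core idea of filtering selectively at block boundaries.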
For each macroblock there are different compression methods. The patent mentions "intra coding", "copy coding", "motion-compensated prediction coding", and "not-coded coding". Google and HTC can't dispute that VP8 uses all of these. But their non-infringement argument, which the court views favorably, is that VP8 does not "determin[e] at least one parameter of the filtering operation based on the types of the first and second prediction encoding methods". They say VP8 analyzes the pixels of two neighboring macroblocks, but it does not make a determination by directly checking on which encoding methods were used for the two blocks in question. Nokia, however, says that the pixels result from processing of a macroblock using a certain encoding method, and that there are correlations between the encoding methods used and at least one parameter (they particularly stressed the number of pixel rows/columns subjected to filtering).
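The dispute over that one claim element comes down to where the filter parameter comes from. The sketch below contrasts the two readings; the type names loosely follow the patent's terminology, but the lookup table, the width values, and the pixel rule are invented purely to make the distinction concrete:

```python
# Two hypothetical ways to choose a filter parameter (here: how many
# pixel columns to filter at a block boundary). Nokia reads the claim
# on either; Google and HTC argue VP8 only does the pixel-based variant,
# which at most indirectly reflects the blocks' encoding methods.

# Variant 1: parameter determined directly from the encoding-method types.
FILTER_WIDTH_BY_TYPES = {
    ("intra", "intra"): 4,
    ("intra", "motion-compensated"): 3,
    ("motion-compensated", "motion-compensated"): 2,
}

def width_from_types(type_a, type_b):
    """Look the parameter up from the two blocks' encoding methods."""
    return FILTER_WIDTH_BY_TYPES.get((type_a, type_b), 1)

# Variant 2: parameter derived from the boundary pixels themselves.
def width_from_pixels(edge_a, edge_b):
    """Pick the parameter from the pixel jump across the boundary."""
    jump = abs(edge_a[-1] - edge_b[0])
    return 4 if jump < 4 else 2 if jump < 16 else 0

print(width_from_types("intra", "motion-compensated"))  # from types directly
print(width_from_pixels([100, 102], [105, 107]))        # from pixel values
```

Nokia's theory is essentially that variant 2 still satisfies the claim because the pixel values are correlated with the encoding methods that produced them; the defendants argue the claim requires something like variant 1's direct check.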
At German patent trials claim construction and infringement analysis usually aren't separated as systematically as in U.S. lawsuits, in which the court first provides interpretations of disputed terms (which are later put before a jury if a jury trial is requested, as it is in most U.S. patent infringement cases). In a U.S. proceeding over the same patent (I couldn't find a granted U.S. equivalent of this particular one, but let's assume so for the sake of the argument), the term "determining at least one parameter [...] based on the types of the first and second prediction encoding methods" would have resulted in a claim construction dispute, with Google and HTC trying to defeat Nokia's infringement theory by inserting into the court's construction of the term a word like "directly", while Nokia would probably have argued that the plain and ordinary meaning is fine and has scope for its infringement theory, possibly also proposing an alternative like "depending on" (for "based on"), which is even broader. Additionally or alternatively, the defendant and the intervenor would have tried to have "the first and second prediction encoding methods" interpreted as "the types of the first and second prediction encoding methods".
At yesterday's German trial the question was not how to rephrase the term, but the analysis that had to be performed was the same. Both parties advanced reasonable arguments. Google's counsel pointed to items (a) and (d) in paragraph 24 of the patent specification. Item (a) is "the type of block on either side of the boundary", while (d) relates to "differences in pixel values", which are only influenced by the encoding method types. Google plausibly argues that the description of the patent therefore distinguishes between these two (and two other) criteria. So far, so good. It makes a lot of sense, but it doesn't prove beyond reasonable doubt that the disputed claim term relates to only item (a) and not item (d). Nokia pointed to other parts of the description, such as paragraph 37, which show that the patent discusses an analysis of pixel values.
In order to back up its "royalty-free" claims, Google needs legal certainty, but there are just too many patents in this field covering the same or at least very similar techniques as those used by VP8. There's clearly a lack of certainty if the outcome of a single infringement case depends on whether "based on" can or cannot be reasonably understood to have the meaning of "depending on". And as I wrote further above, the first finding of a likely infringement has already been made in connection with another Nokia patent.
Nokia and Google may also have some patent issues to sort out over Google Maps according to a TechCrunch article. At this stage I just wanted to mention this nonjudgmentally.
Friday, May 31, 2013
German court has apparently found Google's VP8 video codec to infringe a Nokia patent
This morning -- right before an HTC v. Nokia trial over a power-saving patent allegedly infringed by Lumia phones -- Judge Andreas Voss ("Voß" in German) of the Mannheim Regional Court announced a procedural decision in the first Nokia v. HTC lawsuit involving Google's VP8 video codec. The court has reopened the proceedings in order to take an even closer look at the validity of this patent with a view to a decision on whether to stay the case pending a parallel nullity (invalidation) action before the Bundespatentgericht (Federal Patent Court). While Judge Voss did not explain the reasons for this decision at today's announcement other than referring to a post-trial brief on (in)validity, there's every indication that the court has identified an infringement of EP1206881 on an "apparatus and method for compressing a motion vector field" in the aftermath of the March trial.
Here's why:
Under German bifurcation rules, there is no full invalidity defense in an infringement proceeding. Defendants challenging the validity of a patent-in-suit must do so at the (Munich-based) Federal Patent Court, while infringement cases are adjudged by regional courts. Infringement proceedings are frequently adjudged ahead of nullity cases (particularly in a court like Mannheim, a rocket docket for patents), but a regional court can stay an infringement case pending resolution of the nullity action if, in its assessment, there is a high likelihood of invalidation.
Last year (well ahead of any Nokia v. HTC trial) Judge Voss personally explained after the announcement of another decision how his court proceeds in its adjudication of motions to stay infringement cases pending nullity proceedings:
If it finds no infringement, it dismisses a case immediately and never reaches the question of (in)validity.
Once it identifies a first infringement, it immediately interrupts the infringement analysis (if there are other infringement theories -- such as claims against different technologies -- at issue) and focuses on the possibility of a stay pending the nullity case.
In light of the above, the reopening of proceedings to further evaluate the question of (in)validity is most likely attributable to an infringement finding. At the very least there has been a finding of likely infringement. If the court disagreed with Nokia's infringement claims, it would dismiss the case rather than waste court and party resources on the analysis of a question that would never have to be reached in that scenario.
Google, to which this case is of far greater strategic concern than to HTC, is an intervenor in this case. For Google's aspirations to elevate VP8 to an Internet standard, an infringement finding by the most experienced court in the world with respect to information and communications technology patents would be a huge setback. Even if the court stayed the case over doubts concerning the validity of the patent-in-suit, an injunction could enter into force if and when Nokia successfully defends this patent in the proceedings before the Federal Patent Court. For HTC a stay would be great because it would likely settle its dispute with Nokia before a final decision on the validity of this patent -- but Google needs a dismissal, which it didn't obtain today and appears unlikely to obtain later this year.
It's unclear whether there will be a second trial to discuss the new invalidity contentions that resulted in the reopening of the proceedings. It's also possible that the court will decide after further briefing, without another hearing.
Google's problem with Nokia's opposition to VP8 won't go away anytime soon. The next VP8 trial will be held by the same court two weeks from today, and last week Nokia brought a third patent infringement claim against VP8 through a new ITC complaint. All in all Nokia identified to the Internet Engineering Task Force (IETF) a total of 64 patents and 22 pending patent applications that allegedly read on VP8. Nokia can't sue over the 22 applications unless and until a given application is granted, and the 64 patents don't represent 64 different inventions (it's a per-jurisdiction count). While a negative ruling (unless reversed on appeal) in one jurisdiction would make it more difficult for Nokia to prevail over the same patent in other European countries or over other members of the same patent family in non-European jurisdictions, courts in other countries (including other European countries) are still free to disagree with the first ruling.
It's not known at this stage whether companies other than Nokia may assert patents against VP8 in the future. Google did a deal with 11 patent holders affiliated with MPEG LA, but there could be others. Some may be interested only if VP8 becomes an Internet standard and, as a result, is used more widely than today. And the MPEG LA deal is controversial: the FRAND-zero patent license Google proposes is not palatable to key players in the open source movement. The Software Freedom Law Center welcomes it, but the explanation it provides would equally apply to Microsoft's Android and Linux patent license deals, which Google doesn't like at all.
Unless the Mannheim court has changed its procedural approach to patent infringement cases with parallel nullity actions since the explanations Judge Voss gave last year (which is very unlikely), today is a Black Friday for VP8.
Friday, May 24, 2013
Nokia files third patent infringement complaint targeting Google's VP8 video codec
Google's WebM/VP8 video codec faces three kinds of patent issues that make it unlikely to be adopted as an Internet standard anytime soon:
While Google reached a license agreement with 11 companies that identified to MPEG LA patents they believe read on VP8, an open source community leader believes the license terms for implementers of the standard are irreconcilable with key aspects of software freedom and unworkable for open source. It's a FRAND-zero license: zero license fees, but other terms are imposed, and one needs to sign up in order to benefit from the agreement.
There may still be companies holding patents that read on VP8 but which didn't identify them to MPEG LA or didn't participate in the license deal with Google. VP8 is untested in court.
One company that has stated clearly that it opposes VP8's adoption as an Internet standard and is unwilling to extend a license (not even at a FRAND royalty rate, let alone as a freebie) to implementers of VP8 is Nokia. It's already suing HTC over VP8 patents, and it identified to the Internet Engineering Task Force (IETF) 64 granted patents and 22 then-pending patent applications that it believes read on VP8.
The latest news is that Nokia has now brought its third patent infringement lawsuit over VP8, in the form of a new (second) ITC complaint against Android device maker HTC. Item #3 on my list of patents asserted in that complaint is U.S. Patent No. 6,711,211 on a "method for encoding and decoding video information, a motion compensated video encoder and a corresponding decoder", which is also item #71 on the list of intellectual property rights Nokia identified to the IETF. Nokia's infringement allegations filed with the ITC specifically relate to (not only, but primarily) VP8.
This is the 37-page infringement claim chart that quotes extensively from the VP8 specifications (this post continues below the document):
Nokia VP8 Infringement Claim Chart for '211 Patent
The first Nokia patent assertion against VP8, in a lawsuit against HTC in Mannheim, Germany, went to trial in March (Google participated as an intervenor). The patent at issue in that litigation is EP1206881 on an "apparatus and method for compressing a motion vector field". A ruling had originally been scheduled for last Friday but was postponed to May 31 (next week's Friday).
The next VP8 patent trial (the defendant is HTC, again) will be held by the same court on June 14, 2013. The patent-in-suit in that action is EP1186177 on a "method and associated device for filtering digital video images".
Nokia is asserting 50 different patents against HTC in the U.S., UK and Germany, and three of them allegedly read on VP8 and are on the list of patents identified to the IETF. Nokia isn't just saying that VP8 infringes its patents -- it's actually suing to prove it in court.
In all three HTC VP8 cases, Nokia is pursuing injunctive relief. The German cases would result in sales bans (possibly along with a recall from retail and destruction of infringing goods) in that country; if Nokia prevails on its ITC complaint, there will be a U.S. import ban and customs will hold infringing devices upon entry into the United States.
With so much legal uncertainty, Google may at some point realize that VP8 does not offer a fundamental advantage over H.264 in terms of the licensing situation, but represents a questionable tradeoff: H.264 comes with predictable, limited royalties (even more so after Google's Motorola failed to convince a U.S. court of the merits of its exorbitant royalty claims), while VP8 products may get banned without any licensing obligation on the part of Nokia and possibly other patent holders.
The Electronic Frontier Foundation (EFF) has criticized Google for abandoning open standards in connection with chat services (Google Talk), as I just found out via Techmeme. It appears again and again that Google's commitment to openness is very selective, and I believe VP8 is ultimately about control and cost reduction, not openness and freedom.
Saturday, May 18, 2013
Google's FRAND-zero patent license for VP8 threatens to divide Web and FOSS communities
Google is already promoting the VP9 video codec, which may very well raise new patent issues, while pushing for adoption of VP8 as an Internet standard. But the patent license it has drafted for VP8 and just published doesn't meet the requirements of the Open Source Initiative's definition of open source, says the President of the OSI's Board of Directors, Simon Phipps, in a blog post. According to Mr. Phipps, the draft license "shows signs of unfamiliarity with the tenets of software freedom". The OSI can't speak for the Free Software Foundation, of course, but the two organizations share many values, and the FSF's emphasis on software freedom ("[t]he issue is not about price") entails even stricter requirements for acceptable license terms. Simply put, if your proposal doesn't please Simon Phipps, know that Richard Stallman ("RMS") is harder to please.
Historically, the World Wide Web Consortium (W3C) has applied its royalty-free (RF) licensing requirements in ways that ensured compatibility of HTML-related essential patent licenses with the philosophies of Free and Open Source Software (FOSS) organizations, particularly the FSF and the OSI. The Web movement and the FOSS movement have succeeded symbiotically and in tandem: FOSS powers large parts of the Web and drove its adoption, while the Web has allowed FOSS to thrive and contributed greatly to its popularity. If Google wants the W3C to consider its proposed VP8 patent license an acceptable W3C RF license, it effectively asks the W3C to part ways with the FSF and the OSI, after approximately two decades of close and fruitful collaboration. This is utterly divisive.
Shortly after the announcement of an MPEG LA-Google license deal relating to VP8, I was confused about Google's intention to comply with the W3C patent policy: a Google employee, while saying they were planning to comply with that policy, linked to a web page that involved FRAND licensing commitments. Now that Google's proposal has been published, the answer is that Google's proposed VP8 patent license is not a permissive RF license but a typical FRAND-zero (or, synonymously, "RAND-zero") license. Zero license fees to be paid by licensees (though Google presumably paid or pays MPEG LA) -- but reasonable and non-discriminatory terms (field-of-use restrictions, reciprocity) are imposed and, which Mr. Phipps considers the most significant issue with the proposal, "gaining benefit from the agreement requires individual execution of the license agreement".
The final two sentences of the OSI President's blog post declare Google the loser and VP8's rivals the winning camp:
"This document seems to me to be an effective outcome for those in MPEG-LA's patent-holder community who want to see VP8 disrupted. It has provoked an autoimmune response that must have Google's enemies smiling wickedly."
I don't want to speculate about the intentions of the 11 originally unnamed (and since disclosed) companies that contributed patents to the MPEG LA-Google deal, or of the MPEG LA pool firm. Frankly, it doesn't matter what company A or company B wants to achieve in this context. At least for now, Google's own license grant under Section 3 of that proposed agreement raises the same issues that Mr. Phipps criticizes with respect to the other patents involved -- Google isn't being more generous than the MPEG LA group in those respects. At any rate, conspiracy theories aren't even needed when simple business logic can explain everything. If a company believes that video codecs should be available on affordable terms, but that intellectual property holders should be compensated somehow, then it can be Google's best friend and will nevertheless attach certain conditions to a license grant. Such conditions can be monetary and non-monetary. The financial part has been resolved. While I doubt that the patent holders gave Google a freebie (considering that they don't even do this in connection with H.264, the standard they promote), Google can apparently afford those royalties without having to charge end users. There's major strategic value for Google to gain in controlling an Internet standard, as non-MPEG LA-contributor Nokia's comments on its decision to withhold a license implied. So Google picks up the bill. But the non-monetary terms shine through its proposed "VP8 Patent Cross-license Agreement".
Mr. Phipps says it's probably "unworkable" for the FOSS community, and at the very least unacceptable, that a licensee must identify itself and sign up to get a license, including downstream users since there's no right to sublicense. The FOSS approach is that someone just grants you a license and the downstream is automatically licensed, too, so you can share freely without any bureaucracy or loss of data privacy involved for anyone. But let's think about the modus operandi of those third-party patent holders, wholly apart from any theories of world domination or destruction. They want a reciprocal license (Section 5 of the proposed VP8 license). That's why Google calls this a "cross-license". It would be foolish for them to make their VP8-essential patents available when a beneficiary of their license grant can withhold a license. But they must have a reasonable degree of legal certainty that they can use the other party's back-licensing obligation as a defense to infringement claims. And that's why they need a formal cross-license agreement in place. Otherwise the licensee could later claim that it never consented to that license grant.
Google itself is a good example -- "good" only in terms of suitability, though bad in terms of behavior -- of why reciprocal-licensing commitments must be formalized. Courts in three different countries have already found Google to fail to honor grant-back obligations vis-à-vis Microsoft -- two of them formally ruled on this (England and Wales High Court, Mannheim Regional Court; both in connection with ActiveSync), and the third one (the United States District Court for the Western District of Washington) did not formally adjudge the issue because Google itself was not a party to the relevant case (only its Motorola Mobility subsidiary was), but nonetheless stated that Microsoft was an intended third-party beneficiary of the Google-MPEG LA agreement concerning H.264. And in those cases, Google had identified itself and formally signed license agreements, but it still disputed the applicability of those terms. Now imagine what would happen if someone with Google's mentality, which a U.S. judge described as "what's mine is mine and what's yours is negotiable", refused to honor a grant-back obligation and claimed that there wasn't even an enforceable agreement in place... especially in jurisdictions that don't even recognize the concept of third-party beneficiaries to an agreement.
As for field-of-use restrictions, Mr. Phipps criticizes that the license doesn't cover you "[i]f you're writing any multipurpose code or if the way you're dealing with VP8 varies somewhat from the normal format -- perhaps you've added capabilities". Again, let me remind you that Google's own license grant under the proposed agreement comes with the same restrictions. Google itself apparently doesn't want people to modify VP8. It wants to control it. Just the way it controls Android through its arbitrarily-applied compatibility definition. Even if Google ultimately agreed with Mr. Phipps and allowed modifications with respect to its own patents, it would still have to convince those third-party patent holders to grant an equally permissive license. But in that case, someone could use patents that also read on, for example, H.264 and call it a modification of a licensed VP8 codec. Just like Mr. Phipps considers certain aspects of the proposed license "unworkable" for open source, so would it be unworkable for patent holders who generally license their patents on commercial terms to grant a license without any field-of-use restriction (and to an unidentifiable, unlimited number of beneficiaries).
The OSI President hopes that Google will improve this license agreement. But whether it can is another question. It can probably make improvements with respect to its own patents, and I believe that's what it should do at a minimum. This would affect its ability to monetize Motorola Mobility's H.264 declared-essential patents, but those have been found to have very little commercial value anyway. At least Google would show that it respects the FOSS philosophy.
Finally I'd like to talk about what the terms of the proposed license say about the need Google saw to take a license from those 11 MPEG LA contributors. After the announcement of the license deal some people argued that Google merely wanted to avoid litigation but that the agreement didn't constitute an admission of the very third-party patent infringement issues Google had denied for a long time. In other words, they said Google was paying for peace of mind, not for essential intellectual property.
It's true that sometimes license deals are struck even though the licensee is convinced of the merits of its case. That's the nuisance-value business model of certain patent trolls: they'll sue you over meritless claims and offer a license at much less than the cost of a proper defense (which is usually not recoverable in the United States). However, I believe that when all the parties to an agreement are not patent trolls but (as Judge Robart described Google in the MPEG LA H.264 context) "sophisticated, substantial technology firm[s]", there must be a strong presumption that a license deal doesn't just involve bogus claims. And that presumption is further strengthened if a licensee insisted over the years that certain claims had no merit.
Granted, a presumption, even a very strong one, still isn't proof. One needs to know the actual terms of an agreement to have clarity. None of them were announced two months ago. It's just clear that whatever Google pays is enough that Google can just absorb the costs for the downstream. The amount of money involved could be more, or even much more, than what is needed to prove that Google took those infringement allegations seriously, but if Google pays for it silently (because it can afford it), we won't know. Now at least some of the non-monetary terms are clear -- or they will be clear with definitive certainty if Google, despite criticism from Mr. Phipps and others who will agree with him (or even go beyond his criticism), can't offer a license to those third-party patents on permissive terms. The non-monetary terms demonstrate that Google took those infringement allegations seriously. Otherwise it wouldn't have drafted a license that threatens to divide the Web and FOSS communities, which in turn would have major impact on Google's own open source reputation. The non-monetary price Google is willing to pay here is so substantial that I believe it would have chosen to defend itself in court against any infringement claims (which it could have done proactively through declaratory judgment actions) if it had truly thought that all those infringement allegations were bogus, as it would have had all of us believe.
Saturday, March 23, 2013
Nokia comments on VP8 patent infringement assertions filed with IETF, criticizes Google
After discovering and blogging about Nokia's declaration of intellectual property rights (IPRs) allegedly infringed by Google's VP8 video codec I asked Nokia for comment. The declaration itself just listed 64 granted patents and 22 pending patent applications, and stated that Nokia would not commit to royalty-free or even just FRAND licensing with respect to VP8. Meanwhile a Nokia spokesman has sent me the following response:
"Nokia believes that open and collaborative efforts for standardization are in the best interests of consumers, innovators and the industry as a whole. We are now witnessing one company attempting to force the adoption of its proprietary technology, which offers no advantages over existing, widely deployed standards such as H.264 and infringes Nokia's intellectual property. As a result, we have taken the unusual step of declaring to the Internet Engineering Task Force that we are not prepared to license any Nokia patents which may be needed to implement its RFC6386 specification for VP8, or for derivative codecs."
A few observations:
Nokia describes H.264, which resulted from the consensus of dozens of leading industry players and patent holders, as "open and collaborative", while criticizing Google's push to elevate VP8 to an Internet standard as "one company attempting to force the adoption of its proprietary technology".
Different companies mean different things when talking about "open" versus "proprietary" technology. The way H.264 was defined definitely meets all the criteria for an open standard. The process was inclusive, collaborative, and consensus-based. By contrast, VP8 was created by a single company, which Google acquired. That's the very opposite of inclusion.
VP8 is sometimes also described as an "open source" codec. But even H.264 has been implemented in open source software, and it's fully documented, so anyone can implement it. There's nothing "closed source" about H.264. There are closed-source H.264 codecs, just as anyone can create -- and some have created -- closed-source VP8 implementations. It's not like VP8 comes with a GPL-like copyleft mechanism.
So in which ways would some argue VP8 is "open" and H.264 is not? All that you ultimately hear from VP8 supporters comes down to patent royalties. But I don't know any dictionary definition of "open" that corresponds to "free of charge". You can find millions of closed-source programs on the Internet that you can download for free (like Chrome, the browser with which I wrote this post), just as you can pay millions of dollars for program code that is actually provided to you for inspection. Wholly apart from true and twisted meanings of the word "open", even if one wanted to define "open" as "royalty-free", Nokia's IPR declaration and its ongoing VP8-related patent lawsuits (one trial has already taken place this month, another one will be held in June) suggest that implementing VP8 may ultimately prove even more costly than implementing H.264. For H.264 all the essential patent holders have made a FRAND licensing commitment, and Google itself, as a suspected FRAND abuser under antitrust scrutiny, has already experienced that U.S. courts, including Judge Posner (who ruled against Google on FRAND) and the highly influential Court of Appeals for the Ninth Circuit (which supported Microsoft against Google), consider such FRAND pledges to be enforceable contracts. No promise, thus no contract, exists in Nokia's case with respect to VP8. Nokia can demand anything or simply withhold licenses altogether.
It's not even clear how serious Google is about VP8 being royalty-free: instead of linking to the W3C's royalty-free patent policy, an email a Google employee sent to a mailing list referred to a FRAND standard.
I can relate to Nokia's objections to Google's quest for world domination. Google owns the largest video website, YouTube. It owns the market-leading smartphone and tablet computer operating system, Android. And above all, the leading search engine and online advertising business. A video codec standard of Google's choosing -- driven by a desire not to pay royalties to those who invested heavily over the years in technological progress in this area -- is not a good idea.
It's not just Nokia's concern. A famous app developer who dislikes software patents, Instapaper author Marco Arment, has also pointed out in a recent blog post that "'open' has very little to do with anything they [Google] do":
"What they're really doing most of the time is trying to gain control of the web for themselves and their products."
Google is so aggressive and its agenda poses so much of a threat to competition that even software patent critics at some point believe that some reasonable patent enforcement may be necessary to thwart the most problematic of Google's initiatives.
Nokia's statement describes its VP8 IPR declaration as an "unusual step". Unusual, but not unprecedented. I sided with Apple against Nokia in the nano-SIM context. At some point Nokia declared itself unwilling to make its patents available to implementations of Apple's proposal. So did Google's Motorola. After some of its concerns were addressed, Nokia ultimately agreed to make its related patents available on FRAND terms. But I don't expect such a solution here. Nokia is a member of ETSI, but not of the VP8 consortium, and I don't think it will partner with Google on VP8 anytime soon. ETSI needed a way forward for a new SIM card standard, while no one (besides Google) really needs VP8 given that H.264 works fine for the industry at large and for consumers (and it already has a successor, H.265). Even if Nokia -- contrary to what I believe will happen -- agreed to the same kind of solution, a FRAND licensing commitment would be antithetical to Google's claims that VP8 is royalty-free and unencumbered, and irreconcilable with the W3C's royalty-free patent policy.
Monday, May 10, 2010
Video codecs: food for thought
The patent thicket problem
A couple of weeks ago I stated my conviction that there's no such thing as an open video codec that can be guaranteed to be unencumbered by patents. I regret to say this, but like I wrote then, the field of multimedia formats is a true patent thicket. Ed Bott, a ZDNet blogger, counted 1,135 patents from 26 companies just in the H.264 pool. That is only one of the multimedia standards MPEG LA commercializes, and there are patent holders who don't work with MPEG LA but who may also have rights that are relevant to Theora (or VP8, for that matter).
Those 1,135 patents refer to registrations in a total of 44 different countries (with different numbers in each country). So there are certainly many duplicates in terms of the scope of the patent claims. Nevertheless, even just a fraction of 1,135 is still a huge number considering that it's about a single codec.
Contrary to a popular misconception, patent law doesn't stipulate a 1-to-1 relationship between patents and products. In the pharma sector there is sometimes only one new patent on a given product, or maybe two or three. In software, every little step of the way is, at least potentially, patentable. That's why even a codec like Theora or VP8 might, despite its different background, infringe on some MPEG LA patents.
The Xiph.Org Foundation's president, Christopher 'Monty' Montgomery, wrote that if Steve Jobs' email was real, it would "strengthen the pushback against software patents". I'm afraid the pushback out there isn't strong enough to bring about political change in that regard (because small and medium-sized IT companies aren't truly committed to the cause). But now that more and more people are looking into the threat that patents pose to FOSS (and other software), there will be greater awareness of the patent thicket problem and of the fact that patent law creates huge numbers of little monopolies rather than serving to protect complete, functional products or technologies.
The question of relative safety in patent terms
I agree with the FSFE's president, Karsten Gerloff, that "[j]ust because a standard calls for licensing fees does not mean that the users are safe from legal risk". It's true that there might even be patents that could be asserted against H.264 but aren't under the control of MPEG LA, for reasons such as the ones I outlined recently.
However, given the extremely widespread commercial use of H.264, including by some of the prime targets of patent trolls, the fact that no such patents have been asserted against H.264 licensees so far certainly makes a number of people reasonably comfortable. While there is also significant use of Theora (and related technologies), its adopters aren't nearly as attractive targets for patent trolls as the users of H.264. Besides patent trolls, there are those large commercial patent holders, and as I explained before, the MPEG LA pool is so big that problems for Theora are in my opinion not outside the realm of plausibility.
The question isn't how attractive a target Theora has been so far. If it were elevated from its current status to a part of the HTML 5 standard, we'd be talking about a commercial relevance that is easily 100 times greater.
The need for consensus in the HTML 5 standard-setting discussion
HTML 5 is an extremely important leap forward and the W3C certainly wants to achieve consensus that results in consistent support by the major browser makers. This is also in the interest of web developers and website operators. If the W3C imposed Theora as the standard video format against the concerns the leading proprietary browser makers voice, this could result in inconsistent implementations of what should become a common basis for all browsers. That, in turn, would mean a lot of potential hassle for the web community.
The market relevance of Apple and Microsoft is significant enough that even without the patent uncertainty argument the preferences and positions of those vendors must be taken into account by the W3C. Their support for H.264 doesn't mean they leverage their relevance as platform companies in order to push another product of their own. H.264 is a multi-vendor patent pool, and of the 1,135 patents in the H.264 pool, Apple contributed only one and Microsoft only 65 (less than 6% of the total), according to Ed Bott's count. Both companies are also H.264 licensees and, quite plausibly, net payers (getting charged more license fees for their own use than the share of MPEG LA income that they receive). They may very well have strategic reasons for which they favor H.264, but that would be another story.
The burden of proof in the HTML 5 standard-setting discussion
I can understand the frustration of FOSS advocates and, especially, the Xiph.Org Foundation that some companies make references to uncertainty surrounding patents Theora may infringe without telling the public which those patents are. At the same time, I don't think anyone could have expected Steve Jobs to include a list of patents in that email about open-source codecs, which was just a high-level explanation of his views.
Unfortunately for Theora's developers and other supporters, there is no such thing as a burden of proof on browser makers saying they're uncomfortable with Theora because of patent-related uncertainties.
If the proponents of Theora want to disprove the "uncertainty" argument, they can't just refer to the fact that nothing has happened yet. If Theora were elevated to a part of the HTML 5 standard, the resulting adoption would represent a fundamental change of the situation.
Unfortunately, it's easier to make the case for than against a possible infringement of patents by a given piece of software. If a patent holder wants to document an infringement, there are different formats, the most popular one being a so-called claim chart. If the Xiph.Org Foundation and its allies now wanted to show that Theora doesn't infringe on any patents, they'd theoretically have to look at every patent out there.
That wouldn't be possible, but how much of an effort would be reasonable?
Under normal circumstances I believe one couldn't expect an open-source project to undertake any patent clearance of this scale. However, if companies such as Google and Opera and a formal non-profit with very deep pockets such as the Mozilla Foundation push for a standards decision with far-reaching implications for the whole industry, then I don't think it would be unreasonable to expect that they should at least look at the patents in the MPEG LA pool and perform patent clearance for Theora with respect to those.
As long as they don't make that kind of reasonable best effort, their argument about Theora being patent-safe amounts to "trust us". I said that I agree with the FSFE that the availability of a patent pool doesn't guarantee that the pool is complete. Nor does the opposite situation (developers electing not to take out patents) guarantee anything.
I don't know what Theora's proponents and opponents laid on the table in internal discussions at the W3C level. What I just wrote is based on the public debate. Also, what I wrote about Theora would equally apply to VP8 if Google proposed its inclusion in HTML 5 (after possibly open-sourcing it).
Is H.264 licensing a practical alternative for FOSS?
I asked MPEG LA, the patent pool firm that manages H.264 and other codecs, whether it would -- hypothetically speaking -- be possible for Mozilla (the maker of Firefox) to license H.264 and then make it available to everyone on Free and Open Source Software terms including the right for users to include the code in derived works. This is the answer MPEG LA gave me:
MPEG LA’s purpose is to provide voluntary licenses of convenience to users enabling them to have coverage under the essential patents of many different patent holders as an alternative to negotiating separate licenses with each. The licenses are nonexclusive and limited to coverage in connection with the applicable standard (e.g., AVC/H.264) being licensed. Therefore, although MPEG LA does not regulate this space directly, as you point out, users are not authorized to use the licensed technology beyond these limitations without payment of applicable royalties or other licenses from patent holders permitting such use.

Under our AVC License, the Licensee is the party providing the AVC end product in hardware or software. Therefore, for products where Mozilla is the Licensee, it would be responsible for paying the royalties and notifying users of the License coverage, and where other parties are Licensees, those responsibilities will fall upon them. In normal usage such as personal use, no additional License or royalty is necessary because applicable royalties are paid by the end product supplier, but additional License rights may be required where the codec is used for other purposes such as subscription or title-by-title sale of AVC video.

That answer doesn't mention Free Software or open source, but it clearly reaffirms that "users are not authorized to use the licensed technology beyond [certain] limitations without payment of applicable royalties or other licenses [...]", and such limitations aren't compatible with FOSS licenses. They clearly go against both the Free Software Definition and the Open Source Definition, with their respective prohibitions of discrimination against certain types of use and the requirement that such use be allowed free of charge.
While it's clear that code made available under a FOSS license couldn't practically implement H.264, the alternative approach would be for a FOSS browser maker such as Mozilla to include a proprietary plug-in in a distribution to end users. The proprietary plug-in would be installed automatically, but the license terms would make it clear that, unlike the FOSS code that is part of the same distribution, that part can't be incorporated into derived works without obtaining a license to the H.264 patent pool from MPEG LA.
Canonical (Ubuntu) and OpenOffice are comfortable with proprietary extensions to free software
Ubuntu maker Canonical has chosen that mixed free-unfree software approach. This caused some outrage among parts of the community (since it gave the impression of a FOSS company supporting H.264 against Theora), and Canonical had to justify its approach. I interpret Canonical's "clarifications" as a recognition of the fact that H.264 is commercially extremely relevant, while the company tries to maintain its FOSS image as much as it can.
There's a similar debate now concerning OpenOffice, for which there are free as well as unfree plug-ins and certain FOSS advocates would like unfree ones to be excluded from the project's official list of extensions. Bradley Kuhn, a Free Software Foundation board member, expressed his personal views in a blog post, "Beware of proprietary drift". It seems the Free Software Foundation lost this argument and the OpenOffice project will continue to welcome extensions that aren't Free Software.
While patents aren't explicitly discussed in the OpenOffice context, this is clearly an example of where things may be heading, contrary to the FOSS purism some people advocate. Proprietary extensions to OpenOffice could also contain patented elements.
Will the W3C at some point have to depart from its royalty-free standards policy?
My prediction is that there won't be a solution for an HTML 5 video codec that proprietary and FOSS-oriented vendors can reach consensus on. The current diversity of codecs and plug-ins is suboptimal but acceptable: it certainly hasn't prevented web video from becoming extremely popular. So there isn't really a pressing need to converge on a single standard for now.
In the long run it remains to be seen whether the W3C can maintain its royalty-free standards policy. That approach has been key to the success that web technologies have had so far, but it could, as the situation concerning codecs demonstrates, increasingly impede progress.
In the early days of web technology development, the field drew little attention from big industry or from patent trolls. Hence it was possible to create patent-free standards.
The kind of technology created at that time was also far simpler than today's advances in online media. The field has become very sophisticated, which has many implications including the consequence that patent thickets related to new web technologies will reach previously unseen heights in terms of size and density.
In this earlier blog post I wrote, under the subhead "The FOSS way of innovation exposes all FOSS to patent attacks", that patents reward the first to patent a new idea, while FOSS innovation is usually of a different kind (with a few exceptions). That's also an important kind of innovation, but it's not favored by the patent system and may therefore not be a sufficient basis for future web innovation.
HTML may become like GSM, at some point requiring licenses to large numbers of patents
As the web advances in technological terms, and given that software patents are extremely unlikely to be abolished in the largest markets anytime soon, the W3C may in a matter of only a few years feel forced to revisit its standards policy.
It takes licenses to thousands of patents in order to build a GSM phone, and at some point it may be required to license large numbers of patents to build a fully functional HTML web browser. I'm afraid it's only a question of when, not if it will happen.
If you'd like to be updated on patent issues affecting free software and open source, please subscribe to my RSS feed (in the right-hand column) and/or follow me on Twitter @FOSSpatents.
{Video codecs} Accusations flying in the aftermath of Steve Jobs' email
After Steve Jobs made a thinly-veiled threat of patent enforcement against Theora and other open-source codecs, two key players from the Xiph.Org Foundation (the organization behind Theora) responded publicly. Its founder, Christopher 'Monty' Montgomery, sent his quick comments to the media (I also received them from him directly when emailing him after seeing Steve Jobs' email). His colleague Gregory Maxwell, the Theora project leader, sent his reaction to a public mailing list. A few days later, Karsten Gerloff, the president of the FSFE, stated his opinion on his blog.
The two Xiph leaders and the FSFE president took different angles but all of them doubted that Steve Jobs' threat had any substance. They used different terminology ranging from "blackmail" to (in a semi-hypothetical context) "jackbooted thugs". Those are hard words, but are they backed up by hard facts? Let's look at them one by one.
Are those patent holders dogs that bark but don't bite?
The official Xiph.Org statement starts by mentioning a long history of veiled patent threats against Ogg multimedia formats, ten years ago with respect to Ogg Vorbis (the audio format) and in recent years against Theora (the video format from the same family). Monty then concedes that this time it might "actually come to something", but he won't worry until "the lawyers" tell him to.
If the veiled threats Monty refers to came to nothing in the past (since no legal action against those open-source codecs was actually undertaken), I can understand the Xiph.Org Foundation's wait-and-see approach. However, a famous Spanish proverb says (in literal translation) that "the pitcher goes to the well so often that it ultimately breaks."
For whatever reasons -- one of which may be that suing open source over patents hurts a company's popularity among software developers -- certain patent holders may have refrained from legal action in the past, but we may now have reached (or be nearing) a point where at least some of the relevant patent holders are indeed prepared to strike. A reluctance to sue need not be an impediment forever. When weighing the pros and cons of legal action, patent holders may come down on the "no" side in one year and on the "yes" side a few years later under different market circumstances.
One field that is very litigious -- and for which HTML 5 and video are going to be fairly relevant -- is the mobile communications sector. Apple and Nokia are suing each other in different courts in parallel. Apple is suing HTC. Those actions are real, and they give cause for concern that the mobile web may bring that sector's litigiousness with it.
FSFE president Karsten Gerloff, in an effort to question the substance of Steve Jobs' infringement assertion, argued that patent holders -- especially some of those who have contributed to MPEG LA's H.264 pool -- only make unspecified threats and are too afraid to actually take their patents to court (which could result in patents being invalidated over prior art, or in a court interpreting a patent claim more narrowly than its owner does). I understand his motives, and they are good, but I have a different impression of how far Apple is willing to go. In its litigation with HTC alone, which is not the only case to which Apple is a party as we speak, Apple is asserting 20 patents.
Karsten makes a similar claim about Microsoft and the possible infringement of some of its patents by the Linux kernel. But it's not hard for me to imagine that there may be (easily) hundreds of Microsoft patents that have the potential to read on the Linux kernel. The ones that are most frequently heard of, the FAT patents, have survived various patent busting attempts due to the way patent law unfortunately works, a fact on which I reported recently.
I strongly doubt that companies of the nature and stature of an Amazon or HTC would pay Microsoft patent royalties without substance, merely on the basis Karsten speculates about. Those companies have nothing to gain from a press release confirming royalty payments to one major patent holder (even without specifying details, which simply isn't usually done). Such an announcement can actually encourage others who believe they have patents reading on GNU/Linux to try to collect royalties from the same licensee.
All rights holders will prefer to achieve their objectives without suing, which is always just a last resort, but that doesn't make it safe to assume they aren't prepared to sue, especially if they have already proven willing to do so or are, like Apple, proving it right now.
Is there an antitrust problem?
Monty and Gregory (both of the Xiph.Org Foundation) allude to antitrust issues in their statements while I can't see any problems in that regard.
Monty says about MPEG LA that "they assert they have a monopoly on all digital video compression technology, period, and it is illegal to even attempt to compete with them." Monty notes they don't say exactly that, but it appears to be how he interprets their past statements on these kinds of issues.
Assuming -- just for the sake of the argument -- that MPEG LA's patent pool indeed does cover so many codec-related techniques that no one can build a competitive codec at this stage without infringing on at least some of those patents, that would (in case it's true) constitute a monopoly. However, in that case the only obligation that regulatory authorities could impose on MPEG LA under competition rules would be to make its IP available on a RAND (reasonable and non-discriminatory) basis. In other words, they can charge something (there's no way that competition law could justify an expropriation without compensation), but they aren't allowed to overcharge.
When Steve Jobs wrote that a patent pool was being assembled to "go after Theora" and other open-source codecs, he didn't say that the objective would be to shut everyone else down; it could also simply mean collecting royalties from those using that technology. As long as those royalties are RAND, there wouldn't be any anticompetitive behavior, but Theora would lose its royalty-free status. It could still compete, but the playing field would look different from the way Theora's proponents describe it as of now.
Gregory's email statement quotes a US Department of Justice statement on licensing schemes premised on invalid or expired intellectual property rights not being able to withstand antitrust scrutiny. I can't see that this reduces in any way the legal risk for Theora and its proponents. I assume that there are, unfortunately, large quantities of valid and non-expired patents related to codecs.
I also can't think of any legal theory under which patent holders forming a pool to assert rights against Theora would have to contact the Xiph.Org Foundation beforehand. Not only is there no legal obligation, but I also think that if there are patent holders who (unfortunately) own patents that read on Theora, they are free to coordinate their efforts and present a united front to Theora's supporters.
The term "anti-competitive collusion", which appears in Gregory's email as one of the possible explanations for what's going on, is unclear to me. While my sympathy is with an open-source project, this is just about what would or would not be legal if undertaken, a question on which I reach, to my own dismay, a somewhat different conclusion.
Is there a risk of H.264 becoming too expensive?
Karsten (FSFE) is afraid of a future H.264 "lock-in" and the cost increases this could result in:
It hardly takes economic genius to determine that when enough people and works are locked into H.264, the MPEG-LA will have every incentive to start charging any fee they please. (Oh, and don’t you dare use that expensive camera for professional purposes. Your H.264 license is purely for non-commercial use.)

Lock-ins can indeed come with a hefty and ever-increasing price. The mainframe hardware market, in which IBM has a monopoly, is a good example: for a given amount of RAM, the going rate is 60 times what it is for an Intel-based PC.
However, in the specific case of H.264 and the license fees charged by MPEG LA now and in the future, there are assurances that a scenario of "charging any fee they please" (as Karsten wrote) won't happen.
As I explained further above, if MPEG LA had a monopoly because any video codec (at least any codec that would be competitive in today's market) needs at least some of their patents, then antitrust rules would require RAND pricing. Otherwise, if those patents don't cover the entire field, there could and would be competition, which would gain traction in the market especially in the event of price hikes.
One must also consider that MPEG LA's current pricing is very far from "any fee they please" (even though in a perfect, software-patent-free world the price would be zero), and they have promised to keep future price increases within certain limits. For those interested in those pricing questions, I strongly recommend Ed Bott's ZDNet blog post, "H.264 patents: how much do they really cost?" His analysis contains a number of good points that are consistent with my own analysis of the information available on MPEG LA's website. While controversial (starting with its headline), his blog post "Ogg versus the world: don't fall for open-source FUD" is also quite interesting.
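To put concrete numbers on "very far from 'any fee they please'": MPEG LA's published AVC royalty schedule for branded encoder/decoder products at the time (the same figures Ed Bott's post walks through, as I read them) charged nothing for the first 100,000 units per year, US$0.20 per unit up to 5 million units, US$0.10 per unit above that, subject to an annual enterprise cap (US$6.5 million was the figure announced for 2011-2015). A minimal sketch of that tiered schedule:

```python
# Sketch of MPEG LA's tiered per-unit AVC royalty schedule for branded
# encoder/decoder products, as published at the time: first 100,000
# units/year royalty-free, US$0.20 per unit up to 5 million units,
# US$0.10 per unit above that, with an annual cap (US$6.5 million was
# the figure announced for 2011-2015).
FREE_UNITS = 100_000
TIER1_CEILING = 5_000_000   # total units covered by the $0.20 tier
TIER1_RATE = 0.20
TIER2_RATE = 0.10
ANNUAL_CAP = 6_500_000.0

def avc_royalty(units_per_year: int) -> float:
    """Annual royalty in US dollars for a given yearly unit volume."""
    tier1_units = min(max(units_per_year - FREE_UNITS, 0),
                      TIER1_CEILING - FREE_UNITS)
    tier2_units = max(units_per_year - TIER1_CEILING, 0)
    royalty = tier1_units * TIER1_RATE + tier2_units * TIER2_RATE
    return min(royalty, ANNUAL_CAP)

# A vendor shipping 50,000 units/year pays nothing; one shipping
# 10 million units pays 4.9M * $0.20 + 5M * $0.10 = $1,480,000.
print(avc_royalty(50_000))      # 0.0
print(avc_royalty(10_000_000))  # 1480000.0
```

The structural point is that the marginal price declines with volume and total liability is capped outright, which is the opposite of a pricing scheme designed to charge "any fee they please".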
Having analyzed in this post some of what's been said in the debate, I will outline some of my own thoughts in the following post, including what I believe the W3C may have to consider at some point.