Diary of an Engine Diversity Absolutist


Browsers play a pivotal role in the web, but does the web need multiple browsers? I think most web professionals would say yes, but how about browser engines, the underlying software platforms that browsers are built on? Does the web need multiple engines? If so, why? To unpack this a bit, we need to talk about “open.” In the web standards world, we like to make a big deal about how the web is open. But what does open mean? The web is open in the sense that anyone can build something with it – that there are no gatekeepers. It is also open in the sense that it’s built on top of open standards.

There are many definitions out there of what makes a standard open. One I particularly like comes from the UK Government’s Open Standards Principles, which partially define an open standard as one that has:

• collaboration between all interested parties, not just individual suppliers

• a transparent and published decision-making process that is reviewed by subject matter experts

• a transparent and published feedback and ratification process to ensure quality

You could say the web is “open” because the process by which it is developed and maintained adheres to these simple ideas of transparency, collaboration, and wide review among many stakeholders. It’s also open in the sense that you as a web user can choose what browser you use, and you can make that choice based on criteria that matter to you.

Today, in one of those Twitter arguments that I usually try not to get involved in, I saw the following statement from Alex Russell, head of standardization for Google Chrome (and a friend).

I have an enormous amount of respect for Alex and for the work he has put into the web “project,” and he is not entirely wrong. His work on initiating the idea, and some of the core components, of progressive web apps alone makes him one of the key contributors to the web platform.

However, Alex has gone on record here, and I think the wording he’s using, and its implications, need to be addressed. The sentiment about “engine diversity” points to a growing mindset among (primarily) Google employees who are involved with the Chromium project: one that treats landing new features in Chromium as a much higher priority than working with other implementations. As a web technologist, a member of the web standards community, and someone who works on browsers, I think this mindset shift is problematic. So I would like to address that growing mindset and why I think it’s not a good thing for the web.

Ok – so let’s talk about concrete benefits of having multiple web rendering engines. Those of us who lived through the early 2000s will remember well the time period when Microsoft’s Internet Explorer had a stranglehold on the Web. It may be difficult to visualize, but as recently as 2008, IE had something like a 70% market share. In 2004, over 90% of web usage happened through IE. People who lived through that period remember what happened. Innovation on the web effectively halted – the web was considered by many to be a dead platform. It was the advent of a new browser, Firefox, powered by a new engine, that revitalized the market and paved the way for the era we are living through today: an era of unprecedented innovation and growth of the web platform.

“But it’s not 2004, Dan,” I hear you saying. “And Microsoft’s Internet Explorer was closed source and proprietary, so this is a flawed analogy.” I hear you. Yes, it’s true, the Chromium platform is open source and has many other contributors besides Google, including my employer, Samsung (and Microsoft, for that matter). Actually, I am very much a supporter of the Chromium project, which I think is great and which has spawned many other browser projects that are moving the web forward in important ways.

There is still the matter of governance for the Chromium project, which is very tightly bound to Google Chrome and to Google-employed people. Sometimes, when I’ve been reviewing new web features proposed by the Chromium team, I’ve been referred to documents written in Google Docs and made available only to Google employees. Innovation on the Chromium platform is controlled by Google and bound to a Google vision of what the web is and what the web should be, prioritized by Google based on its own strategic interests.

We also know from the MDN Web Developer Needs Assessment results from 2019 that cross-browser compatibility is one of the key pain points of web developers. Web developers care about compatibility between browsers and engines because their users are using those browsers and engines. This puts additional pressure on implementers to get wide review and develop their new technologies in an open and transparent manner.

As noted above, one of the key features of the open standards processes that underlie the web is wide review. The reason for wide review is to keep the web honest. The web is not just a single open source project. It is a medium which is part of the commons in a number of important ways. Wide review sometimes means people disagree with you. I’ve been honored to be a part of that wide review process through the reviews of new specifications that we’ve been doing in the W3C TAG, a process that was prompted in part by Alex Russell during his tenure on the TAG.

Since we started the formal process of TAG review in 2013, I’ve seen many, many examples of new web features that have benefited from this process and approach. Because TAG review also includes a requirement to respond to our security & privacy self-review document, it makes implementers think about and pay attention to security & privacy issues. You only need to look back at our 350+ closed issues to get a feeling for the impact this process has had. But let’s take a look at a few recent examples to highlight where wide review has had a positive benefit. A few notable recent reviews where I feel we are making an impact (“when web standards go right”) follow, with a short code sketch after the list:

  • Contact API: I love this API because I have personally worked on maybe 4 or 5 projects that have sought to bring access to someone’s address book to the web. And it’s something we do need if we want to create better user experiences for the mobile web, specifically for Progressive Web App developers. And it’s fraught with privacy issues, because it’s an API that connects the wild web, a place riddled with malware and bad actors, to some of a person’s most valuable and most private information.
  • The WebXR API: This is a great example of the web standards process working extremely well. The original API was developed in a W3C Community Group. When it became more mature, a W3C Working Group was created. This group brought in multiple browser implementers (including multiple engines), and other parts of the ecosystem. There was wide review, including a TAG review. Feedback from the wide review was taken on board and now implementations are being worked on and rolled out. We need more things like this.
  • Clipboard Access APIs: This is another one of those powerful APIs that also bring risks to the web platform. This API is a really good example of an effort where wide review has made a difference. The original specification included some user flows without a strict requirement for user activation. Put another way, it could have meant that in some cases a web page that you happen to have open in a tab could have had access to the contents of your system clipboard without your knowledge. Since the system clipboard often contains private information, we pushed back on this from the TAG, and it looks like a re-think on these aspects is in progress.
  • Badging API: This is another API that is really important for Progressive Web App developers, or anyone who wants to build an application that has some control over the indication of “new activity that might require […] attention.” The original design was very Chrome-centric and very centered around PWAs. On the back of conversations between browser makers at last year’s W3C TPAC meeting, it looks like we are headed towards a design and approach that takes into account multiple approaches (for example, also including an indication in a browser tab on desktop as well as an icon on a mobile device).
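
To make the shape of these APIs more concrete, here is a minimal TypeScript sketch of how a page might feature-detect and call them. It’s my own illustration rather than code from any of the specs: availability varies by engine, and some of these interfaces aren’t in TypeScript’s standard DOM lib, hence the loose cast.

```ts
// A sketch, not production code: each call is feature-detected because
// none of these APIs is implemented in every engine. Some are also
// missing from TypeScript's standard DOM lib, hence the loose cast.
const nav = navigator as any;

// Contact Picker: user-mediated, so the page only ever sees the entries
// the user explicitly picks, never the whole address book.
async function pickContacts(): Promise<unknown[]> {
  if (!nav.contacts) return []; // engine doesn't ship the Contact Picker
  return nav.contacts.select(["name", "email"], { multiple: true });
}

// WebXR: check for support before offering an "Enter VR" button.
async function vrSupported(): Promise<boolean> {
  return nav.xr ? nav.xr.isSessionSupported("immersive-vr") : false;
}

// Async clipboard read: requires user activation (e.g. a click handler),
// so a page idling in a background tab can't silently read the clipboard.
async function pasteInto(field: HTMLInputElement): Promise<void> {
  field.value = await navigator.clipboard.readText();
}

// Badging: surface "new activity" on an installed app's icon (or, in the
// multi-engine design discussed above, a browser tab on desktop).
async function showUnread(count: number): Promise<void> {
  if (!nav.setAppBadge) return; // Badging API not available in this engine
  await (count > 0 ? nav.setAppBadge(count) : nav.clearAppBadge());
}
```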

Last year, I worked on a TAG finding, the Ethical Web Principles, which (among other things) reiterates the web is multi-browser:

“The constant competition and variety of choices for our users that come from having multiple interoperable implementations means the web ecosystem is constantly improving.”

W3C TAG Ethical Web Principles

TAG review is only one facet of wide review of new features, however. There is also robust discussion and review of new features in domain-specific working groups, in the WHATWG community, and (in the case of new JavaScript language features) in TC39.

Wide review for accessibility features has also historically been part of the story of the web. Without a culture of and mechanism for wide review, would we have had such a strong emphasis on accessibility?

In all of these cases, the back pressure that gives wide review any force, beyond a moral high ground, is the fact of multiple implementations. To put it another way, why would implementers listen to wide review if not for the implied threat that a particular feature will not be implemented by other engines?

So yes, I absolutely think multiple implementations are a good thing for the web. Without multiple implementations, I absolutely think that none of this positive stuff would have happened. I think we’d have a much more boring, less diverse, and less vibrant web platform. Proponents of a “move fast and break things” approach to the web tend to justify it as defending the web from the dominance of native applications. I absolutely think that situation would be worse right now if it weren’t for the pressure for wide review that multiple implementations have put on the web.

The web’s key differentiator is that it is a part of the commons and that it is multi-stakeholder in nature. In a world where one engine is beginning to play a dominant role, there is a risk of reducing that set of stakeholders. We need stronger mechanisms to ensure that more voices, and especially more diverse voices representing more of those stakeholders, are part of the process of improving and evolving the web – not fewer. Those voices and stakeholders also need to be part of the conversation that shapes the future vision for what the web is. Multiple implementations are one way to help ensure those voices will continue to be heard.

Thanks to Ada Rose Cannon and Andrew Betts for providing some feedback on this post.


6 Comments on “Diary of an Engine Diversity Absolutist”

  1. disclosure: I work at Google, and previously worked in the Chrome team. These thoughts and opinions are my own, and I don’t speak for anyone else.

    There’s an important question that Alex has posed multiple times, but has gone largely unnoticed and unremarked upon:

    Should the web move at the rate of the slowest-to-improve engines?

    How long should an engine wait after having written a proposed spec, tests, and an experimental implementation that has gotten positive feedback from web developers, but that no other engine can apparently be bothered to implement or sign off on? A year? Three years? Ten?

    As someone who isn’t in the trenches but tries to follow spec discussions, I get the impression at times that the teams at Mozilla and Apple do not have the bandwidth to implement, or even to review the designs of, many proposals in a timely fashion. In Apple’s case this certainly cannot be for lack of organizational resources, and underfunding the web appears to align with their business interests.

    Engine diversity is valuable; it matters. It isn’t the only value, though, and sometimes it’s in tension with our other values, like making the web a platform that is fit for purpose. When I read the MDN developer needs survey, I see large numbers of devs who need a more capable web. Given an open-ended question about what’s missing from the web, 12.4% say access to hardware, 4.7% say filesystem access, 3.4% say PWA support, and 3% say access to native APIs.

    If it takes 10 years before we can meet those needs, will the developers still be there? If Chromium wasn’t proposing standards to meet those needs, would the other browser engines step up, or would the web languish?

  2. FWIW I agree, and I think our shared vision of the web has to include the idea that some APIs and features are going to ship in one particular browser, either before they get to other engines or even exclusively. Right now, Web Bluetooth (to pick an example) seems to be exclusively Chrome, with little interest from other implementers. Does that mean that those implementers are “holding back the web”? I’m not sure I would agree with that, even though I am a proponent of Web Bluetooth and I’ve even given talks and demos about it (a sketch of the kind of demo I mean follows below). I find this situation frustrating because I think wider deployment of Web Bluetooth could be good for the web and good for the development of an ecosystem of connected devices. At the end of the day, I am willing to accept a web where that API may be available on only one engine. I attempted to outline above some success stories where I think collaboration led to a stronger technology with more support across different engines and browsers. I want to see more of those.
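
    To make concrete the kind of demo I mean, here is a minimal TypeScript sketch of the canonical Web Bluetooth exercise: reading a device’s battery level. This is my own illustration, not code from the spec; the API ships only in Chromium-based browsers at the time of writing, and the loose cast is there because Web Bluetooth isn’t in TypeScript’s standard DOM lib.

```ts
// Web Bluetooth is not in TypeScript's standard DOM lib, hence the loose
// cast. At the time of writing, only Chromium-based engines ship this API.
const bt = (navigator as any).bluetooth;

// requestDevice() opens a browser-mediated chooser, so the user picks the
// device explicitly -- a page cannot silently scan for nearby hardware.
async function readBatteryLevel(): Promise<number | undefined> {
  if (!bt) return undefined; // this engine doesn't implement Web Bluetooth
  const device = await bt.requestDevice({
    filters: [{ services: ["battery_service"] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService("battery_service");
  const characteristic = await service.getCharacteristic("battery_level");
  const value: DataView = await characteristic.readValue();
  return value.getUint8(0); // battery percentage, a single byte
}
```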

  3. Thanks for a well-considered article.

    I’d like to add that, as well as “wide review,” the W3C Process requires that there be sufficient “implementation experience.” This is often taken to mean “2 or more” independent implementations of the standard. To my mind that is a very good reason to ensure diversity: the more implementations there are, the better the understanding of implementation issues and the greater the chance of discovering problems.

    https://www.w3.org/2019/Process-20190301/#implementation-experience

  4. >Should the web move at the rate of the slowest-to-improve engines?

    No. It should move at the rate of consensus, and where there is a stalemate, it ought to move as slowly as possible to ensure that a bad feature does not creep onto the web and cause any number of risks.

    Google dearly needs to eat some humble pie. Did the Pointer Events versus Touch Events fiasco teach you nothing? The web does not need more buggy APIs rushed out in Chrome, only to be adopted so quickly that the bugs and design flaws cannot be ironed out.

    We can’t even rely on Chrome doing what your own scroll anchoring “spec” says it’s supposed to be doing. You rushed draft WebRTC out so fast it ruined the final standard for years. Ditto Web Components.

    How can we trust you as shepherds of the web after all of that? You may be able to win the hearts and minds of fellow “rush it to market” folks, but we’ve seen that mentality played out to its logical extreme in the ActiveX days. We don’t want a repeat.

  5. By the way, the Web is not perceived only through browsers, and if you accept that, then you may agree that a discussion about the diversity of browser engines is beside the point. I hope I am not the only one to say this.

  6. Experienced web developers suffer when they spend their time learning new features or conducting regression testing, which takes them away from more value-adding work for their employer. Only a very small elite have the engineering skill, the time, and a role within a deep-pocketed employer to engage with and understand change.

    The web has become a lot more complex since Sir Tim’s original emailed specification. New engineers take more and more time to become productive members of the web workforce. This is not a good thing.

    After 30 years most users of the Web are happy with the experience and functionality. Maturity has arrived.

    Therefore the right pace of change is one that the majority of engineers can engage with productively, not the pace that the company with the most elite engineers could sustain if unconstrained.
