Web Hypertext Application Technology Working Group Launches Mailing List
Friday June 4th, 2004
Ian Hickson writes: "Some of you may be interested to hear that people from Opera and Mozilla have set up an open mailing list for discussing Web Forms 2 and other new specs that have been discussed in various places over the last few months."
The list is the public forum of the newly-formed Web Hypertext Application Technology Working Group, an organisation made up of contributors from several major Web browser development teams. Current invited members are Brendan Eich, David Baron, David Hyatt, Håkon Wium Lie, Ian Hickson, Johnny Stenback and Maciej Stachowiak.
The group is working on specifications for Web Forms 2.0, Web Apps 1.0, Web Controls 1.0 and a CSS object rendering model. This work will be largely done outside of the World Wide Web Consortium, though finalised versions of the specs are expected to be submitted for formal standardisation. While the decision to operate independently of the W3C may be seen as controversial, many feel that formal standards bodies move too slowly to react to proprietary technologies such as Microsoft's forthcoming XAML. In addition, many in the W3C are pushing for Web applications standards based on technologies such as XForms and Scalable Vector Graphics, whereas the members of the WHATWG favour backwards-compatible HTML-based solutions, which they believe would be easier to implement and more likely to be adopted by Web developers.
#36 Re: Disappointing news, IMO
Tuesday June 8th, 2004 5:28 AM
> I'm in favour of a 'clean break' for Web technology; remaining backwards-compatible for ever keeps sites messy.
Messy, yes. But it also keeps sites working in the UAs that people actually use. The difficulty in breaking backwards compatibility is that it necessarily involves breaking your site for some subset of the possible audience. Of course, one could imagine two presentations of a site, one using the 'old' technology and one using the newer technology, but that is more expensive than developing only for the universally supported tech, and it may prevent use of the more interesting or worthwhile aspects of the new development.
Given these limitations, the number of people who will actually use new technologies /on the web/ is very small.
> now that standards are XML-based, there really shouldn't be too much difficulty in implementing that
For browser authors? Sure - inasmuch as there will be no need to reimplement the parser, although presumably a lot of new code to actually display and interact with the documents will be needed. But browser authors don't write sites. People who write sites don't, as a rule, use XML-based languages, even where they have reasonably wide support. That's partially because XML isn't very widely supported, of course. I expect it's also partially because XML is an order of magnitude harder to write than HTML. The (lack of) error handling in XML documents requires an entirely different toolchain to HTML. One has to ensure at every stage that there is no possibility of trivial well-formedness errors creeping into the document. As Evan Goer found <http://www.goer.org/Journal/2003/Apr/index.html#29> , even XML advocates find this to be a very difficult challenge on their personal weblogs. Scale this up to a large commercial site like eBay with thousands of contributors and you have a tough challenge just making sure that your site doesn't spend most of its life showing a parsing error.
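To make the contrast concrete, here is a minimal sketch (not from the original post) using Python's standard library as a stand-in for browser behaviour: an XML parser must reject the whole document over one stray ampersand, while an HTML parser recovers the text and carries on.

```python
# Sketch: XML's draconian error handling vs HTML's tolerant parsing.
# Python's stdlib parsers stand in for the browser here.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

snippet = "<p>Fish & chips</p>"  # bare '&': a common, trivial mistake

# An XML parser refuses the entire document.
try:
    ET.fromstring(snippet)
    xml_ok = True
except ET.ParseError:
    xml_ok = False

# An HTML parser keeps going and still yields the text content.
class Collector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []

    def handle_data(self, data):
        self.text.append(data)

c = Collector()
c.feed(snippet)

print(xml_ok)            # False: one unescaped '&' kills the XML document
print("".join(c.text))   # the HTML parser still recovers the text
```

This is exactly the failure mode described above: one contributor's unescaped ampersand turns a whole XHTML page into a parse error, where an HTML page would merely render slightly wrong.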
XHTML has failed because it offers no benefits significant enough to be worth replacing entire CMS installations and retooling websites to deal with the added complexity of XML.
The one significant benefit it does offer is the support for compound documents. At the moment that means MathML. Whilst it's true that some people are using MathML in XHTML (e.g. <http://golem.ph.utexas.edu/string/> ), sites like Wikipedia are choosing to ignore it and use inaccessible PNGs instead. Why is this? Well, I don't know about the details of Wikipedia, but I would guess some of the following issues led them toward bitmap graphics over XHTML+MathML:
1) Support. Actually, as of recently, one can serve the same XHTML+MathML documents to IE users and Mozilla (but not Camino!) users. But IE users need to download a plugin. And for Safari / Opera / pluginless IE users, all you get is dense, incomprehensible text - there's no provision for graceful degradation (unless you can create both MathML and PNG representations of an equation and content-type sniff. Or rather, UA sniff, because browsers don't actually advertise MathML in their Accept header).
2) Validity. Wikipedia appears to validate (at least, I tried a random page and it seemed to validate). If, however, it is at all possible to create an invalid page within the wiki, then XML is a problem, because it could lead to pages returning XML parsing errors. So they would need to hack validation on submit into the wiki.
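The UA-sniffing workaround mentioned in point 1 could look roughly like the hypothetical server-side helper below. Since no browser advertises MathML in its Accept header, the server has to guess from the User-Agent string which representation of an equation to send; the function name and rules here are illustrative assumptions, not an actual site's code.

```python
# Hypothetical sketch of UA sniffing for MathML support, as described
# in point 1 above. The heuristics are illustrative, not exhaustive.
def pick_equation_format(user_agent: str) -> str:
    ua = user_agent.lower()
    # Gecko-based browsers render MathML natively; Camino is excluded
    # because (as noted above) it lacks MathML support.
    if "gecko" in ua and "camino" not in ua:
        return "mathml"
    # IE can render MathML only via a plugin (which the server cannot
    # detect from the request), and Safari/Opera not at all, so fall
    # back to a pre-rendered PNG of the equation.
    return "png"

print(pick_equation_format(
    "Mozilla/5.0 (X11; Linux) Gecko/20040514 Firefox/0.8"))      # mathml
print(pick_equation_format(
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"))       # png
```

Note that this requires the site to generate and store both a MathML and a PNG rendering of every equation, which is part of why bitmap-only output is the cheaper choice.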
The point of the new group is to avoid the first of these problems. The fact that the second will also be avoided is just a corollary of maintaining backwards compatibility. Sadly, that's necessary in order for any significant adoption to occur.
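The "hack validation on submit" fix mentioned in point 2 above could be sketched as follows: check that the rendered page is well-formed XML before saving an edit, so readers never see a parse error. This is a hypothetical illustration (the function name is invented); it checks well-formedness, which is what actually triggers the browser's parse-error screen.

```python
# Hypothetical sketch of well-formedness checking on submit, so a
# wiki never publishes a page that would show an XML parse error.
import xml.etree.ElementTree as ET

def accept_edit(rendered_page: str) -> bool:
    """Return True only if the rendered page is well-formed XML."""
    try:
        ET.fromstring(rendered_page)
        return True
    except ET.ParseError:
        return False

print(accept_edit("<html><body><p>ok</p></body></html>"))    # True
print(accept_edit("<html><body><p>broken</body></html>"))    # False
```

Even this only catches well-formedness; full DTD validation on every edit would add yet more tooling on top.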
> The page isn't going to work in IE anyway... why not make a clean break?
As I understand it, the idea of the group is to extend into things that can be implemented in IE using HTCs or some other method of extending IE (short of binary plugins). At the very least, any new technologies are going to degrade gracefully enough that all enhanced pages will continue to work, albeit less smoothly, in IE 6. That's the essence of the backward compatibility - it's not just supposed to be a surface feature.
> appropriate browser plugins/extensions that can handle namespaced DOM entries rather than only external files, could solve many problems in a coherent manner
Such as the problem of getting people to download and install plugins? Flash is bundled with IE, and no other plugin is at all widely used. Otherwise, we could all be using MathML and SVG already...