Web Hypertext Application Technology Working Group Launches Mailing List

Friday June 4th, 2004

Ian Hickson writes: "Some of you may be interested to hear that people from Opera and Mozilla have set up an open mailing list for discussing Web Forms 2 and other new specs that have been discussed in various places over the last few months."

The list is the public forum of the newly-formed Web Hypertext Application Technology Working Group, an organisation made up of contributors from several major Web browser development teams. Current invited members are Brendan Eich, David Baron, David Hyatt, Håkon Wium Lie, Ian Hickson, Johnny Stenback and Maciej Stachowiak.

The group is working on specifications for Web Forms 2.0, Web Apps 1.0, Web Controls 1.0 and a CSS object rendering model. This work will be largely done outside of the World Wide Web Consortium, though finalised versions of the specs are expected to be submitted for formal standardisation. While the decision to operate independently of the W3C may be seen as controversial, many feel that formal standards bodies move too slowly to react to proprietary technologies such as Microsoft's forthcoming XAML. In addition, many in the W3C are pushing for Web applications standards based on technologies such as XForms and Scalable Vector Graphics, whereas the members of the WHATWG favour backwards-compatible HTML-based solutions, which they believe would be easier to implement and more likely to be adopted by Web developers.

#43 Re: Re: Re: Disappointing news, IMO

by jgraham

Tuesday June 8th, 2004 2:10 PM


> It's entirely possible that XHTML2 will be supported in future Microsoft browsers (operating systems).

"Another point that came out of the discussions is that, in case there was any doubt, Internet Explorer in Longhorn will not support XHTML or SVG." <>

If it's not in Longhorn, it's irrelevant for the foreseeable future.

> Look at UTF-8: plenty of people use UTF-8 on their sites now, whenever they have a need for internationalisation, because it just works.

It also helps that UTF-8 is universally supported and backwards compatible with the most common existing encodings. The X* technologies have neither of these advantages.
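The backwards-compatibility point is easy to demonstrate: every ASCII document is already a byte-for-byte valid UTF-8 document, so legacy English-language content needed no conversion at all. A quick illustration in Python:

```python
# ASCII text encodes to identical bytes under ASCII and UTF-8,
# which is why a legacy ASCII site can switch to UTF-8 in place.
legacy = "Hello, world!"
assert legacy.encode("ascii") == legacy.encode("utf-8")

# Non-ASCII characters become multi-byte sequences; an old
# ASCII-only tool would mangle them, but not reject the file.
print("café".encode("utf-8"))  # b'caf\xc3\xa9'
```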

> As for XML - I do not see the need for well-formedness as any kind of problem.

So you think companies will happily throw away their existing software and upgrade to 'XML-compatible' software despite the huge cost and marginal benefit? I don't.

> creating WYSIWYG tools that will only output well-formed XML is hardly a major challenge

So where are all the WYSIWYG tools that output well-formed XML? I know Dreamweaver can be persuaded to produce decent code. Nvu doesn't do XML. Every other WYSIWYG tool I can think of produces dreadful code. And, in any case, the editor is the wrong place to enforce well-formedness. Most people consider the fact that Nvu eats certain kinds of preprocessor code and many hand edits to be a bug, but it's just an artefact of the fact that Nvu (as far as I know) writes out a serialisation of the in-memory DOM structure when it saves a file. That's the way a 'real XML' tool is supposed to work, but it turns out not to have the flexibility that most people require.
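The "eats preprocessor code" behaviour falls straight out of DOM round-tripping, and it's not specific to Nvu. As a minimal sketch (using Python's xml.etree rather than anything Nvu actually runs): the default tree builder discards processing instructions, so a PHP-style `<?php ... ?>` block simply vanishes when the tree is serialised back out.

```python
# Round-tripping markup through an in-memory tree, the way a
# DOM-serialising editor saves files, can silently drop content.
# xml.etree's default parser discards comments and processing
# instructions, so preprocessor code disappears on save:
import xml.etree.ElementTree as ET

source = "<p>Total: <?php echo $total; ?></p>"
tree = ET.fromstring(source)
out = ET.tostring(tree, encoding="unicode")
print(out)  # the <?php ... ?> instruction is gone
```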

> The only major problem ... is really when you need to pull in unreliable content.

Where 'unreliable content' includes: 100% of all existing ('legacy') content, any content that has been edited by a human, and any content (e.g. trackbacks) that is automatically syndicated from another site. (Jacques Distler, who has been successfully running an XML-based website for years, recently had a problem with syndicated content that led to the yellow parsing error of death until he could roll out a patch.) That's pretty close to 100% of all content.

Incidentally, the 'edited by a human' thing is pretty important. If you have people contributing to your site, everything they enter will have to be validated (even if you use a tool, if it doesn't use a 'real XML processor', you need to be wary of bugs...), which means that you have to include a validation step in the publishing process and make sure that all the contributors are clued up enough to be able to fix validation errors.
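That validation step can be sketched in a few lines. This is an illustrative gate (the `is_well_formed` helper is hypothetical, not from any real publishing system): a strict XML parser either accepts a contributed fragment or rejects it outright, and rejection means a human has to fix it before anything is published.

```python
# A minimal well-formedness gate for contributed content:
# strict XML parsing either accepts a fragment or raises.
import xml.etree.ElementTree as ET

def is_well_formed(fragment: str) -> bool:
    """Return True if the fragment parses as XML, False otherwise."""
    try:
        # Wrap in a root element so plain text and sibling tags are allowed.
        ET.fromstring(f"<div>{fragment}</div>")
        return True
    except ET.ParseError:
        return False

print(is_well_formed("a <em>valid</em> comment"))         # True
print(is_well_formed("an unclosed <b>tag, or a bare &"))  # False
```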

If you can do that, fine. The problem with trying to base the web on a technology that requires validation is that most people *can't* do it. The situation is compounded by the fact that, at present, they can do what they're trying to do *without the extra effort*. I don't see people transitioning from a forgiving system to an unforgiving system without kicking and screaming. If XML becomes a successful language on the web, I expect it will be a direct result of the development of a liberal XML parser that lowers the barrier to authoring.

> As for not using MathML, this was probably due to lack of support in browsers

> current plugins don't solve the problem because they work with separate files, whereas the real power and convenience IMO is in embedded, inline XML formats - when you can put an equation or a bar of music notation in the middle of your essay

MathML might be a failure because of the lack of native browser support. But if that's true, it is also necessarily the case that plugins that allow specialised inline content are a failure. MathPlayer <> will display inline MathML in IE. It will (now) work with Real XHTML files. It will do all the things that you advocate that plugins do. But people still use bitmaps rather than MathML because bitmaps have close to universal support (well, non-visual UAs don't support them, but they don't support MathML either). MathML also suffers from being a pig to author. The only reasonable solution for heavy use is to use something like itex2MML to convert from a LaTeX-like syntax to MathML. Sadly, as with all automatic authoring tools, this places substantial limits on the quality of the final code (in particular, distinctions between, say, mi and mo are often lost).
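The authoring-cost gap is easy to see side by side. The LaTeX source `x^2 + y^2 = r^2` becomes, in presentation MathML, something like:

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mrow>
    <msup><mi>x</mi><mn>2</mn></msup>
    <mo>+</mo>
    <msup><mi>y</mi><mn>2</mn></msup>
    <mo>=</mo>
    <msup><mi>r</mi><mn>2</mn></msup>
  </mrow>
</math>
```

Every token has to be classified as an identifier (`mi`), operator (`mo`) or number (`mn`) by hand; an automatic converter working from terse LaTeX-like input has to guess these classifications, which is exactly where the mi/mo distinctions get lost.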

> There might be technical limitations in the wiki software that let it output ill-formed code; those are technical limitations, it's a software issue, fixing it isn't a problem.

I have a strong suspicion that you're wrong and that, unless the tool was designed to produce valid markup from the start, it will often be non-trivial to ensure that it always produces valid markup under all circumstances. Even if it's technically possible, no one will do it because, unless certain specific technologies are required, XHTML offers nothing that HTML 4 doesn't but comes with big strings attached.