Readers' Comments on the Alertbox Column With Predictions for the Web in 1999
I have received some interesting comments on my December Alertbox with predictions for future Web trends.
The Web Is Boring
Dan Shafer, Editorial Director at CNET, writes:
Interesting stuff. Limiting yourself to just a handful of predictions jeopardizes your percentage chance of being right but helps focus your audience's attention on those issues you consider important. I like that. Too many top 10 lists floating around anyway.
While I essentially agree with most of what you predict, I would take issue on two points.
First, as to the
deployment of rich media content
. I think it is essential to the long-term health and well-being of the Web that we find ways of making such content available and accessible via the bandwidth people have. I know that the Web isn't like TV but many people who aren't yet on the Web -- and probably the majority of those who are -- don't know that or don't appreciate it.
"Boring" is a word I hear used a lot to describe today's Web
and when I probe the meaning of that term, I usually find that the speaker is comparing it to television or, more generously perhaps, to other software applications he is accustomed to seeing and using.
While indiscriminate use of multimedia isn't any more sound an idea than the profligate use of graphics in general (a point where you and I surely agree), I
think that we'll see a significant increase in the number of sites that make use of rich media in 1999. Not all of those uses will be appropriate and some will definitely be annoying, but I predict that a number of hugely successful commercial sites will find ways to make rich media a differentiator next year.
The other point at which I disagree is on the
nearness in time of the portable, wireless, and small-format Web access device
. While I do think that portable and wireless access will become much more broadly accessible during the year, I
don't think the small-format prediction will come true quite that soon. The primary reason I disagree is that we have a serious chicken-and-egg problem here.
The vast majority of content on the Web today won't scale well to the small device.
That means that the publishers of that content must revise its format to accommodate the new form factors or that a lot of new content will have to be written to take them into account. I think it is highly unlikely that the latter will take place, so these devices will end up being initially used only to access sites that prepare content specifically for them.
Ultimately, I see this as a server problem. Rather than expecting each Web site to create content for all these multifarious formats, the devices themselves will need to incorporate small Web servers (or similar technology) that filter incoming content and adjust its format to suit the capabilities of the device. If we don't find a way to do this intelligently -- with Jini, for example -- then we are going to fragment the Web into dozens, perhaps hundreds of tiny pieces. Nobody wins if that happens.
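The filtering idea can be sketched in a few lines of Perl. What follows is purely illustrative -- my sketch, not anything Shafer proposes concretely -- assuming the libwww-perl modules (HTTP::Daemon and LWP::UserAgent); the port number and the two crude filtering rules are assumptions for the sake of the example:
use strict;
use HTTP::Daemon;
use LWP::UserAgent;

my $daemon = HTTP::Daemon->new(LocalPort => 8080) or die "cannot listen: $!";
my $ua = LWP::UserAgent->new;

# works when the small device uses this process as its HTTP proxy
while (my $client = $daemon->accept) {
    while (my $request = $client->get_request) {
        my $response = $ua->request($request);   # fetch the page the device asked for
        my $html = $response->content;
        $html =~ s/<img[^>]*>//gi;               # drop images the small screen cannot show
        $html =~ s/<\/?table[^>]*>//gi;          # flatten layout tables into linear text
        $response->content($html);
        $response->header('Content-Length' => length $html);
        $client->send_response($response);       # return the slimmed-down page
    }
    $client->close;
}
A real device would obviously need smarter transformations than regex stripping, but the architecture -- a local intermediary that adapts any site's content -- is the point.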
People may say that text-dominated websites are boring, but then they would also say that newspapers are boring. (And yes, I know that many people don't read newspapers: but most of them probably do read crime novels or other text-only media.) Text is a very vivid medium and the human brain is a very powerful illustration tool: read the Iliad and you imagine the blood-stained broken bronze helmets littering the battlefields of Troy 3000 years ago.
The comparisons to TV and software are very different:
The TV comparison is a complaint about poor production values.
The software comparison is a complaint about lack of interaction, engagement, and accomplishment. It is in fact more fun to use something that gives you a sense of having done something.
I would love to see more bandwidth and more multimedia, but wishing for it doesn't make it happen. For the next few years, I think we are relegated to a low-bandwidth Web, and we would do better to attack the comparison to engaging software rather than the comparison to entertaining television.
I agree with the chicken-and-egg problem for mobile Web content. I hope that the solution can be something that doesn't fragment the Web, but you are probably right that the initial solution will be special-purpose content. Over the long term, your proposal for server-based solutions will hopefully come true. Scalable content will become ever more important as the diversity of Web-access devices increases.
Human Help and Unbiased Recommendations
Joe Grant from Tripos, Inc. writes:
Your article predicting web trends for 1999 included the quote, "it is easier to train a computer to give good service than to train thousands of human service reps." When you have a chance in a future article, please offer more explanation of this... at this point, I still prefer the humans if I can reach them quickly.
And thanks for also reviewing your predictions from a year ago! One trend I've observed on the Web is more honesty. Communication mediated via the Internet (as opposed, for example, to "mediated via television") requires more openness. For one thing, if anyone tries to ignore past mistakes or bluff their way past them, it's too easy to have someone slap back a URL or email from the past! I love the Motley Fool site for this reason... they make fun of their bad choices in stocks, and invite criticism. (They even had an article recently entitled "Uncompromising Honesty" concerning their internal company policies.)
BTW, one item on my wish list for the Web in 1999 is a better means of finding unbiased recommendations and choices. I often turn to the Web to find a review of software and find it difficult to locate a list of choices with reasonably unbiased recommendations and reviews. The desire for unbiased recommendations applies to many things: any merchandise sold, organizations, health treatments, methods for handling kids, etc.
You prefer human service reps if you can reach them quickly and if
they actually provide good help. But realistically, most companies cannot afford to staff call centers with sufficiently many operators to handle peak loads without long waits. Also, the operators never know your personal situation well and often don't even know their own products well since they support a range of stuff.
Truly great personalized human service exists, of course, but mainly in luxury establishments that can afford to invest in the necessary quality and quantity of operators. The Web turns the tables and makes quality service affordable in many more cases, since the cost of developing a high-quality automated service can be amortized over a large number of users. The problem with human service is that it doesn't scale: if you get ten times as many customers, you need more than ten times as large a service staff, because management overhead makes staff size grow faster than usage.
I fully agree with your desire for unbiased recommendations. This is one reason I am so eager to see user-paid services succeed: when the user is the paying customer, the site will be motivated to offer trustworthy reviews and recommendations. In general, I think there is a great business opportunity on the Web for offering
reputation management services
that would keep track of the quality of other sites and services and advise users where to go.
Colleen Kehoe from Georgia Tech follows up:
Joe Grant's comments asking for unbiased recommendations reminded me of a site I came across recently: BizRate. It's not exactly what Grant was asking for - it rates merchants rather than products - but I thought it was in the same spirit.
I had just placed an order with etoys (which I was really impressed with) and at the end of ordering, I had the option to answer a short survey from BizRate about the order I had just placed. Talk about being in the right place at the right time -- at no other moment would I have been more motivated (either by a good or bad experience) to answer the survey than just as I completed the order. It also raised my confidence in etoys that they would participate in this kind of ratings program. Another nice touch from BizRate was showing me how others had rated the site and then returning me to etoys where I had left off. Finally, just today (3 weeks later) I received an email from BizRate asking me to complete a short survey on how the rest of the order went -- did I receive what I ordered, was it what I expected, etc.? Very nice follow-up on the original survey, and very short, as they had promised.
Year 2000: More Problems Than We May Think
Ben Tilly from The Trepp Group writes:
You claim that 2-digit date fields only matter if programs do computations with them.
I have already been struggling with the issues that they pose for various infrastructure automations. Allow me to demonstrate in Perl.
Suppose that we want to produce your yymmdd.html filename automatically. A typical code snippet would be:
($yy, $mm, $dd) = (localtime)[5, 4, 3];  # $yy is years since 1900; $mm runs 0-11
$mm++;                                   # convert the month to 1-12
$mm = "0$mm" if $mm < 10;                # zero-pad the month
$dd = "0$dd" if $dd < 10;                # zero-pad the day
$file_name = "$yy$mm$dd.html";
Suppose that we want to go out and automatically find that URL. A typical code snippet would look for files that match the pattern /\d{6}\.html/ (6 digits and ".html").
In 2000 the first program will produce a seven-digit URL (for example, 1000101.html on January 1, since $yy jumps from 99 to 100) and the second program will not find it.
Both conventions are equally valid, but a lot of automated FTP posts and downloads are going to find that one side used one convention and the other side used the other, and there is no good solution. Unless you ask, you have no way of knowing which one someone is about to use. Sure, you can work around the issue, but not many programmers do.
Even worse off are people who assume their site is safe because it uses a 4-digit year. That does not mean you are safe: if the number is generated automatically, there is an excellent chance that you are about to go from 1999 to 19100, because the 4-digit year was built as "19$yy" instead of by adding 1900.
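To spell out that last bug in Perl (the variable names here are mine):
my ($yy) = (localtime)[5];   # years since 1900: 99 in 1999, 100 in 2000
my $wrong = "19$yy";         # string pasting: "1999" this year, "19100" next year
my $right = 1900 + $yy;      # arithmetic: 1999 this year, 2000 next year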
And the problem is not just Perl. Perl's behavior is inherited from C, and C is the ancestor of a lot of other languages.
I stand corrected. In general, I am sure there are plenty of such Y2K bugs hiding in places where you would least expect them.
I think the Web is pretty robust and will handle Y2K relatively well. But the network nature of the Web will haunt us: since the Web connects so many systems together, even compliant systems may be impacted by bad data received from non-compliant systems. Even if most large sites fix their systems, the Web may still have problems due to less diligent services. My own site was almost brought to its knees a few months ago by somebody in India who downloaded the same file about ten thousand times within a few minutes. Being connected to people with bugs is almost as bad as having bugs yourself. The Internet as a whole will probably continue running, but segments may be taken down.
What Good Are Patents for the Fast-Moving Web?
Jay Virdee of the British Hotel Reservation Centre writes:
I am at a loss to follow your argument in "Web Patent Bonanza". Surely the Internet is still evolving rapidly from a business-model and user-interface perspective, and I find it difficult to conceive how a patent would offer any kind of protection on either count. Furthermore, policing and enforcing such patents would be an enormous if not impossible task given the nature of the Internet. In the 3-4 years it takes for a patent to make its way through the Patent Office, the Internet will surely be a different beast from the one the patent was designed to protect.
You are right that it doesn't make much sense to patent a current "hot" idea: long before the patent issues, we will all be off to doing something else. So, for example, there is little value in having patents on the various tricks to do animated Web pages that were in use before animated GIFs infected the Web.
This only proves the need to be
extremely far-sighted in the patent game
. A patent is good for twenty years, so the
best inventions are those that may not see common use for about ten years
: far enough out that it is possible to patent fundamental inventions rather than small twists; yet early enough that there are many years left in which to collect royalties.
Jay Walker from Walker Digital described his patent strategy in a recent interview: the first part of the article is the usual boring stuff about how much money he may make on his IPO, but scroll to the bottom of the page and start reading with the last three paragraphs (and continue with the "next page" button). Walker's basic approach to patents is to
start by analyzing unmet user needs and then patent a way to satisfy these needs
. This is the opposite of the approach taken by traditional labs like IBM Research, which start from the technology end: they ask what they might be able to invent given their research interests.
Walker's approach is exactly the same as my own. If you start with current technology, then you are never going to be able to push out into the part of the design space where the good patents lie. Walker is very proud that his 25-person research team is filing two patents per week (whereas the hugely larger Lucent Technologies files only 15 patent applications per week). I am actually not that impressed: during a year when Bruce Tognazzini and I had a contest to see who would be Sun Microsystems' most prolific inventor, we each filed a patent per week on average. Of course, having a creativity contest with Tog is a doomed endeavor, but the point is that it is possible to invent a huge number of technology improvements if you base the project on future user needs and not on the technology itself.
You don't need to police the entire Internet and crack down on every single infringement. Just go after the big ones: that's where the royalties lie. Or use your patents as trading leverage when you want free access to another company's patent. If you have enough patents, then this other company will surely infringe on at least one: now you have them, and can get their inventions for free.
Self-Referential Feedback When Calling Pages "Popular"
Commenting on the notion of Self-Optimizing Content in the December Alertbox, Eric Scheid from Ironclad Networks Pty Limited writes:
I wonder a little about the self-referential feedback that is happening here. For example, you have listed certain pages on your site as "most popular" and prominently positioned links to same, but how many of those hits are because of the prominent positioning? If you were to prominently position a page which is actually not-so-popular but refer to it as "a popular page", by how much would the hits increase?
To a limited degree you could determine the extent of this effect by examining your logs to see how many of those hits were referred by the front page compared to normal links that may be scattered about your site. That would still be inexact, since you don't know whether a front-page referral happened because the page was highly recommended by group consensus or because of the inherently interesting nature of the link itself -- how do you discriminate between the context of the mention and the subject of the mention?
In a similar vein, there are other mis-uses of hit-stats. The most popular seems to be "we don't get many hits from earlier browsers, therefore we don't need to design for them". In this instance the proper measure of "many hits" would be the first-hit per visitor, not the subsequent trolling-through-site hits.
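Scheid's "first hit per visitor" measure is straightforward to compute from server logs. Here is a rough sketch in Perl, assuming a combined-format access log in which the user-agent is the final quoted field (the file name and field layout are assumptions):
my (%seen, %browsers);
open(LOG, "access.log") or die "cannot open log: $!";
while (<LOG>) {
    my ($host) = split;               # approximate each visitor by host/IP address
    next if $seen{$host}++;           # ignore everything after a visitor's first hit
    my ($agent) = /"([^"]*)"\s*$/;    # user-agent: the last quoted field on the line
    $browsers{$agent}++ if defined $agent;
}
close(LOG);
foreach my $agent (sort { $browsers{$b} <=> $browsers{$a} } keys %browsers) {
    print "$browsers{$agent}\t$agent\n";
}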
There is no doubt that a feedback loop starts as soon as you denote something as popular: the added prominence attracts additional clicks, which in turn keep the item looking popular.
So to be "fair", continued presence on a most-popular pages list should be determined after modifying the site access statistics to downplay hits generated due to the "most popular" status. For a high-traffic site, it would be possible to estimate the effect of the popularity highlighting by letting a very small percentage of users see pages that did not have any indication of popularity. Doing so would make it possible to compare the clickthrough rates gathered by an article when it is highlighted as "popular" as opposed to when it is simply listed on an equal footing with other articles.
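As a sketch of the arithmetic this comparison enables (all counts here are hypothetical stand-ins for what the two log samples would actually show):
# clickthroughs and page views for the same article, with and without
# the "most popular" label; all numbers are hypothetical examples
my ($clicks_highlighted, $views_highlighted) = (500, 10_000);
my ($clicks_plain,       $views_plain)       = (200, 10_000);
my $raw_hits = 12_000;                         # hypothetical raw hit count

my $boost = ($clicks_highlighted / $views_highlighted)
          / ($clicks_plain / $views_plain);    # how much the label inflates clickthrough
my $adjusted_hits = $raw_hits / $boost;        # discounted score for the popularity list
printf "boost %.1fx, adjusted hits %d\n", $boost, $adjusted_hits;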
Even though popularity statistics have an inherent weakness due to this feedback bias, I remain a firm believer in their utility. Once a site becomes very big, it becomes necessary to prioritize the information and allow easy access to the most popular elements. Even if popularity cannot be determined exactly, it can still help users. Just remember the feedback loop, and be prepared to downgrade items even if they continue to attract a fair number of hits.
Early Readers Don't Discriminate Consciously
Jason R. Fink writes:
I was noting the "standards on the rise" prediction (no surprise), which is, of course, great. One of the flame wars that ensued at
the Web Standards Project
was about showing compliant pages and informing the general public. I felt (as I do about a great many things) that if someone chooses to spend their time trying to educate those who may not wish to be educated, feel free. I also agree with the other side of the argument: companies and developers (although for hypertext-only technology I prefer a different term for them) need to be conscious and aware of standards as well.
It has been my limited experience that creating a valid and well-written "webspace" will attract the average as well as the "discriminating" reader. Along with this, of course, I provide professional background information and links to other, more informative sites to bolster the validity of the actual content and the design philosophy.
The proof was quite literally in the pudding. When I began the arduous task of building the site (a long time ago on a server far away) I was totally clueless and received very few visits or subscribers. This was because I wrote fluff, regurgitated links, and knew nothing about HTML. I got a proverbial slam when I requested that a Usability Engineer look over my site. As is typical of my personality, I got mad and learned basic HTML (and have gone on to learn some other neat stuff like XML and SGML).
When I began to write about aspects of technology I actually had experience with (name servers, DHCP, operating systems, etc.), I began to draw a crowd. It is my belief that I drew in more readers not just because the content was better, but because my actual HTML validated and there was a means for readers to discover that.
On one last note, the comparison of the Web to ANYTHING is invalid. I recommend Terry Sullivan's essay on
Metaphors for the Web
. If a user prefers pictures over words (I like the little quip about crime novels), then I won't be the least bit offended if they don't visit my site, and I am sure you are not sweating bullets over that audience either.
I totally agree with the point about writing about something you know and care about. No design can compensate for lack of insight or value in the content. Once you have good content, it furthermore needs to be presented in a way that will work in all browsers and not just the one you happen to have on your own computer.
Benefits of Open Software
David Jao from MIT writes:
You mentioned that the next Netscape browser promises to be 100% standards compliant. I am surprised that you did not mention in passing the reason for this improved standards support. When Netscape released the source code for their browser on March 31, 1998, the ensuing open development effort engaging hundreds of volunteer programmers all over the world all but ensured that their next browser would be standards compliant.
Open projects promote open standards
, and frankly, anything less than full standards support in an open project would have been completely unacceptable.
Anyone connected to the Internet today has got to be aware of the revolution that is happening with the rise of open-source, collaboratively developed free software. Full standards compliance is but one of the myriad benefits that will result from this new software development paradigm.