In the beginning: Why the media couldn’t charge for content.


If only, if only, my colleagues say, if only the news media had started charging for content when they launched their first websites.

If only the media had charged, then none of the current problems of free content would have happened: the public would know that content costs money, the newspapers and TV stations would have a second, strong income stream, and all would be well. There would be lots of good, high-paying jobs and the money to do real journalism rather than celebrity silliness.


If only…


So now there is a search for scapegoats. Media managers who have shown that they are completely incompetent in running traditional print and broadcast are an easy and obvious target.

Others blame the tech community and a misunderstanding rooted in the truncated quoting of Stewart Brand: “information wants to be free.”

Then came Wired editor Chris Anderson’s nasty tract, Free. The main flaw in “free” is the assumption that the concept transfers outside the tech and science fiction communities. Unlike commodity (or atom) based corporations, creative individuals and most of the media find that “free” usually doesn’t work outside those arenas, an inconvenience that the advocates of “free” constantly ignore. What is left is basically a schoolyard bully taunt: “So there, free is the future, so take your medicine and work for nothing.”

Most of the people who now ask the question and provide answers were not around in the earliest days of online news media. That is why there is a belief that if somehow the media had charged in the early days, today all would be well.

Yes, there was one day, and just one day, when, if the media had got its act together, it could have started charging for online news: September 1, 1993. The trouble was that there were no major media on the Internet in a big way come that September.


I was present at the creation of online media


I was working in “online media” long before the launch of the World Wide Web, back in the days of videotex and teletext from 1981 to 1985.

The Internet played a role in my science fiction short story Wait Till Next Year, published in Analog in November, 1988 (although I got some of the tech details wrong).

I got my first Internet account in August 1993. Note that I was a very early adopter, getting in just before the Internet tsunami a month later in September 1993.


I co-wrote Researching on the Internet, the first book on the topic, published in the fall of 1995. So I was researching the state of the Internet, the web, and the media at the first moments of news on the web.

I was the third employee assigned to CBC News Online, April 1, 1996.

The cold, hard fact is that the web evolved with free content. It had little to do with Stewart Brand. So when the media first ventured onto the web, they had to play by the rules of the time. Those rules appeared to say, “commerce on the Internet is a no-no.”

The Genesis of the media on the Internet



In the beginning (in 1968-1969), the US Department of Defense created ARPANET.

And DOD saw that it was good.

DOD said let the military and the scientists communicate.

And the military and the scientists communicated.

And DOD saw that it was good. The American taxpayer was getting a good return for their money.

But then there was darkness on the face of ARPANET.

DOD saw that too many people had access to the ARPANET and most of the users didn’t have security clearances.

DOD said in 1983, we will create a separate MILNET and give the scholars ARPANET.

In 1984, the National Science Foundation created NSFNET.

And DOD and NSF saw that it was good.

Thus TCP/IP spread to universities around the world.

And the scholars saw that it was good.

The techs improved a system called UUCP and created protocols for e-mail, ftp and newsgroups.

And the techs saw that UUCP was good and said GNU: thus, this protocol shall be free to all.

The campus deans said let us have more access to ARPANET, NSFNET, TCP/IP and UUCP NET via private sector telecoms who can do the wiring.

Verily the private sector telecoms wired the universities and the laboratories and created dial up for scholars in their homes.

The telecoms reaped great profits of gold and silver and precious things from those wires.

And DOD and NSF and the scholars and the techs and the telecoms saw that it was good.

NSF decreed that NSFNET and ARPANET shall be free from commerce, for it was the will of the community that the networks are for education and the spread of human knowledge.

And so NSF said we shall cast out UUCP NET because it can be used for commerce (but we will still use the free software they developed).

And thus UUCP NET was cast out.

The telecoms and the nations of the world far from North America agreed that this networked system was good and created their own networks.

And they all saw that it was good.

Thus it came to pass that the universities which had journalism schools gave their students access to what was now known as the Internet.

And lo and behold it appeared to be free (although their accounts were paid for, in part, by tuition fees). The students were taught that the Internet was educational and thus should be free for all.

At the same time their elders in journalism who loved tech were using another system called CompuServe (which the elders had to pay for with their credit cards).

The journalism students and j-professors came on to CompuServe and said, “Behold, I bring you tidings of great joy. There is this wonderful thing called the Internet and it is free.”

It came to pass that Tim Berners-Lee at CERN created the World Wide Web.

And all saw that the World Wide Web was good.

So the professors, and the students and the reporters and the editors, all of whom loved tech, all rejoiced when they saw the World Wide Web. For they thought they had found the perfect way to deliver the news.

Out of a whirlwind came Netscape.

At first only the techies loved Netscape.

Then Netscape said we shall have an IPO.

In the year of our Lord 1995, on the ninth day of August, the IPO came to pass, and it was wonderful and the Netscape stock set a record on Wall Street.

So Netscape became front page news and was high on the evening newscasts.

The media barons and all priests and scribes of the news temples saw that much gold and silver was going to Netscape and asked “What is going on?”

So the barons and the priests and the scribes summoned those of their followers who were techies and said “Tell us, what is this Internet? What is this World Wide Web? Why is Wall Street giving gold and silver and precious things to Netscape?”

The techie reporters and editors said to the barons, the priests and the scribes, this is the Internet, this is the Web.

The techie followers showed the barons, the priests and the scribes their personal websites. The techie editors showed the barons, priests and scribes the under-the-table news sites they had created. They told the exalted ones, this World Wide Web is perfect for delivering news: you can have text, you can have pictures, you can have audio and you can even have video.

The barons and the priests and scribes decreed to their techie followers and editors: “Thou shalt build websites for our news operations.”

So the techie news people and the tech techies laboured mightily and created websites. They presented the websites to the barons, priests and scribes.

The barons, priests and scribes looked at the websites and saw that they were good. So they told the news people and techies that they had done a great service and would be rewarded from the gold and silver to come from this new World Wide Web (although the barons, scribes and priests, like all their kind, were lying and did not really intend to reward their followers).

The techie news people and the tech techies trembled and quaked but bravely told the barons, priests and scribes, “No, oh exalted ones, that is forbidden. It has been decreed from on high that there will be no commerce on the Internet.” And they were sore afraid.

The barons, priests and scribes said to themselves, “What the fuck is going on?”

So that’s the story.

From creation to evolution

There are two key points.

First, as is well known, the Internet did evolve from military, scientific and university communications systems which were, on the surface, free, although, of course, largely paid for by the American taxpayer and university endowments.

The culture of free exchange of information is the basis of scholarship, but it is, of course, paid for behind the scenes by government, foundation and endowment funding. Thus the culture of free information existed at the core of Internet use at the time the media first became interested in putting news on the web.


Second, in the early 1990s, before the rise of the independent Internet Service Providers and the expansion of services by telecoms large and small, the main communication network for the Internet in North America was the NSF Backbone, the high-speed Internet communications network run by the U.S. National Science Foundation. As part of its policy, the NSF forbade the use of the backbone for commercial purposes.

Thus, in theory (and in the conventional wisdom), no one using the Internet for commercial purposes, and that would have included charging for news, could use the main North American Internet communications backbone.

But in reality, the situation was a lot greyer.

I kept all my research material from 1993-1994, when I was writing Researching on the Internet (material I recently donated to the York University Computer Museum).

Here is what a couple of the leading books of the time said (books which most libraries, I suspect, discarded long ago, so they are no longer available to those who lament the media “if only”).

Internet Companion: A Beginner’s Guide to Global Networking, by Tracy LaQuey with Jeanne C. Ryer (Addison-Wesley, May 1993), put it this way:

Probably the best known and most widely applied is NSFNET’s Acceptable Use Policy, which basically states the transmission of “commercial” information or traffic is not allowed across the NSFNET backbone, whereas all information in support of academic and research activities is acceptable.

It sounds somewhat complicated, but you need to remember the original Internet began as a US government-funded experiment and no one expected it to become the widespread, heavily used production network it is today.

It’s going to take a while for commercialization and privatization of these networks to occur. The Internet as a whole continues to move to support--or at least allow access to--more and more commercial activity. We may have to deal with some conflicting policies while the process evolves, but at some point in the Internet future, free enterprise will likely prevail and commercial activity will have a defined place, making the whole issue moot. In the meantime, if you’re planning to use the Internet for commercial reasons, make sure the networks you’re using support your kind of activity.


Another book, from just a little later, Kevin M. Savetz’s Your Internet Consultant: The FAQs of Life Online (Sams, 1994), put it this way:

Commercial activity isn’t allowed on the Internet? It’s purely an academic and educational network, right?

People who advertise and sell stuff on the net should be flogged, right?

Yes and no. As mentioned earlier in this book, the Internet is composed of a variety of different networks. Each network has its own set of rules, called acceptable use policies.

Certain networks (particularly the National Science Foundation network, the NSFnet) have strict acceptable use policies that ban most types of commercial use.

On the other hand, another backbone network within the Internet world has been finding considerable interest among commercial Internet users--the Commercial Internet Exchange (CIX). The acceptable use policies of CIX are much broader, and advertising and selling are both within its purview. So although commercial activity isn’t allowed on certain parts of the Internet, it is allowed on others.

People who advertise on the Internet should only be flogged for heinous violations of Internet culture, such as sending unsolicited junk e‑mail or posting commercial messages to Usenet groups that aren’t supposed to be used for commercial messages.

In the same book, another writer, Michael Strangelove, answered a question that was key for the media in retrospect, and somewhat prescient as well:

Is advertising allowed on the Internet?

…many people see the Internet as a noncommercial, academic, purely technical environment. Not so: today about fifty per cent of the Internet is populated by commercial users. The commercial Internet is the fastest growing part of cyberspace.

Businesses are discovering that they can advertise to the Internet community at a fraction of the cost of traditional methods. With tens of millions of electronic mail users out there in cyberspace today, Internet advertising is an intriguing opportunity not to be overlooked. When the turn of the century rolls around and there are one hundred million consumers on the Internet, we may see many ad agencies and advertising-supported magazines go under as businesses learn to communicate directly with consumers in cyberspace.


Those were print books aimed at the newbie Internet user.

It also means that if the media had had the foresight to get on the Internet in the earliest years of the 1990s, they would have had to become part of the proposed Commercial Internet Exchange.

But in 1991, 1992 and 1993, online in a newsroom was confined to what was called, in many American (and Canadian) newsrooms, the “geek in the corner.”

The situation was already changing even as those books went to press.

Here is how Wikipedia explained the changes.

The interest in commercial use of the Internet became a hotly debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNet connections. Some UUCP links still remained connecting to these networks however, as administrators cast a blind eye to their operation….

In 1992, Congress allowed commercial activity on NSFNet with the Scientific and Advanced-Technology Act, 42 U.S.C. § 1862(g), permitting NSFNet to interconnect with commercial networks. This caused controversy amongst university users, who were outraged at the idea of noneducational use of their network.


So, the US Congress had opened up the Internet to commercial activities in that country.


The geeks, bearing content


Most of the media was still clueless and didn’t jump at the opportunity, even if they ran Sunday feature stories on the geeks or closing items on the evening news about this thing called “The Internet.”

It is likely that the vast majority of executives with their eyes on Wall Street and paying consultants pushing 1970s media models had no idea that they employed a “geek in the corner,” much less what the geek was doing.

Apart from the tech companies (the growing hardware and software giants, plus the small-office startups and computer science grad students with big ideas) which made up most of Strangelove’s “commercial activity,” the private sector around the world was slow to take up the challenge.


The CBC, as Canada’s public broadcaster, had, at least in those days, a mandate to experiment and innovate. So in 1993, CBC began an experiment working toward streaming radio on the Internet, in cooperation with the Communications Research Council. But as an experiment, and coming from a public broadcaster, there was no thought of charging for the service. (That early history shows the kinds of problems that executives faced. And it was a lot harder for the private sector, which was expected to make money, and it is even harder now in the era of bean-counting consultants and their talk of profit centers.)

When business executives finally realized that the Internet was open to commerce, the news media was one of the first industries to make a major effort to invest in posting their material, most of it repurposed, on the World Wide Web. The move was most often driven by those managers and employees still around from the videotex and teletext days, who saw that web-based news might succeed where the 1980s projects had failed. Usually, these experiments were not sanctioned by head office and the money came from a little creative budgeting.

That meant the content had to be free, right from the beginning.


There’s one factor that today’s audience-metrics-obsessed media bean counters have never considered when they say “if only”: their all-important audience. The audience for online news in the mid-1990s was made up of Internet and Web early adopters, and most had adopted the culture of free information. In those early days, no media outlet was willing to make an investment in online content that was actually worth paying for. Most of the news was repurposed from existing print, radio or television, which was readily available (for a price, of course).

So when the first media pioneers ventured on to the Internet in the mid-1990s (including CNN, NBC, the CBC where I worked, the Raleigh News and Observer, The Globe and Mail, The Toronto Star and others), the media was caught in an evolutionary feedback mechanism.

To attract the early adopter audience, the news had to be free. The audience that might have paid was not yet online (although the richer business types were using proprietary electronic services, which meant they didn’t need to pay for web content either. That pre-web willingness to pay for business information is why the Wall Street Journal paywall has worked while others failed).

Where was the money to come from? The early click through rates for the first banner ads (which many in the audience actually objected to) were dismal.

Development of good websites cost time and money, and the media was already facing the culture of free. (I predicted trouble for newspapers when I was interviewed by Craig Saila for the Ryerson Review of Journalism in fall 1996, for an article published in spring 1997; registration required, but it is also available on Craig Saila’s site.)

The headline pretty much sums up the attitude the students of the time had toward media management, which was failing to adapt to the fast-changing environment.

Looks like the students were right. The Ryerson article covered just the media that had had the courage to venture on to the web by the fall of 1996.

Most of the news media were latecomers and took almost a decade to catch up in page views with the early starters. The latecomers couldn’t charge for their content because 95% of those early online services, their competitors, were free. Neither group was putting that much money into real web content.

If only


There was one day when all the media could have made sure they could charge for content: September 1, 1993.

For it was in September 1993 that the Internet (not yet the web) took an evolutionary leap from a government, military and academic information network and communication system to one used by the public.

In September 1993, America Online, then the largest paid service, opened a gateway for its subscribers to Usenet, the “newsgroups” of the Internet. It is a time that those who then thought the Internet was their exclusive domain remember with horror: called by some the tsunami, and described by Wikipedia as the “Eternal September,” it was the moment their private party ended.

Yes, there were a few news organizations with a presence on CompuServe or America Online on September 1, 1993, but far too few, and the content was far too thin.

If the media wanted to charge for content after September 1993, when thousands of AOL subscribers ventured on to the adolescent Internet of the time and embraced a culture where they expected free content, it was already too late.


A tectonic collision occurred that September: the leading edge of one continent collided with another.

Invasive species penetrated the long-balanced media ecosystem and disrupted it beyond imagination. So, as evolutionary forces work, will the news media adapt to the new environment?



The future of TV news will be decided on Sunday

On Sunday, July 11, 2010, the future of television news will be decided in South Africa.


As billions watch the Netherlands and Spain play for the FIFA World Cup, a few tens of thousands will be watching that all-important match in 3D, actually “stereoscopic 3D” (the official term, which you’ll hear a lot from now on).

The experts are already saying that the main electronic item this Christmas will be the dual-capacity 2D and 3D HDTV set, so the consumer can watch 3D broadcasts (with glasses for now) from cable, satellite or off air, and switch to standard 2D for the rest of the program schedule.


Three-D is coming faster than anyone expected. The experts, those who are already shooting 3D, say the technical requirements of 3D will demand a highly professional approach that, done properly and skillfully, will return the photographic profession to the standards of the film era.

In May, I attended the HotDocs conference in Toronto.

There were two sessions on emerging 3D: one focused on production, the second on technical issues.

I walked into the first session, “Crafting 3D,” expecting to hear about megabuck high-tech equipment and a future years away, only to find out that the future is now.

It began with the release of James Cameron’s Avatar in London on Dec. 10, 2009. The next phase of evolution began on June 11, 2010, when the first games of the FIFA World Cup opened in South Africa. Twenty-five of the games were to be broadcast in “stereoscopic 3D” using Sony cameras, the $100,000 HDC-1500 and the new $30,000 P1, plus the backend software required to put it all together. The 3D games will be broadcast by ESPN’s and Sky’s new 3D networks and by Al Jazeera. (There’s a summary in this report from Broadcasting and Cable.)

The final game will also be broadcast in Canada in 3D by the CBC, which has the Canadian rights for the World Cup. Unfortunately the CBC (once, long ago, a leader in technology) was, under its current management, late to announce it would broadcast the 3D game, and my usually reliable sources in the Toronto Broadcast Centre say it was pressure from the Rogers cable company, also one of the sponsors of the broadcasts, that forced CBC Sports into the 21st century.

The buzz was out there for 3D coverage long before the first whistle of the World Cup. Three-D sets were already in sports bars and sports pubs (the Masters was broadcast in 3D) and, according to the members of the second, tech panel, “Stereoscopic 3D from script to screen,” what fans were seeing in UK sports pubs in May was already driving consumer demand for the sets far beyond anything Avatar could have done. It was that panel that predicted that the biggest electronic item this Christmas will be a dual-capacity 3D/2D HDTV monitor. The standard HDTV set is already obsolete.

This week, newspapers around the world are full of ads for 3D sets. (But one has to wonder if the bean-counting corporate publishers are paying any attention beyond the revenue from those ads.)

The networks around the world are keeping a close eye on the World Cup and there are already demands for 3D content as the world telecoms put together 3D offerings on satellite and cable. (This is also going to be a huge headache for network bean counters who, just a couple of years ago, spent hundreds of millions implementing HDTV, only to find that investment has to be made all over again with 3D.)

There are already shoulder-mounted 3D cameras, about the size of the first heavy video cameras or a large, professional 16mm film rig.

At “Script to Screen” I asked the panel when there would be news crews using 3D cameras. The consensus answer was “within two years.” Discovery already plans a 3D channel for nature and science programming, which was also the first attractive market for HDTV.

The consensus of the panel was that, as with HDTV, the first efforts in 3D by news organizations will be high-end, prestige documentaries, then the current affairs programs and finally the evening newscasts. The panel said that there were rumours in the 3D community that 3D planning by CNN was already well underway.

Panasonic is expected to launch a smaller, lighter 3D camera costing $21,000 this autumn, a camera that reminds one of the movie robot Wall-E.

At the recent Profusion trade show in Toronto, both Sony and Panasonic had 3D displays. The Sony display was a mind blower: a large 3D HDTV with a video of fish in an aquarium, quality that came close to Avatar. Panasonic had a prototype camera that did not impress the tech-savvy crowd, whether the fault lay with the technology or with the sales staff who set it up. The glasses didn’t work well and there were ghost images on the screen. (But it is likely those bugs will be worked out by the official launch.)

The electronics business wants a consumer-friendly 3D market (amateurs and family shooters are now an estimated 90 per cent of the photo and video market), wants those photographers to shoot 3D, and has already announced low-end 3D equipment. But the experts on that panel said that shooting 3D so that it creates an environment that draws in the viewer (and doesn’t make them sick or trigger a headache) will require high skill levels.

In other words, it could be a return to the film era. There were millions of amateur photographers during the film era, but in 95% of cases, the professional was paid for the professional product.

Professional photographers and videographers have been facing the future with fear and loathing for the past few years as the value of their work has declined in competition with prosumers and amateurs whose work is easily available for just a dollar, or more often for free.

By Christmas 2010, that too will begin to change. The best present for those who want to create visually and earn a decent living is that a blue alien and the beautiful game will revive and reinvigorate professional photography and videography.

The professional will have to master parallax, depth cues, stereoscopic depth perception and depth resolution, interocular distance, depth placement, convergence, orthostereoscopy and the audience’s 3D comfort zone. (All beyond the scope of this blog.)

So whether you are cheering for the Netherlands or Spain, give a couple of cheers for the 3D crews as well. Because if it works, it’s a whole new ball game.




The iPad is an evolutionary link, leading to a new species: the hypo-active computer

    I got to play with an iPad during a business lunch yesterday. I have to say that I was impressed. I’m still not going to run out and buy one--at least not right away.
    The iPad is a step in the evolution toward a new, simpler, less active species of computer system, one that follows the axiom of Keep It Simple, Stupid.
    Call it hypo-active computing (as opposed to today’s hyperactive, over-featured systems).
    A hypo-active computer tablet can do what computers once promised to do, make life simpler.
    The hypo-active tablet will be the death blow to newspapers printed on paper. Whether “newspapers” will die with the newsprint or whether there will be a renaissance will depend on how today’s corporate management adapts to a new world. (I’m not optimistic. If news media corporate management still doesn’t “get” the web, it’s certainly not going to understand tablet computing.)
    It’s also an open question whether the iPad and Apple will survive  and win the evolutionary race as the new species of hypo-active tablet emerges.
    The iPad is not yet available north of the border, although lots of people lined up in Buffalo and Bellingham to get one last week.    My luncheon companion had a friend send an iPad up from the United States.
    (Apple has just announced it’s delaying the international launch of the iPad  due to high consumer demand in the United States. The Canadian iPad launch was originally rumoured to be about 10 days from now. )
    As a photographer, I fell in love with the Guardian’s photo of the day app. Crisp, gorgeous resolution and colour. 
    I checked out the teaser edition of the New York Times (a few top stories). But for the Times to work, it should have a couple more teaser editions, one for sports fans and one for the arts.
    I reread part of the Winnie the Pooh edition that Apple bundles with the iPad. The colour illustrations appear much better than faded editions on a printed page.
    Google maps in satellite mode are much better than on my current home monitor.
    Those critics of the iPad who wanted a laptop with camera and phone are caught in old-style, hyperactive computer mode, although there will likely be a hyperactive version of the iPad offered to those users.
    I can see myself reading the morning news on a tablet device of some type, rather than leafing through the morning paper (and ignoring the hyperactive morning news shows on TV) .
    I would like to get my photography magazines on a tablet. Wouldn’t take up so much space in my office and might spare a few trees.
    As a hiker, I would love a GPS-enabled tablet device with not just Google maps and satellite images but full topographic map capability (perhaps tied into those satellite images). The iPad is about the size and shape of a plastic map case, and just a little heavier. It would need a robust housing, but unlike maps (unless they’re plasticized) it won’t dissolve in a heavy rainstorm. A night-proof and storm-proof display system would be a big help. (Today’s hand-held GPS hiking devices are too small and the automobile GPS units are not really suited for hiking.)
    Yes, I would pay for all three of those applications.

    At this point, it looks like Apple is cramming too much into the iPad for it to be a true make-life-simple, hypo-active computer system.
    A good KISS hypo-active computer tablet should have:

  •     Lots of memory (Moore’s law applies here; memory capacity will increase)
  •     A good display for text and graphics
  •     Flexible and powerful connectivity, through Wifi, 3G and USB
  •     The ability to operate completely independent of any wireless or wired communication system. (In Canadian terms, can you take it to the cottage and read Harry Potter on the deck overlooking the lake?)
  •     Programming apps and features that enhance its simplicity. That means ease of use. Programmers and software managers must have a Zen-like approach to the hypo-active. Give up your ego. Write simple programs that do basic things (remember the days of MS-DOS programs that did just that?)
  •     Letting the user decide how the hypo-active computer works for them. That means the person with the hypo-active tablet can read a book bought from any e-book store, or watch a movie with an external Blu-ray device plugged into that USB port.

    A hypo-active tablet computer and higher-level hyperactive tablets will mean the death of broadcast television entertainment once you can download and watch your favourite shows directly from the original producer.
    It will also bring changes in broadcast television news, sports and specials. All the tablet would need would be a built-in tuner and a USB HDTV antenna or connection to a mini satellite dish. For sports fans, it means watching the big game anytime, anywhere.

For news, it brings more uncertainty. No one could have foretold the changes that cable made to news.

    If I can venture one prediction, a hypo-active tablet with TV capability will finally bring the end of the hyperactive, always-breaking “breaking news” nonsense. Especially if a viewer has Twitter available on the same tablet, they’re going to know that the “breaking news” story happened five hours earlier.

    (It also might be time to consider selling your cable company stock, unless it has other telecommunication arms.)

    The key point in the evolution of a popular hypo-active tablet  is price.

     The iPad is too expensive. With prices starting at US$499 for the Wi-Fi version, and a 3G version running from $629 for 16 gigabytes of storage up to $829 for 64 gigabytes, the iPad is competing with the workhorse, the laptop. Consumers, apart from Apple evangelists and early adopters, don’t need both.
    Apple is pricing itself out of the key market: teenagers and college students. Can teenagers, students and young cubicle workers afford a laptop (and at this point the iPad is not a substitute) plus an iPhone plus an iPod? The digital generation may love Apple products but the iPad, at the moment, may be one device too many.

    There are other rivals coming to the market soon, much cheaper rivals. The Canadian bookstore chain Indigo is pushing the Kobo reader, priced at $149 (Kobo products are already available for the BlackBerry and smartphones). There are reports of a $99 reader later this year.

    If I can venture a guess, the hypo-active, keep-it-simple-stupid tablet computer that wins in the marketplace is not going to come from Apple or Amazon. That computer will come from some small company in Asia: China, Hong Kong, Singapore, Taiwan or India, where the demand for cheap hardware is highest. If that company comes up with a hypo-active tablet computer in the $80 to $100 range, one that has ease of use and simple, minimal features but a powerful memory and display system, it will capture the market.

    That form of hypo-active computer will be the winner. It will be a complement, not a substitute, for a laptop or a smart phone.

    Imagine this. Breakfast time on a weekend. You get your morning coffee or tea. You put your tablet on a little stand and read the morning wires and tweets. Since it’s the weekend and you’ve got time, you decide to call up that fancy omelette recipe you always wanted to try. So you take your tablet into the kitchen (something you really wouldn’t want to do with a laptop, and your smart phone screen is too small), set it on the kitchen counter, call up the recipe and whip up that omelette. Back at the dining room table, you then read through the feature section of the paper and finally call up a map for your afternoon outing.

    This scenario has been written about by futurists and tech writers for the past 30 years. Perhaps, now, it’s here. Perhaps. We’ll see.

    (Note: in a tweet in response to my blog on books and apps, Cody Brown noted: “I wouldn’t imagine an iPad app/book being that different than a video game for the first gameboy-It’s bound to a delivery device.” Smart thinking on a slightly different track than where I’m going, but certainly prescient.)