Info.cern.ch - the website of the world's first-ever web server. Hard to believe that here, in Geneva, amid this leafy, campus-like setting, the world wide web was born.
How else to mark the occasion but by connecting?
The Wi-Fi logon form requires a few details and the name of a Cern person to verify access. In a flash I'm online, and not surprisingly it's stonking fast - mainlining, no doubt, into very fat fibre and big servers at the centre of the internet universe.
My only worry is whether my netbook has enough battery - my European adaptor plug is unable to negotiate the idiosyncratic Swiss wall socket in my sparse, but functional Cern hostel room.
Cern, an acronym derived from Conseil Européen pour la Recherche Nucléaire, is the European Organisation for Nuclear Research, and home of the somewhat broken Large Hadron Collider.
But it's also where, in 1989, Cern scientist Tim Berners-Lee wrote a proposal - labelled "vague, but exciting" by his boss - to connect the fledgling, but largely impenetrable, internet and personal computers with "hypertext".
He reckoned a single, distributed-information network might be handy for the Cern physicists to share all the computer-stored information at the laboratory.
Hypertext - those links we all click on at will to warp-drive through screeds of interconnected information - was just the beginning.
By Christmas 1990, Berners-Lee had defined the web's basic building blocks - more acronyms: URL, HTTP and HTML - and had written the first browser and server software.
The bright idea - sharing knowledge, information, maybe power - caught on fast. By 1993-4 the information revolution was under way, and the rest is history.
But what to call this new phenomenon, an invention that has already had more impact than the printing press?
Early thoughts were naff. The Mine of Information. The Information Mesh. In May 1990, it became www - the WorldWideWeb - a mouthful that, for a while, was shortened to dubdubdub, but is now a prefix no one bothers with any more. Www is so pervasive, it's implicit. Thinking big is what they do at Cern.
The troubled Large Hadron Collider has thrown up another monster information problem in need of an innovative solution.
When they're running again, the twin proton beams circulating underground almost at the speed of light will be brought together at various points around the 27km ring to smash into one another. Why? Something to do with figuring out how the universe began and just what matter is made of.
The problem is that the machine will make something like 40 million collisions every second. That really is too much information. So what the underground computing power - the first-level trigger - does is pick out just the interesting collisions that have occurred in the ring's massive magnetic detectors and ignore the rest. In essence, it makes very fast decisions about which snapshots of crashing protons to keep, bringing the numbers down from 40MHz to 100kHz.
How do the computers decide? From simulations of events that could appear in the collisions.
True, if they haven't simulated something, then they won't be looking for it. But these physicists have plenty of theories about how smashed-up particles might behave and the kind of mostly invisible objects that could result.
The detectors' sensors are on the lookout for things like high velocity and transverse momentum - flickers of decay in the extremely short lifetime of particles smashed to smithereens. By having several magnetic detectors around the ring - all looking in slightly different ways - the physicists reckon the chances of missing something are pretty low.
So the first-level trigger gets things down to 100,000 potentially interesting events a second - snapshots in memory of various particles' locations and passage through magnetic fields - that travel via optical fibre to the above-ground control room. The second selection, which takes a little longer, gets the numbers down again to just 100Hz - 100 events per second, the maximum the scientists figure they can keep for further analysis.
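To make that cascade a little more concrete, here is a rough sketch in Python of a two-stage filter. The rates - 40 million collisions a second whittled down to 100,000, then to 100 - are the ones the physicists quote; the pretend "events", the transverse-momentum thresholds and the pass fractions are invented purely for illustration, nothing like Cern's actual trigger code.

```python
import random

# A rough sketch of a two-stage trigger cascade. The target rates
# (40 MHz -> ~100 kHz -> ~100 Hz) are the ones described above; the
# event model and thresholds below are invented for illustration.

def make_event():
    """A stand-in 'collision': just a random transverse momentum in GeV."""
    return {"pt": random.expovariate(1 / 5.0)}  # mostly soft events, a few hard ones

def level1_trigger(event):
    """Fast, coarse cut: keeps roughly 1 in 400 events (40 MHz -> ~100 kHz)."""
    return event["pt"] > 30.0  # hypothetical threshold

def level2_selection(event):
    """Slower, fuller look: keeps roughly 1 in 1000 of what survives level 1."""
    return event["pt"] > 45.0 and random.random() < 0.02  # hypothetical refinement

n, kept = 1_000_000, 0
for _ in range(n):
    event = make_event()
    if level1_trigger(event) and level2_selection(event):
        kept += 1  # only these snapshots would be stored for analysis

print(f"kept {kept} of {n:,} simulated collisions "
      f"(the real cascade keeps about 100 in every 40 million)")
```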
That's still quite a lot of data - something like 10 iPods-full per second, or 15 petabytes (15 million gigabytes) a year - enough to fill more than 1.7 million dual-layer DVDs. Crikey. What to do?
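For the sceptical, the DVD figure survives a back-of-the-envelope check - assuming a dual-layer DVD holds 8.5 gigabytes and taking those round numbers at face value:

```python
# Back-of-the-envelope check of the storage figures above, assuming a
# dual-layer DVD holds 8.5 gigabytes and reading "15 petabytes" as
# 15 million gigabytes, as the text does.

petabytes_per_year = 15
gigabytes_per_year = petabytes_per_year * 1_000_000   # 15,000,000 GB
dvd_capacity_gb = 8.5                                 # dual-layer DVD

dvds_per_year = gigabytes_per_year / dvd_capacity_gb
print(f"{dvds_per_year:,.0f} dual-layer DVDs a year")  # ~1,764,706 - "more than 1.7 million"
```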
Enter the grid. Truckloads of data will be transferred from Cern over dedicated fibre-optic links to 11 large computing centres around the world at rates of up to 10 gigabits per second.
Those large centres then send data to and receive data from 200 smaller centres worldwide, via research networks and sometimes the standard public internet.
Altogether, the Cern grid will harness 100,000 processors at 140 scientific institutions in 33 countries - spreading the computational load so scientists in Chicago, for example, can analyse a collision about 10 minutes after it's happened in the collider.
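As a toy picture of that fan-out - and only a toy: the centre names and the simple round-robin routing below are invented, and the real grid middleware that schedules transfers and jobs is far more sophisticated - it might look something like this:

```python
# A toy picture of the tiered fan-out described above: Cern feeds 11 large
# centres, which in turn feed about 200 smaller ones. The centre names and the
# round-robin routing are invented for this sketch.

TIER1 = [f"tier1-{i:02d}" for i in range(11)]   # the 11 large computing centres
TIER2_PER_TIER1 = 200 // len(TIER1)             # roughly 18 smaller centres each

def route(dataset_id):
    """Assign a dataset to a tier-1 centre, then to one of its tier-2 centres."""
    t1 = TIER1[dataset_id % len(TIER1)]
    t2 = f"{t1}/tier2-{(dataset_id // len(TIER1)) % TIER2_PER_TIER1:02d}"
    return t1, t2

for dataset_id in range(5):
    tier1, tier2 = route(dataset_id)
    print(f"dataset {dataset_id}: {tier1} -> {tier2}")
```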
Like the world wide web, the world wide grid is an application of the internet.
If the web democratised information, then the grid democratises the processing of it - providing everyone with unbelievable number-crunching power.
Back in 1989, no one had any idea what Tim Berners-Lee's www would spawn. On the brink of the next revolution - wwg - who knows what will be unleashed?
* Chris Barton travelled to Geneva on a travel scholarship courtesy of the World Conference of Science Journalists.