It is the computing equivalent of the adage "many hands make light work".
Researchers with gigabytes of data to process worked out years ago that there was an alternative to prohibitively expensive number crunching on supercomputers: they could share the computational load around the world.
This approach has seen home computers across the globe helping out in the search for extraterrestrial life.
The SETI@home project, part of the Search for Extraterrestrial Intelligence (SETI), harnesses idle PC time to analyse radio telescope data for intelligent broadcast patterns that would indicate someone is out there.
Hundreds of thousands of internet-connected computer owners have been happy to donate unused processing time, via an unobtrusive screensaver-based program, to what is seen as a worthy project.
SETI@home is one example of grid computing in action: making use of spare processing cycles on a connected cluster of computers to create a powerful "virtual supercomputer".
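The mechanics are simple enough to sketch in code. The following Python fragment is a toy illustration only (the names and the "interestingness" metric are hypothetical, not SETI@home's actual software): a coordinator carves a large dataset into independent "work units", farms them out to volunteer processors, and keeps whichever results look interesting.

    # Toy sketch of the work-unit model behind volunteer grid computing.
    # All names are hypothetical; real grid middleware adds scheduling,
    # retries and result validation on top of this basic pattern.
    from concurrent.futures import ProcessPoolExecutor
    import math

    def analyse_work_unit(unit):
        """Stand-in for signal analysis: score one chunk of telescope data."""
        unit_id, samples = unit
        # Toy "interestingness" metric: mean power of the samples.
        score = sum(s * s for s in samples) / len(samples)
        return unit_id, score

    def make_work_units(data, unit_size):
        """Carve the full dataset into independently processable chunks."""
        for i in range(0, len(data), unit_size):
            yield i // unit_size, data[i:i + unit_size]

    if __name__ == "__main__":
        # Fake "radio telescope" data: a quiet baseline with one strong
        # signal buried in work unit 7.
        data = [math.sin(i / 10.0) for i in range(10_000)]
        for i in range(7_000, 7_100):
            data[i] += 5.0

        # Each pool worker plays the role of one volunteer PC donating
        # spare cycles; on a real grid these would be machines scattered
        # across the internet.
        with ProcessPoolExecutor(max_workers=4) as volunteers:
            results = list(volunteers.map(analyse_work_unit,
                                          make_work_units(data, 1_000)))

        best_unit, best_score = max(results, key=lambda r: r[1])
        print(f"Most interesting work unit: {best_unit} (score {best_score:.3f})")

The key design property is that each work unit can be analysed in isolation, so any volunteer machine can pick up any chunk whenever it has cycles to spare.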
While definitions of grid and cluster computing vary, the general concept of harnessing the power of a connected group of processing units has increasing appeal to researchers and businesses alike.
Paul Bacon, an Auckland-based account manager for Oracle, says the majority of clients now view a cluster of low-cost servers as the best platform for running Oracle's databases and business applications.
"Customers are very interested in the ability to reduce the cost of their IT infrastructure," Bacon says.
"The whole grid story is about lowering a customer's total cost of ownership by deploying on industry standard servers. Most enterprise software has traditionally been rather expensive to deploy and a large part of that is the need to purchase large and expensive hardware platforms to accommodate it."
Bacon says a key benefit of running a grid of cheaper servers rather than a larger single server is the ability to boost resources as they are needed.
"In the traditional model if you bought a server and the capacity required by that application was underestimated, then you performed a fairly expensive 'forklift upgrade': bring the forklift in, roll that server out, roll a bigger one in.
"That is an expensive and embarrassing exercise for the IT department. One of the nice parts about the grid model is that you can incrementally add resources as you need to."
Oracle has been strongly pushing enterprise grid computing using standard servers, going so far as to label it a fundamental shift in IT architecture and "the fifth major paradigm in the history of computing" (after mainframe, midrange, client-server and internet-based architectures).
Roland Slee, an Oracle Asia-Pacific vice-president, says: "It's about taking the smallest, most affordable computing elements available and using collections of these to deliver a higher quality of service, better performance and better reliability at a dramatically lower cost."
Mainframe and supercomputer manufacturers would probably have a different opinion.
One local organisation embracing the grid concept is Public Trust, which is moving to a "cluster" architecture through a major server infrastructure upgrade now under way.
Changing from a single large server to three clusters will reduce maintenance and licensing costs, says Public Trust's enterprise architecture manager, Ross Payne.
The upgrade will also consolidate a number of Oracle databases on to one cluster, allowing them to be managed much more effectively.
"One of the significant differences between the old way we've been doing things - using the single large server - is that previously we'd have to buy or lease equipment of a certain size and we'd have to work out three or four years in advance what our needs were going to be," says Rod Orr, a principal database specialist at Public Trust.
"If we underestimated what the requirements were going to be, we'd end up with a box that wouldn't perform. And if we over-estimated, we'd end up spending too much money on something we didn't really need. With the cluster model we only buy blades as we need them. If we need more capacity we can go and buy a new cheap blade and slide it in."
One of the world's best-known examples of cluster computing is Google, whose search engine is powered by several thousand connected PCs at various locations.
Futurist Jeff Wacker, of global computing giant EDS, says the rise of cluster computing will not spell the end of supercomputers.
Wacker predicts a future where business meets its information processing requirements through a mix of technologies including:
* Grid computing (which he defines as a system that captures and uses spare processing capability).
* Cluster computing (which could involve standard servers, mainframes or supercomputers).
* Utility computing (on-demand processing power supplied by an external provider).
The man dubbed "the father of grid computing" happens to be a native New Zealander, although he left home in the 1970s.
Ian Foster is a professor of computer science at the University of Chicago and associate director of the mathematics and computer science division of Argonne National Laboratory. He developed Globus, the most widely deployed grid software.
The European Organisation for Nuclear Research (CERN) is relying on a grid project to analyse data generated by its Large Hadron Collider, a particle collider probing conditions close to those of the Big Bang.
CERN researchers have hooked into a grid network of 100,000 PCs that will store and process the torrent of data the collider generates and, they hope, uncover some of the secrets of how the universe began.