
The Future of Search: Will We Still Google It?

Google grew from a Stanford project into a $3T tech giant, pioneering search, data scaling, and AI, now challenged by regulation and chatbots.

In the mid to late 1990s, Google’s co-founders, Larry Page and Sergey Brin, were PhD students at Stanford University’s Computer Science Department. One of the problems Page was working on was how to increase the chances that the first entries someone would see in the results of a web search would be useful, even authoritative. What was needed, as Page told Steven Levy, a tech journalist and historian of Google, was a ‘rating system’. In thinking about how websites could be rated, Page was struck by the analogy between the links to a website that the owners of other websites create and the citations that an authoritative scientific paper receives. The greater the number of links, the higher the probability that the site was well regarded, especially if the links were from sites that were themselves of high quality.

Using thousands of human beings to rate millions of websites wasn’t necessary, Page and Brin realised. ‘It’s all recursive,’ as Levy reports Page saying in 2001. ‘How good you are is determined by who links to you,’ and how good they are is determined by who links to them. ‘It’s all a big circle. But mathematics is great. You can solve this.’ Their algorithm, PageRank, did not entirely stop porn sites and other spammers infiltrating the results of unrelated searches – one of Google’s engineers, Matt Cutts, used to organise a ‘Look for Porn Day’ before each new version of its web index was launched – but it did help Google to improve substantially on earlier search engines.

Page’s undramatic word ‘recursive’ hid a giant material challenge. You can’t find the incoming links to a website just by examining the website itself. You have to go instead to the sites that link to it. But since you don’t know in advance which they are, you will have to crawl large expanses of the web to find them. The logic of what Page and Brin were setting out to do involved them in a hugely ambitious project: to ingest and index effectively every website in existence. That, in essence, is what Google still does.
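The recursive idea Page described can be made concrete in a few lines of code. What follows is a minimal sketch of the circular rating scheme – each page’s score is a weighted sum of the scores of the pages linking to it, settled by repeated iteration – not Google’s actual implementation; the toy link graph, the damping factor of 0.85 and the fixed iteration count are all illustrative assumptions.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.

    Returns a dict of scores that sum to 1. Each iteration lets every
    page pass a damped share of its current score to the pages it
    links to -- the 'big circle' Page described, solved by repetition.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start everyone equal
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:                   # distribute score along links
                    new[q] += share
            else:
                for q in pages:                  # page with no outgoing links:
                    new[q] += damping * rank[p] / n  # spread its score evenly
        rank = new
    return rank

# Toy web: A, B and D all link (directly or not) towards C.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
```

On this toy graph, C ends up with the highest score – it collects links from three pages, including well-linked ones – which is exactly the behaviour the citation analogy predicts.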

One way to approach the problem would have been to buy the most powerful computers available. When Google launched it had around $1 million in the bank. It raised $25 million from venture capitalists in 1999, but that still wasn’t enough to pay for a decent number of expensive machines. Instead, Google’s engineers lined metal trays with electrically insulating cork, and packed them with low-cost computer components of the kind found in cheap PCs. One early Google employee, Douglas Edwards, remembers visiting the Santa Clara data centre where Google was renting space for the hardware. ‘Every square inch was crammed with racks bristling with stripped-down CPUs [central processing units],’ he writes. ‘There were 21 racks and more than fifteen hundred machines, each sprouting cables like Play-Doh pushed through a spaghetti press. Where other [companies’] cages were right-angled and inorganic, Google’s swarmed with life, a giant termite mound dense with frenetic activity and intersecting curves.’