Google has indexed billions of web pages. In fact, according to Statistic Brain, Google had 67,000,000,000 (67 billion) pages indexed in 2014. But while that’s an impressive number, it still means that Google has yet to live up to its name.

Google originally got its name from the math word googol, which is a number. A really big number. A googol is a 1 with 100 zeros after it. It looks like this:

**10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,**

**000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000**

But usually we just write it the way Google does in its calculator: **1e100**.

So – how long will it take for Google to index 1 googol web pages? A lot longer than you think. If you hate math, you can skip to the bottom section and I will give you the answer – it won’t happen in your lifetime.

## Calculating the growth rate

Figuring out how many pages Google has indexed is annoying – mostly because they don’t exactly make that public knowledge. I did way too much research on this – and then gave up. Thankfully, I found Wonder and was able to have them do the research for me. After all the research, I eventually had to just choose one of the estimates and run with it, or I would lose my sanity. The data set I chose to work with was the one I quoted above from Statistic Brain.*

I calculated the average growth rate of these numbers using the formula:

**P = (f / s)^(1/y) - 1**

Where *f = 67* **(the ending number of pages indexed, with the zeros left off)**, *s = 11* **(the beginning number of pages indexed, again with the zeros left off)**, and *y = 6* **(the number of years)**. I actually performed these calculations in the Google search bar just for the irony.

##### The end result is P = 35%, rounded up.
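For anyone who wants to double-check that 35% figure, here is the same calculation as a quick Python sketch (the variable names simply mirror the formula above):

```python
# Average annual growth rate of Google's index, per the Statistic Brain figures:
# about 11 billion pages at the start of the six-year window, 67 billion at the end.
f = 67  # ending pages indexed, zeros left off (2014)
s = 11  # beginning pages indexed, zeros left off
y = 6   # number of years between the two figures

p = (f / s) ** (1 / y) - 1
print(f"{p:.1%}")  # → 35.1%, the ~35% used above
```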

But – the problem with this is that it assumes Google can keep up a 35% growth rate for a really, really long time – which is unlikely. So to be fair, I also figured out Google’s slowest growth rate over the last few years, so we can see two options: one where Google goes on an indexing spree and maintains the 35% growth rate, and one where Google just maintains a more realistic growth rate (which will still become increasingly hard with each passing year).

The slowest indexing growth rate Google saw over that data set was between the year 2011 and 2012 – which looks like this:

**50,000,000,000 - 46,000,000,000 = 4,000,000,000**

**4,000,000,000 / 46,000,000,000 = 0.08695652**

##### Which is 8.7% when converted and rounded up.
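In Python, that slowest-year check looks like this (same numbers, just spelled out):

```python
pages_2011 = 46_000_000_000
pages_2012 = 50_000_000_000

# Year-over-year growth: the absolute increase divided by the starting count.
growth = (pages_2012 - pages_2011) / pages_2011
print(f"{growth:.1%}")  # → 8.7%
```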

## I have a headache, are we there yet?

In order to figure out the year that Google would actually index 1 googol web pages, we would need to use a formula like this:

**67,000,000,000 × 1.35^y = 1 googol**

Now – I’m not exactly a math professor, so I might not do this the fastest way – just the way that makes sense to me. Remember, we have to get our unknown variable all by itself – which in our case is *y*. So we divide both sides of our equation by 67 billion and we get this:

**1.35^y = 1 googol / 67,000,000,000**

Getting closer. This is the tricky part – and the part that I had to have a friend look at to make sure I was correct (so if I’m still wrong, blame my math friend, not me). In order to get *y* into a position where it is no longer an exponent, we need to use logarithms. As scary and frustrating as logarithms may be for many of you, there is actually a group of psychologists who believe we naturally do math logarithmically before learning to count linearly. Anyway, this is what our equation looks like now:

**y × log(1.35) = log(1 googol / 67,000,000,000)**

OK, let’s keep moving. Our next step (and the final step for getting *y* all by itself) is to divide both sides of our equation by log(1.35). The new equation looks like this:

**y = log(1 googol / 67,000,000,000) / log(1.35)**

Now this is the easy part. To figure out how much 1 googol / 67 billion is, I simply typed it into Google. It equaled *1.4925373e+89* – which is obviously rounded quite a bit – but that is the best number I could get from a calculator, and it will probably be accurate enough for what we’re doing.

Wait. Back up a bit. What the heck does the *e* mean in that number? You can think of the *e* as basically just meaning “exponent”. Another way we could write that number is *1.4925373×10^89*. And lastly, yet another way to write it would be:

**149,253,730,000,000,000,000,000,000,000,000,000,000,000,000,000,**

**000,000,000,000,000,000,000,000,000,000,000,000,000,000**
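That *e* shorthand is exactly how most programming languages display large floating-point numbers, too. A quick Python check (just an illustration of the notation – nothing specific to Google’s calculator):

```python
import math

x = 1.4925373e89  # e-notation for 1.4925373 × 10^89

print(f"{x:e}")  # → 1.492537e+89 (default formatting shows six decimal places)
print(math.isclose(x, 1.4925373 * 10 ** 89))  # → True: the two spellings agree
```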

The next step, then, is to figure out what log(1.4925373e+89) and log(1.35) equal. Again, according to Google, they equal 89.1739251934 and 0.13033376849 respectively. So now our equation looks like this:

**y = 89.1739251934 / 0.13033376849**

Pop that bad boy into Google and you finally have your answer:

**y ≈ 684**
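The whole derivation collapses into a couple of lines of Python. This is a sketch of the same math – note that `math.log` is the natural log rather than log base 10, but since we divide one log by the other, the base cancels out:

```python
import math

pages_2014 = 67_000_000_000
googol = 10 ** 100

# 67 billion × 1.35^y = 1 googol  =>  y = log(googol / 67 billion) / log(1.35)
y = math.log(googol / pages_2014) / math.log(1.35)
print(round(y))  # → 684
```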

In case you forgot, *y = years*. So approximately 684 years from the end of 2014 – which would be the year 2698 – we should expect Google to finally live up to its namesake and have 1 googol pages indexed.

##### “If we assume a 35% compounded annual growth, Google will index 1 googol web pages in the year 2698.”

But that was assuming the crazy-fast rate of 35% growth year after year for the next 684 years. If we wanted to be perfectly accurate, we would have to show that growth rate slowing down over time, because it would likely become more and more difficult to maintain. But then again, maybe technology continues to grow just as fast and Google finds that it’s not difficult to keep up that rate.

Either way, here is the math for the slower rate we came up with earlier – an 8.7% compounded annual growth rate. We start off with almost the same thing as above; the only difference is that we multiply by 1.087 instead of 1.35:

**67,000,000,000 × 1.087^y = 1 googol**

I will save you from having to see all the math worked out again. The only number that changes is log(1.087), which equals 0.03622954408. Our final answer using the 8.7% growth rate is:

**y = 89.1739251934 / 0.03622954408 ≈ 2,461**
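The same two lines of Python, swapping 1.35 for 1.087, confirm the slower-growth answer:

```python
import math

# 67 billion × 1.087^y = 1 googol, solved for y
y = math.log(10 ** 100 / 67_000_000_000) / math.log(1.087)
print(round(y))  # → 2461
```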

Again, *y = years*, so approximately 2,461 years from the end of 2014 brings us to the year 4476 – which will also be America’s 2700th anniversary, or its Vigiseptacentennial.

##### “IF WE ASSUME An 8.7% COMPOUNDED ANNUAL GROWTH, GOOGLE WILL INDEX 1 GOOGOL WEB PAGES IN THE YEAR 4476.”

## Tangent

So the name for the 1,000th anniversary is the Millenary. But using that would mean I would have to figure out how to name the 2.7-thousandth anniversary, which seems harder than just sticking with hundreds.

**“So, if I were to name the 2700th anniversary, I would call it the *Vigiseptacentennial*.”**

Vigi- is from the Latin prefix vigint-, meaning 20; septa- is the Latin prefix we use for 7; and centennial is the word we use for 100 (again from Latin, to keep it all consistent).

## Back to Google

Google likes to change the very meaning of an already established word. Googol became Google – and now they’ve created a company called Alphabet. They didn’t even bother changing the spelling this time, which is going to make it really confusing to teach my 2-year-old the difference between saying her alphabet (singing the letters) and using Alphabet (Google products).

Does this matter? No. Not even one bit. But I find it intriguing. If we assume that it will take Google 2,461 years to actually index one googol web pages, then we have to ask ourselves a few other hypothetical questions.

*Q. Will Google even exist as a company 2,461 years later?*

A. According to Business Insider, most companies are only around for 40-50 years. In that same article they talk about some of the world’s oldest established businesses, including a hotel in Japan that has been operating since the year 705 A.D. Still not 2,461 years – but maybe it tells us that there is a chance.

*Q. What will the internet look like in 2,461 years?*

A. Seriously, I have no idea. Some folks over at the BBC postulated two alternate possibilities for the internet in the year 2040: one happy and bright – quite utopian – while the other is sad and bleak, much more dystopian. David Gelernter at Wired suggests that we are moving from a “space-based” internet to a “time-based” internet. And of course there is all the chatter and growth surrounding the Internet of Things (IoT).

*Q. If Google exists 2,461 years later, will they even care about indexing web pages?*

A. Google may have started out as a search engine, but they continue to branch out into other things, like Glasses, Cars, and secret stuff. How much will they care about indexing web pages? Probably not that much – perhaps it will be some department in a dusty office in the basement, no longer run by human beings at all – just some automated A.I. program that was last coded in the year 2501.

*Q. Why didn’t you depreciate the indexing rate over time?*

A. Mostly because there is so much that is undeterminable right now. Can the web even grow to a googol web pages? If it can – will it – or will something replace it before then? What if graphene and quantum computers take off – then will Google be able to index even faster instead of slower? There was just too much to really get into it without going off the deep end, so I decided to keep it simple.

*What do you think? What will the internet look like in the year 4476? Will Google still be around?*

*+ Also, if anyone knows **Brady Haran** from **Numberphile** – can you get him to find someone who is willing to double-check my math? Or maybe even model it better?*

* Editor’s note: I just had a great conversation on Twitter with Gianluca Fiorelli and Rand Fishkin about this post. Rand pointed out that Google likely had closer to 500 Billion to 1 Trillion webpages indexed last year. Rand is a genius and his logic makes sense based on the number of pages Moz has indexed. Regardless, I had to eventually just pick a number that was backed up by a decent resource since no one really knows how many pages Google has indexed. The overall outcome wouldn’t change much – it would still take place so many years from now that it’s impossible to fathom what the internet will be like. Here’s the conversation though for those that are curious.

**Luiz Centenaro:** You are indeed a nerd, William! Love it, haha – especially the ADHD tag. But 684 years – you are totally right; who knows what the internet will look like in 20 years, but 684 years should be super interesting.

**William Harris:** LOL – seriously though. I just hope my math is correct. I would be pretty bummed to find out that I am off by a couple of decimal places 🙁

**Alice:** This was a fun read, Will! Very nerdy 🙂

**William Harris:** Ha – thanks, Alice!

**Tedel:** I seriously doubt Google will still be around that year… If MySpace fell, so can Google.

Anyway, I must admit that foretelling is not one of my strong areas.

**William Harris:** Hahaha – yeah, I agree, BUT Google is doing one heck of a job diversifying itself. We don’t have the slightest idea how long a “software” company can last. If they continue to reinvent themselves like Madonna, they could last indefinitely.

**Matt Greener:** This is hyper awesome (and ultra-mega-nerdy). The whole calculation is based on the assumption that there will be a googol of pages to index and somebody/something has to make them.

World population is currently growing at 1.13% per year. There are myriad variables that can throw this off, but to continue for simplicity’s sake, let’s assume that it continues at the same rate for the next 2,461 years. Therefore, using our current population of over 7 billion gives us the following formula:

**7,362,000,000 × (1.013)^2461**

And if my math is correct the resulting world population in the year 4476 is: 4.6972194e+23

(469,721,940,000,000,000,000,000)

1 googol divided by the world population gives us the amount of web pages per law abiding citizen = 2.1289191e+76 or 21,289,191,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 (that’s 76 zeroes VS googol’s 100 zeroes)

So keep on writing content!
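Matt’s figure can be reproduced with a quick Python sketch – though note it follows from the multiplier he actually typed (1.013, i.e. 1.3% per year) rather than the 1.13% rate he quotes:

```python
population_2015 = 7_362_000_000
multiplier = 1.013  # the factor used in Matt's formula (1.3% annual growth)
years = 2461

pop_4476 = population_2015 * multiplier ** years
print(f"{pop_4476:.7e}")  # about 4.697e+23, in line with Matt's 4.6972194e+23
```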

**William Harris:** Wow – I didn’t double-check your math yet, but that is one heck of a great comment – and something to really think about! Thanks, Matt!

**Nick Moline:** I’m glad to see you theorize at the end that by the years you calculated, the internet (or whatever replaces it) may look very different. You mention IoT, which I think is important. We are fast approaching a time when the Internet isn’t really about pages as much as it is about things.

Google is already starting its shift away from being an index of pages toward being an index of facts linked to the pages they came from.

IPv6 blocks assigned to homes are so vast that you can easily assign unique external IPs to toasters, TV remotes, and bananas (ah, for the day when every banana has its own individualized IP). So at some point you will see an internet made up of objects that were previously considered inanimate but, as IoT becomes the norm, no longer will be.

When that happens there will be an explosion of what data a search engine would have to index.

Also, on the tangent here: Google, named after googol, isn’t just a web search engine – the amount of data they have indexed includes every email stored on Gmail servers, every post and comment on G+, and a ton of other data.

One really key example is Location History. For everyone with an android phone attached to a Google account who has turned this feature on (which is thousands if not millions of people), Google indexes where they are every minute of every day. These aren’t pages, but they are indexed data points.

The amount of data points Google has indexed could very well already number in the hundreds of trillions, and despite the assumption in the article that the growth rate will shrink, I would argue that the growth rate will continually increase as Google adds more things to index – more data sets.

**William Harris:** Hey Nick – just checked out your website – congrats on all your accomplishments! Keep it going!

You are absolutely right about all the other data points. I get so excited when I think about it – everything, connected – it’s beautiful. I look forward to a day when my heart rate, SpO2, and other biometrics are sent to my file and my doctor in real time – constantly monitored for any changes worth checking into further. Or even a urinalysis that checks the chemistry every time I use the bathroom – I’m being extreme – but we aren’t far from it, and I get excited about all of that data. And not just the data itself – but the ability for me to connect all of that data and visualize it in a way that helps me prevent issues, and take things and make them optimal – from health, to business, to daily routines, etc.

Bananas with their own IP – haha – I like that. Not sure that each food item would need its own IP address, but perhaps there is an application for each food item being “tagged” in some way. Would the banana need to be connected, or just the pantry/fridge – so that it can take pictures of the banana and tell if it is ripe, or overripe, etc.? It can take dimensions, and when you remove it, it knows to add that to your recipe, or to your calorie tracker, etc.

**Bill B:** This is exactly what I flashed on when you asked how long it would take… https://www.youtube.com/watch?v=WWOnBWxNfYM

**William Harris:** LOL – that’s a perfect clip!

**Marcel – Interaction Designer:** The thing Google started with – indexing pages – is not enough anymore. Data for making a better world (usability and accessibility for everyone) is now a new view on things. Google is now just a ‘small’ G in the Alphabet. Nice post.

**William Harris:** Too true – thanks, Marcel!

**Michael Corfman:** There are various estimates of the number of atoms in the observable universe. The estimate given on the following web page is that there are 10^82: http://www.universetoday.com/36302/atoms-in-the-universe/. I doubt it will ever be possible to index more web pages than there are atoms in the observable universe, and a googol (10^100) is much larger than that, so I think Google will never reach that number, even if it is around as a company for billions of years.

**William Harris:** Hey Michael – you’re on the right track. The examples I laid out are strictly mathematical, without regard for physical constraints. Some have even suggested that if we were to index Planck integers per second, it would still be physically impossible. I fully agree that the physical world poses a big challenge to indexing a googol web pages – but it’s a little too early to say that it is impossible. Our current knowledge of the physical world is still not fully developed. What if we are able to actually count backward in time – so instead of indexing up to 1 googol, we index backward from 1 googol? What if we figure out a way to store 1×10^50 data points within 1 atom? I prefer not to limit the science of the future 2,000 years from now based on our current understanding of physical limits 🙂 There are a lot of things we do today that were considered physically impossible 2,000 years ago.