The answers so far have focused on the *data* itself and its flaws, which makes sense given the site this is on.

But I'm a computational/mathematical epidemiologist by inclination, so I'm also going to talk about the model itself for a little bit, because it's also relevant to the discussion.

In my mind, the biggest problem with the paper is *not* the Google data. Mathematical models in epidemiology handle messy data all the time, and the problems with that data could be addressed with a fairly straightforward sensitivity analysis.

The biggest problem, to me, is that the researchers have "doomed themselves to success" — something that should always be avoided in research. They do this in the model they decided to fit to the data: a standard SIR model.

Briefly, a SIR model (the letters stand for susceptible (S), infectious (I), and recovered (R)) is a system of differential equations that tracks the health states of a population as it experiences an infectious disease. Infectious individuals interact with susceptible individuals and infect them, then in time move on to the recovered category.
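To make the dynamics concrete, here is a minimal sketch of that model integrated with simple Euler steps. The parameters are purely illustrative (they are not the paper's fitted values):

```python
# Minimal SIR sketch: S -> I -> R, with illustrative (not fitted) parameters.
def simulate_sir(beta=0.3, gamma=0.1, i0=0.01, days=300, dt=0.1):
    """Return the I(t) series for a basic SIR model via Euler integration."""
    s, i, r = 1.0 - i0, i0, 0.0
    i_series = []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt  # dS/dt = -beta * S * I
        new_recoveries = gamma * i * dt     # dR/dt = +gamma * I
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        i_series.append(i)
    return i_series

curve = simulate_sir()
print(f"peak I: {max(curve):.3f}, final I: {curve[-1]:.6f}")
```

With these numbers (beta/gamma = 3), the infectious fraction rises to a peak near 30% of the population and then decays toward zero, which is exactly the characteristic shape described above.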

This produces a curve that looks like this:

[figure: S, I, and R fractions over time in a SIR model; the I curve rises to a peak and then decays to zero]

Beautiful, is it not? And yes, this one is for a zombie epidemic. Long story.

In this case, the red line is what's being modeled as "Facebook users". The problem is this:

**In the basic SIR model, the I class will eventually, and inevitably, asymptotically approach zero**.

It must happen. It doesn't matter whether you're modeling zombies, measles, Facebook, or Stack Exchange. If you model it with a SIR model, the inevitable conclusion is that the population in the infectious (I) class drops to approximately zero.

There are extremely straightforward extensions to the SIR model that make this not true: either people in the recovered (R) class can return to the susceptible (S) class (essentially, people who left Facebook changing from "I'm never going back" to "I might go back someday"), or new people can enter the population (little Timmy and Claire getting their first computers).
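As a sketch of the first extension, here is a SIRS variant with a hypothetical rate `xi` at which "recovered" users become susceptible again (again, illustrative parameters, not anything from the paper). Unlike the basic SIR model, the infectious class now settles at a nonzero endemic level instead of dying out:

```python
# SIRS sketch: adding an R -> S flow ("I might go back someday") at rate xi.
# Parameters are illustrative, not the paper's fitted values.
def simulate_sirs(beta=0.3, gamma=0.1, xi=0.05, i0=0.01, days=1000, dt=0.1):
    """Return the long-run infectious fraction of a SIRS model (Euler steps)."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        resusceptible = xi * r * dt  # the R -> S flow the basic SIR model lacks
        s += resusceptible - new_infections
        i += new_infections - new_recoveries
        r += new_recoveries - resusceptible
    return i

i_endemic = simulate_sirs()
# Analytic endemic equilibrium: I* = (1 - gamma/beta) / (1 + gamma/xi)
print(f"long-run I: {i_endemic:.3f}")
```

The simulated long-run value sits near the analytic endemic equilibrium (about 0.22 here) rather than at zero, which is why the choice between SIR and SIRS is itself an assertion about where the system is headed.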

Unfortunately, the authors didn't fit those models. This is, incidentally, a widespread problem in mathematical modeling. A statistical model is an attempt to describe the patterns of variables and their interactions within the data. A mathematical model is an *assertion about reality*. You can get a SIR model to fit lots of things, but your *choice* of a SIR model is also an assertion about the system. Namely, that once it peaks, it's heading to zero.

Incidentally, Internet companies do use user-retention models that look a heck of a lot like epidemic models, but they're also considerably more complex than the one presented in the paper.

Well, the number of Facebook searches may spike upwards based on this article. ;) – RobertF – 2014-01-23T17:25:03.850

@RobertF if this makes a notable difference in searches, it's safe to say RIP Facebook! – Marc Claesen – 2014-01-23T22:40:47.417

https://www.facebook.com/notes/mike-develin/debunking-princeton/10151947421191849 – Glen – 2014-01-23T22:57:06.670

@Glen Mr. Develin appears to have thoroughly missed the point of the study. First, it is not simply forecasting a trend in searches, but using them to validate and calibrate a model from the well-known SIR family, which is thought to be a good descriptor of fad adoption and abandonment. Second, his "clever" counterexamples fail because, unlike Facebook, neither Princeton nor air is used primarily online. He chants the correlation-causation chant, but the correlation is over MySpace to Facebook, not over Facebook's historical data. Also, there is a conflict of interest. – Superbest – 2014-01-24T00:00:12.520

The analysis is tongue-in-cheek. The point about extrapolating as if nothing changes is valid, as the two answers have described. – Glen – 2014-01-24T00:35:49.973

"doesn't mean that it will work for any social network". No, but it seems like a useful hypothesis that's worth testing, and the best way of testing it is to make a prediction that can be falsified. – david25272 – 2014-01-24T00:44:39.530

“If you assume that Facebook is like Myspace, bring in a fancy model adapted from epidemiology, and crunch the numbers, it turns out that Facebook is a lot like Myspace.” - Will Oremus at Slate (the best takedown of the study that I've found) – samthebrand – 2014-01-24T04:42:39.890

The youtube channel Veritasium has a video on this topic. Worth watching: http://www.youtube.com/watch?v=l9ZqXlHl65g – sElanthiraiyan – 2014-01-24T09:12:25.953

As facebook merely offers the lowest common denominator among a few ubiquitous technologies, it shouldn't be much longer before the naive are peer-enlightened or NSA-frightened back into a healthy privacy valuation. Similarly, the end of life for outdated sms texting fast approaches, with peer education via $0 xmpp texting, cost consciousness, and the same rational fear of NSA overreach. – DRiftingONg – 2014-01-24T06:25:37.560

This doesn't answer the question but is merely a bunch of personal opinions, totally unrelated to statistics. – ziggystar – 2014-01-24T08:58:30.920

Quasi-related: When, if ever, will Facebook contain more profiles of dead people than of living ones? – Bobson – 2014-01-24T14:12:47.323

I think that, in the case of MySpace, it was replaced by FaceBook. So, even if the correlation is accurate [that FaceBook will come to an end], it is likely to be because it will be replaced by something else. I think that's the "complete" correlation. – user1477388 – 2014-01-24T14:49:51.127

@user1477388 Thanks. Indeed, there are many factors that were omitted in their study, such as spam, change of administration, other competitors, publicity, ... They also forgot to present a quantification of the uncertainty of their study, which is important in forecasting. – LessFaceMoreBook – 2014-01-24T14:52:47.407

Interesting discussion. Just to contribute some data: Tuenti, a Spanish social network used especially by young people, has lost about 60% of its users in the last 6 months, which is a dramatic situation. – hlfernandez – 2014-01-25T09:22:46.513

Why would people *search* for something that everyone already knows? :-) The same way we have declining interest in computers or even air, but I guess we don't stop breathing :-) – Curious – 2014-01-25T11:03:09.940

@Curious Agree. Also, check the forecast for the term "facebook". I guess the authors should have checked this and compared it with their results. – LessFaceMoreBook – 2014-01-25T12:29:59.723

`Is Facebook coming to an end?` Briefly said, hope never dies. – Kai Noack – 2014-01-27T13:16:11.217

I am surprised by how they treat the jump of October 12. "If we remove the +20% jump, the trend is going down". Does anyone have an explanation for this jump? – Were_cat – 2014-01-27T14:31:03.547

@lmorin I think it is explained in the last paragraph of p. 4. – LessFaceMoreBook – 2014-01-27T14:52:46.477

I believe that facebook is coming to an end, because its main source of income is ads and no one is clicking on them anymore; the number of people using an adblocker keeps growing. Last year gmc lost about $10 million on a fb ad campaign. facebook is the second company, behind google, in number of servers; they have thousands of servers, so facebook needs more money, and users aren't paying. youtube faced a similar issue; I don't know if they are still in trouble now. – Lynob – 2014-01-28T02:11:39.927

The result is pretty useless, as it is only based on the number of Google searches. Most people already have their account signed in on their mobile or their personal computer. Even if some did not, I believe it is most likely in their bookmarks or set as their homepage. With the above reasoning, why would one need to search for facebook in Google search? – BeyondProgrammer – 2014-01-28T04:55:32.490

Note that the analysis referred to was deposited on a preprint server and did not pass through peer review. – Itamar – 2014-01-28T07:52:33.533

fyi see also the se area51 stemreview proposal for open science reviews of scientific papers/preprints – vzn – 2014-02-23T17:15:02.880