Google Flu Trends

Google Flu Trends (GFT) was a web service operated by Google that provided estimates of influenza activity for more than 25 countries. By aggregating Google Search queries, it attempted to make accurate predictions about flu activity. The project was launched in 2008 by Google.org to help predict flu outbreaks.

Google Flu Trends stopped publishing current estimates on 9 August 2015. Historical estimates are still available for download, and current data are offered for declared research purposes.

History
The idea behind Google Flu Trends was that, by monitoring millions of users' health-related search behavior online, the large volume of Google search queries could be analyzed to reveal the presence of influenza-like illness in a population. Google Flu Trends compared these findings with a historical baseline level of influenza activity for the corresponding region and then reported the activity level as minimal, low, moderate, high, or intense. These estimates were generally consistent with conventional surveillance data collected by health agencies, both nationally and regionally.

Roni Zeiger helped develop Google Flu Trends.

Methods
Google Flu Trends was described as using the following method to gather information about flu trends.

First, a time series is computed for about 50 million common queries entered weekly within the United States from 2003 to 2008. The state in which each query was entered is determined from the IP address associated with the search. A query's time series is then computed separately for each state and normalized into a fraction by dividing the query's weekly count by the total number of queries submitted from that state.
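As a toy illustration of this normalization step (all counts here are made up; the real system operated on full search logs), each candidate query's weekly count is divided by the total number of queries submitted from the same state that week:

```python
import numpy as np

# Hypothetical weekly counts of one candidate query: rows are weeks,
# columns are states (here, 3 weeks x 2 states).
counts = np.array([[120.0, 80.0],
                   [150.0, 90.0],
                   [200.0, 110.0]])

# Hypothetical weekly totals of *all* queries from each state.
totals = np.array([[100000.0, 50000.0],
                   [110000.0, 52000.0],
                   [120000.0, 55000.0]])

# Normalize each weekly count into a fraction of all queries from
# the same state in the same week.
fractions = counts / totals
```

Each column of `fractions` is then one per-state time series for that query.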

A linear model is used to relate the log-odds of an influenza-like illness (ILI) physician visit to the log-odds of an ILI-related search query:
 * $$\operatorname{logit}(P) = \beta_0 + \beta_1 \times \operatorname{logit}(Q) + \epsilon$$

P is the percentage of ILI-related physician visits and Q is the ILI-related query fraction computed in the previous steps. β0 is the intercept, β1 is the coefficient, and ε is the error term.

Each of the 50 million queries is tested as Q to see whether the result computed from that single query matches the actual historical ILI data obtained from the U.S. Centers for Disease Control and Prevention (CDC). This process produces a ranked list of the queries that give the most accurate predictions of CDC ILI data under the linear model. The top 45 queries are then chosen because, when aggregated, they fit the historical data most accurately. Using the sum of the top 45 ILI-related query fractions, the linear model is fitted to the weekly ILI data between 2003 and 2007 to obtain the coefficients. Finally, the trained model is used to predict flu outbreaks across all regions of the United States.
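The selection-and-fitting procedure above can be sketched as follows. This is a simplified reconstruction on synthetic data, not Google's code: the query names, the ordinary-least-squares fit, and the mean-squared-error ranking are illustrative assumptions, and 100 synthetic candidate queries stand in for the 50 million real ones.

```python
import numpy as np

def logit(p):
    """Log-odds transform: ln(p / (1 - p))."""
    return np.log(p / (1.0 - p))

def fit_logit_model(q, p):
    """Fit logit(P) = beta0 + beta1 * logit(Q) by ordinary least squares.
    q and p are arrays of weekly fractions strictly between 0 and 1.
    Returns the intercept, the coefficient, and the mean squared residual."""
    x, y = logit(q), logit(p)
    design = np.column_stack([np.ones_like(x), x])
    (b0, b1), *_ = np.linalg.lstsq(design, y, rcond=None)
    residuals = y - (b0 + b1 * x)
    return b0, b1, float(np.mean(residuals ** 2))

# Synthetic stand-ins for the historical CDC ILI data and the candidate
# query fractions (noisy, scaled copies of the ILI series).
rng = np.random.default_rng(0)
weeks = 260
ili = 0.02 + 0.01 * rng.random(weeks)          # weekly ILI visit fraction
candidates = {
    f"query_{i}": np.clip(ili / 50.0 * (1.0 + 0.1 * rng.standard_normal(weeks)),
                          1e-6, 1.0 - 1e-6)
    for i in range(100)
}

# Score each candidate query by how well it alone fits the ILI history,
# then keep the 45 best-fitting queries.
scores = {name: fit_logit_model(q, ili)[2] for name, q in candidates.items()}
top45 = sorted(scores, key=scores.get)[:45]

# Fit the final model against the sum of the top-45 query fractions.
q_sum = sum(candidates[name] for name in top45)
b0, b1, _ = fit_logit_model(q_sum, ili)
```

With the fitted `b0` and `b1`, a new week's ILI estimate is recovered by inverting the logit: `1 / (1 + exp(-(b0 + b1 * logit(q_new))))`.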

This algorithm has been subsequently revised by Google, partially in response to concerns about accuracy, and attempts to replicate its results have suggested that the algorithm developers "felt an unarticulated need to cloak the actual search terms identified".

Privacy concerns
Google Flu Trends tried to avoid privacy violations by aggregating millions of anonymous search queries without identifying the individuals who performed the searches. Its search logs contain the IP address of the user, which could be used to trace a query back to the region from which it was submitted. Google runs automated programs to access and process the data, so no human is involved in the process. Google also implemented a policy of anonymizing the IP addresses in its search logs after nine months.

However, Google Flu Trends raised concerns among some privacy groups. In 2008, the Electronic Privacy Information Center and Patient Privacy Rights sent a letter to Eric Schmidt, then the CEO of Google. They conceded that the use of user-generated data could support public health efforts in significant ways, but expressed concern that "user-specific investigations could be compelled, even over Google's objection, by court order or Presidential authority".

Impact
An initial motivation for GFT was that identifying disease activity early and responding quickly could reduce the impact of seasonal and pandemic influenza. One report found that Google Flu Trends was able to predict regional flu outbreaks up to 10 days before they were reported by the CDC (Centers for Disease Control and Prevention).

During the 2009 flu pandemic, Google Flu Trends tracked flu activity in the United States. In February 2010, the CDC identified a spike in influenza cases in the mid-Atlantic region of the United States; Google's search-query data, however, had shown the same spike two weeks before the CDC report was released.

“The earlier the warning, the earlier prevention and control measures can be put in place, and this could prevent cases of influenza,” said Dr. Lyn Finelli, lead for surveillance at the influenza division of the CDC. “From 5 to 20 percent of the nation's population contract the flu each year, leading to roughly 36,000 deaths on average.”

Google Flu Trends is an example of collective intelligence that can be used to identify trends and calculate predictions. The data amassed by search engines are particularly insightful because search queries represent people's unfiltered wants and needs. “This seems like a really clever way of using data that is created unintentionally by the users of Google to see patterns in the world that would otherwise be invisible,” said Thomas W. Malone, a professor at the Sloan School of Management at MIT. “I think we are just scratching the surface of what's possible with collective intelligence.”

Accuracy
The initial Google paper stated that Google Flu Trends predictions were 97% accurate compared with CDC data. However, subsequent reports asserted that Google Flu Trends' predictions were at times very inaccurate, especially in two high-profile cases: Google Flu Trends failed to predict the 2009 spring pandemic, and over the interval 2011–2013 it consistently overestimated relative flu incidence, predicting twice as many doctors' visits as the CDC recorded over one interval in the 2012–2013 flu season. A 2022 study published (with commentaries) in the International Journal of Forecasting found that Google Flu Trends was outperformed by the recency heuristic, an instance of so-called "naive" forecasting, in which the predicted flu incidence equals the most recently observed flu incidence. For all weeks from March 18, 2007, to August 9, 2015 (the horizon for which Google Flu Trends predictions are available), the mean absolute error of Google Flu Trends was 0.38 percentage points and that of the recency heuristic 0.20; linear regression with a single predictor, the most recently observed flu incidence, also had a mean absolute error of 0.20, while the benchmark of random prediction had 1.80.
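The recency heuristic from the 2022 study is simple to state: predict for next week whatever was observed this week. A minimal sketch with made-up incidence numbers (not the study's data):

```python
import numpy as np

# Hypothetical weekly ILI incidence, in percentage points.
ili = np.array([1.2, 1.5, 2.1, 2.8, 2.4, 1.9, 1.3, 1.0])

# Recency ("naive") forecast: next week's prediction is this week's value.
predictions = ili[:-1]
actuals = ili[1:]

# Mean absolute error of the heuristic over this series.
mae = float(np.mean(np.abs(predictions - actuals)))
```

The heuristic needs no model fitting at all, which is what made its reported advantage over Google Flu Trends notable.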

One source of problems is that people making flu-related Google searches may know very little about how to diagnose flu; searches for flu or flu symptoms may well be researching diseases with symptoms similar to flu that are not actually flu. Furthermore, analysis of search terms reportedly tracked by Google, such as "fever" and "cough", as well as the effects of changes in Google's search algorithm over time, has raised concerns about the meaning of its predictions. In fall 2013, Google began attempting to compensate for increases in searches driven by the prominence of flu in the news, which had previously been found to skew results. However, one analysis concluded that "by combining GFT and lagged CDC data, as well as dynamically recalibrating GFT, we can substantially improve on the performance of GFT or the CDC alone." A later study also demonstrated that Google search data can indeed be used to improve estimates, reducing the errors of a model using CDC data alone by up to 52.7 per cent.

By re-assessing the original GFT model, researchers found that it aggregated queries about different health conditions, which could lead to over-prediction of ILI rates; in the same work, a series of better-performing linear and nonlinear approaches to ILI modelling was proposed.

However, follow-up work was able to substantially improve the accuracy of GFT by using a random forest regression model trained on both the incidence of influenza-like illness and the output of the original GFT model.

Related systems
Similar projects, such as the flu-prediction project by the Institute of Cognitive Science at Universität Osnabrück, carry the basic idea forward by combining social media data (e.g., Twitter) with CDC data and structural models that infer the spatial and temporal spreading of the disease.