Social media analytics

Social media analytics is the process of gathering and analyzing data from social networks such as Facebook, Instagram, LinkedIn, or Twitter. A part of social media analytics is social media monitoring, also called social listening. It is commonly used by marketers to track online conversations about products and companies. One author defined it as "the art and science of extracting valuable hidden insights from vast amounts of semi-structured and unstructured social media data to enable informed and insightful decision-making."

Process
There are three main steps in analyzing social media: data identification, data analysis, and information interpretation. To maximize the value derived at every point during the process, analysts may define a question to be answered. The important questions for data analysis are: "Who? What? Where? When? Why? and How?" These questions help in determining the proper data sources to evaluate, which can affect the type of analysis that can be performed.

Data identification
Data identification is the process of identifying the subsets of available data to focus on for analysis. Raw data is useful only once it is interpreted; after data has been analyzed, it can begin to convey a message, and any data that conveys a meaningful message becomes information. At a high level, unprocessed data passes through the following forms on its way to an exact message:
 * Noisy data: relevant and irrelevant data mixed together
 * Filtered data: only relevant data
 * Information: data that conveys a vague message
 * Knowledge: data that conveys a precise message
 * Wisdom: data that conveys an exact message and the reason behind it

To derive wisdom from unprocessed data, we need to start processing it, refine the dataset to include only the data we want to focus on, and organize the data to identify information. In the context of social media analytics, data identification means deciding "what" content is of interest. In addition to the text of the content, we want to know: who wrote the text? Where was it found, or on which social media venue did it appear? Are we interested in information from a specific locale? When was it said on social media?

Attributes of data that need to be considered are as follows:
 * Structure: Structured data is data that has been organized into a formatted repository - typically a database - so that its elements can be made addressable for more effective processing and analysis. Unstructured data, by contrast, has little or no predefined format.
 * Language: Language becomes significant if we want to know the sentiment of a post rather than just the number of mentions.
 * Region: It is important to ensure that the data included in the analysis comes only from the region of the world on which the analysis is focused. For example, if the goal is to identify clean water problems in India, we would want to make sure that the data collected is from India only.
 * Type of Content: The content of the data could be text (written text that is easy to read and understand if you know the language), photos (drawings, simple sketches, or photographs), audio (audio recordings of books, articles, talks, or discussions), or video (recordings, live streams).
 * Venue: Social media content is generated in a variety of venues, such as news sites and social networking sites (e.g. Facebook, Twitter). Depending on the type of project the data is collected for, the venue becomes very significant.
 * Time: It is important to collect data posted in the time frame that is being analyzed.
 * Ownership of Data: Is the data private or publicly available? Is the data subject to copyright? These are important questions to address before collecting data.
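The attributes above can be expressed as a simple filter over collected posts. The sketch below is illustrative only: it assumes posts arrive as plain dictionaries with hypothetical `region`, `lang`, `created`, and `public` fields, whereas real platform APIs return richer, platform-specific objects.

```python
from datetime import datetime

# Hypothetical post records; real platform APIs return richer objects.
posts = [
    {"text": "Clean water access in Chennai", "lang": "en",
     "region": "IN", "created": datetime(2023, 5, 1), "public": True},
    {"text": "Great coffee today", "lang": "en",
     "region": "US", "created": datetime(2023, 5, 2), "public": True},
    {"text": "Wasserqualitaet in Berlin", "lang": "de",
     "region": "DE", "created": datetime(2023, 5, 3), "public": False},
]

def identify(posts, region, lang, start, end):
    """Keep only public posts matching the region, language, and time frame."""
    return [p for p in posts
            if p["public"]
            and p["region"] == region
            and p["lang"] == lang
            and start <= p["created"] <= end]

subset = identify(posts, region="IN", lang="en",
                  start=datetime(2023, 1, 1), end=datetime(2023, 12, 31))
print(len(subset))  # → 1
```

Each condition in the filter corresponds to one attribute from the list: ownership (`public`), region, language, and time frame.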

Data analysis
Data analysis is the set of activities that assist in transforming raw data into insight, which in turn leads to a new base of knowledge and business value. In other words, data analysis is the phase that takes filtered data as input and transforms it into information of value to the analysts. Many different types of analysis can be performed with social media data, including analysis of posts, sentiment, sentiment drivers, geography, demographics, etc. The data analysis step begins once we know what problem we want to solve and believe we have enough data to generate a meaningful result. How can we know if we have enough evidence to warrant a conclusion? The answer is: we can't know until we start analyzing the data. If, during analysis, the data proves insufficient, we return to the first phase and modify the question. If the data is believed to be sufficient for analysis, we need to build a data model.

Developing a data model is a process or method used to organize data elements and standardize how the individual data elements relate to each other. This step is important because we want to run a computer program over the data; we need a way to tell the computer which words or themes are important and whether certain words relate to the topic we are exploring.
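A minimal form of such a data model is a mapping from themes to the terms that signal them. The vocabulary below is invented for illustration; a production model would be far larger and usually learned rather than hand-written.

```python
# Illustrative data model: themes of interest mapped to signal terms
# (assumed vocabulary, continuing the clean-water example).
THEMES = {
    "water": ["water", "drinking", "sanitation"],
    "pricing": ["price", "cost", "expensive"],
}

def tag_themes(text, themes=THEMES):
    """Return the set of themes whose terms appear in the text."""
    words = set(text.lower().split())
    return {theme for theme, terms in themes.items()
            if words & set(terms)}

print(sorted(tag_themes("Drinking water is expensive here")))  # → ['pricing', 'water']
```

This is how the model "tells the computer which words or themes are important": any post matching no theme can be discarded as irrelevant before deeper analysis.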

In analyzing the data, it is helpful to have several tools at our disposal, each offering a different perspective on the discussions taking place around the topic. The aim is to configure each tool to perform at its best for a particular task. For example, if we took a large amount of data about computer professionals, say "IT architects", and built a word cloud, the largest word in the cloud would no doubt be "architect". This analysis is also about tool usage: some tools do a good job of determining sentiment, whereas others do a better job of breaking text down into a grammatical form that enables us to better understand the meaning and use of various words or phrases. It is difficult to enumerate every step to take on an analytical journey; the approach is very much iterative, as there is no prescribed way of doing things.
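The word-cloud observation comes down to counting term frequencies after removing uninformative stopwords. A toy version, with an invented three-line corpus standing in for the large dataset:

```python
from collections import Counter

# Toy corpus about "IT architect" discussions (invented for illustration).
corpus = [
    "the IT architect designs systems",
    "an architect reviews the architecture",
    "every architect documents decisions",
]

# A tiny stopword list; real analyses use much larger ones.
STOPWORDS = {"the", "an", "it", "every"}

counts = Counter(
    w for line in corpus for w in line.lower().split()
    if w not in STOPWORDS
)
print(counts.most_common(1))  # → [('architect', 3)]
```

As the text predicts, the dominant term is "architect" - which is exactly why a naive word cloud over such a corpus tells us little, and why tool configuration (e.g. treating the query term itself as a stopword) matters.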

The taxonomy and the insight derived from that analysis are as follows:
 * Depth of Analysis: Simple descriptive statistics based on streaming data, ad hoc analysis on accumulated data, or deep analysis performed on accumulated data. This dimension is driven largely by the amount of time available to produce the results of a project. It can be considered a broad continuum, where the analysis time ranges from a few hours at one end to several months at the other. This analysis can answer questions such as:
   * How many people mentioned Wikipedia in their tweets?
   * Which politician had the highest number of likes during the debate?
   * Which competitor is gathering the most mentions in the context of social business?
 * Machine Capacity: The amount of CPU needed to process data sets in a reasonable time period. Capacity numbers need to address not only the CPU needs but also the network capacity needed to retrieve data. This analysis could be performed as real-time, near-real-time, ad hoc exploration, or deep analysis. Real-time analysis in social media is an important tool when trying to understand the public's perception of a certain topic as it unfolds, allowing for reaction or an immediate change in course. In near-real-time analysis, we assume that data is ingested into the tool at a rate slower than real time. Ad hoc analysis is a process designed to answer a single specific question; its product is typically a report or data summary. Deep analysis implies an analysis that spans a long time and involves a large amount of data, which typically translates into a high CPU requirement.
 * Domain of Analysis: The domain of the analysis is broadly classified into external social media and internal social media. Most of the time when people use the term social media, they mean external social media. This includes content generated from popular social media sites such as Twitter, Facebook and LinkedIn. Internal social media includes enterprise social network, which is a private social network used to assist communication within business.
 * Velocity of Data: The velocity of data in social media can be divided into two categories: data at rest and data in motion. Analysis of data in motion can answer questions such as: How is the sentiment of the general population toward the players changing during the course of a match? Is the crowd conveying positive sentiment about a player who is actually losing the game? In these cases, the analysis is done as the data arrives, and the amount of detail produced is directly correlated with the complexity of the analytical tool or system: a more complex tool produces more detail. The second type of analysis in the context of velocity is analysis of data at rest, performed once the data is fully collected. This analysis can provide insights such as: which of your company's products has the most mentions compared with the others? What is the relative sentiment around your products compared with a competitor's product?
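The "data in motion" case can be sketched as a running score updated as each post arrives. The sentiment lexicon below is a toy assumption; real systems use trained sentiment models rather than word lists.

```python
# Sketch of "data in motion": a running sentiment score updated as each
# post arrives, using a toy lexicon (real systems use trained models).
LEXICON = {"great": 1, "win": 1, "bad": -1, "losing": -1}

def score(text):
    """Sum lexicon weights for the words in a post; unknown words score 0."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

running_total, n = 0, 0
stream = ["great save", "he is losing badly", "what a win"]
for post in stream:  # the analysis happens as each post arrives
    running_total += score(post)
    n += 1
    print(f"after {n} posts, mean sentiment = {running_total / n:.2f}")
```

Analysis of data at rest would instead run `score` over the complete collection in one batch pass; the arithmetic is the same, but no result is available until ingestion finishes.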

Information interpretation
The insights derived from analysis can be as varied as the original question posed in step one of the analysis. At this stage, because nontechnical business users are the receivers of the information, the form in which the data is presented becomes important. How can the data be presented so that it is efficiently understood and supports good decision-making? Visualization (graphics) of the information is the answer to this question.

The best visualizations are those that expose something new about the underlying patterns and relationships contained in the data. Exposing these patterns and understanding them play a key role in the decision-making process. There are three main criteria to consider in visualizing data.
 * Understand the audience: before building the visualization, set up a goal, which is to convey great quantities of information in a format that is easily assimilated by the consumer of the information. It is important to answer "Who is the audience?" and "Can you assume the audience is familiar with the terminology used?" An audience of experts will have different expectations than a general audience; therefore, those expectations have to be considered.
 * Set up a clear framework: the analyst needs to ensure that the visualization is syntactically and semantically correct. For example, when using an icon, the element should bear resemblance to the thing it represents, with size, color, and position all communicating meaning to the viewer.
 * Tell a story: analytical information is complex and difficult to assimilate; the goal of visualization is thus to help the viewer understand and make sense of the information. Storytelling helps the viewer gain insight from the data. Visualization should package information into a structure that is presented as a narrative and easily remembered. This is important in the many scenarios where the analyst is not the same person as the decision-maker.
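Even the simplest chart can follow these criteria. The sketch below, using invented mention counts, renders a text bar chart sorted so the comparison tells its story at a glance - the ordering and labeling, not the raw numbers, carry the narrative.

```python
# Minimal text visualization: a horizontal bar chart of (invented) mention
# counts, sorted descending so the comparison is readable at a glance.
mentions = {"Product A": 120, "Product B": 45, "Competitor X": 80}

width = 30  # bar length reserved for the largest value
peak = max(mentions.values())
for name, count in sorted(mentions.items(), key=lambda kv: -kv[1]):
    bar = "#" * round(width * count / peak)
    print(f"{name:<12} {bar} {count}")
```

A production dashboard would use a charting library, but the design choices - a clear frame, labels the audience understands, and an ordering that tells the story - are the same.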

Impacts on business intelligence
Recent research on social media analytics has emphasized the need to adopt a business intelligence-based approach to collecting, analyzing, and interpreting social media data. Social media presents a promising, albeit challenging, source of data for business intelligence. Customers voluntarily discuss products and companies, giving a real-time pulse of brand sentiment and adoption. Social media is one of the most important tools for marketers in the rapidly evolving media landscape. Firms have created specialized positions to handle their social media marketing. These arguments are in line with the literature on social media marketing that suggests that social media activities are interrelated and influence each other.

Moon and Iacobucci (2022) focused on the marketing applications of social media analytics. Such applications include consumer behavior on social media, social media impact on firm performance, business strategy, product/brand management, social media network analysis, consumer privacy and data security on social media, and fictitious/biased content on social media. In particular, consumer privacy and data security are becoming increasingly important in the social media universe, given the growing risk stemming from social media data breaches. In a similar vein, suspicious social media postings have increased significantly along with the growth of social media. Luca and Zervas (2015) reported that firms have a potential incentive to use fake postings when they face increased competition. Therefore, improving the ability to identify and monitor suspicious postings (e.g., fake reviews on Yelp) has become an important part of social media platform management.

Muruganantham and Gandhi (2020) proposed a Multi-Criteria Decision Making (MCDM) model showing that social media users' preferences, sentiments, behavior, and marketing data are related to social media analytics. Internet users are closely connected and show a high degree of mutual influence in social ideology and social networks, which in turn affects business intelligence.

Role in international politics
The potential dangers of social media analytics and social media mining in the political arena were revealed in the late 2010s. In particular, the involvement of the data mining company Cambridge Analytica in the 2016 United States presidential election and the Brexit referendum became representative cases of the dangers of linking social media mining and politics. This has raised questions about data privacy for individuals and the legal boundaries to be created for data science companies operating in politics in the future. Both of the examples below demonstrate a future in which big data can change the game of international politics; it is likely that politics and technology will evolve together throughout the next century. In the Cambridge Analytica cases, the effects of social media analytics resonated throughout the globe through two major world powers, the United States and the U.K.

2016 United States Presidential Election
The scandal that followed the American presidential election of 2016 involved a three-way relationship between Cambridge Analytica, the Trump campaign, and Facebook. Cambridge Analytica acquired the data of over 87 million unaware Facebook users and analyzed it for the benefit of the Trump campaign. By creating thousands of data points on 230 million U.S. adults, the data mining company had the potential to identify which individuals could be swayed toward the Trump campaign, and then send messages or advertisements to those targets to influence their mindset. Specific target voters could thus be exposed to pro-Trump messages without even being aware of the political influence being exerted on them. This form of targeting, in which select individuals are exposed to an above-average amount of campaign advertising, is referred to as "micro-targeting." There remains great controversy over how much influence this micro-targeting had in the 2016 election. As of the late 2010s, the impact of micro-targeted ads and social media data analytics on politics remained unclear, the field being newly emerged.

While this was a breach of user privacy, data mining and targeted marketing also operated outside the public accountability to which social media entities are no longer subject, thereby distorting the democratic election system and allowing it to be dominated by platforms of "user-generated content [that] polarized the media's message."

2020 United States Presidential Election Controversies
Analysis of Facebook political groups and postings by the social media analytics firm CounterAction has shown the role of social media giants in protest movements such as attempts to overturn the 2020 United States presidential election and the 2021 United States Capitol attack.

Brexit
During the 2016 Brexit referendum, Cambridge Analytica attracted controversy for its use of data gathered from social media. In a breach similar to the American case, Facebook data was acquired by Cambridge Analytica, and there was concern that the company had used it to encourage British citizens to vote to leave the European Union in the 2016 EU referendum. After a three-year investigation, it was concluded in 2020 that there had been no involvement in the referendum. Besides Cambridge Analytica, several other data companies, such as AIQ and the Cambridge University Psychometric Centre, were accused of, and then investigated by the British government for, possible abuse of data to promote unlawful campaign techniques for Brexit. The referendum ended with 51.89% of voters supporting the withdrawal of the United Kingdom from the European Union. This final decision impacted politics within the United Kingdom and sent ripples across political and economic institutions worldwide.