Social media bots—automated programs on social media platforms—have been an object of national interest and analysis for nearly four years. Although there has been a flood of reporting on social media disinformation and manipulation, CNA initiated this study to explore the specific implications of social media bots for US special operations forces (SOF) and the broader national security community. In this report, we highlight the tremendous capacity of successfully deployed social media bots (or networks of bots, known as “botnets”) to influence public discourse. If deployed effectively, bots could be a powerful asset in the US government’s toolkit, with benefits that outweigh the attendant risks.
The primary object of analysis for this report is the different ways that social media bots can be used to influence online conversations between social media users. In this sense, our report differs from most of the analysis released since the 2016 US presidential election. Many reports about bots focus on the broader issue of disinformation, the disruptive ends of malicious users, or the technological challenge of stopping the activity.
Although we provide examples of disinformation and malicious social media bot activity, the focus of this report is on neither the content of automated social media posts nor the long-term goals of these accounts’ programmers. Social media bots can be programmed for malicious ends, but their activities can also be directed toward neutral purposes or even prosocial goals. For example, a social media bot can be programmed to share the weather just as easily as it can be programmed to share misinformation about a nation’s leader.
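To illustrate how simple such automation can be, the following is a minimal Python sketch of a benign weather-sharing bot. The `post_update` stub, the city name, and the weather values are hypothetical stand-ins; a real bot would call a platform's authenticated posting API and would typically be triggered on a schedule.

```python
from datetime import date

def compose_weather_post(city: str, temp_c: float, conditions: str) -> str:
    """Build a short status update from weather data."""
    return f"{city} weather for {date.today():%b %d}: {conditions}, {temp_c:.0f} C"

def post_update(text: str) -> None:
    # Placeholder for a platform API call (e.g., an authenticated POST to a
    # statuses endpoint). Real platforms require credentials and rate limits.
    print(text)

# A scheduler (e.g., cron) would invoke this at a fixed interval.
post_update(compose_weather_post("Arlington", 21.0, "partly cloudy"))
```

The same skeleton, with `compose_weather_post` swapped for a function that generates misleading political content, is all that separates this prosocial bot from a malicious one—the point the report makes about programmer intent.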
It is tempting to think of social media bots exclusively in terms of spreading information (or disinformation). However, our analysis of the objectives of social media bots (i.e., why bots are used) identified six distinct activities.
We populated a taxonomy of these six social media bot activities (taking care to include neutral, prosocial, and malicious examples) to highlight the incredible range of this tool. In some cases, historical examples were difficult to identify. Thus, we have included examples in which analysts believe strongly that social media bots and botnets were responsible for the activity; examples in which the activity was very likely conducted by social media bots or botnets; and examples in which the activity was almost certainly conducted by humans but could have been conducted by social media bots or botnets.
DISTRIBUTION STATEMENT A. Approved for public release: distribution unlimited.
Details
- Pages: 96
- Document Number: DRM-2020-U-028199-Final
- Publication Date: 9/18/2020