4.3.1 “Dark” Network Data

Military and terrorism applications provide a prominent example of the complex ethical considerations involved in using social network data. Published examples include a mapping of the network among the 9/11 hijackers (Krebs 2002) and analyses of Russian organized crime (Finckenauer and Waring 1998). Previous scholars (e.g., Everton 2012; Goolsby 2005) have highlighted a number of heated exchanges over such uses on SOCNET, the primary listserv for social network researchers.86 Within these exchanges, many scholars have argued strongly that it is unconditionally inappropriate for scholars to apply SNA to such purposes, because of the potential harms the analysis could support (Kadushin 2005). Many are not comfortable with the notion that network research could lead to literal life-or-death consequences (e.g., in targeting terrorists87) or criminal prosecutions (e.g., in gang networks research). Proponents counter either that such uses could save lives, or that military or terrorist adversaries may already be employing the same methods (Everton 2012). But even those proponents warn that any such uses must proceed with extreme caution (e.g., see Kadushin 2005, pp. 148-9).

These debates raise a number of important issues. First, some assume that methods themselves are ethically neutral (Nisbet 1969), a perspective on which Goolsby (2005) leans heavily in arguing for the potential of researchers’ ethical involvement in military applications of SNA. However, academic researchers increasingly dismiss claims that methods themselves are ethically neutral (Vayena 2015). Scholars also note that the mere availability of data does not make all of their potential uses ethically supportable (Poor and Davidson 2018).

Second, these debates raise the question of the level at which we can reasonably expect analytic conclusions to apply. When it is noted that bridging nodes in a network span otherwise disconnected segments of a population, that observation is often coupled with the statement that their removal could disrupt the network. In the context of a study of sexually transmitted infections, node removal generally implies behavioral changes (e.g., dropping concurrent partners or beginning to use condoms); nodes are only “removed” from their potential contributions to the process being evaluated (i.e., infection spread). In such a case, network position (e.g., higher centrality) becomes a filter for prioritizing the targeting of interventions toward individuals in particular structural positions (Dezsö and Barabási 2002), e.g., with elevated effort for new prevention or treatment protocols. Here, the misidentification of “high risk” nodes does not bring any additional risks to those nodes; in fact, they may preferentially benefit, even though population-level benefits are likely optimized when interventions are made broadly available (Moody, adams, and Morris 2017).
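The kind of centrality-based prioritization described above can be sketched in a few lines. The graph, node labels, and top-k cutoff below are purely illustrative assumptions, not data from any study; the sketch uses the networkx library’s betweenness centrality, one common measure for identifying bridging positions.

```python
# A minimal sketch of centrality-based intervention targeting.
# The toy graph and the top-k cutoff are hypothetical.
import networkx as nx

# Two dense clusters joined by a single bridging node "E".
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),   # cluster 1
    ("F", "G"), ("G", "H"), ("F", "H"),   # cluster 2
    ("C", "E"), ("E", "F"),               # bridge through E
])

# Betweenness centrality counts how often a node lies on shortest
# paths between other nodes; bridging nodes score highest.
centrality = nx.betweenness_centrality(G)

# Prioritize the top-k nodes for (e.g.) prevention outreach.
k = 2
priority = sorted(centrality, key=centrality.get, reverse=True)[:k]
print(priority)  # the bridge "E" ranks first
```

As the passage notes, in a public-health setting a node “flagged” this way is simply offered an intervention, so a false positive carries little added risk for that individual.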

However, the targeting that can arise from criminal or military applications can have life-or-death consequences. As noted above, some scholars unequivocally object to work that could support such ramifications. Others note that, as a field, social network analysis has developed to a point where we can reliably predict general patterns at the population level. When it comes to identifying or making predictions about individual nodes, however, the errors involved (as with any method designed to capture population-level trends) could lead to dire outcomes for the wrong person. In sum, social network data, like most social science data, are messy, and drawing conclusions from these data to make predictions about an individual case is dangerous and can lead to unethical applications.