We now turn to direct comparisons of reports purportedly regarding the same ties (e.g., if I say you are my friend, do you agree?). The Colorado Springs “Project 90” was a study of STI risk among a population of commercial sex workers, intravenous drug users, and their partners. Within this study, once a set of alters was established,108 researchers used a series of relational interpreters to ask what types of behaviors (sharing needles, having sex, or social interactions)109 respondents reported engaging in with each alter. Because the Project 90 staff had been working among the population for a long time, many of the relationships enumerated had the potential to be reported on by multiple respondents.110
We used these data to assess the concordance among multiple reports of the same relationships, and found two general patterns (adams and Moody 2007). First, people seem to report on relationships within “fuzzy” temporal reporting windows that attempt to align with the question wording. This particular project asked people to report on their own relationships from the past six months. We demonstrated that apparently discrepant reports are substantially resolved once you account for the potential non-overlap between reporting windows. For example, if Ren is interviewed today and says he was sharing needles with Stimpy within the last six months, but Stimpy was interviewed eight months ago and said he never shared needles with Ren, they both could be telling the truth (e.g., if Stimpy only started injecting heroin four months ago, after his own interview, by sharing a needle with Ren). Across all tie types we examined, reporting concordance rates were approximately 10% higher once non-overlapping reporting windows were eliminated from the comparisons (see adams and Moody 2007, Table 1, p. 51 in original). Second, more socially salient ties were reported with higher agreement than those with less social salience. How the data exhibit social salience differs depending on who is reporting and what type(s) of ties they are reporting on. For example, for one’s own ties, reporting agreement was higher for sexual encounters than for needle sharing, both of which showed higher levels of reporting concordance than other social relationships (ibid). When people reported on the ties their partners have with others, they were more likely to agree with the self-reports of those same relationships for activities that were more likely to take place in groups (sharing needles) than for those involving only the partners (sex; see Tables 3 and 4, p. 53 in original).
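The temporal conditioning described above amounts to a simple interval-overlap check on interview dates. The sketch below is an illustrative reconstruction, not Project 90's actual coding procedure; the six-month window length follows the question wording, and the dates are the hypothetical Ren/Stimpy example.

```python
from datetime import date, timedelta

RECALL_DAYS = 183  # roughly six months, matching the question wording

def recall_window(interview_date):
    """The span of time a respondent was asked to report on."""
    return interview_date - timedelta(days=RECALL_DAYS), interview_date

def windows_overlap(interview_a, interview_b):
    """True if two respondents' recall windows share at least one day.

    If the windows never overlapped, discrepant reports about the same
    tie can both be accurate, so the dyad should be excluded from
    concordance comparisons rather than counted as a disagreement.
    """
    a_start, a_end = recall_window(interview_a)
    b_start, b_end = recall_window(interview_b)
    return a_start <= b_end and b_start <= a_end

# Ren interviewed "today"; Stimpy interviewed eight months earlier:
ren = date(2007, 1, 1)
stimpy = date(2006, 5, 1)
print(windows_overlap(ren, stimpy))  # False: their windows never overlapped
```

Only dyads for which this check returns True carry information about reporting (dis)agreement; the roughly 10% gain in concordance reported above comes from dropping the False cases.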
In sum, the reporting agreement in this study was relatively high, especially once the comparisons were properly temporally conditioned and potential data censoring arising from high activity was accounted for.111
These sorts of studies have led to an important addendum: in many settings where we have had the capacity to compare multiple reports of the same relationships, scholars have bolstered (rather than diminished) their confidence in the utility of relationship information successfully captured from only one member of the relationships of interest (Brewer et al. 2006). This is particularly useful in situations where we only have access to reports from one partner, or when efforts require limited data to combat the escalating costs associated with gathering high-quality social network data (McCarty, Killworth, and Rennell 2007).
When members of the same population are purportedly providing information on the same set of relationships, data quality assessments akin to those described above can address comparisons at a variety of levels. Perhaps most common are comparisons made directly at the dyadic level, as in the examples above. But one of the most memorable examples of corroborating reports from members of a population conducts the assessment at the population level. Among the more frequently replicated findings in studies of sexual partnership data is that when you compare men’s and women’s reports of their numbers of sexual partners, simple arithmetic comparisons of the sums of those reports lead to the conclusion highlighted succinctly in the title of a paper by Nnko and colleagues (2004): “Secretive females or swaggering males?” Conventionally, these sorts of studies draw on the observation that women report an aggregate number of sexual partners substantially lower than men’s aggregate reports to conclude that either women must be under-reporting their sexual activity, or men must be over-reporting theirs, or both.
While that conclusion may seem expedient, it assumes sufficient sampling coverage of the target population for these comparisons to be meaningful. However, researchers have shown the importance of node-level sampling for evaluating these comparisons. They noted that a general population sample is likely to under-sample the highest-degree actors in that population, which in their case were commercial sex workers (CSWs). Further, they showed that once you account for the rate at which CSWs would be under-sampled in an approach that randomly samples from the population, and for how their disproportionate contributions to the population-level degree distribution would be missed, the apparent disparity between men and women is no longer significant. In other words, the rate at which CSWs are under-sampled from the population explains the apparent discrepancy between men’s and women’s reported numbers of sexual partners.
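A back-of-the-envelope sketch makes the accounting logic concrete. In a closed heterosexual population every partnership is reported once by a man and once by a woman, so the true totals must balance; a sampling frame that misses a handful of very high-degree women produces an apparent sex discrepancy. All numbers below are hypothetical, chosen only to illustrate the mechanism, not taken from the study discussed.

```python
# Hypothetical closed population; figures are for illustration only.
n_men, n_women = 1000, 1000
n_csw = 10                 # very high-degree women missed by typical frames
csw_partners = 150         # partners each CSW contributes
other_women_partners = 2   # partners reported by each other woman

# Each partnership is counted once on each side, so men's and women's
# true totals are identical by construction.
total_partnerships = (n_csw * csw_partners
                      + (n_women - n_csw) * other_women_partners)

men_mean = total_partnerships / n_men           # what men report on average
women_mean_true = total_partnerships / n_women  # equals men_mean exactly

# A survey frame that reaches no CSWs at all:
women_mean_sampled = ((n_women - n_csw) * other_women_partners
                      / (n_women - n_csw))

print(men_mean, women_mean_sampled)  # 3.48 vs. 2.0: the apparent "discrepancy"
```

Nothing about misreporting is needed to generate the gap: excluding ten women who account for a large share of all partnerships is enough to make men appear to report substantially more partners than women.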