This academic year I have had two different conversations about destroying one’s own archive. News this last week has left me feeling grimly justified in upholding an unpopular position.
The first was a conversation with university IT managers about supporting our research work. They usually do their best to help with the open-source tools we use with our collaborators, although it’s not university policy. At a certain point the conversation turned to the question of protecting the archive—fieldnotes, recordings, photographs, database—that accumulates during our fieldwork. Was it backed up? Yes, of course. On university servers? No, of course not. But why ever not? Because there is confidential data there, and it’s not just ours. But the university guarantees confidentiality and security! That, I said, is not guaranteed—a statement that caused some annoyance.
The second arose in interviews we conducted for a PhD position on the MedPlant project, a wonderful EU initiative that connects a wide range of ethnobiologists, botanists, phylogeneticists, and drug-discovery projects. The question of fieldnotes came up again: what obligations were created if informants told researchers overt secrets, that is, if the informant made it clear that what they were telling the researcher was a secret? What was the researcher’s responsibility to the archive, and how should they ensure its security and confidentiality? When, if ever, should a PhD student destroy their own notes?
(That informants do tell researchers secrets is a fact of fieldwork; sometimes it’s a way of safely saying things that shouldn’t be said, and sometimes it’s a way of testing and creating obligations between collaborators. Half the time the researcher’s language skills are so poor that they only realise what they’ve recorded afterwards; the other half of the time they think they’ve been told something precious that was in fact banal. So too, researchers often tell their informants details about their own lives in poorly formed sentences that lead to strange perceptions. The precarious intimacy of fieldwork is woven thus.)
My insistence in both of those conversations that the archive could rightly be destroyed if there was a real threat to its ownership or confidentiality felt somewhat extreme, and yet recent events in Boston and Northern Ireland fully bear out my worries. In 2009, researchers at Boston College set out to create an oral history archive of the Troubles in Northern Ireland, recording the words of those who had been involved on both sides of what might be described as a decades-long low-intensity civil war. The interviews were conducted with firm guarantees that the recordings would never become public until after the death of the interviewees. Those guarantees were made by the researchers, on the assumption that their university could and would stand by those promises. Yet those guarantees were abandoned in the face of subpoenas issued on behalf of the Police Service of Northern Ireland (PSNI), who got wind that there might be useful evidence in the tapes that would allow them to pursue prosecutions. The academics who made the archive were, it seems, prevented by their own university from destroying the tapes; the university handed them over to the PSNI instead. Subsequent developments, including the arrest of a senior serving politician and death threats issued against the academics originally involved, suggest that this is a breach of academic trust whose repercussions will be felt for many years to come in Northern Ireland, and perhaps across a wider range of fields. Certainly the relationship between researchers and their employers has been profoundly challenged across the whole academic sector.
The affair has been widely covered. The problem first emerged in 2011; see coverage here; here with a revealing comment from a law librarian; and here in the Chronicle of Higher Education, a thorough and yet troubling piece. Interested readers might want to look at the Boston College account of the legal proceedings and the BBC coverage of the events; the BBC (as the state broadcaster for one of the parties in the past conflict) can be partial on Northern Irish questions, just as it is on the Scottish independence referendum (where it may become the state broadcaster for only one of two future states)—but it reflects the norm that would have informed the PSNI’s actions.
We discussed this case in the Mahayana Ethics class this week. In that class we consider intentions carefully. The intentions of the Boston College researchers who set up the project were, so far as I can gather, largely altruistic though framed by the suspicion and desire for justification that inevitably comes with civil conflict. They were taking part in a worldwide move to enable truth and reconciliation processes in war-torn countries; they were struggling to find some valid means by which to record desperately important history; and they were also seeking a means of conducting this research that would prevent that archive from re-opening wounds or the project being shut down by powerful interests. The PSNI showed a stronger attachment to criminal justice than to stewarding peace, and Boston College’s administration clearly chose a safe, but selfish, path in not encouraging (or at least condoning) the real or feigned disappearance of the archive the moment it was threatened. (The interesting proposal that the archive should be hidden or taken out of reach as though it had been destroyed—shot into space, encrypted beyond practical recovery—was a popular one in class.) The academic ideals that informed the researchers were set aside by the college administrators; the Chronicle article referenced above called it ‘grand ambitions undermined by insular decision-making’.
What does this have to do with ethnobiology? Our work in Nepal on whole systems of medicine and medicinal trade has put the question of drug discovery and the rights of medicinal and ecological knowledge holders at the centre of our research ethics. Although I am fairly sure that we have not happened on any potential miracle cures in our own research, students in the MedPlant project are directly concerned with drug discovery—and how an ethnobiologist working at a university balances their ethics and their contract just got a lot more difficult. On the one hand, a modern academic contract usually contains a clause explicitly assigning the worker’s products (in this case, the intellectual property) to their employer, by analogy to the way that an assembly line worker in Detroit doesn’t get to keep the car they help build or a diamond miner in South Africa can be searched to make sure they’re not trying to ‘steal’ the diamonds they mined. Crucially, though, the knowledge generated in ethnobiological research does not belong to the ethnobiologist, and it cannot therefore be assigned to the ethnobiologist’s employer. Hence, where there is a threat of expropriation, the ethnobiologist is morally obliged to keep the co-produced knowledge in a safe ‘escrow’—that is, encrypted and backed up—where it cannot be appropriated by their own employer or anyone else without the consent of all the communities who have a stake in its future.
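One concrete (if minimal) reading of such an escrow: encrypt the archive with a single symmetric key, then split that key so that every stakeholding community or co-producer holds one share, and no subset short of the whole can recover it. A sketch in Python using simple XOR splitting; the function names are my own, and a real deployment would layer proper authenticated encryption and key custody on top:

```python
import secrets
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_parties: int) -> list[bytes]:
    """Split an archive key into n shares, ALL of which are needed.

    The first n-1 shares are uniformly random; the last is the XOR of
    the key with all of them. Any incomplete subset of shares is
    statistically indistinguishable from random noise, so a single
    custodian (or their employer) learns nothing alone.
    """
    shares = [secrets.token_bytes(len(key)) for _ in range(n_parties - 1)]
    shares.append(reduce(_xor, shares, key))
    return shares

def recombine(shares: list[bytes]) -> bytes:
    """XOR the complete set of shares back together to recover the key."""
    return reduce(_xor, shares)
```

Because each share is uniformly random on its own, a subpoena served on any one custodian—or on the university hosting one share—yields nothing without the active consent of every other shareholder.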
The fundamental principle here applies equally to anthropological fieldwork of any sort, which is fundamentally a collaborative exercise in the joint production of knowledge narratives. That knowledge does not belong to the academic simply because they have the wealth to fly into a field site and the power to write and be published in a global language.
For ethnobiologists it is crucial that whatever knowledge they encounter is credited to the communities that generate and steward that knowledge. Of course, tacit or latent knowledge of the environment may well only become commodifiable (whether as information in a peer-reviewed article, a pamphlet published for the community, or as a new drug patent) through the interaction between the researcher and the community—but when that knowledge is transformed into a potential commodity (not ‘discovered’ but ‘made awkwardly visible’), then the intellectual property rights must remain with the community.
This has to be considered together with the way in which universities now see themselves as vehicles through which research can be commodified and transformed into income. The purpose of the university has become, in part, to sell research in whatever way is most profitable; taken in this light, academics who work in a university are regarded as producers but not owners of their research: they are cows to be milked for students, funding, patents and spinoffs. The ethical consequences for our research collaborations with indigenous and local communities are often not carefully considered as a distinct case separate from, for example, lab research. University ethics review boards are concerned just as much to protect the corporate interests of the university as they are to ensure that the research is undertaken through just means and for just ends. The legal frameworks within which universities operate, though, assume that there is such a thing as commodifiable knowledge in all kinds of research. That tension means that, for the university, research into knowledge that cannot justly be expropriated, such as ancestral lineage knowledge or community-based ecological or medical knowledge, has to be squeezed into a system that nonetheless creates commodifiable knowledge. For us, the same knowledge processes and patterns have to be practically protected from exposure to this corrosive legal regime.
Corinne Hayden, in When Nature Goes Public (Princeton 2003), challenges the way in which research in marketplaces can be a way for ethnobiologists to evade their responsibility to communities as they seek to understand medical systems and perhaps find new medicines. Our research took place in markets not because they are a space for easy expropriation of local medical knowledge, but because markets in the Himalayas are a key site for the creation and distribution of medical knowledge. That small difference in intent does not protect our data in itself, though; and it is possible that an unscrupulous bioprospecting company might try to use legal tools to force my university to divulge secure information from our fieldwork. For example, a company (or even the university itself) might argue that because the data came to light in the marketplace it was no longer proper to any particular community. If we put our representation of that data—fieldnotes, photographs, what have you—onto university servers, we would have already given up our ability to choose our informants over my employer: the choice would belong practically to the system administrators controlling the media, and effectively to the university’s managers. As we have seen above, university managers are not, and do not claim to be, bound by the same professional and moral standards that we academics uphold.
For us, then, the threat that our research, which belongs as much to our informants as it does to us, might be expropriated through the corporate interests of my university is a small but serious risk—and putting our data onto the university servers would remove our ability to guarantee our trustworthiness to our collaborators.
How many other projects must there be underway right now where academics are worried about balancing their moral obligations to their collaborators with their contractual vulnerabilities? How many other academics have realised exactly how seductive and foolish using university services for confidential archives might be? Where the archives are tangible, what does this do to the actual archivists, the custodians of manuscripts or collections, who are themselves researchers balancing ethical challenges over preservation and repatriation?
The argument that the archive must not be destroyed is not a universal argument at all, and here smacks of colonial academic inquisition. In conversations about secret knowledge I have had around the Kathmandu Valley with elder tantric lineage holders as well as younger practitioners, all the elders I asked and most younger practitioners said that it is preferable to let a transmission lapse than to transmit it just for the sake of its preservation. It would seem that a number of secret lineages have in fact lapsed in the past two centuries because the auspicious circumstances for transmission simply did not occur.
It seems to me that those who insist the archive must not be destroyed cannot simply rest on a moral claim, but have to provide a real solution to this challenge. What sorts of practical solutions might there be? For example: are there any modes of encryption—implementable on potentially suspect hardware—that have the equivalent of a dead man’s switch, such that the absence of a key (or indeed two or more matched keys, representing the shared stakes of the co-producers of the archive) causes the archive to self-destruct or, less drastically, to become re-encrypted such that decryption takes impractically great effort? This would leverage Moore’s law to catapult the archive safely into the future. Or do we assume that we will always have to reserve the capacity and the willingness to destroy the archive?
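The ‘matched keys’ part of that question, at least, has a well-understood answer in threshold cryptography. Shamir’s secret sharing lets the co-producers of an archive each hold a share of the decryption key such that any k of n shares reconstruct it and fewer reveal nothing; the dead man’s switch would then be policy layered on top—shares that are withheld, rotated, or allowed to lapse. A minimal sketch in pure Python; the prime and the parameters are illustrative, not production choices:

```python
import secrets

# A (k, n) threshold scheme in the spirit of Shamir's secret sharing:
# the secret is the constant term of a random degree-(k-1) polynomial
# over a prime field; any k shares interpolate it, fewer reveal nothing.
PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte key

def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Deal n shares of `secret`, any k of which suffice to recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total
```

Letting a transmission lapse, in this scheme, is as simple as the custodians declining to present their shares: with fewer than k of them, the archive is mathematically beyond recovery.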