A significant thing has happened. Because this significant thing is a kind of academic fight that has taken place in a leading but niche academic journal called Security Dialogue, it’s easily missed, both in fact and in significance. What has happened, in brief, is that, after much difficulty, two (critical) security theorists, Ole Wæver and Barry Buzan, published a paper (“Racism and responsibility – The critical limits of deepfake methodology in security studies: A reply to Howell and Richter-Montpetit”) striking back at a characterization of their contribution to security theory as “foundationally racist,” which appeared in an earlier article by Alison Howell and Melanie Richter-Montpetit in Security Dialogue.
In fact, the paper Wæver and Buzan are responding to (“Is securitization theory racist? Civilizationism, methodological whiteness, and antiblack thought in the Copenhagen School”) characterizes their approach to security theory as being “structured not only by Eurocentrism but also by civilizationism, methodological whiteness, and antiblack racism.” Quelle surprise. Woke academia is doing what Woke academia does—bully its way into apparent prominence by calling something something-ist to its core. What’s different—and significant—here is that Wæver and Buzan struck back, apparently with significant difficulty (Wæver tweeted, “The theory’s creators #BarryBuzan & I now got reply publ’ed w many hurdles”), and the academic journal, Security Dialogue, published it, no doubt at its own risk.
How can we be so sure about this risk? Well, Woke academia is going to do what Woke academia does, so we can effectively guarantee that the journal, its editors, the authors, their friends and colleagues, and on down the line will all be accused of complicity in racism for this little stunt (by which I mean responsible academic practice that’s long overdue). As noted by the authors in their acknowledgements: “Thanks to those who helped. We will not name them, because we don’t want to have happen to them what happened to us.” Ominous, but also typical now. Woke academia is as Woke academia does, after all.
In their response, which is only a short summary of the response they needed to write (due to length restrictions imposed by the journal), Wæver and Buzan call for the paper that accuses them of racism to be retracted for gross failures in methodology, if not for the libelous accusation, which may or may not happen. (My own feelings on that matter are mixed for complicated reasons I’ll come back to in a postscript to this essay.) They also deliver a much-needed and scathing critique of the “methodology” that allowed the accusatory authors to reach the conclusions they undoubtedly started with. There, Wæver and Buzan pointedly brand the lazy, accusatory “critical” approach that calls them racist as a deepfake methodology. This bit of brilliance is a metaphor the world has been needing, and it does much to expose the flawed approach in what I, following their own Theorists, have been calling “Critical Social Justice” for what it is.
My purposes here are to highlight this concept of “deepfake methodology” and to contextualize it in terms of what I have been learning about these lazy critical methods. This, I hope, will not only illuminate how badly off the scholarly rails Critical Social Justice and its Theory are but also allow us to start to draw some lines around responsible and irresponsible approaches to critical theories in general. Overall, I hope to demonstrate here that Critical Social Justice is, in fact, not scholarship at all, which would put legs under Wæver’s and Buzan’s call for retraction (thus justifying a rather complicated discussion in the postscript).
First, appreciating just how typically absurd the accusation against Wæver and Buzan is requires knowing that they are, in their own words, the “main architects” behind this branch of security theory, which appears within the so-called Copenhagen School (disclosure: I don’t know much about security theory, and this paragraph is mostly for painting a context-giving picture in broad strokes). The Copenhagen School is, in fact, a critical theory approach to security theory, though it seems not to be an insane one (a similar division seems to arise between sociologists who hew toward Pierre Bourdieu versus ones who accept Michel Foucault’s views of power as fundamental). That is, the project is likely to be pretty leftist, so far as these things go, and already questions, if not resists, the status quo in security theory, which appears to be nationalistic. Buzan, in fact, identifies his affiliation as the London School of Economics and Political Science, which is very famously a slow-burn (incrementalist) socialist think tank built upon the Fabian model.
So, Wæver and Buzan are already critical theorists in their own fashion, and they appear to be rightly fed up with the lazy, paranoid, identity-politics-rooted style of critical theory that has, with the rise of intersectionality in the 1990s, come to dominate more or less all critical methodology. Good for them. It’s long past time for academics to start standing up to this crap, and their metaphor of “deepfake methodology” is likely to go a long way toward helping people understand just how hollow, irresponsible, and unethical that approach—which is the hallmark of Critical Social Justice—really is.
Understanding why this approach is so bad requires us to understand what Wæver and Buzan mean by “deepfake methodology.” As most of us will understand now, a deepfake is something like an extension of a doctored photograph to include audio and especially video, which is only really possible using advanced machine-learning algorithms and fairly powerful computers. In the audio version, a deepfake will take snippets of a person’s voice and reconstruct how they speak, enabling the production of an audio clip that has them saying anything the creator wants. With video, it’s even more insidious, combining both the audio feature and the ability to replicate one’s face, facial movement, and facial expressions in a video setting (we’ll all be deepfake porn stars within a few years, which is bad enough without even having to get into how insanely dangerous this will be in politics). With deepfake technology, as Wæver and Buzan point out, it is possible to chop up a person, their identity, or, relevantly here, their words and ideas, and make them very plausibly seem to say (or do) whatever you want them to say or do for whatever strategic or maleficent purpose you might have with a simulation of their likeness.
And Wæver and Buzan don’t mince words about how bad this approach is. “H&RM’s article could perhaps best be used as a teaching tool for how not to make an academic argument. The kind of deepfake methodology it employs should have no place in academic debates and should certainly not be published in a reputable journal,” they write at the very top of their rebuttal. We should all be so lucky that these words come true.
Their accusation, then, is that the Critical Social Justice approach to “analysis” amounts to the creation of a deepfake: evidence of racism, colonialism, sexism, or whatever other -isms or -phobias the activists at hand are obsessed with can be “found” in essentially any piece of writing, body of work, or anything else. This is, of course, exactly what my colleagues and I have been saying for years now, but Wæver and Buzan do a great job of giving the metaphor life by showing how it’s done. They describe these approaches this way:
If there is a methodology at play, it is deepfake in the sense that if you break a corpus of text down into small fragments, you can reassemble it to say anything you want. Deepfake as analogy does not imply any claim about intentional falsehood. The analogy is to the technique: making somebody ‘speak’ by using splinters from them reassembled to produce meaning disconnected from the original texts.
These techniques, at least as they’re utilized in Critical Social Justice, have names: discourse analysis and, especially, close reading, both of which can, of course, be done responsibly instead of as a means of creating deepfakes. The objective of discourse analysis is to carefully pick apart the way words are used in a text and examine how they’re used in relationship to other words, including words that aren’t there (drawing upon poststructuralist philosopher Jacques Derrida’s concept of absence). This renders it potentially highly interpretive, to say the least. Wæver and Buzan clearly identify this problem in the “H&RM” paper (their acronym for the accusing authors’ names):
H&RM misattribute positions to securitization theory to produce additional guilt by association. For instance, they say we cite ‘frequently and favorably . . . Huntington’s racist . . . “clash of civilizations” thesis’ (H&RM, 2020: 7). In most of the cited instances, we reference Huntington as cases of securitization – that is, we analyse empirically how various security discourses articulate threats and referent objects. H&RM fail to distinguish between citations of practices analysed empirically and citations of academic work drawn upon. That we study a securitization performed by Huntington does not signify our agreement with his thesis – on the contrary, we critically analyse its political performance of securitization (and at length contrasted our approach to Huntington’s thesis in Buzan and Wæver, 2003: 41).
When combined with close reading, discourse analysis really does render an analysis of a text close to deepfake levels in terms of being able to contort it into saying whatever one wants to read into it (here: racism and colonialism). Close reading is, at least in Critical Social Justice praxis, scouring a text for particular ways that problematics appear and then highlighting those as indicative of the “true meaning” of the whole text, even if such a reading of the text would genuinely be impossible by any honest means. Indeed, the quoted example just above indicates that the paper being rebutted uses both close reading and discourse analysis to conclude the opposite of what the authors of the original texts intended. As they summarize (in the paragraph defining the deepfake methodology, which I’ll repeat in their words):
H&RM generally use quotations radically out of context. They never study what is done in the texts they are ‘reading’. They say nothing about their own methodology or data selection and give no principles for interpretation. They do not define racism (see response by Lene Hansen), and they don’t discuss at all what it means to read a theory and judge whether it is racist. Given that this is the theme of the article, it is disturbing that Security Dialogue has published it. Despite H&RM’s repeated assertions about something being ‘foundational’ to securitization theory, they do not follow any standards for how to find what is ‘foundational’ for, or ‘structures’, a theory. If there is a methodology at play, it is deepfake in the sense that if you break a corpus of text down into small fragments, you can reassemble it to say anything you want. Deepfake as analogy does not imply any claim about intentional falsehood. The analogy is to the technique: making somebody ‘speak’ by using splinters from them reassembled to produce meaning disconnected from the original texts.
This is a trenchant analysis that results in a very useful analogy, then, for identifying and characterizing the sloppy, lazy, and profoundly unfair Critical Social Justice approach, which its practitioners mostly use to bully their way into having produced “scholarship.” Likening it to deepfakes particularly exposes not just how hollow it is but how unethical.
Wæver and Buzan clearly understand how this bullying works, as they raise the key idea in a bulleted list of items needing to be addressed at the very top of their paper, and it’s genuinely shocking to see this question asked in print in an academic journal, given the undeniable hegemony of Wokeness in academia in any year later than 2010. They ask, “Is racism such a uniquely damaging force that the academic struggle against it warrants violating scholarly norms and potentially sacrificing the private and professional integrity of non-racist colleagues?”
That question alone—even hanging there in the air, asked merely rhetorically—has the potential in and of itself to pull down the entire Critical Social Justice facade of moral superiority that has all but conquered academia as a whole and turned it Woke. The answer in Woke academia is obvious: yes, it does, and it has had that power for some time now. Thus, by raising it, Wæver and Buzan point yet another finger at the naked emperor as she/her parades through the public square. Are academics really willing to sacrifice all of their norms, methods, and integrity in the name of a moral witch hunt that’s threatening to burn down the house if it doesn’t get its way?
To be sure, this question has this power for two reasons. First, it’s because of course they are, at least so far as almost anyone can tell. Second, it’s because it lays bare what I’ve only come to understand recently myself about the Critical Social Justice approach. It’s that the fundamental epistemological methodology of Critical Social Justice is problematizing. Problematizing plays the role, in Critical Social Justice, that falsification plays in science, and that’s how we got here and how we can figure out how to walk back from the ledge before it’s too late. This takes a moment’s explanation, though.
When it comes to producing knowledge—epistemology—there are fundamentally two processes: forwarding new ideas and destroying “bad” ones. It’s actually this destructive force that makes knowledge happen, and, as couldn’t be otherwise, there are better and worse ways to go about it. At some point, this process was mostly the province of theology. “Does it agree with the Word of God?” was the central question that eliminated “bad” ideas, and this eventually got perverted into “Does it threaten faith?” which seems to have led to the Inquisition. This, for whatever community cohesion it might produce, isn’t a great method. It’s also worth pausing to appreciate that it’s largely a moral method, as religions are in large part sociological structures that provide and enforce moral law (whether or not they also enforce state law).
Intermingled with theology, especially once the Renaissance occurred and the Enlightenment got started, was philosophy. Philosophy took up the charge of knowledge by asking questions like “Is this logically consistent, valid, sound, and so on?” That is, philosophy eliminated bad ideas by showing them to be in contradiction with other ideas convincingly argued or already deemed to be good. The relevant condition is called defeasibility, and philosophy ultimately rejects ideas it can conclude have been defeated (which seems never to really happen fully). This step from theology to philosophy was, in fact, an improvement, but it still left a lot to be desired. It’s entirely possible to think of six impossible things before breakfast every day, after all.
Sooner or later, we stumbled upon science in the moment we stumbled upon empiricism, which works by a process that was later named (by Karl Popper) falsification. Logical consistency isn’t good enough, so we take our logically consistent ideas (theories, in the philosophical sense) and test them against the world by observation and experiment. That which the world rules as being false is falsified, and those ideas are considered bad by scientific standards, however beautiful the theory that proposes them. This led to a lot of advancement very rapidly and was generally accepted as good except by people who wanted to be able to forward their own falsified or unfalsifiable ideas, sometimes without having to do any more real work to determine their truth than to feel them really strongly (which we might call “subjectivism,” to be set against objectivity).
In the 1920s and 1930s, the Critical Theorists in the Frankfurt School realized that defeasibility followed by falsification is a pretty cold, inhuman way to go about figuring out what is and isn’t good to believe, and that, because of Hume’s great divorce, it doesn’t do anything to get us to moral truths. They were, living in the afterglow of Marxism and under the shadows of fascism and the failures of the ideologies and regimes of the 19th and early 20th centuries, rather concerned with the way that this cold, mechanistic approach fails to prevent the great sin of the modern era: “oppression.” Among other developments, they therefore laid down the need to add another (I’d argue fundamentally religious) dimension to the idea-vetting process: problematizing, which roughly means identifying hidden biases, assumptions, and problematics (further shortcomings) that various ideas are based upon or contain that lead them to maintain oppression and thus prevent liberation from it.
Though it wasn’t the intention of the Critical Theorists in Frankfurt (and later Geneva, New York, and beyond), in critical theories in general, but especially in the highly anti-intellectual critical Theory of Critical Social Justice, problematizing has become the only relevant vetting tool for ideas. Thanks to the developments in radical activism and the skeptically anti-realist contributions of postmodernism, which Critical Social Justice adopted more or less in full and bent utterly to its activist purpose, truth and falsity really ceased to matter. We can hearken back, for example, to the feminist writer Kelly Oliver, writing in 1989, for some idea of this. She wrote:
I propose that in order to be revolutionary, feminist theory cannot claim to describe what is, or, “natural facts.” Rather, feminist theories should be political tools, strategies for overcoming oppression in specific, concrete situations. The goal, then, of feminist theory, should be to develop strategic theories—not true theories, not false theories, but strategic theories. This strategy for theory making does away with the monodimensional power structure which polarizes all theories as true or false. Theory legitimation no longer rests with the absolute authority of the Truth of nature. … Feminist theories…can be conscious strategies with which to undermine and dissolve the oppressive “strategy” of patriarchy, one part of which is the construction of the absolute authority of recalcitrant nature.
That was 1989, though, and we’re more than 30 years down that road from that revolutionary ambition. Today, things are different, and problematization rules supreme. Problematic ideas, whether true or false, cannot be allowed to stand, and thus we come to a point where leading academics in their own field stand so libelously accused, watching their lives’ work kneeling before what they know is a loaded gun, that they are compelled to publicly ask: “Is racism such a uniquely damaging force that the academic struggle against it warrants violating scholarly norms and potentially sacrificing the private and professional integrity of non-racist colleagues?” The question is almost unbelievable in today’s climate, and that’s why these are the kinds of questions that only get asked when, if not at the point of a gun, the person asking feels the rope starting to tighten around their neck.
It’s really not a difficult intellectual exercise to realize how the quest to problematize—and to do so strategically—lends itself directly not to the abandonment or violation of scholarly norms and all the rest but to their intentional disruption and rejection. Decolonize the curriculum! Give us academic, epistemic, citation, and research justice! Equity equals disrupt and dismantle! The fundamental epistemological approach has changed, after all, and so scholarly norms built upon the old way have to go. They are, in fact, problematic. They are, in the highly offended new world order, bad ideas. Their truth or falsity misses the point.
This is the part of the story where the strategic bit spelled out by Oliver becomes relevant. You do not merely overthrow centuries of academic progress and replace it with a completely new approach that suits you and yours. Feminists, perhaps more than anybody else, discovered that a fun combination of social constructivism and problematizing is, at least in academia, quite the crowbar, one that will let them dislodge even rather stubborn nails like objectivity, science, and “the construction of the absolute authority of recalcitrant nature.”
Unfortunately for them, the black feminists figured out how to turn the tool back on them—“white feminism” is systemically racist, after all—and so intersectionality was brought into the world with the twin objectives of advancing “lived experience” as a “way of knowing” and scouring academic literature (working from law outward) to find and unmask the problematics that might stop it from working. This is a strategy that can break the alleged hegemony of people who, having real methodologies in hand, don’t want to hear any of this immature radical bullshit unless it can offer the two things it can’t possibly produce: genuinely convincing arguments that stand on their own merits and good, hard evidence.
Thus it is that activists, liberated from any obligation to truth and falsity and seeing objectivity as a myth (that’s used to maintain oppression, even), developed and deployed a deepfake methodology: “making somebody ‘speak’ by using splinters from them reassembled to produce meaning disconnected from the original texts.” It’s all part of that “unmasking” project that starts with core questions such as we heard articulated by critical whiteness educator Robin DiAngelo (bestselling author of White Fragility): “The question is not did racism take place but how did racism manifest in that situation?” This is the alchemy by which they find proof of the hidden systems of oppressive power in anything they decide they want to supplant. In this case, it’s an already critical approach to security theory, but one need only look around to see that it’s just about anything and everything that has any kind of power, status, or influence attached to it that they’re eager to infect.
What should be said in conclusion? I think, though it’s bad form to close on a long blockquote, Wæver and Buzan have a pretty good grasp on the matter and should just be quoted through the last two paragraphs of their article, in full:
H&RM’s article is dangerously counterproductive to the important task of dealing with systemic racism in international relations. Debasing the currency of academic analysis will steer the discipline into a post-truth direction antithetical to its epistemological integrity and social purpose. The power of racism in the world today and its partaking in our discipline are far too serious to be channelled into polemics against made-up targets. H&RM water down the meaning of racism so that it captures practically everyone in social science. Having deemed postcolonial scholarship not radical enough, they have set up a machine that will judge any theory racist unless it foregrounds race in their specific jargon of ‘methodological whiteness’ and ‘antiblack racism’. Any theory not centred on racism in their sense is racist – not just more or less capable of analysing racism, but ‘racist’, ‘antiblack’ and ‘white supremacist’. International relations certainly needs to engage the question of racism – both as crucial in world politics and as an internal challenge entrenched within the historical constitution of the discipline – but not like this.
We think Security Dialogue should retract the article because its deepfake methodology can be used to ‘prove’ anything. H&RM’s practices, like falsely attributed quotes and systematic disregard of countervailing evidence, void their central argument and amount to serious academic misconduct. Such flawed work should not warrant publication in a leading academic journal.
That really is the long and short of it. What role problematization should play in our knowledge-production processes may be an open question, but since it forces a moral consideration into an epistemological arena, the answer may well have to be none. Let’s leave that game to the churches, be they secular or otherwise.
Postscript: On retraction?
The question now raised is whether or not Wæver and Buzan are right in their call for retracting the paper they are responding to, and as I noted at the beginning of this piece, my feelings on the matter are mixed. I even have skin in this game, in a sense. On the one hand, I strongly agree with them. The “deepfake methodology” is not a real methodology at all, and basically all work that relies upon it should be removed from the academic record for exactly the same reason that papers bearing fabricated or manipulated data or cooked analyses should be removed. Such papers don’t add to scholarship; they muddy and besmirch it. They do constitute academic misconduct of the rankest type.
On the other hand, though, I have three major arguments against retraction to weigh against this view. The first is mostly practical in nature. If the H&RM paper under scrutiny here is to be retracted for “deepfake methodology” (and potential libel), so should a lot of other papers, perhaps most or all of the papers published in many journals and across entire fields and subfields, papers upon which entire tenured careers have been built and thus hang in the balance. The review process for this would be lengthy, difficult, and ethically fraught, but it would, perhaps, if it could be funded, provide an army of underemployed academics with reasonably worthwhile academic janitorial jobs cleaning up the literature after this long-standing activist-scholarship mistake.
The second of these reasons is, I admit, a bit spiteful and not very strong. It is that perhaps journals should have to wear the shit they stain themselves with, as a matter of public record and future evaluation of the journal. This reason is redolent of Hawthorne, though, and it suggests a better argument for retractions: those retractions should still be listed in the guilty journals, where they’d count pretty heavily against them. Could you imagine entire issues, if not whole volumes, of journals in fields like feminist geography comprising nothing but retracted articles because they all make use of deepfake methodology? It’s almost too delicious to contemplate, and it couldn’t happen for a better reason: one that is methodological, not ideological, whatever defenders might insist.
My own issues aside, a much stronger reason, one that would constantly threaten indigestion over such gluttonous indulgence, is that academic journals should very much be a place where academic debate happens, and this requires submitting bad ideas to the scrutiny of the wider academic community. Academic freedom should be damn near sacrosanct throughout all of academia, and it may even need to stand to protect that which would cheat and destroy it.
The Critical Theorist Herbert Marcuse famously wrote about the need for a “discriminating tolerance” or a “repressive tolerance” by which we are intolerant of that which is intolerant, lest it become a problem. This approach is not one I can support, especially where the advancement of knowledge depends upon people being able to forward that which is flawed, including methodologically, so that others can make use of the example and lead us all to do better down the line. It should be enough to identify a deepfaked paper for what it is wherever it appears and then condemn it as such (that is, as work that uses problematization as its primary epistemological device, along with tools like close reading applied to discourse analysis).
Between these arguments, I still hedge slightly toward agreement with Wæver and Buzan, if for no other reason than that it would be good and standard academic practice to retract papers that are known to be methodologically flawed, a status that applies to every paper that utilizes deepfake methodology. Just as we wouldn’t accept in a court of law a deepfaked confession, knowing that it is a deepfake, we probably shouldn’t accept deepfaked scholarship either.