Authorised Speech and Token Restraint

At the Bafta film awards last Sunday, a man in the audience shouted the N-word while two Black actors were on stage. The BBC broadcast it. The fallout was considerable.

The man was John Davidson, a Tourette syndrome activist whose life story had inspired one of the nominated films. He has Tourette’s. He didn’t choose to shout it. He was, by his own account, distraught. His statement afterwards was careful and precise: “My tics have absolutely nothing to do with what I think, feel or believe. It’s an involuntary neurological misfire. My tics are not an intention, not a choice and not a reflection of my values.”

Most people accepted this. But it raises a question that is harder than it looks. The word came from his brain, through his vocal tract, in his voice. It was linguistically formed—not a grunt or a spasm but a semantically loaded utterance. Something in him produced it. If it wasn’t him, who was it? And if it wasn’t him, where exactly does he end?

The standard move here is to invoke volition. We hold people responsible for their words because we assume intent. Remove intent, and the moral framework dissolves. Davidson didn’t mean it; therefore, it wasn’t really his; therefore, he bears no responsibility. Case closed. But this doesn’t actually answer the philosophical question. It just sidesteps it. Because here is the thing about Davidson’s tics that deserves closer attention. They are not random. They are contextually coherent. Unpleasant, shocking, certainly disruptive, but coherent. At the ceremony, host Alan Cumming made a joke involving Paddington Bear and his own sexuality. Davidson’s tics responded with homophobic slurs and the word “paedophile”—triggered, he explained later, because Paddington is a children’s character. Something in his system was tracking the semantic content of what was being said. It identified what was transgressive in context. It reached for the worst available word. Then it fired. That is not noise. That is a process with its own logic, running in parallel with Davidson’s conscious attention, with access to his semantic knowledge, and occasionally—when the usual controls fail—with access to his voice.

There is a useful way to think about this borrowed from how AI large language models work. A language model operates in high-dimensional continuous space. Vast amounts of computation happen there—pattern recognition, semantic association, something that functions like reasoning. None of it is directly visible. What we see is the output: a sequence of tokens, one after another, a flat stream of words.

The projection from that internal computation to the token stream is lossy. Much of what happens in the model never surfaces as language. The token stream is not the computation. It is a particular kind of readout of the computation, filtered and serialised into the only form we can directly receive. Now consider what controls what gets into that stream. There is, in effect, a gate. Not everything the model computes becomes output. The gate is part of what shapes the model’s behaviour, its apparent character, what it will and won’t say. It is what makes people like one model and hate another.
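The gate can be made concrete with a minimal sketch. This is illustrative only, not any particular model's implementation: the computation produces scores (logits) over every candidate token, including ones the gate refuses to publish; the gate simply masks those candidates before anything is sampled. The token names and scores below are invented for the example.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def gated_sample(logits, banned, rng):
    """Mask banned tokens, then sample from what remains.

    The computation (the logits) still assigns mass to the banned
    tokens; the gate only controls what gets published.
    """
    allowed = {tok: v for tok, v in logits.items() if tok not in banned}
    probs = softmax(allowed)
    toks, weights = zip(*probs.items())
    return rng.choices(toks, weights=weights, k=1)[0]

# Hypothetical logits for the next token: the highest-scoring
# candidate is exactly the one the gate refuses to publish.
logits = {"<slur>": 3.0, "gosh": 1.0, "wow": 0.5}
rng = random.Random(0)
out = gated_sample(logits, banned={"<slur>"}, rng=rng)
print(out)  # never "<slur>", whatever the computation preferred
```

The point of the sketch is that the banned token is not absent from the computation; it can even be the computation's strongest preference. It is absent only from the published stream.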

This is roughly what neuroscience suggests about the human case, though it arrived at the conclusion from a different direction. The self is the publisher, not the author. It is not the computation but the editing function: what goes out, not what gets thought.

Michael Gazzaniga’s split-brain research in the 1960s showed that the left hemisphere acts as an “interpreter”—it observes behaviour generated by other systems and constructs a retrospective narrative of unified authorship. We don’t experience ourselves as unified because we are. We experience it because one subsystem is very good at telling that story after the fact. The verbal self—the “I” that speaks, explains, claims authorship—may be less the source of thought than its narrator. It sees the outputs of processes it didn’t run and reports them as its own decisions. On this view, what we call the “I” is substantially the gate—the function that governs what reaches speech from “computation”, what gets claimed, what gets published as the self’s output. Normally the gate and the computation are so tightly coupled that we can’t distinguish them. Tourette’s decouples them. The gate fails for certain kinds of output, and we see that the substrate was not unified to begin with.

Davidson’s distress is entirely coherent under this account. He is not distressed because he acted against his values. He is distressed because something that looked like him did. Something with access to his voice, his semantic knowledge, his body, but not under the control of the function he identifies as himself.

This reframes the question slightly. We tend to ask: what caused the tic? And the answer—some misfiring in the basal ganglia, a failure of inhibitory control—while true, is also incomplete. The more interesting question is: what normally prevents the tic? What is the gate, and what runs it?

In ordinary cognition there may be a great deal happening in the substrate that never gets tokenised into speech—not because it isn’t there, but because something governs what reaches the output. Much of the brain’s activity is never published, though the phrase “he has no filter between his brain and his mouth”, which my wife often says of me, suggests that the control is imperfect. The verbal self is the name we give to whatever makes that editorial decision and claims authorship.

When the gate fails, we don’t see randomness. We see coherent sub-processes that were running all along, now briefly with access to the channel they’re normally denied. The tic is not an intrusion from outside. It is an internal process that has temporarily escaped editorial control.

Now consider what certain comedians do for a living.

Dave Chappelle, early Richard Pryor, in a different moral register Bernard Manning—the act is partly constructed around the comedian having deliberate access to the gate in a way the audience doesn’t. They say the thing the audience is computing but suppressing. The laugh is partly recognition, partly relief, partly the vicarious experience of the gate being lifted by someone else’s hand. The comedian is an authorised, licensed publisher of material the rest of us keep in the substrate. The skill—and it is a genuine skill—is knowing exactly how far to go, when to pull back, and how to ensure the frame holds. Controlled gate failure, performed for an audience that has consented to the performance.

Chappelle’s career is substantially about making this mechanism visible. His famous walkaway from a $50 million deal was articulated partly in these terms—he became uncertain whether the audience was laughing with the subversion or simply enjoying having the gate lifted on material they wanted to consume without guilt. Whether he was controlling the frame or the frame was controlling him. Whether his speech was authorised-as-subversion or merely authorised-as-release.

That is the knife edge. A Tourette’s tic and a Chappelle bit can produce the same word in the same room, but the intentional structure is entirely different. One is gate failure. The other is gate performance. Except Chappelle’s anxiety—the anxiety that ended his show—was that the performance might be providing cover for something closer to the former. That the laughter was coming from a place the comedy wasn’t actually reaching.

Manning is the case where the performance defence eventually collapsed. The gate, it turned out, was the man.

Dementia approaches from the opposite direction, and in some ways is the starkest case of all. In Tourette’s the gate fails selectively and intermittently. In dementia it degrades systematically as the substrate that runs it is physically destroyed. You can watch the editorial function diminish over months and years. What typically goes first is not memory in the crude sense but the social and executive apparatus—the machinery that governs what gets said, to whom, in what context. The person starts saying things they would previously have filtered: sexual remarks, racial language from fifty years ago, brutal assessments of people in the room. Families often find this the most distressing feature of the disease, more than the memory loss itself. The person seems to have become someone else—crueller, coarser, unrecognisable.

But the logic developed here suggests the opposite reading. They have not become someone else. The editor has gone, and what remains is substrate that was always there, now publishing without authorisation. The language from half a century ago was always in the network. The judgements about the people in the room may reflect something that was always computed but never passed the gate.

This is uncomfortable. It implies that a significant portion of what we think of as a person’s character—kindness, decency, tact, a person’s goodness in daily life—may be substantially gate rather than ground. Who we are is not what we compute but what we suppress. The consoling counter is that the gate is real. The suppression is itself a genuine expression of values, not mere performance. Davidson’s distress is evidence of that. The narrator who identifies with the gate is genuinely not the process that produced the tic. A publisher who refuses to print something ugly is making a real choice, even if the ugly thing exists somewhere in the system. But dementia strips that away and leaves the question uncomfortably open. How much of the person we loved was the computation, and how much was the editing?

Speech is authorised in two senses. It is permitted—cleared for publication by whatever runs the gate. And it is authored—it carries the signature of a self, it is owned, it counts as an expression of who we are. Normally these travel together so seamlessly that we treat them as one thing. Davidson’s tic, Chappelle’s comedy, and a person with late-stage dementia saying something unforgivable to their daughter—each in a different way pulls them apart.

The “I” is not the thinker. It is the publisher, the tokeniser of thought to speech. And the question of who we really are may depend, more than we would like, on what we choose not to print.

 

Ideology and the Illusion of Disagreement in Empirical Research

There is deep scepticism about the honesty of researchers and their capacity to say things that are true about the world. If one could demonstrate that their interpretation of data was motivated by their ideology, that would be powerful evidence for the distrust. A recent paper in Science Advances ostensibly showed just that. The authors, Borjas and Breznau (B&B), re-analysed data from a large experiment designed to study researchers. The researcher-participants were each given the same dataset and asked to analyse it to answer the same question: “Does immigration affect public support for social welfare programs?” Before conducting any analysis of the data, participant-researchers also reported their own views on immigration policy, ranging from very anti- to very pro-immigration. B&B reasoned that, if everyone was answering the same question, they would be able to infer something about the impact of prior ideological commitments on the interpretation of the data.

Each team independently chose how to operationalise variables, select sub-samples from the data, and specify statistical models to answer the question, which resulted in over a thousand distinct regression estimates. B&B treated the observed diversity of modelling choices as data, examining how the research process unfolded and how the answers related to the participant-researchers’ prior views on immigration.

B&B suggested that participant-researchers with moderate prior views on immigration find the truth—although they never actually say it that cleanly. Indeed, in the Methods and Results they demonstrate appropriate caution about making causal claims. From the Title through to the Discussion, however, the narrative framing is that immoderate ideology distorts interpretation—and this is exactly the question their research does not, and by design cannot, answer.

Readers of the paper did not miss the narrative spin in which B&B shrouded their more cautious science. Within a few days of publication, the paper had attracted hundreds of comments and had been picked up by international news feeds and blogs. Commentaries tended to frame pro-immigration positions as more ideologically suspect.

There are significant problems with the B&B study, however, which are missed or not afforded sufficient salience. To understand the problems more clearly, it helps to step away from immigration altogether and consider a simpler case. Suppose researchers are given the same dataset and asked to answer the question: “Do smaller class sizes improve student outcomes?” The data they are given includes class size, test scores, and graduation rates (a proxy for student outcomes). On the surface, this looks like a single empirical question posed to multiple researchers using the same data.

Now introduce a variable that is both substantively central and methodologically ambiguous, a measure of the students’ socio-economic disadvantage. Some researchers treat socio-economic disadvantage as a covariate, adjusting for baseline differences to estimate an average effect of class size across all students. Others restrict the sample to disadvantaged pupils, on the grounds that education policy is primarily about remediation or equity. Still others model heterogeneity explicitly, asking whether smaller classes matter more for some students than for others. Each of these choices is orthodox. None involves questionable practice, and all of them are “answering” the same surface question. But each corresponds to a different definition of the effect being studied and, more precisely, to a different question being answered. By definition, different models answer different questions.
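A toy simulation makes the point concrete. The data below are entirely hypothetical (not the B&B dataset, and not real education data): suppose smaller classes raise scores a lot for disadvantaged pupils and a little for everyone else. Then the “average effect” and the “effect among disadvantaged pupils” are simply different numbers, and each defensible specification recovers a different one.

```python
import random

rng = random.Random(42)

# Hypothetical pupils: half disadvantaged, half randomly assigned
# to small classes. True effect of a small class on test scores:
# +8 points if disadvantaged, +2 otherwise.
def simulate(n=10_000):
    pupils = []
    for _ in range(n):
        disadvantaged = rng.random() < 0.5
        small_class = rng.random() < 0.5
        effect = (8 if disadvantaged else 2) if small_class else 0
        score = 50 + rng.gauss(0, 5) + effect
        pupils.append((disadvantaged, small_class, score))
    return pupils

def mean_diff(pupils):
    """Mean score in small classes minus mean score in large ones."""
    small = [s for d, c, s in pupils if c]
    large = [s for d, c, s in pupils if not c]
    return sum(small) / len(small) - sum(large) / len(large)

pupils = simulate()
avg_effect = mean_diff(pupils)                       # population question
dis_effect = mean_diff([p for p in pupils if p[0]])  # equity question
print(f"average effect: {avg_effect:.1f}")  # near 5
print(f"disadvantaged:  {dis_effect:.1f}")  # near 8
```

Both analysts use the same data and orthodox methods; neither is biased. They are estimating different quantities because the prompt left the estimand unspecified.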

In this setting, differences between researchers’ analyses would not normally be described as researchers answering the same question differently. Nor would we infer that analysts who focus on disadvantaged students are “biased” toward finding larger effects, or that those estimating population averages are distorting inference. We would recognise instead that the original prompt was under-specified, and that researchers made reasonable—if normatively loaded—decisions about which policy effect should be evaluated. B&B explicitly acknowledge this problem in their own work, writing: “[a]lthough it would be of interest to conduct a study of exactly how researchers end up using a specific ‘preferred’ specification, the experimental data do not allow examination of this crucial question” (p. 5). Even with this insight, however, they persist with the fiction that the researchers were indeed answering the same question, treating two different “preferred specifications” as if they answer the same question. It would be like our educationalists treating an analysis of outcomes for children from socio-economically deprived families as if it answered the same question as an analysis that included all family types.

B&B’s immigration experiment goes a step further, and in doing so introduces an additional complication. Participant-researchers’ prior policy positions on immigration were elicited in advance of their data analysis, and B&B then used these priors as an organising variable in their analysis of the participant-researchers.

Imagine a parallel design in the education case. Before analysing the data, researchers are asked whether they believe differences in educational outcome are primarily driven by school resources or by family deprivation. Their subsequent modelling choices—whether to focus on disadvantaged pupils, whether to emphasise average effects, whether to model strong heterogeneity—are then correlated with these priors. Such correlations would be unsurprising. If you think disadvantage is more important than school resources to student outcomes, you may well focus your analysis on students from deprived backgrounds. It would be a mistake, however, to conclude that researchers with strong views are biasing results, rather than pursuing different, defensible conceptions of the policy problem.

Once prior beliefs are foregrounded in this way, a basic ambiguity arises. Are we observing ideologically distorted inferences over the same shared question, or systematic differences in the questions being addressed given an under-specified prompt? Without agreement on what effect the analysis is meant to capture, those two interpretations cannot be disentangled. Conditioning on ideology (as B&B did) therefore risks converting a problem of an under-specified prompt into a story about ideologically biased reasoning. This critique does not deny that motivated reasoning exists, or that B&B’s research-participants were engaged in it. They simply do not show it, and the alternative explanation is more parsimonious.

The problems with the B&B paper are compounded when they attempt to measure “research quality” through peer evaluations. Researcher-participants in the experiment are asked to assess the quality of one another’s modelling strategies, introducing a second and distinct issue. The evaluation process is confounded by the distribution of views within the researcher-participant pool.

To see this, return again to the education example. Suppose researchers’ views about the importance of family deprivation for educational outcomes are normally distributed, with most clustered around a moderate position and fewer at the extremes. A randomly selected researcher asked to evaluate another randomly selected researcher will, with high probability, be paired with someone holding broadly similar views (around the middle of the distribution). In such cases, the modelling choices are likely to appear reasonable and well motivated, and to receive high quality scores. The evaluation implicitly invites the following reasoning: “you’re doing something similar to what I was doing, and I was doing high quality research, therefore you must be doing high quality research as well”.

By contrast, models produced by researchers in the tails of the distribution will more often be evaluated by researchers further away from their ideological view. Those models may be judged as poorly framed or unbalanced—not because they violate statistical standards, but because they depart from the modal conception of what the broadly framed question is about. Under these conditions, lower average quality scores for researchers with more extreme priors may reflect distance from the dominant framing, not inferior analytical practice. B&B, however, argued the results show that being ideologically in the middle produced higher quality research.
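The confound can be demonstrated with another toy simulation, under loudly stated assumptions: every researcher has identical underlying analytical quality, and an evaluator scores a peer purely by ideological proximity. Moderates then receive higher average scores for purely geometric reasons, because most evaluators sit near the middle of the distribution.

```python
import random

rng = random.Random(1)

# Toy model: each evaluator has an ideological position drawn from a
# standard normal distribution. Authors have *identical* quality by
# construction; a score reflects only evaluator-author proximity.
def score(evaluator_pos, author_pos):
    return -abs(evaluator_pos - author_pos)  # closer => higher score

evaluators = [rng.gauss(0, 1) for _ in range(500)]

def mean_peer_score(author_pos):
    return sum(score(e, author_pos) for e in evaluators) / len(evaluators)

moderate = mean_peer_score(0.0)  # author at the centre
extreme = mean_peer_score(2.5)   # author in the tail

print(f"moderate author: {moderate:.2f}")
print(f"extreme author:  {extreme:.2f}")
# The moderate author scores higher although both are, by
# construction, equally good analysts.
```

Nothing in the simulation distinguishes good from bad research; the score gap is produced entirely by where the author sits relative to the pool of evaluators.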

The issue here is not bias but design. When both peer reviewers and reviewees are drawn from the same population, and when quality is assessed without a fixed external benchmark for what counts as a good answer to the question, peer scores inevitably track conformity to the field’s modal worldview. Interpreting these scores as evidence that ideology degrades research quality is wrong.

B&B’s paper is useful. It shows that ideological commitments are associated with the questions that researchers answer. Strictly, that is as far as it goes: researchers answer the questions they think are important. This small, accurate interpretation is not as impressive a finding as “ideology drives interpretation”, but B&B’s research is most valuable where it is most restrained. The further it moves from the firm ground of describing correlations in researchers’ modelling choices towards the quicksand of diagnosing ideological distortion of inference, the worse it gets. What they present as evidence of bias is more reasonably understood as evidence that their framing question was never well defined. Through its narrative style, and notwithstanding quiet abjurations against causal inference, the paper invites the conclusion that researchers working on divisive, politically salient topics simply find what their ideologies lead them to find. Taken at face value, it licenses distrust of empirical research on contested policy questions.

 

On becoming a decolonial scholar

I have observed some early, tentative steps of young academics to become world-class decolonial scholars in global health. This is a rich and rewarding area of endeavour that has real potential to launch a career without the baggage of narrow disciplinary boundaries, rigid methodological commitments, or premature demands for epistemic closure. When approached carefully, decolonial scholarship allows emerging researchers to engage critically with power, history, and knowledge while retaining considerable flexibility in analytic approach. What follows is offered as practical guidance for those who wish to navigate this space with confidence and coherence.

Decolonising global health has become a central ethical orientation for contemporary scholarship in the field. For early-career researchers, the challenge is not whether to adopt a decolonial stance, but how to do so convincingly within existing academic norms. You do not want a piece buried in the Malawi Medical Journal when global recognition can be found in The Lancet, PLOS, or BMJ. This brief guide offers practical advice on positioning oneself as a decolonising scholar, drawing on common techniques that are widely recognised as markers of both epistemic and moral alignment.

A successful decolonial paper begins with the scholar rather than the question. Reflexive positioning statements have become an essential opening move, allowing authors to locate themselves within global hierarchies of power, privilege, and complicity. Personal proximity to marginalisation is an asset. These declarations are most effective when they are candid yet indeterminate, signalling awareness without constraining interpretation. Once positionality has been established, subsequent analytic choices are more readily received as ethically grounded rather than methodologically contingent.

Research problems should then be framed in historical or structural terms rather than as empirical hypotheses. Questions that invite falsification risk reintroducing evaluative standards associated with dominant epistemic traditions linked to globalised practice. By contrast, concepts such as colonial residue, epistemic silencing, or inherited injustice resist closure and encourage moral elaboration. Where findings introduce ambiguity or contradiction, this can be interpreted not as analytic weakness but as evidence of the complexity produced by colonial histories themselves.

Within this framing, epistemic injustice can be treated as an outcome rather than a proposition requiring demonstration. The presence of asymmetry—between disciplines, regions, or forms of expertise—may be taken as sufficient indication of harm. Distinguishing between the unjust exclusion of valid insight and the justified rejection of erroneous claims is rarely necessary and may inadvertently reinscribe colonial distinctions between knowledge and belief. Moral recognition, once granted, does much of the epistemic work.

Lived experience occupies a privileged place in this literature and should be elevated accordingly. Personal and communal narratives can be used generously as data, though care should be taken to avoid subjecting them to processes such as validation, triangulation, or comparative assessment. Such techniques imply the possibility of error, which sits uneasily with commitments to epistemic plurality. Where accounts conflict, the tension may be presented as evidence of multiple ways of knowing rather than as a problem requiring resolution.

Ontological language offers particular flexibility. Early declaration of commitment to multiple ontologies allows scholars to accommodate divergent claims without adjudication. Later, when universal commitments are invoked—such as equity, justice, or health for all—these can be treated as ethical aspirations rather than propositions dependent on a shared reality. The absence of an explicit bridge between ontological plurality and universal goals rarely attracts critical scrutiny.

Power should be rendered visible throughout the paper, though preferably without becoming too specific. Abstractions such as “Western science”, “biomedicine”, or “the Global North” serve as effective explanatory devices while minimising the risk of implicating proximate institutions, funding structures, or professional incentives. Authorship practices, by contrast, provide a concrete and manageable site for decolonial intervention, often with greater symbolic return than methodological reform.

Papers should conclude with a call for transformation that exceeds immediate implementation. Appeals to reimagining, unsettling, or dismantling signal seriousness of intent, while the absence of operational detail preserves the moral horizon of the work. Evaluation frameworks, metrics, and timelines may be deferred as future tasks, once the appropriate epistemic shift has been achieved.

Finally, dissemination matters. Publishing in high-impact international journals ensures that critiques of epistemic dominance reach those best positioned to recognise them. Should access be restricted by paywalls, a brief acknowledgement of the irony is sufficient to demonstrate reflexive awareness.

In this way, decolonising global health can be practised as a scholarly orientation that aligns ethical seriousness with professional viability. The goal is not to resolve uncertainty or to determine what works, but to occupy the correct stance toward history and power. When that stance is convincingly performed, the work will speak for itself.

A Crime Boss is not a force for good

When US forces kidnapped Nicolás Maduro in Caracas last week they acted illegally. They broke multiple international laws. The President of the United States publicly declared that he cannot be held to account. He is not constrained by the law, he said; he is (un)constrained by his personal (im)morality.

There is no doubt that Maduro was a brutal and repressive dictator, and a majority of the people of Venezuela wanted democratic change. They had voted for it in 2024. Did Donald Trump and the United States act morally in removing this man from power?

Consider three scenarios:

Scenario 1: An honest passerby sees a thug beating an elderly person. She tackles the thug and saves the victim.

Scenario 2: A Mafia Boss sees the same assault. He notices the thug’s expensive gold bracelet, tackles him, steals the bracelet, and the elderly person is saved.

Scenario 3: An honest passerby witnesses the assault but is too frightened to intervene. She calls a known Mafia Boss for help. He tackles the thug, steals the bracelet, and the elderly person is saved.

Only Scenario 1 deserves praise. The passerby acts from virtuous motives and achieves a good outcome. But what of the Mafia Boss?

In Scenario 2, he performs a superficially right action (stopping an assault) but for an entirely immoral purpose (theft). The victim benefits, but this is incidental to the Mafia Boss’s criminal purpose. Most moral traditions recognise this distinction. We praise people for their character and intentions, not merely for producing beneficial side effects. A surgeon who saves a patient primarily to steal their jewellery hasn’t acted virtuously, even though the patient survives.

The Mafia Boss might deserve some credit for not making things worse—he could have ignored the victim or joined the assault. But “not being as bad as possible” isn’t praiseworthy. At most, we might say: “How fortunate his greed led him to intervene”—but this concerns lucky consequences, not moral worth.

Scenario 3 adds complexity. The passerby achieves a good outcome she couldn’t manage alone, but she’s complicit in the theft by knowingly involving a criminal. This is the classic “dirty hands” dilemma: when achieving good outcomes requires morally tainted means.

Now apply this to Venezuela.

We are in Scenario 2 (possibly Scenario 3) territory. Trump’s own words reveal his motives with startling clarity. “We’re going to be using oil, and we’re going to be taking oil”, he told the New York Times. “We will rebuild it in a very profitable way”. He repeatedly emphasised making money for the United States, settling old scores over nationalisation (“they took the oil from us years ago”), and has already begun negotiating with American oil executives.

The pattern of decisions confirms this. Rather than recognising María Corina Machado—the Nobel Peace Prize-winning opposition leader whose party won Venezuela’s 2024 election—Trump works with Maduro’s former vice president, a regime loyalist. Why? Because “she’s essentially willing to do what we think is necessary to make Venezuela great again,” Trump said, meaning granting American companies renewed access to Venezuela’s oil industry. There’s no timeline for elections, no commitment to Venezuelan self-governance. “Only time will tell”, Trump said when asked how long US control would last. “I would say much longer” than a year.

The stated justifications—drugs, migration, terrorism—don’t withstand scrutiny. Venezuela accounts for minimal drug trafficking to the US. The intervention followed months of pressure focused squarely on oil: sanctions, blockades, and seizing tankers.

This is Scenario 2. An authoritarian leader is removed—arguably beneficial for many Venezuelans—but primarily to facilitate resource extraction. The relief for Venezuelans is incidental to the core objective.

The Mafia Boss deserves no praise for saving the elderly person whilst stealing their bracelet. He should be prosecuted for the crime he committed. Donald Trump should be prosecuted for his crimes—Congress has the power.