Morality as a problem-solving strategy

My favourite (and in fact the only) book on evolutionary biology that I have read is Joseph Henrich’s The Secret of Our Success (2016). At the risk of degrading both philosophy and science, I’ll try to connect Henrich’s ideas to my previous discussion of MacIntyre and Nietzsche.

Henrich is notable in evolutionary biology for identifying and theorising something he calls “gene-culture coevolution.” Genes shape cultural development, and culture shapes genetic development. This process is unique to human beings. Our most important genetic adaptations are those which facilitate social learning and hence culture. Culture can be thought of as a bundle of solutions to recurring problems in human life. The capacity for immensely sophisticated culture is what makes human beings so smart and distinctive. That’s Henrich’s main claim.

In general, we can talk about several ways that organisms solve problems:

  • Individual trial and error – individuals use their brains to reason and learn from experience.
  • Cultural evolution – generations within a species experiment and learn from experience. This requires storage and transmission mechanisms for information about useful solutions to problems, such as language in various forms (e.g., oral traditions, writing) and the capacity for social learning (e.g., identification of ‘wise’ or successful individuals whose behaviours can be copied).
  • Genetic evolution – natural selection is a problem-solving mechanism, where genetic adaptations represent ‘answers’ to environmental problems (instincts, hair distribution, pigmentation, beak type, etc.)

If there were no environmental change, individual problem-solving would be redundant and genetic evolution could do all the work. If there were massive, rapid environmental change, then natural selection would be too slow to help, and individual learning would have to do all the work (obviously you would need to get the capacity for such learning off the ground in the first place somehow).

Human beings faced all kinds of change in our evolution. At first, we confronted a climatically highly unstable world, which favoured the adaptation of large brains for individual learning. But this world later stabilized, confronting us with a modest amount of change at the generational scale, rather than the lifetime or species scale. Such change was too fast for genetic evolution to respond to, but too slow for individuals to notice and effectively navigate. Consider cyclical droughts: an individual will probably die when faced with a once-in-four-generations dry spell, because she doesn’t know how to handle it; she has no individual experience of such situations. But if she can learn from her great-grandparents through social learning, then she may do very well. This general situation of mid-scale change favoured adaptations for social learning. Thus, human beings evolved to profit from all three levels of learning, representing three lines of information feeding into our problem-solving capacities.

If we consider morality to be a system of ‘rules’ that solve certain problems of social cooperation, then Nietzsche’s free spirit represents the individual strategy for coming up with such solutions, whereas MacIntyre’s Aristotelian community represents the cultural evolutionary strategy. I think it is clear that both are valuable, and the question is one of balance. Perhaps as the pace of change accelerates, an individual strategy such as the one Nietzsche’s free spirit adopts begins to make more sense, since we then enter waters previous generations have not charted. Consider: what are the long-term individual and social impacts of raising children on devices? Our store of generational wisdom may run dry on this point. Of course, there will inevitably be some continuity, such that social learning will remain important to some degree. And as I have said, Nietzsche strongly endorses social learning for society as a whole. Clean-slate innovation based on the reasoning and experience of a single generation seems foolhardy. This is why some conservatives balk at demands for immediate and deep changes in sexual mores and gender systems, for instance.

But consider a world in which the pace and scale of change really did render prior generational learning irrelevant. What would happen then? Either anarchic individualism would take over, or we would have to rapidly generate and coordinate upon intra-generationally decided solutions to newly encountered problems on an ad-hoc basis. The latter is still a social strategy – pooling brainpower – but it is not the intergenerational social learning that Henrich described, it seems to me.

Putting that aside – even if society as a whole retains social learning as its primary problem-solving strategy, should the few depart from it? Here we must consider the question as Nietzsche did: as an existential, not primarily a social, question. It seems perfectly reasonable to me to embrace both “intellectual conscience” as a precondition for self-formation and the wisdom of ages. The great spirit is precisely one with insight into the tradition, its logic, its history, and the fundamental human drives that animate it, such as the will to power. These individuals can be informed by it, so long as they retain a distance from it – “great spirits are sceptics.” As Nietzsche writes, we must, if we are prudent and noble, be cautious when it comes to our ‘conscience’ and the objects of faith that our culture presents to us. Instead: “Great passion uses convictions and uses them up, it does not subordinate itself to them.” The poor of spirit, meanwhile, allow themselves to become a means to some ‘transcendent’ conviction they place like an idol before them – for in their weakness they grasp after “faith, for some unconditional yes or no.” Faith and conviction use you up, rather than the reverse, and this is the disappointing expression of “self-abnegation, of self-alienation” among the mediocre – according to Nietzsche.

All Nietzsche quotes are from The Antichrist, which is the book of his that I have read most recently.

MacIntyre’s neo-Aristotelian moral objectivism vs. Nietzsche

It’s the old question: is morality objective or subjective? And regardless of whether it is or not, what happens to society if we treat morality as objective or as subjective?

In After Virtue – an excruciatingly meandering tome – MacIntyre offers a defence of objective morality and its necessity to a social order in which individuals may lead happy lives. In chapter 18, he pits his objectivist neo-Aristotelian doctrine against a most fiery opponent – the great hammer-wielding philosopher-poet of the 19th century himself, Friedrich Nietzsche.

MacIntyre agrees with Nietzsche that modern morality is bankrupt. It is incoherent and riven with intractable disagreements, its scattered shards adopted by the powerful as weapons wielded for nefarious ends. But whereas Nietzsche insists we should therefore move beyond morality – that we should give up on the tradition(s) we have inherited – MacIntyre thinks we should instead repair it. If “modern morality is intelligible only as a set of fragmented survivals from that tradition”, he writes, we should expect it to fail, but we can also have hope in its restoration.

Who is right?

First we must define moral objectivism. As I have previously described, MacIntyre’s trick is to pull objective morality out of the hat of social reality by saying that what is good and virtuous for a person is determined by their role in the narrative structure of a community – a role that is intimately bound up with that community’s partially constitutive practices: scholarship, war, farming, athletic competition, marriage, etc. He thinks that societies really do have practices and narratives – that poets, bards and film directors do not create narratives, they (more or less) describe them; that lawyers, priests and writers of manuals and rules do not invent rules and practices, they (more or less) codify them. Thus, MacIntyre substitutes the social for the biological in his reformulation of Aristotelian teleology.

Of course, change occurs through the interaction between these processes – in the negotiation of life and reflection upon it. But the point is that meaning and value are not possible independent of the wider community and the way it is inherently structured through narratives which contain goals, relationships, basic needs and so on. Meaning and value cannot be created out of whole cloth, spun like a magic carpet from within the workshop of any individual soul, no matter how mighty and ingenious it happens to be.

Nietzsche is the champion of the “free spirit,” the individual who, through “intellectual conscience,” is able to distance himself from and scrutinize his own mere “conscience” – i.e., the voice of our “second nature,” our inherited ideology – and thus scrutinize his own society and its traditions. To do this is to exert true moral agency, and it is only from this position – liberated from all fixed points – that we may flourish, that we may lead lives of integrity, vitality, and purpose, lives that overflow with energy because not a second of them is wasted in pity for oneself or others, angst over the future, tortured rumination over the past, resentment of others, hatred of life, fighting instinct, or denying or reviling the cruelty and indifference of reality… For Nietzsche, we ought, if able, to confect our own meaning and values, precisely in the manner MacIntyre denied we are able.

MacIntyre has several objections to Nietzsche.

First, he correctly notes that it’s hard to be a Nietzschean, especially in the modern world. To be sure, it is not for the faint of spirit!

“The most spiritual people, being the strongest, find their happiness where other people would find their downfall: in labyrinths, in harshness towards themselves and towards others, in trials; they take pleasure in self-overcoming: asceticism is their nature, requirement, instinct. They see difficult tasks as a privilege, they relax by playing with burdens that would crush other people… knowledge – a form of asceticism…” (The Antichrist, §57)

But the Nietzschean path is also ‘difficult’ because it offends so deeply our democratic, Christian instincts in denying that everyone has moral agency and the capacity to be a free spirit. This seems to be MacIntyre’s real objection, because Nietzsche is not saying that we should create a society which eschews tradition. Indeed, Nietzsche is in a deep sense a conservative (ironically, an Aristotelian aristocratic conservative). Nietzsche had read Machiavelli and agreed with him (no surprise, as he loved the Renaissance as a temporary triumph of all noble classical values over Christendom) that rulers must still deploy religion and nationalism as galvanising forces to uplift and maintain order within a citizenry made up of ‘bound spirits’ – of mediocre workers and guardians. (Nietzsche in some passages sounds very much like a Platonist!) Nietzsche, however, has no contempt for these bound spirits, and thinks they can lead happy enough lives within their own frame. But he thinks they must be fed a noble or “holy lie” that generates “a perfect automatism of the instinct” so that a society may become great and powerful, capable of producing the great spirits that rise above it in their esoteric wisdom, being “far-sighted and hind-sighted.”

MacIntyre would still not be happy with this picture, for a reason I argue is confused. The great spirits which are in a sense the purpose, the ultimate product, of a Nietzschean caste-order are impossible, for MacIntyre, for there is nowhere to ‘transcend’ to. “Nietzschean man, the Übermensch, the man who transcends . . . is wanting in respect of both relationships and activities,” MacIntyre writes (p. 257). He cannot find his good anywhere in the social world; he can only find it in himself – namely, in that part of himself which “dictates his own new law and his own new table of the virtues.” The reason why he cannot find any authoritative good in the social world is that he is so asocial…[1] But this internally created ‘authoritative good’ is empty and false. The free spirit believes that all morality (i.e., the ruling ideology of his own society as well as his own morality) is arbitrary and fundamentally expresses the will to power. (Yes, even Christianity is animated by this same ‘base’ instinct – that’s core to Nietzsche’s allegation of Christian hypocrisy.) The free spirit is honest with himself about all this – he is not corrupted by such lies and self-deceptions. Guided by such an understanding of moral reality, MacIntyre argues, he will have impoverished dealings with others, capable only of asserting his own authority, unleashing his own instincts, corralled and organised under a unifying idea, some style that he has given to the assemblage of biographical materials fate has granted him. In such dealings, there is no shared deference to a ‘third’ standard (the Big Other; tradition). Such deference would be weakness, inauthenticity.

According to MacIntyre, therefore, the Übermensch is constitutionally unable to be virtuous, because he cannot bow down to the practices, traditions and all the shared moral and intellectual property that ground the goods, laws and virtues that make true happiness possible. He excludes himself from the communities of shared goods where one learns, like an apprentice, what is good and virtuous. “Nietzschean greatness” is constituted by “moral solipsism.”

This seems overstated. The Übermensch is not necessarily asocial because he does not defer to some common objective standard. He can still participate in society and its practices. Indeed, I would wager that he would be valued within them as a source of invention and novelty. Yes, he is an unknown quantity, a danger. But many such people are accepted and form relationships nonetheless. He may therefore still flourish as a social being. (I will assume that most human beings require such relationships, though to different degrees, and that if one is fated for isolation, one may still be great and flourish in the Nietzschean sense, provided one has the strength for it.)

Thus I don’t agree that Nietzsche offers merely another guise for “modern liberal individualism” – the moral framework that MacIntyre rightly notes Nietzsche so ardently opposes (p. 259), and that Marx is also accused of advocating when he writes about “a society of free individuals.”[2]

The real objection is elitism. And if Nietzsche is wrong and the many can also be free spirits, how would such a society operate? Elitism seems to be a necessary empirical premise for Nietzsche’s moral prescription, since, adopted universally, the social order that makes any greatness possible would arguably dissolve.


[1] The Will to Power: “A great man – a man whom nature has constructed and invented in the grand style – what is he? … If he cannot lead, he goes alone; then it can happen that he may snarl at some things he meets on the way … he wants no ‘sympathetic’ heart, but servants, tools; in his intercourse with men he is always intent on making something out of them. He knows he is incommunicable: he finds it tasteless to be familiar; and when one thinks he is, he usually is not. When not speaking to himself, he wears a mask. He rather lies than tells the truth: it requires more spirit and will. There is a solitude within him that is inaccessible to praise or blame, his own justice that is beyond appeal.”

[2] “In the first chapter of Capital when Marx characterizes what it will be like ‘when the practical relations of everyday life offer to man none but perfectly intelligible and reasonable relations’ what he pictures is ‘a community of free individuals’ who have all freely agreed to their common ownership of the means of production and to various norms of production and distribution. This free individual is described by Marx as a socialized Robinson Crusoe; but on what basis he enters into his free association with others Marx does not tell us. At this key point in Marxism there is a lacuna which no later Marxist has adequately supplied. It is unsurprising that abstract moral principle and utility have in fact been the principles of association which Marxists have appealed to, and that in their practice Marxists have exemplified precisely the kind of moral attitude which they condemn in others as ideological.”

Don’t bash escapism

The more pain you experience, the more you want to escape, other things equal. By ‘an escape’ I mean a distraction from the pain. An escape typically involves an external stimulus, though it need not (consider self-generated fantasy). To be distracting, the experience or activity needs to be sufficiently engaging; it needs to command attention in order to temporarily occlude the background pattern of pain and one’s rumination upon it.

Lots of people are in various forms of pain, i.e., states of physical and emotional discomfort. People in pain want entertainments and hobbies that are maximally absorbing. Businesses know this, and unabashedly advertise products and content as ‘highly addictive’, bingeable, etc.

Is this a bad thing?

Well, there are different ways to escape. I have been referring to ‘engaging’ activities in order to remain neutral regarding the value and meaning of such activities. But I’d distinguish between two types: flow and hypnosis. Meaningful, engaging activities produce flow, while meaningless but engaging activities are merely hypnotic. I realise that I am assuming a lot here. In particular, I am assuming a tight, though contingent, connection between what is valuable and meaningful and what produces flow.

What are these two modes of escape? Regarding ‘flow’, I mean what everybody in psychology means: “Flow states are defined as the optimal experience of being fully engaged in an activity and have been used throughout the field of positive psychology as a theoretical framework for intrinsic enjoyment.” Flow is typically active and appropriately challenging. Think of making art or music, playing sports, solving puzzles, surfing, reading a good book, etc. Hypnosis, meanwhile, is typically passive and non-challenging. Think of eating, watching pornography or entertainment TV shows, some forms of exercise, taking drugs, playing certain kinds of computer games, etc.

I don’t want to be too severe on this point. I think some hypnosis is fine. Indeed, I think hypnotic activities can be meaningful. Just about anything can be meaningful or valuable in some contexts, at the right dose. As I said, the meaning-value-flow nexus is contingent, as is the meaningless-trivial-hypnosis nexus. I can hypnotize myself quite easily when intoxicated or simply very tired by listening to low-energy music and watching something rhythmic or playing a repetitive computer game. I can find that sort of thing quite soothing and somehow emotionally meaningful. But it’s still a different thing from flow, and its value and meaning are more fragile for me.

Returning to escapism: ideally, you escape into flow. When you escape into flow, that is rarely a bad thing. It is only pro tanto bad when it inhibits you from addressing your pain, whether practically (fixing or ameliorating it) or spiritually (accepting, sublimating, or redeeming it). This may be because escapism becomes excessive. This badness is, however, strongly mitigated when the activity is meaningful and valuable to you, so that even if you don’t do anything about your pain, at least you’ve increased value in your life despite it. In contrast, little compensating value is conferred by hypnosis.

In any case, if you have done what you can, practically and spiritually, to address your pain, I see nothing wrong with escapism, including in the form of hypnosis. Sometimes life is hard and the best we can do is try to forget about it.

The ethics of authenticity

Charles Taylor’s The Ethics of Authenticity (1992) is a characteristically ambitious work of cultural criticism, history and political philosophy rolled up with something like existential phenomenology. In it, he outlines what he refers to as the three “malaises of modernity”, i.e., contemporary sources of dissatisfaction, anxiety and potential disaster:

  1. Individualism
  2. Overbearing instrumental reason, and
  3. Political enervation/the loss of political freedom.

As the book’s title suggests, (1) is probably the key to understanding the rest, and concerns what he calls the “ethic of authenticity”. It is this source of malaise that I want to focus on in this post.

In my reading, ‘individualism’ refers to several things: the romantic commitment to self-realisation (an existential ideal), the rationalist ideal of “disengaged reason” (an epistemic ideal)[1], the liberal notion of political autonomy (a political ideal) and a narrow focus on private concerns that results in atomisation. These components are interconnected and Taylor traces the way in which they have emerged together over the past three centuries.

The broad theme here is that the individual is dislodged from the larger contexts of her life: natural, religious, cultural, social, political, metaphysical. These contexts come together into what I will call an ‘Order of Things’ or simply an ‘Order’ – an integrated scheme in which the world and everything in it, including oneself, ‘make sense’. In premodern times, individuals found their identity, meaning and values in and through this Order. Even the artist – whom today we regard as the authentic individual par excellence – was merely a conduit for transmitting truths inherent in the universe. Similarly, in many ancient and contemporary cultures ‘the philosopher’ is a role that is less about discovering what you think than about embodying an honoured archetype – to be like Confucius, Laozi, or Socrates. But today, we are all tasked with creating our own meanings, values and projects as we see fit. We are not only permitted, but encouraged, to look within, rather than outside of ourselves – to the heavens, to the wise, to Nature – for deciding what is important in life and what path we should pursue.

This attitude is then enshrined in political liberalism (individual rights preserve self-determination), and in particular in the liberalism of neutrality, which asserts that public discussion and policy ought to eschew questions of meaning, value and culture as far as possible, leaving these to private choice.

Social conservative critics – e.g., Allan Bloom and Christopher Lasch – wish to dispense with this individualistic paradigm altogether, seeing in it nothing but shallow narcissism and moral relativism antithetical to social responsibility, community and upright values. Taylor calls these critics the ‘knockers’, and a repeated point he makes with respect to both individualism and instrumental reason is that we must reject both the knockers and the ‘boosters’ who uncritically embrace them. Instead, he urges us to recover what is most precious in these ideals and carefully distinguish them from their degenerate contemporary forms. For he agrees that there is indeed something malodorous and destructive that has taken root in our culture – something various 19th-century authors warned us about. See for instance Nietzsche’s portrayal of “the last men”, Tocqueville’s observations regarding the vulgar mediocrity of American egalitarianism, and Kierkegaard’s contempt for the bourgeois loss of passion and true faith.

A nice metaphor Taylor uses here is that of ‘moral horizons’. Once loyalty and respect for an Order larger than ourselves have gone (nature, God, tradition), our sights are lowered to encompass only our individual desires and private concerns. The idea is then that the scope of our aspirations and our sense of their significance are correspondingly diminished. Instead of there being an ideal ‘out there’, always just over the horizon, that we must eternally strive after – i.e., to both understand and embody more completely in our lives – the ideal is whatever we want it to be. Nothing calls to us from that ‘beyond’. There is nothing ultimately beyond our own preferences, interests and desires, normatively speaking. (Call this thesis ‘broad emotivism’.) If brute passions are the source of value, then how can they themselves be critiqued? ‘Do what you feel’ is where this view seems to lead. Furthermore, for most of us these desires will end up being petty and small.

The implicit claim is that when broad emotivism takes hold, it not only changes our conception of what makes things significant or valuable; it changes what we find significant or valuable. This is not because humans are somehow by nature petty and small-minded. Rather, once all value is seen as ultimately grounded in individual desires, our sense of what is important shrinks down to the individual. We become self-obsessed, grandiose, narcissistic, petty, small-minded, complacent.

Two implications follow. The first is that broad emotivism goes along with the loss of heroism and all that is glorious. There will be nothing worth dying or sacrificing for. True sacrifice is sacrifice for something larger than or distinct from oneself. It is to bear a burden so that others may benefit, or so that some higher cause might prevail. Sacrificing for personal glory, for instance, is hardly sacrifice at all.

The second, I would add, is that we see a narrowing and closing down of aspiration. If the buck stops with your desires, then there’s not a lot of deep work to do, either in justifying your values or in living up to them. I say ‘deep’ because although lots of labour may be involved in becoming super-ripped or wealthy, or in attaining academic status, political power, sexual success, etc., no work is done on the soul, or on reflecting on what transformations of your basic coordinates are called for. I detect a lot of this in our culture, particularly in self-help. The project of reaching forever higher, of approaching that ever-receding horizon, is lost, and that’s a crying shame. There’s something humbling about that quest. “I am this way and that’s that” breeds the worst, most sterile kind of complacency (in contrast with the deeper contentment that comes from an acceptance of death and of the limitations that must ultimately thwart the quest for the horizon, but do not thereby invalidate it or imply that one must abandon it).

Taylor’s intervention here is to problematize the simplistic dichotomy of subjective value vs. objective value, and the corresponding dichotomy of narcissism vs. largeness of spirit. More precisely, he thinks this view needs nuancing by returning to the original ideal. There are two aspects of this nuance: (1) meaning is socially constructed, and thus not purely subjective if that means individualistically subjective, and (2) meaning can still be the output of an encounter with something real and external. For example, even the romantic artist is not creating things ex nihilo; they are rather giving their ‘reading’ of Nature, as mediated through their particular, even unique, sensibility.

A better way

I take it that Taylor’s restoration of the ideal of authenticity involves allowing individuals to decide upon (‘authenticate’) the fundamentals of the good life for themselves, but doing so in a way that is not in the least bit squeamish about or hostile towards higher and external notions. In broad terms, his project seems to be to re-embed self-discovery in the larger contexts of life mentioned earlier. In particular, we must draw upon others, both the living and the dead, in figuring out what is significant and important in this world, and what reality is like. We must never be so conceited as to think we can figure it out alone. Our desires – not even our deepest intuitions – are never the end of the story insofar as this process is really an ongoing dialogue with others and with the world. For example, let’s say I want to have a lot of money and flashy status goods. Should I be happy to say ‘I’m a materialist – I like cash and shiny things’? Not necessarily. Who you are authentically might be something other than that. How? Well, you need to look not only into your particular ‘soul’, but also outward – to others and the larger community you are part of. ‘Is that the way I was brought up? Is that compatible with – does it aid or detract from – my role as a [dutiful son, patriot, citizen, etc.]?’ But it could also be evaluated relative to other external standards – e.g., divine requirements, Aristotelian teleology (is materialism something that enables me to realise my objective purpose – a purpose that is there regardless of whether I ‘feel’ it?), etc. I allow for my identity to form a part of a larger whole: what I fundamentally am involves being [a servant of God, the universe becoming aware of itself, a cell in a social organism, a metabolic component of an ecosystem…].

And to be clear, it’s not just that we each now get to individually ‘select’ our preferred Order. Firstly, it is implausible that we can or should ‘identify’ with an Order at will, arbitrarily. Although temperament will come into it – especially when it comes to our metaphysics, as pragmatists like William James observed – we will be reacting to and negotiating with others in a particular set of contexts. It is precisely the ‘pure optionality’ of value that Taylor finds untenable. We can’t simply decide what’s significant; we pursue things because we find them significant. And we find them significant for various reasons. If we have no reasons to give others (and here is where dialogue is crucial, since these reasons can change as we talk), then we are unjustified in our attributions of significance, and it is at least psychologically implausible that we can go on thinking this is what’s important in that circumstance.

But secondly, we are not just finding our place in a given Order conceived of in rigid premodern terms. Taylor retains the notion of liberty and its connection with authenticity, and liberty is incompatible with the kind of rigid premodern assignment of roles and purposes that preceded the age of authenticity. Instead, somehow, Taylor thinks we – like the romantic poet – find our particular way of conceiving of, and inhabiting, that Order. This suppleness and creativity wards off the oppressive and stifling aspects of the premodern era that are rightly objected to. Just because you have certain sex organs, for instance, doesn’t mean you must conform to some gender archetype imposed by your society – it doesn’t dominate your Essence in the way it once might have. More generally, we have improved in creating a culture of acceptance of variation outside the norm, which is beneficial psychologically for ‘deviant’ individuals as well as for the larger society, which can benefit from their eccentricity.

With that said – and although Taylor doesn’t mention it – I think it’s important to note that there are costs associated with this flexibility and proliferation of meanings and identities. Not only does it impose on us far more choices – fundamentally important theoretical, practical, aesthetic and axiological choices – but it creates coordination problems that are particularly thorny to navigate for those who are more autistic. Interestingly, this is one of the ways in which the modern world is not, contrary to the general line of thinking, congenial to the ASD cognitive style. It would be much easier for those with ASD – and for all of us – if roles were clearly delineated and commonly understood. You would know exactly how to dress where, what various gestures meant, what the right dating protocols were, how to eat, how to be and present as a good [insert profession], and so on. Now there are infinitely many ways of being anything you care to name. There is also a danger of ‘accepting’ (and, further, celebrating) too much: of being insufficiently critical of ‘eccentricity’, deviance, and what might amount to psychopathology. One reason we might want to be wary is that, if we agree identity is dialogic, then by normalizing (or centering, or celebrating) certain ways of life, conditions, etc., we may see them proliferate. (Think of how susceptible we are to self-diagnosis; I think it’s plausible that before widespread awareness of depression, for instance, there were fewer depressives.)

So basically, Taylor doesn’t seem to have any problem with the notion of authenticity per se, only with its crude individualistic form. His criticism of this form is intellectual, moral and political. Intellectually, it gets the nature of identity wrong. There is no ‘inner truth’ to be discovered independently of the external world. Morally, and partly as a result of this false conception of independence, which leads us to screen out external elements, an individualistic authentic self is simply a narcissistic and amoral self. Politically, a society of ‘authentic individualists’ is doomed to fragmentation, decline and ultimately catastrophe.

It’s plain to see that if everyone is selfish, things aren’t going to go very well. Society will descend into anomie and atomism. Everybody will care only about their own careers, bank accounts and personal lives. They will have no sense of social duty, sacrifice or how their lives figure into a larger project or narrative. This worry has several concrete components: such a society is going to (a) be unfulfilling for those individuals themselves, (b) deepen inequalities and social injustice, (c) erode moral standards, (d) implode politically via the withering of democracy, which requires a degree of common purpose and solidarity, furnished by the nourishing fabric of civil society, and (e) destroy the natural environment that it (society) depends on.

Personal reflections

On a personal level, despite my misgivings, I still find a tough root of sympathy with the ideal of a kind of authenticity which smacks of individualism. There is a kind of lofty, free-wheeling asshole that gets top marks for authenticity. When you are too in touch with what you want, too unapologetic about it, when you follow your bliss and your creative instinct wherever it leads, society and the rest be damned… you are rightly the object of resentment and reproach. (Though I don’t want to say there’s no place in the world for such individuals). But in less extreme forms, I think a bit of this attitude can be valuable.

‘Can’t tell me nothing’ – Kanye West

Some of the best advice I got growing up related to authenticity. Think about teenagers. In large part, they find themselves by finding their ‘tribe’. But how to select the tribe? There is choice as well as circumstance and coercion involved (the coercion of cosmetic or athletic ‘membership requirements’, for example). Authenticity can be a useful corrective for the danger of losing oneself in the group – a group that may not be authentically chosen. It can be good to remember to leave a gap (a ‘critical distance’) between ourselves and others. Teenagers in particular, but all of us, may feel burdened by various expectations, pressures to conform and the judgements of peers. To recognize that one needn’t automatically regard such things as authoritative can be liberating.

‘Do what you want’ – Evanescence

I also remember being told quite young that it’s okay to express my own desires, rather than disguising them out of politeness or insecurity or some other inhibition. And I found out that to do so requires you to think about what your desires actually are – which can be harder than working out what the ‘done’ thing (the least offensive, expensive, imposing, impolitic thing) is in a particular situation. Much later, during a new-age shamanic healing session, I was given similar advice. The practitioner observed that I used the word ‘should’ more than ‘want’ whenever I was asked to make a decision about something (‘I should do this’ rather than ‘I want this’). This advice is not without its dangers, clearly. I have suffered because of it. But I also think it can be a useful corrective for certain personalities.

‘Turn your weakness into strength’ – Somebody write a song about that

Another piece of advice that strikes me as potentially useful and connected to authenticity is to ‘turn your weakness into strength’. Iris Murdoch makes the point that we can see the same things in very different ways, and this is reflected in our choices of vocabulary. When we are making judgements about others, therefore, we ought to pay close attention to the words we use (including in our own heads). Moral development is a continuous process of cultivating a certain ‘language of perception’. Well, the same point can be turned inward. Traits that you might feel ashamed of, embarrassed about, or think of as inferior – those might be the very characteristics that make you an individual, and if you embrace them, if you ‘own it’, they might make you more attractive and enable you to make your distinctive contribution to the world. You can ‘re-word’ that part of yourself. Whereas you might have been labelled a ‘lostie’ as a child – as an absent-minded, head-in-the-clouds type of person – you can redescribe that as having a sensitive, introspective or poetic nature. From there, you can explore and deepen that ethereal aspect. Don’t worry about feeling aloof or strange; indulge it, and try to express it in some interesting way. Otherwise, you might live your whole life unnecessarily spending energy pushing against that trait and never realizing its (your) potential. This advice is good insofar as there is in fact a positive aspect to any apparently unfortunate trait. And I think there usually is.

Why did we veer into this impoverished version of authenticity?

The age of authenticity and liberalism have the virtue of liberating us from oppressive social structures, but have fallen under the influence of individualism, flaccid relativism and crude emotivism (value is just about ‘what you feel’; end of story). Why? While he does acknowledge that there are material factors at work, Taylor doesn’t much emphasise or explore them. Overall, he makes it seem as though this decline is somehow the product of ignorance or an intellectual mistake. I suppose this is forgivable in a work of philosophy. And he is surely right that ideas matter to some degree. At the very least, this kind of excavation of the intellectual history and philosophy behind our malaise offers us clarity regarding our situation. And that is a valuable service because it helps us make sense of who we are and stimulates us to see how things might be different.

On the explanatory material factors, several ideas come to mind. The writer Mary Eberstadt’s account of western secularisation seems relevant. Urbanisation and industrialisation are associated with increased geographical and occupational mobility, leaving (especially young) people ‘free’ to drop the connections, responsibilities and traditions of their small rural communities and families. Families are broken up, people fall back on their own resources and must make radical new choices about how to forge new lives, and all this within the diverse and expanding possibilities of city life. Eberstadt also argues that the welfare state – itself a kind of compensation for industrialisation and the indigence it creates, people no longer being able to grow their own food – plays a role in substituting for, and thus further fracturing, the family. Her concern is with the broad decline in church attendance and religiosity, which she explains precisely in terms of family breakdown. But I think the same factors help explain the rise of individualism and perhaps its baser variants.

There’s also technology like the television, whose infiltration of every home in the world correlates with a decline in participation in politics and civic institutions as charted by Robert Putnam in Bowling Alone. Now that we can get our pleasures and entertainment from a box, there’s no need to go out (to church, clubs, pubs and town halls) or even talk to anyone except your housemates or loved one(s). This makes it harder to do political organising, because it becomes less appealing and less cool, relative to the slick, compulsive pleasures available to us whenever and however we want, at no risk (we can preserve our sense of being independent, self-sufficient, invulnerable). Strikes, for example, can even be seen as boring, old-fashioned, irrelevant, quaint, unsexy, demanding, etc. Then there’s the smartphone, which Jean Twenge, in her book iGen, blames for a decline in sex, social interaction and mental health, especially among young people, since it took off in 2010.

Finally, at a more general level, simply having material security is a great way to supercharge narcissism and individualism, since it ‘liberates’ you from the various forms of everyday inter-dependence that nourish cultures of sociality.

Where are we now?

If Taylor is right about where we landed in the 1990s, where have we gone since? And is it in the direction of Taylor’s true authenticity? Have his book and the work of other intellectuals inspired a healthier, more nuanced version of the ideal of authenticity?

If you want examples of ‘authenticity culture’ today, it’s true that they’re not hard to find. Browse the world of inspirational content on Instagram, wellness woo, or the self-help literature and you’ll encounter it. And in the mainstream, talk of self-care, prioritising one’s own happiness, banishing negative self-talk, discovering your authentic values, growth mindsets, etc., etc., is a mainstay of countless online magazine articles and listicles. The unapologetic boldness, self-assertion, self-glorification and endless quest to distinguish and ‘validate’ oneself (even the ‘bad things’ that society tells you to be ashamed of, like your mental illnesses) is evident in pop music, in the world of celebrity (although there’s a lot of gossipy moralising here, too, which might be read as the exhaust fumes of elite hyper-narcissism – the schadenfreude of the narcissistic wannabes) and in cliches like ‘You owe it to yourself’, ‘express yourself’, ‘be true to who you are’, ‘don’t apologise for being you’, ‘if you can’t love me at my worst, you don’t deserve me at my best’, and so on. The general sentiment here is that there is something original and innately given deep down that is somehow your true self. That, whatever that true self is, it’s automatically good, even if not understood by others; that shame is bad; that you are the master of your own destiny; that your life is ultimately about fulfilling your inner unique potential; that you can’t let anything burden you or hold you down, not even the needs of your own family or community (let alone the nation and its self-defence!); and so on.

But I’m not sure we thoroughly believe it anymore. Hans-Georg Moeller argues that we now find, for instance, the hippies and their naïve pursuit of authenticity endearingly goofy and doomed to ironically self-refuting conformism, as parodied in Monty Python’s Life of Brian, when the adoring crowd repeats back to its reluctant messiah Brian, “We are all individuals!” Authenticity has now morphed into what Moeller calls “profilicity”, where the self is more closely identified with a media profile we co-construct and then use as a kind of yardstick to measure our RL selves by. What’s interesting about this is that it is a kind of synthesis of authenticity and the premodern ideal (which Moeller calls ‘sincerity’), since profile-construction is a confection, but one that is strongly guided by interaction with one’s peers – their demands, expectations and tendencies to validate or invalidate aspects of one’s profile. Given the extreme self-selection the internet allows for, it’s no wonder identities are proliferating and becoming more extreme in various ways. Taylor might welcome the repoliticization of identity for some parts of the population engaged in activist online spaces, for example, except when it amounts to impotent self-branding, tribalism and signaling. We get a weird kind of self-focus, but one where the self is generated by and deeply enmeshed in a narrow social space. This would appear inimical to the formation of the broad civic spirit he hoped for in 1992.

A different kind of break is suggested by Ashley Colby in her article The end of the culture of narcissism (2022). Colby leans on the idea that narcissistic individualism is a function of the fossil-fueled abundance we have enjoyed during the industrial age that is now coming to a close, and that as material insecurity increases, our nastiness will begin to look more like BPD – in particular the paranoid obsession with our own imagined victimisation. Again, not exactly the restoration Taylor was hoping for.

Conclusion

Taylor’s book is still relevant. Authenticity is still a vital and contested way of understanding who we are, justifying liberalism (in part), and deciding how we should go about our lives. I think the fundamental idea Taylor brings out is the way that this powerful and valuable attitude has tended to accelerate our estrangement from the larger contexts that give our lives meaning. We have ‘popped out’ of the matrix of being, and we need to find a way of ‘popping back in’ without crushing ourselves.


[1] I won’t discuss this dimension here, but it is an important one. We might call it authenticity or individualism in epistemology; the requirement that each of us be able to justify everything we say, individually.  I think it is connected to, but does not fully explain authenticity in politics.

After Virtue (Chapters 10-12)

This is a continuation of a series of reading notes and reflections on Alasdair Macintyre’s After Virtue (1981). You can find summaries for all previous chapters here, here and here.

In chapter 10, Macintyre begins sketching a picture of the sort of society in which life, death, value and morality make sense. This is a society in which these things are part of a certain order of things, an objective order grounded in (or reflected by) social reality, and where – although he doesn’t explicitly draw this connection – moral argumentation can be rational, respectful and dispositive.

This society represents a kind of mythological moral and spiritual ‘home’ and he broadly identifies it as the ancient ‘heroic society’ or ‘heroic era’ (along with its immediate cultural descendants). The heroic era is a partially mythological era which, according to some literary theorists, all cultures emerge from or pass through. It is the era of great deeds, struggles and battles. Apparently some anthropologists think that many of these heroic cultures represent a sort of violent egalitarian wild west phase of cultural development. The central function of hierarchy, evolutionarily speaking, is to contain violence through deference to authority and its established order. In other words, the outcome of any given conflict is decided by dominants rather than actual violent struggle. In effect, the system avoids a lot of ‘little violence’ by compressing it into an original ‘big violence’ – i.e., the struggle which establishes the hierarchy. In the absence of established hierarchy – in the absence of any higher authority or system of arbitration – the field is wide open for violent and dramatic quests to seize one’s imagined destiny.

The stories of these struggles are transmitted orally in the form of epic poems and legends before being written down in the classical era. The classic works of drama and tragedy therefore pay homage to a “vanished heroic age” by venerating and preserving its values, social structure and sense of mission or destiny. These works are of expressly pedagogical and moral rather than ‘merely’ aesthetic import.

Though the old ways have by this time partially faded with technological, political and cultural change, the moral (and more broadly cosmic and metaphysical) order associated with those old ways still provides a “moral background to contemporary debate.”

What was this old heroic-cum-classical moral order like? Formally (as opposed to in terms of its specific content), I would summarise Macintyre’s characterisation as follows.

  • Communal.

Morality is fundamentally communal or social in nature. What is praiseworthy is what contributes to the flourishing of the community, whether in contests, battlefields or in promoting public order. Second, heroic ethics is not impersonal or abstract, applying to all peoples or cultures indiscriminately. What one should do depends wholly on who one is, and where (in what culture, in what context) one is. Morality is not something that can be thought of apart from one’s culture and relationships – as though it can come from on high and tell you how to shape your culture and relationships with some independent, transcendent authority. Talking about what you owe to individuals and groups because of your particular relationships to them is to talk ethically, and there is no way of talking ethically without talking about those relationships. In that sense, society (relationship) provides the full content of morality. Equally, morality provides the full content of society. To put it another way, morality is the soul of society – the set of understandings, meanings, duties, entitlements, stories and ideals that we might understand as morality gives society, and all the relationships it contains, shape. Morality is regulative and constitutive of society. Conversely, society is the soul of morality – evaluative and prescriptive judgements are intelligible only in terms of the relationships comprising one’s society.

  • Particular and personal.

Following on from communality, we may say that in the heroic-classical era, there is no universality in ethics (in the narrow contemporary sense nor in the broad sense of ‘what one should do with one’s life’). Ethics is particular. It is also personal in the sense that it always relates back to relationships and one’s personal commitments to a way of life with others.

  • Public.

Action is privileged over intention when it comes to the judgements cast upon individuals, and there is no sense of some deep, hidden and private self under the surface, one which does not necessarily correspond to outward behaviour, behaviour which may be somehow inauthentic. One is simply a set of deeds and therefore “[t]o judge a man is to judge his actions.”

  • Objectivity.

As Macintyre puts it, “[e]valuative questions are questions of social fact.” He also seems to suggest that these social facts have a kind of prior existence that informs narrative creations like epic poetry, rather than epic poetry informing or giving shape to social facts: “It is not just that poems and sagas narrate what happens to men and women, but that in their narrative form poems and sagas capture a form that was already present in the lives which they relate.”

The tragedians are not asserting their values and weaving meaning out of the chaos of human life – like a Nietzschean would – they are describing the world as it is.

  • Confers identity.

Morality is at the core of one’s being. But, again, it is somewhat artificial to talk about morality as a separate thing. ‘Morality’ is just the name for a particular way of life and set of relationships. So, the point here might be put in terms of ‘the relational self’: you are defined by your role in some structured social unity. This explains publicity. There is no hidden self – something free-floating and independent of one’s public, social relationships and hence morality as this grand, factive order of things. There is no external or neutral place from which to stand, judge and deliver verdicts upon one’s values, purpose, identity, and sense of meaning in life.

In terms of substance, the heroic moral order was based on the idea of virtue (or arete). But this is not ‘virtue’ in the narrow, universalist modern sense – the sense in which Christians, for example, may say that the virtues we must all aspire to are compassion, charity and humility. You must erase those familiar connotations from your mind. The heroic virtues were functionally defined and could therefore be much more wide-ranging in content: a trait is a virtue if it helps one fulfil one’s function (i.e., role or purpose). When one is virtuous, one is being an excellent type of person, i.e., is appropriately fulfilling one’s function. Thus, the virtues are particular to various functions, and are therefore particular to each individual, according to his or her function in society and how that function interacts with the situation at hand. Humility is a virtue for the ancients, appropriate in some circumstances and for some people, but so too are strength, intelligence, cunning, attractiveness, quickness, prosperity, a wry sense of humour, wit and so on – things without a clearly ‘moral’ valence in the modern sense.

  • Comprehensive, given and inescapable.

First, the givenness of this moralised identity: to a large degree, one’s rights and responsibilities are determined by features of one’s existence beyond one’s control. In the heroic and classical worlds, one is born into a position and (by and large) stays there. One has a particular nature and history (parents, class), and this sets your potential, your life goals, what you are due, what you must do to secure honour and happiness, and so on.

One of the most alien things to the modern mind about traditional ancient Greek ethics, for example, is the way in which individuals are praised and condemned on the basis of matters of pure chance. This goes beyond what we might call ‘social constructs’ or social impositions: “there are powers in the world which no one can control. Human life is invaded by passions which appear sometimes as impersonal forces, sometimes as gods.” And of course there is always death. The whole structure of one’s life is foreordained, and the great soul simply moves towards one’s fate (ultimately, death) with dignity and understanding. To resist and angst over it is folly at best.

Second, its comprehensiveness and inescapability: Macintyre explains the nature of the ancient moral order with a comparison with chess. The two are analogous insofar as in both cases the practice defines what it is to make a ‘good move’ (or be a good person or trait, etc.). In the case of chess, “it makes no sense to say, ‘That was the one and only move which would achieve checkmate, but was it the right move to make?’” There is no question here because that it is the right move is partially constitutive of chess. These rules are objective and factual – they are a descriptive matter constitutive of what the practice is. If we understand the situation and yet still pose that question, we are implicitly exiting the practice. In other words, if you doubt that it is the right move, you either don’t understand the game, or you are rejecting it.

But Macintyre thinks there is also an important disanalogy: when it comes to a whole form of life, there is no way to analyse or evaluate it from an external perspective. In the case of chess, we can wonder about whether it is good to play, or whether these particular rules, norms and conventions are good, or why we should have them. We can do this because we are not chess computers: we have many other purposes, values and games in our lives, and these provide us with various external frameworks with which to evaluate chess. For example, if we are politically committed to some form of egalitarianism that is suspicious of competition and hierarchy, we might question practices like chess that pit people against one another in zero-sum games. But when it comes to a whole form of life, what other framework can we use as a basis for evaluation? There is nothing else, he implies. Macintyre also asserts that there is no way to choose whether to ‘switch to another game’ (framework), because “all questions of choice arise within the framework.” That is, our grounds of choice will ultimately lie within the framework we are supposedly rejecting, and hence we won’t really be rejecting it at all. We won’t be escaping the framework; we will be reaffirming it.

It is hard to know exactly what Macintyre means here. First of all, it seems possible and coherent to use the resources of some framework to undermine it from within. This will be truer the more complex and diverse the framework is; the more pieces and relations are involved, the more opportunities there are for some to enter into tension with others. A determined deconstructionist could use those tensions as leverage to bring the whole thing down – or at least to radically alter it, perhaps by ‘enlarging’ some part(s) at the expense of others, or sowing confusion and doubt by pointing out terminal inconsistencies. Secondly, is the point really about breadth of cultural awareness? The less parochial one is (the more well-travelled and inquiring) the more external perspectives become available. I think Mary Midgley is right that all moral (and other) evaluation relies on comparison with something different. So, we would expect that having more examples for comparison expands the scope for critique.

It strikes me that in the background here is a question about the liberal-cosmopolitan notion of freedom. Here’s the idea. In expanding individual choice (which requires expanding our awareness of options), we eliminate the choice to be without choice. One application or entailment of this thesis is that we eliminate the choice to be a member of ‘heroic society’ or any other traditional form of life. If you try to preserve choice within a traditional context, that context no longer provides the rock-hard foundation for life and morality that makes it saleable. For it opens up a space for comparison, immediately creating a distance between you (as a critical subject evaluating your life) and your life (which becomes a ‘thing’). Can you be assured in the meaning and value of your life and morally-infused cosmology once you achieve this distance? Isn’t that precisely the anxiety and disorientation that postmodernism exposes and sharpens? And if we eliminate the choice to be a member of heroic society, we eliminate the choice to be what Macintyre calls ‘an heroic self’ – a Hector or a Gisli – since the two are inseparable; anything short of full immersion is mere cosplay.

I think there is something right about Macintyre’s thesis that a ‘true morality’ is a socially enacted set of narratives, and the implicit thesis that our sense of the meaning of life is tied up with having such an enacted morality. The surest way to an orderly moral discourse and life is to plug into a shared story. In the west at least, our stories seem to be losing integrity. Since the classical era, probably Christianity and the Enlightenment have provided the most integral collective narratives. But religious salvation and material progress through the application of reason are losing their grip on us, I feel. Liberal individualism is what has come to take their place, and some would argue (presumably with a sombre nod from Macintyre) that it is a self-defeating story that actively breaks apart the social fabric that once sustained us. The grand carpet of shared story has frayed in the washing machine of liberal modernity. It has broken into innumerable tatters that we can each desperately cling onto. A little piece for each, but all for none. No pattern. Only an untrammelled freedom of movement in this sterile soapy turbulence.

Interestingly, there is some evidence that pre-agricultural humanity was far more homogenous in terms of its technological and symbolic culture than post-agricultural humanity. In other words, the growth and proliferation of civilisation is coextensive with the acceleration of violent and creative processes of ethnogenesis (the fragmentation and generation of new cultures), both within and in reaction to civilisation, in the case of so called ‘escape cultures’ (see J.C. Scott’s The Art of Not Being Governed). So it seems like Macintyre may be right that the further back you go, the more objective morality would have seemed to people, given the relative universality and coherency of culture generally.

It is also interesting to reflect on how globalisation in the 20th century was expected to flatten cultural difference. But has it? Are we seeing a reversal of the trend I have just posited? It is unclear to me that we are. For whatever reason – the yearning for novelty, irrepressible creativity, technological and economic dynamism, Hegelian dialectic, larger scale emergent processes that I can barely fathom – equilibria seldom persist, and seem to undergo transformations at an accelerating pace. It’s a complex picture but the rough sense I have is that we are seeing a transition from a world of discrete and internally coherent local cultures, towards something like a globalised morass in which there is still a great deal of diversity, but it doesn’t respect geography so much. The key variables in this picture, it seems to me, are liberalism, capitalism and the internet, alongside (and this is connected to capitalism) the brute facts of enormous population expansion and the accelerating pace of technological-economic change.

In chapter 11, The Virtues at Athens, we walk into the ancient world that looms largest in Macintyre’s imagination: post-Periclean Athens. He points out that by the time of Socrates, the heroic age – of roughly the 8th century BC – was a fading memory, and argues that this explains the moral confusion that Socrates exposes in his interlocutors. While Homer might have swatted him away with a deft and unflinching, imperviously self-assured set of reasons, Socrates’ compatriots are befuddled. We see here the seeds of the emotivist weed taking hold, insofar as the Athenians fall into histrionics and garbled blather at the gadfly’s provocations. I suppose the difference lies in the degree of cultural (in)coherence and the youthful vitality of the Platonic project, viz., to establish firm dialectical rather than strictly socio-poetic (i.e., embodied narrative-based) ethical foundations.

This picture of fragmentation bears some interesting connection to the popular account deriving from Thucydides, in which the death of Pericles represents “a turning point in the history of Athens” – a turn away from a “community led by a virtuous elite” towards “a democratic city abandoned to the hands of kakoi—the despicable demagogues”, as Azoulay (2014) summarises.

Macintyre uses Sophocles’ 409 BC play Philoctetes to illustrate a point about the rising moral incoherence in Athens (note that Pericles dies in 429 BC). Set in the Homeric universe of the Iliad and the Odyssey, the play gives voice to, but fails to resolve, the tension between “two incompatible conceptions of honorable conduct” (p. 150/156). On the one hand, there is the parochial morality in which virtue is simply doing good to one’s friends and harm to one’s foes. On the other, there is a more cosmopolitan or universalist morality which regards honour as orthogonal to loyalty. One can be honourable without being loyal, and loyal without being honourable.

This ideological development reflects a material socioeconomic shift away from the kinship group as the ultimate moral and existential horizon. That horizon is creeping out further, towards the polis. The polis is a larger, more highly “differentiated” unit. Greek tragedy gives us insight into this development by posing questions such as ‘Should one be loyal to one’s family, or to one’s polis?’ The people of the time – or at least their tragedians – do not simply opt for the latter, even if Athena and Apollo decide in some cases – in the Oresteia, for example – that the latter loyalty is decisive. The process is additive, and hence confused and confusing. If you value the social role of the king, what happens when there is no kingdom? More profoundly, if you are starting to think in terms of universalist ethics (‘what is due to a man?’), what happens when the morality of social roles prescribes particular duties and rights? The space of possible evaluation is enlarged to include, it seems, the whole ‘framework’ or ‘game’ as it was put in chapter 10, since now we can ask ‘is our whole form of life living up to justice as such?’ But this again sits uncomfortably with the recognition that one learns the virtues (what ‘justice as such’ amounts to) through one’s particular community – something that all the new contending ‘schools of ethics’ are agreed upon. The very idea of universal truth (as I suggested above) opens up a new world of anxiety: how do I know that Greek ethics are superior? That what I have learned from my particular community is in fact superior (as the Greeks undoubtedly still insisted it was)?

More granularly, Macintyre relates that there was much disagreement within the later Athenian world about what each of the virtues required, and why it was a virtue, and notes that this speaks to growing pluralism within Athens as well as Greece generally. Nevertheless, there was a bedrock of agreement about the importance of society to morality: as just noted, acquiring virtue requires cultivation within a community, and acting virtuously – being a full, flourishing human being – requires a community, in contrast to the interiorised morality of Christianity. For the Greeks, to be a good person is to be a good citizen, and being a good citizen does not (and cannot) look like being a Christian hermit or anything like that.

While we should recognise the complexity and evolution within Greek ethics, there is a profitable contrast with Christian virtues such as humility, thrift and conscientiousness. The drift of the former is towards a certain ideal of the noble and well-resourced (male) public figure, one who exemplifies generosity, fearlessness in truth-telling, willingness to take responsibility for one’s actions, straightforwardness, courage, industry, self-control, discernment, etc. What seems to take place in the discursive crystallisation of this morality is that the tie between virtue and social role is progressively attenuated.

Take Aristotle. When Aristotle speaks of virtue, it is the virtue of human beings as such. He wants to know what it is for a human being as opposed to an animal or plant to flourish, and use that as the ‘function’ relative to which we assess whether actions or qualities are virtuous or not. Of course, there is still a strong social element in his thinking insofar as what it means for a human to flourish is to live a certain kind of political, social life. Moreover, there is a large degree of flexibility in how virtue manifests, and which virtues it is most important to cultivate, depending on context. Here context may include social role and personal circumstances. This is why he never attempts to lay down a categorical rule book. For example, in a characteristically practical remark – so uncharacteristic of contemporary moral philosophy, I think – Aristotle advises that we over-correct according to our own tendencies: “we must consider the things towards which we ourselves also are easily carried away… we shall get into the intermediate state by drawing well away from error, as people do in straightening sticks that are bent.”

More to the point, which virtues one can exercise depends on one’s standing, resources, capacities and occupation. The virtue of magnificence is only available to the wealthy, for example. Magnificence is the virtue of spending on a grand scale for the social good – on building tombs, public works, funding festivals, etc. For a poor man to do this would be foolish, but for someone with means, it may be noble, if they do it to the right degree (neither too much nor too little) and in the right way (with an eye for beauty, and not with overmuch calculation or efficiency-mindedness).

The logical implication would be that, if all virtues are like this, then some people, as an empirical matter, may not be able to be virtuous; they may not be positioned to exercise any of the virtues, if, for example, they are cursed with endless social isolation, sleep, or severe illness. Consider the following list of virtues (‘means between x and y’) from the Nicomachean Ethics, and imagine how we might be deprived of the opportunity for exercising them (e.g., what if nothing challenging ever happens to us so we can be neither courageous nor cowardly?):

  • Courage – sits between cowardice and rashness (about “feelings of fear and confidence”)
  • Temperance – between the self-indulgent and the insensible (there is often no name for one of the vices). About pleasures and pains.
  • Liberality – between prodigality and meanness. About the giving and taking of money (smaller sums).
  • Magnificence – between tastelessness/vulgarity and niggardliness. About spending large sums.
  • Proper pride – between ‘empty vanity’ and undue humility – about honour and dishonour.
  • ‘Having the right amount of ambition’ (no name for this virtue) – between the over-ambitious and the unambitious.
  • Good temper – between the irascible and the inirascible. About anger.
  • Truthfulness – between boastfulness and mock modesty.
  • Ready wit – between buffoonery and boorishness. With regard to pleasantness in the giving of amusement.
  • Friendliness – between being obsequious or flattering (where flattery is being pleasant to secure one’s own advantage), and being quarrelsome or surly.
  • Modesty – between the bashful and the shameless.
  • Righteous indignation – between envy and spite. Concerned with the pain and pleasure that are felt at the fortunes of our neighbours… “the man who is characterized by righteous indignation is pained at undeserved good fortune, the envious man, going beyond him, is pained at all good fortune, and the spiteful man falls so far short of being pained that he even rejoices.”

In any case, the fact remains that, in principle, the best life is available to all: “All who are not maimed as regards their potentiality for virtue may win it by a certain kind of study and care.” And this life of virtue has a single, universal general shape.

I hope Macintyre later discusses Aristotle and his connection to the heroic and classical moral order, since there is a little bit of a tension here. As I’ve said, Aristotle has a universalist flavour while the heroic culture eschews universalism. The flexibility of the virtues closes the gap somewhat, but the emphasis has clearly changed. As I understand what has been said, if Homer were to talk about virtue, he would talk about the virtue of this or that person/type of person, while Aristotle would not. I’m not sure how much this matters, though, if it is true that the two moral schemes are conceptually consistent. After all, Homer might agree to the above list of virtues as virtues for everyone, so long as one fleshed them out appropriately for each circumstance. For example, perhaps a ‘ready wit’ looks different in a king, a soldier or a baker, with different standards being wrapped up with the stories articulating those roles.

Macintyre then considers Plato, a philosopher who reaches an even higher level of abstraction in talking about virtue as the result of harmony within persons. A just (and therefore happy) person has each part of his soul ordered properly, which means that it is fulfilling its function, and its function alone. The appetites, spirit and reason are to ‘stay in their lane’ just as the workers, soldiers and philosopher kings are to stay in theirs. In his view, there is no irresolvable ambiguity or tension among the various goods and values. There are objective and fixed ‘places’ for things, and hence right answers to ethical questions. This helps explain, Macintyre suggests, Plato’s antipathy towards tragic drama, with its agonised exploration of moral dilemmas.

In the final part of this chapter, Macintyre addresses the problem of moral conflict and moral objectivism. I confess I found this difficult to follow, so I won’t dwell on it. Basically, he contrasts three positions on the question: can there be conflicts between the ‘duties of loyalty’ (to friends and family) and duties of justice, compassion or truthfulness? The clearest dichotomy is between someone like Plato who says ‘no, the moral order is objective and internally unified’, and the relativist(ish) position of Weber and the 20th century philosopher Isaiah Berlin, who say ‘yes there are such conflicts and there is no right or wrong with respect to how we resolve them.’ But then there is Sophocles, the tragic playwright, who thinks there are such conflicts but there is a right way of resolving them, even if it escapes human understanding. These conflicts remain tragic, however, because we cannot escape doing genuine wrong (even if we at the same time do right). Such a situation is a dilemma in the purest sense. At least, that’s my understanding.

I’m not sure what hangs on this problem of moral conflict other than the very interesting point that it might highlight broader sociological and existential shifts occurring at this time. In facing such moral conflicts, we are facing a personal choice: which among two or more equally authentic moral imperatives do we bind ourselves to? In being presented with such a choice, we are thereby thrust into a kind of self-creation. Is Macintyre saying that this kind of thing was unavailable to the denizens of Homeric antiquity, where the moral order was straight and clear? Perhaps. But the point needs nuancing. It is not that tragic conflict is alien to the heroic age, but that such conflicts do not represent personal choices in the momentous way that some moral philosophers today might conclude. I would have liked more clarity here.

Macintyre’s argument leading on from this point is again a little mysterious. He wants to distinguish the ‘Sophoclean self’ from the emotivist self by insisting that the choices made by the former are accountable to an objective moral order in a way that the choices of the latter are not. But if there are competing moral claims, do we say that the moral order is incoherent? And does this map onto a fragmentation (incoherence) within the polis? He says some things about the importance of narrative in this virtue-based moral order, since the ends of human life determine the virtues. Again, a virtue is instrumentally defined as that which facilitates the satisfaction of our true ends. So presumably there are conflicting narratives which account for tragic moral conflicts. Is it a question of choosing a narrative? If so, what makes one choice correct? Are there correct narratives? I guess this is why Macintyre says at the very end “we must now turn from poetry to philosophy…”

In chapter 12, ‘Aristotle’s account of the virtues’, Macintyre finally tells us more about Aristotle.

Aristotle, according to Macintyre, has an ahistorical picture of philosophy as a kind of science in the orthodox (or pre-Kuhnian) modern sense. It is progressive. The latest body of scientific knowledge is unequivocally the best and can be understood without reference to the history of the development of scientific methods and theories etc., let alone the sociology or philosophy of science.

Macintyre is an historicist in that he rejects precisely this claim that knowledge can be understood ‘abstractly’ and as an uncomplicated progressive accumulation of data: “each particular theory or set of moral or scientific beliefs is intelligible and justifiable… only as a member of an historical series” where “the later is not necessarily superior to the earlier.”

He then describes the Nicomachean Ethics, noting that its attempt to ground morality in human nature commits ‘the naturalistic fallacy’ (roughly, inferring ought-statements from is-statements). One thing I took away from this section is the idea that virtues are not ‘merely instrumental’ (as some of my remarks above imply), but are also constitutive. There is no other way (no other instrument or short-cut) to fulfil your ultimate end, as though that end is some independent state of affairs that can theoretically be achieved in multiple ways. The end in question partially consists in the virtues.

I also found illuminating the discussion of the virtues themselves, and it makes me think perhaps Aristotle was somewhat innovative in the ancient Greek context in making clear the conditions under which actions were praiseworthy. You can do what is right without having the relevant virtue, but doing so wouldn’t be praiseworthy, because you have not cultivated the proper dispositions; you are merely lucky, and are unreliable. You must do the right thing because it is virtuous and achieves real rather than merely apparent goods. This brings up the issue of rules. Having virtue of character is about having ingrained the correct disposition through experience and habitual action. It is not about following rules. Nor is it about having an ingrained disposition to follow rules. Why? Because rules aren’t always right. Indeed, there is no set of rules that can algorithmically determine what you should do in any given circumstance, so a disposition for blind rule-following would be positively harmful. What you need is to be able to see whether the rules are just and abide by or transgress them accordingly.

With that said, there is some narrow range of action falling under absolute proscription. It’s just that to follow such rules is far from sufficient for living a virtuous, flourishing life. A timid life lacking in virtue may indeed make it easier to refrain from great evil, including violation of such rules. Conversely, one may shine with (non-absolute) virtue, and yet slip up on occasion, running afoul of even these absolute proscriptions (murder, theft, perjury, betrayal) – and indeed may do so precisely because of the virtuous features of one’s character.

Macintyre then links this discussion of virtue back to technocratic modernity. For Aristotle, intellectual excellence is moralised and intertwined with “virtues of character.” From what I gather, the latter requires the former developmentally: you acquire the virtues of character through cultivating habits and this means using your capacity for rational judgement to act in the right ways, repeatedly. Conversely, the former requires the latter constitutively: to exercise rational judgement, you need to “link means to… those ends which are genuine goods for man.”

Here is another place where I feel the breadth of Macintyre’s project impairs its depth. What does this last sentence really mean? Is it about the ends for which you employ your intelligence? That intelligence put to base ends is not really intelligence? I suspect that it’s a little more than that. Perhaps the idea is that Aristotle is including something like what Murdoch calls ‘vision’ into intelligence: the ability to correctly identify and discriminate between actions and ends according to a thick, normative sense of what is good and just.

You might call all this word-games, for clearly there is some kind of intelligence that is detachable from moral virtue. But Aristotle agrees that there is ‘amoral intelligence’ (he calls it cunning) and by ‘renaming’ things he calls attention to the way that being morally smart is a real thing. I think this is both important and insightful and agree with Macintyre that failure to make this connection is a significant loss – one that I hadn’t consciously appreciated. Nor had I noticed how it is embodied in the Kantian conception of the moral agent as one who has a pure will. For Aristotle but not for Kant, it is impossible to be unintelligent and yet morally good.

For Aristotle, to be morally good you need to have correctly identified the mean, and that is not an arithmetical matter but a matter of understanding one’s situation – in particular, one’s social situation. It is crucial to understand that one is part of a community based on friendship, and one that has certain shared ends and obligations. One has to work out how one’s present behaviour may best contribute to those ends.

The popular estrangement of morality from intelligence is embodied in the ideology of bureaucracy and expertise, where it is imagined that you (or an organisation as a whole) can become a very smart and morally neutral mechanism for matching means to ends. This is the arrangement suited to a pluralistic liberal, individualist world. We aren’t united by any deep bonds of shared purpose, and so all our interactions are based on utility. At an individual level, our friendships are of a lesser form – what Aristotle calls friendships of utility, where we are in it for personal gain (what the other can do for you; how they can advance your private ends). At a collective level, our institutions are set up to enable individuals to pursue their private goals as much as possible, and we support them according to their performance in doing so. They are not assessable from any other standpoint – in particular, they are not assessable from the objective telos of a common project.

Macintyre says that Aristotle would hardly see such an arrangement as a political society at all. A true political society is based on true friendship, and true friendship is based on commonalities of purpose (of morality). We love a true friend because we share an ethical project – because of the virtuous character we recognise in them. Our friend both inspires us and gives us an opportunity to develop ourselves by helping them become more virtuous. Macintyre claims that for Aristotle there can be no conflict between loyalties to friend vs. country. Filling in the implicit argument, I suspect this is because loyalty to your friend involves a commitment to keeping them on the path of virtue rather than simply helping them along any old path they choose to pursue. And virtue is partly political: it is to do your part for the community. So it would be simply confused to think you were helping them by helping them do something disloyal to the community, since that would be leading them away from virtue (their true good). And note that if you reject the aims and rules of the community (the circumstance under which such a conflict is conceivable for Aristotle), then it’s not really your community and so this isn’t actually an example of a conflict between loyalties (you are in fact “a citizen of nowhere, an internal exile wherever [you live]”).

This leads into a discussion of Aristotle’s affirmation of the Platonic doctrine of the unity of the virtues: if you have one virtue, you have them all. What is more, there are no deep moral conflicts or dilemmas of the Sophoclean kind. Macintyre concedes that Aristotle is too idealistic here. We can be spotty in our moral character. Even in Greece at the time of Aristotle’s writing, there was far more disharmony and incoherence than he (Aristotle) admits.

The discussion at this point is interesting but meandering (pp. 157-60). One little thing I noted was the fact that Aristotle thought we could partially assess a polis according to whether it facilitated metaphysical contemplation, since this was necessary to realise our essence as rational animals. Also, he thought there was a ‘divine order’, but it was secularised into something like thought itself. Deities don’t resolve moral questions; the logic of this transcendent order of pure thought does. Here we depart from the Socratic dialectic and Platonic dialogue; philosophy is not about the process or a series of interactions, though these may be helpful. It is about diligent application of rationality to match some final, higher order, and this doesn’t require any special dialectic. Hence Aristotle does exposition, not dialogue.

Another nice point he draws out of Aristotle is the role of freedom and its relation to morality. To be a good person, you need to be a political person, i.e., to have political relationships. But for Aristotle, being in political relationships means being free, since politics is about simultaneously ruling and being ruled over. It is a collective democratic endeavour in that sense: you contribute to a process of collective decision making (you rule) and you subject yourself to the outcome of that process (are ruled). This entails that barbarians (i.e., anyone who lacks a polis) cannot be free. Nor can slaves or women, as they are excluded from politics. It also seems to entail that such people cannot be good and cannot flourish either, since politics and its freedom are required for virtue and fulfilling our nature.

What about enjoying life? For Aristotle, enjoyment supervenes on, and characteristically accompanies, virtue, but it is not our purpose per se. It can come apart from virtue quite easily – the cunning and vicious may enjoy themselves quite nicely, after all.

Finally, I’ll mention an interesting technical point Macintyre makes about Aristotle’s conception of practical reasoning (reasoning about what to do). Aristotle thinks that the conclusion of a practical syllogism is a particular kind of action, rather than a proposition. This is because actions can express beliefs just as well as statements, if less precisely. Hence, they can form parts of chains of reasoning and inference. An action can be consistent with, or necessarily entailed by, a set of statements, or not (that is, can be ‘validly inferred’ or not).

The meaningful consequence of seeing things this way is that we draw the whole person into the space of reasons, as it were. Aristotle helps us make sense of ourselves and others at a rational level: we can assess not just what people say as rational or not, and what they do as good or bad (in some non-rational, brute way). We can also assess what they do as rational or not, by reference to its consistency with what they say – or rather, by reference to the whole network of beliefs that are expressed through all forms of speech and behaviour. Macintyre thinks this kind of holistic evaluation is necessary for “any recognisably human culture.” We are constantly evaluating others in this way; we don’t separate the mind (statements) from the body (actions).

More specifically, on Macintyre’s formalization, practical reasoning involves (1) the agent’s goals (the context of reasoning), (2) the major premise (doing X is the type of thing that’s good for a Y), (3) the minor premise (this is an occasion of the requisite kind for doing X), and (4) the conclusion, which is an action. The assessment of the normative premise (2) is, for Humeans, a matter of preference or passion. Not so for Aristotle. There are facts about what is good, and knowing what these facts are requires intellectual and moral virtue, acquired through training, habit and education, working at the level of judgement and feeling simultaneously. Reason is not ‘the slave of the passions’, as Hume argued. It is not that we take our evaluations and assertions of (2) as given by passion and beyond rational scrutiny, i.e., as fixed points which reason must merely find ways of reaching. Somehow reason must help us set those points correctly.

Lastly, Macintyre raises some objections to and questions about Aristotle’s account of the virtues without really resolving them.

  • Aristotle’s teleology is essential to his account, and it presupposes a dubious metaphysical biology (where nature gives us a fixed telos). Macintyre here suggests that we need a new non-biological teleology to resolve this dispute, but I don’t know what that would look like. Social constructionism? Existentialism? A priori rationalism?
  • What is the relation of ethics to the polis? If the polis is necessary to the ethical life, then the latter is impossible in the modern world. Is it so historically particular as to be meaningless today?
  • Can moral conflict be avoided or managed? The absence, in Aristotle, of any sense of “the centrality of opposition and conflict in human life” conceals from him “one important source of human learning about and one important milieu of human practice of the virtues.” Macintyre cites John Anderson here, who urged us not to ask of a social institution: “What end or purpose does it serve?” but rather, “Of what conflicts is it the scene?” Aristotle cannot see this and hence cannot see a potential source of Sophoclean insight into virtue: “it is through conflict and sometimes only through conflict that we learn what our ends and purposes are.”

Macintyre seems most sympathetic to this last worry, and it is to my mind the most interesting and obscure of the three. I’ll leave it to you to ponder for now.

Complementarity as a meta-strategy

I want to make a quick note about the idea of complementarity. I have written a number of posts about rationality today. I think complementarity is a kind of meta-principle of rationality. If X and Y are different strategies for problem solving or forms of rationality (they mean more-or-less the same thing for me), the principle of complementarity asks us to keep them both in mind, rather than resolve the apparent tension between them by deciding for one and against the other. The key idea is context. X and Y may demand different things of us in the same situation because they are both nominally applicable. The trick is to identify the features of the particular context that suggest one should be deployed rather than the other.

There are different levels of complementarity.

For example, Confucianism and Daoism may themselves form a complementary pair. Very crudely, one calls for order and control, and one for freedom and spontaneity. This is a very high-level, abstract complementarity, which will resolve into more concrete tensions.

A couple of further examples, getting progressively more specific.

First, the tyranny of the majority vs. the tyranny of the minority. Tocqueville argues that in times of rising equality, we will see a tendency towards political and social tyranny of the majority, where we no longer defer to sages, aristocrats, specially sanctioned authorities or other elites, but instead to the majority opinion. Nassim Taleb argues that there are circumstances in which vocal and highly-interested minorities can win outsized influence in a community, to the detriment of the majority (even if the detriment is marginal for most within the majority). Clearly, I think both kinds of tyranny can occur. It is a matter of context as to which tendency – the will and interests of the majority or the minority – prevails (what the net effect is, or alternatively which tendency is triggered).

This points to the way that it is unprofitable to dispute general claims in the absence of their application in particular circumstances. The claims here could be descriptions of society in the past or present, or predictions about what will happen to society in the future (will the minority or the majority get its way?). We can think of the theses above as offering different frameworks for solving a theoretical problem (accurate description or prediction). They highlight different features of the situation and advance hypotheses about how they are related, which can generate different ranges of solutions.

Second, competing strategies for discourse control. There is a lot of debate about how the media should handle disinformation, fake news, conspiracy theories, toxic ideologies etc. Some say ‘sunlight is the best disinfectant’, whereas others regard it as ‘the primary source of energy for growth’. The former camp advocates for an ‘air and confront’ approach, whereas the latter endorses a no-platforming approach. Clearly, we aren’t going to resolve this debate at the level of metaphor. I agree with the media critic Jay Rosen that there is no decisive argument here. But I would argue (what may be implicit in his thinking) that this is because which approach is appropriate is entirely contextual, even if our goals are the same.

So determining which strategy within a complementarity is best requires, as Aristotle would say, practical intelligence to discern what strategy is superior in a given situation. Such wisdom consists partly in the ability to know what to pay attention to; to see the little things, as the Chinese philosophers would say; or to do effective relevance realization, as Vervaeke would put it. These theorists would argue that there is no algorithm that could tell one which strategy to apply.

The intelligence required here is not completely unteachable.

Experience and time, given subtle (discerning), open-minded (unattached) attention, are perhaps the best teachers. But the wise often have something useful to say in guiding those who wish to learn. Most obviously, they can select well-chosen anecdotes to help listeners cotton on to relevant considerations. After all, even though there are no guarantees, what has worked in the past is likely to work in the future. They can also formulate principles, generalizations, maxims, and theories (in some loose sense) about what works and why. For example: ‘if the speaker is a powerful rhetorician and the interviewer is weak, the air-and-confront strategy is likely to backfire (unless, for instance, the speaker is perceived as mean and the interviewer is a sympathetic fellow)’.

Reductionist Rationalism

When we use ‘rational’ as a mild pejorative in everyday life, I think it is because we think someone is being overly narrow in their problem solving. ‘Reductionist’ is cognate with this pejorative sense of ‘rational’, and I will call the kind of attitude I am talking about reductionist rationalism. For example, there are people who approach food ‘rationally’, as a source of nutrition only. They manipulate it to achieve relatively specific ends, like weight loss or health (which is a bit of a fuzzy one, I admit). When people just eat intuitively, nonrationally, I think they are often being rational in a wider sense, since they are integrating a broad array of other goals and information into their decision-making. Among other things, they listen to their body instead of treating it like a machine. And they listen in the context of their other various interests, obligations and values. If they do so well, we call them wise. This is wisdom as wide rationality.

When the rationalist individual pursues his goal ‘well’, we call him neurotic. He is not paying proper attention to things. He is out of balance because he has forgotten his other goals in life. He is not properly constraining his solution space, so to speak. Thus, his strategies – while simple and direct – are inconvenient, monstrous, peculiar, or potentially catastrophic. He is not attending to how much time he spends weighing his peanut butter, or to how he is missing out on opportunities for social bonding or for enjoying a richer variety of culinary and cultural experiences, or to how much money he is spending on specific foods, or whatever. But he is still rational relative to his narrow goal: he has a strategy, an algorithm, which is highly effective at achieving it.

Tunnels are efficient.

I think this kind of reductionistic rationality is rightly criticised and that our culture tends towards rationalism in this sense. Reductionistic rationalism is opposed to a holistic, non-linear, and, in many ways, more humble attitude that keeps an open radar out for new possibilities and is more flexible.

With that said, I think these two approaches are complementary. When I criticise one, it is because of an imbalance, not because it is bad in itself. The former approach has the virtue of efficiency in many contexts – an efficiency which may be sacrificed to some degree for the sake of robustness and flexibility. To say that one is simply superior would be itself a ‘rationalistic’ move, lacking balance.

I emphasise the faults of rationalism partly because my own life has been damaged by it. I latch onto a principle or relationship between variables that seems solid, and press on it hard. I deduce that I should press this button to get that result, without considering ways in which that result might be contingent on various contextual factors, and hence not persist as I expect it to, or without considering the way that the results of some strategy may change and begin to morph into something else, with increasingly problematic side-effects. I have treated my body this way, and I don’t think this is unusual.

A simple manifestation of this problem is the ‘more is better’ attitude. For example, if exercise is good, more exercise must be better. If vegetables are good, more vegetables must be better. Nobody told me that ‘progress’ isn’t simple, or that we have limits, or that there are psychological, social and physiological side effects to advancing certain goals, and so on.

Similarly, others adopt the rationalism of hedonism. Pleasure is good, we can all agree. So why limit or regulate pleasure if it’s not harming anybody? What’s wrong with the life of indulgence? We latch onto that principle and rationally construct a life for ourselves to maximise pleasure in a narrow-minded way. Often with catastrophic consequences.

Another, similar manifestation is what I’ll call the atomistic fallacy: we identify some component of a process that seems to have a strong relationship to some effect associated with some goal, and myopically, monomaniacally extract and pursue that component. For example, I wanted to build muscle so I knew I had certain macronutrient needs to reach my goals, centrally protein. Refined wheat gluten is a highly cost-efficient source of protein. Why not just get all my protein from wheat gluten? Well, I did. And I savaged my gastrointestinal tract in the process. I did the same thing with a number of foods. Dietary garlic increases testosterone levels, theoretically expediting my progress, and so I ate a tremendous amount of garlic and I suspect this helped speed along my arrival at severe IBS.

I think we see similar atomism all over the place. We're always trying to hack our way to happiness and utopia by exploiting various apparent causal connections between 'atoms'/variables, with limited appreciation of context or of the non-linearity of the system in which these effects occur. Some of us – the rationalistic type as I am identifying it – design our lives in a completely non-organic way, always trying to maximise by identifying these connections. And our whole society is arguably doing the same thing. GDP is an obvious, if perhaps slightly unfair, example. Here, we rationally pursue one index without regard to how it interacts with a system of other variables we care about. Relatedly, our environmental problems are another classic side-effect of rationalism. And conservative arguments abound about how rationalist social-engineering policies can create unexpected backfire consequences. (E.g., sex-positive feminism frees up prostitution and pornography markets, abolishing the police results in the privatisation of security in super-wealthy enclaves, decriminalising drugs allows the middle class and large corporations to take over from poor street vendors, speech codes make it easier for HR departments to fire working-class people, de-sacralising marriage and making it easier and more normal to divorce reinforces patterns of disadvantage through broken homes, etc.)

Why am I blaming rationality? In the personal anecdotes I gave, aren't I/society simply guilty of immoderation and narrowness? Well, yes. But the point is that moderation is a principle that comes from wisdom. What's its justification? A holistic understanding of how things work in general. The rationalist truly thinks he understands a system with some (incredibly simple) model. Given this understanding, why not push some lever if you can? Perhaps there is some error at the level of rationality (not assessing available scientific evidence correctly, for example). But I think there is something deeper going on, in addition perhaps. It is a failure of wisdom and virtue, which always precedes and surrounds rationality, if it is to function well. Epistemic humility is important as one of those virtues. Yes, garlic might have been shown to have this one effect through one pathway. But that doesn't mean you should eat a bunch a day. You don't know what else it may be doing, or whether there are non-linear dosage implications, counterproductive feedback effects, and so on. (I genuinely do not know; apparently the dosage effect is linear, but this is surely contextual in the real world – if I eat so much garlic that I damage my capacity to absorb zinc, which is itself a factor in producing testosterone, the garlic won't help.)

Whatever you want to call it, I think you understand what I am talking about. It is a myopic and hubristic application of rationality that I worry about.

Are there multiple ‘rationalities’?

A Wittgensteinian approach

What does it mean to be rational? Wittgenstein doesn’t answer this question directly (surprise). But here’s a Wittgensteinian answer I would venture: to be rational is to have good reasons and justifications for what one does, says or believes in a given context, where ‘good’ is determined by the language game belonging to (and partially constitutive of) that context.

Previously, I talked about epistemic vs. instrumental rationality, and argued that the former reduces to, or is ultimately bound up with, the latter. What I want to do here is suggest a slightly different notion of rationality that I will call reasons-rationality.

The intuitive idea is fairly clear. It is about whether you have good reasons for what you say, believe or do. To know what good reasons are, we must know the language games in which they appear. For ‘reasons’ are constructs of language. A reason is a statement or proposition. Language in this sense determines rationality. And given that there is no ideal language, there is no fixed, ideal rationality. Reasons can be appropriate or inappropriate, good or bad, in some context, and within some language game. The problems of rationality are the problems of (1) identifying what context we are in (or should be in), (2) how the rules of the relevant language game apply, and (3) how to critique the language game and its rules.

Take morality as an example of a kind of language game. A moral stance is rational if we can present appropriate reasons for it; if we can justify ourselves to others, or to ourselves. But what counts as an appropriate reason, and hence as a rational moral stance (attitude, claim, conviction)? It depends on the context and the nature of the language game. Though there isn’t one morality in the sense of one moral system, the word ‘morality’ in English is used to denote a certain kind of practice that has certain general features or, if you like, a certain grammar.

Cavell – rightly, I think – articulates this grammar in terms of personal commitments. A moral reason expresses what is important to us, given our particular way of seeing the world (our conceptions of various social roles and responsibilities, nationality, kin group, gods, nature, a cosmic order, karma, etc.), and is something which we are prepared to take personal responsibility for.

We might be committed to veganism, for example, because we have a worldview in which nonhuman animals share a certain kind of equal standing with human beings. We are therefore committed to them, to standing by them in the face of threats to their dignity and interests, to fight for them and do what we can in our own behaviour to minimise the injustice and suffering they face. Someone may disagree with us, but if we articulate these reasons, we can at least help them see us as having a rational moral belief in veganism.

On the other hand, if we are in a moral argument in a restaurant as to whether to order the pork dish, and I (urging against it) merely say ‘we shouldn’t get the pork because my uncle doesn’t like pork’ or ‘It’s Tuesday evening’ then I may be regarded as irrational. Regardless of my goals, I am irrational because I have failed to provide appropriate reasons for my belief, given the standards, norms, use, meanings (etc.) belonging to the language game of morality, or moral argument.

It may seem odd to introduce the idea of rationality here, as Cavell wants to distance his view from the kind of rationalism that sees moral discourse as a logical, inferential process. To the rationalist, a moral belief is rational if it is a valid (or legitimate) inference from a set of premises. This makes morality universal and invariant. Whenever those premises are true, the moral conclusion is true. We can then provide impersonal criteria for moral beliefs and claims being true or false. Articulating and applying those criteria is then the mark of the rational.

Aristotle for instance was a rationalist. For him, to be a rational agent using practical reasoning, one must have a major premise (doing X is the type of thing that’s good for a Y), a minor premise (this is an occasion of the requisite kind of situation for X), and then derive a conclusion (which for him may be an action rather than a belief). This all occurs within a context which includes the agent’s nature and goals, which are an impersonal factual matter (he gets this from his teleology, which we don’t need to go into).

But this is a conception of rationality I would regard as at best incomplete. There is nothing in the wider conception that is inconsistent with Cavell. On the wider conception, logical inference is unnecessary (in fact, syllogistic argumentation is itself a particular form or language of rationality which has its uses and abuses). Formalising a set of commitments into axioms and deriving moral conclusions via minor premises is, I think, a quixotic enterprise liable to produce confusion and puzzles for philosophers. For there are always exceptions. The way commitments express themselves isn't reducible to a function.

To get a better grip on this wider notion of reasons-rationality, distinguish it from unreasonableness. In the Odyssey, Odysseus’ wife Penelope announces that the man who can string Odysseus’ great bow and fire an arrow through a row of axes shall win her hand in marriage and hence all of Ithaca. Her greedy suitors are dismayed and bemused. They may call her demand for such a contest unreasonable insofar as it is bizarre and difficult. But can they call it irrational? To call it irrational would be to say that Penelope has no justification within the relevant language game for demanding the shooting contest – i.e., within the language game relating to kingmaking, match-making, negotiating lineages and successions, or however we want to characterize it.

The Trial of the Bow (1929), N. C. Wyeth

I don’t know enough about ancient Greek culture in the Homeric period to judge, but let’s suppose she does have justification. It is up to her, as (supposedly) the widow of the king of Ithaca, to set out the conditions regarding who shall succeed Odysseus on the throne, and these conditions may properly take the form of a contest of strength and martial skill – a contest symbolic of the need for a king equal to Odysseus, and this ultimately for the good of Penelope and her people. If that’s so, then I think there’s nothing ill-conceived about regarding her demand as unreasonable but rational. It’s just that it would seem odd to use ‘rational’ as an adjective here, because it’s not even a question in this circumstance. But that just proves my point. She is obviously rational, because what she says is in keeping with the rules, in this context.

What would be irrational? Perhaps if she had instead said that her brain chemicals were such that this contest occurred to her (applying the ‘wrong kind of rationality’). Or if she said that Calypso told her to do it this way (misapplying the rationality of the traditional law). Or decided to choose the weakest man, or the man who looked most like a woman, or something completely incomprehensible to that form of life. Then she would be regarded as not only unreasonable, but bonkers, irrational, mad. Those aren’t good reasons! What are you talking about!

How different are the various forms of rationality? Isn’t there some unity to them? I think there is unity insofar as they are all about problem solving. In this respect, there is an indirect connection with instrumental rationality. The forms of life which ultimately give rise to discourses of rationality persist if they are in some sense viable.

Moralities, to return to that theme, can be understood in terms of adaptation to different environments and circumstances. The rationality of a morality might make sense in one place, at one time, but not another because the form of life that underwrites it may be suited to the former but not the latter.[1]

But from within a form of life at a given time, the rules that determine rationality are not instrumental. For one thing, the instrumentality being invoked here is macroscopic: it applies to groups of people, their practices and discourses. For any given individual, on any given occasion, it may not be instrumental to be rational according to rules which are macroscopically rational. (Except in a fairly trivial sense: any reason can be related to some goal which the action or belief supported by that reason helps achieve. For example, I stop at the traffic light because that's the law. My reason can be related to some goal: wanting to obey the law. Stopping at the traffic light is (trivially) instrumental to that end.)

A given language of rationality may be shaped by both critique and some kind of organic evolution that doesn't involve explicit critique. In both cases, we may reflect on how that shaping is governed by instrumental considerations. Likewise for disputes over which rationality should apply in a given circumstance or context. (See my post on complementarity.) For example, consider the rationality of military strategy and the rationality of political strategy. When it comes to deciding how to choose officers within a military command structure, these two kinds of rationality may enter into conflict. It may be politically rational to appoint those with certain political allegiances, but militarily irrational to do so (if they are inexperienced). Stalin, for example, resolved this conflict in favour of political rationality, heedless of the profound human and military costs of doing so (its irrationality in military terms was acute). The ancient Roman army, meanwhile, owed some portion of its success to its tradition of appointing a lower layer of military command (the Centurions) according to experience – a military rationality – rather than political expediency. This preserved knowledge, continuity and functionality throughout periods of political instability, which threatened only the higher ranks (the Generals and Tribunes).

To take a more contemporary and controversial example, consider decisions about whether or not to receive a COVID-19 vaccine. I would argue that we can see different rationalities in operation here: a political rationality (do it for the good of the community, or don't do it as a protest against the emergence of a 'global police state') and a prudential medical rationality.

[1] There’s complexity here since language games may form babushka dolls. The doll type is the category of morality as such. The grammar of morality, as Cavell characterises it, describes this type of doll (the morality doll). Any given morality doll may have its own language-game features and rules, and hence its own rationality. So we can ask: why is the doll-type adaptive/instrumental? As well as: why is this particular doll or layer of the doll adaptive/instrumental?

The rationality of alternative medicine

Instrumental vs. epistemic rationality

Julia Galef distinguishes between epistemic and instrumental rationality.

  • Epistemic rationality: about how you form beliefs about the world. You recognize errors and cognitive biases (e.g., wanting it to be true, or liking the consequences of it being true) and what constitutes good evidence or reasons for a claim.
  • Instrumental rationality: about how to best achieve your ends.

I like this distinction. There is clearly something right about it. Beliefs and actions are obviously different and the ways in which we judge each with respect to rationality tend to be different, too. But I wonder if they are more closely related than may be immediately apparent. I think they are.

My theoretical claim is that whether a belief is rational will ultimately be a pragmatic question, and hence will tie into instrumental rationality. (Sorry this won’t be focused on the details of alternative medicine! That was a bit of a chubby worm on the hook of boring philosophy).

Epistemic rationality pertains to procedures of belief-formation rather than the content of belief. Beliefs are rational if they are generated rationally. Beliefs are irrational if they are not generated rationally. Rationally-formed beliefs are those that are formed on the basis of procedures or methods or processes that reliably result in beliefs that survive scrutiny.

Scrutiny can come in many forms, however. So which forms should we take seriously? Sometimes it is clear because a form is or isn’t applicable. Mathematical beliefs don’t need to be scrutinized by the methods of empirical science for example. They need to survive a different kind of analytical scrutiny.

But there are often different and competing forms of scrutiny that apply at the same time. So, what if a belief survives one but not another? Is it rational or not? We could make an aggregating assessment on which a belief is rational to the degree that it survives different forms of scrutiny, but this would seem a wimpy solution insofar as it doesn't engage the crucial question of whether some forms are better than others.

Alternative medicine

Disagreements over what is rational often emerge in cases of great uncertainty. For example, health science is still emerging. There are conflicting studies and a lot is under-researched and unknown. There is also plausibly a lot of individual variation in how people respond to different diets, drugs, supplements, hormones, and so on. Given all this, it is unsurprising that we find different people advocating for different diets or lifestyle regimes, and they all seem to ‘have their reasons’. Not only is there noise and diverging results in the data, resulting in disagreements over how to weight different results and so forth, but some people feel entitled to reject or downplay the role of population-level data in forming relevant beliefs altogether. More generally, I think what shows up in this area are conflicting forms of rationality regarding what to do to look after one’s health.

Talking about ‘forms of rationality’ is imprecise, for these forms are not clearly definable or distinguishable, but I think it’s a helpful way of thinking. As I will use the term, a form of rationality is a cluster of typical approaches to solving a certain kind of problem.

Within complementary and alternative medicine (CAM), there is an endless wilderness of beliefs about how to achieve and maintain health which the mainstream medical world would regard as irrational insofar as they aren’t backed up by clinical (ideally controlled, double-blind) trials, meta-analyses, field studies, and so on.

My question is not whether CAM should be banned, condemned or denigrated. I think most liberal people would certainly agree that there's a lot of harmless complementary stuff floating around that people should be free to practise and/or pay for, especially if they do so alongside conventional treatments (e.g., meditation, progressive relaxation therapy, yoga).

My question is rather about what it is rational for individuals (and to a lesser extent policy-makers) to do and believe.

CAM advocates will sometimes recognise that their treatments and beliefs are unsupported by ‘mainstream scientific research’, at present. That is precisely why they are, for the time being, classified as CAM. But they don’t think this makes them irrational, while others do.

In their defence, some might say that relevant studies are difficult to run. In particular, it is difficult to control and manipulate certain independent variables like 'expert shamanic ritual', and even harder to blind practitioners to what they are doing. And even if you did run them and got little to no effect, one could argue that this doesn't decide the matter, because it's all so contextual that the effect is being lost in the noise of individual variation, or because the treatment only works in conjunction with an integrated treatment plan. Some individuals may respond to a given treatment while others may not, for a myriad of highly context-specific reasons. In light of this, advocates argue, you have to consult deeper principles, ideas, experience and intuition. These will have to draw on many sources and insights, because the problem is at once highly specific (an individual patient) and holistic (involving their whole life and the social and natural ecology it emerges within).

There is a huge variety of practices and treatments within CAM and what I say may be applicable to some and not others. (In some ways I am portraying some of the better tendencies within and arguments for CAM.) But I will tentatively say that the traditions within CAM broadly share a form of rationality, by which I mean a cluster of problem-solving strategies, where the problems are problems of health and wellbeing. These strategies can be distinguished from scientific ones, although they are not necessarily unrelated. Strategies inform what one should pay attention to and how to regard the (e.g. causal) connections among these things. CAM strategies often involve intricate and well-developed theories. Thus, there might be a strong 'rationale' for a claim or treatment approach even if there are no scientific studies demonstrating a robust effect. But they also almost invariably involve empirical observation, albeit usually in non-rigorous and anecdotal ways. The accumulation of experience within traditions and individual practitioners becomes the partial basis for theory formulation, criticism and development.

This experience crystallises into forms of discernment and intuition which are not always articulable, but may come wrapped in powerful convictions at the level of direct perception itself. In the same way that you know you are looking at the colour blue, someone might know that a session of prayer reduced their hip pain. At other times it is articulable in the form of general statements which approach theory, based on a convergence of background evidence from everything one knows about the world. For example, someone once asked me: 'isn't it likely that, even if we haven't detected it yet, a COVID vaccine that produces acute myocarditis in some individuals is damaging all of us to some (lesser) degree? Some people are just more vulnerable than others, for various reasons.'

If we look at specific CAM traditions – Ayurveda, naturopathy, homeopathy, kinesiology, reflexology, TCM, and energy therapies like qi gong, Reiki, Prana and Therapeutic Touch – we can get a better handle on these various forms of rationality and their family resemblances. But that is not my job here.

The question is what to do when competing forms of rationality inspire different kinds of belief-scrutiny, all of which seem applicable to the problem at hand in that there is no domain/category error. How do we decide between them?

Partly, it is a matter of context. In the context of policy development for large populations, I think we should go with empirical science and studies which deal with population-level effects. This leaves it open that there are exceptions and so forth. In that role, mainstream medical science, I would say, wins based on track record.

My basis for saying this is instrumental. Instrumentally, scientific methods get the results we are looking for at the policy level. So what is rational to believe for a policy-maker is nothing less than that which survives scientific scrutiny. And what survives scientific scrutiny is that which is rational according to the scientific form of rationality (empirical science with all its various institutions and methods such as peer-review, statistical analysis, field research methods, clinical trials, etc.) characterised by a certain ‘language game’ (statistical significance, effect size, normal distribution, reliability, validity, evidence, theory, hypothesis, etc.).

There is no ‘proof’, ultimately, for epistemic rationality that doesn’t reduce to instrumental considerations (instrumental rationality).

At the individual level, however, things are not so clear.

Take someone who spends a lot of money, time and energy on organising their life around some cluster of alternative treatments to address their health problems and improve their perceived quality of life. They aren’t wealthy and have other things they could be doing. But they really believe this is best for them and their goals – goals we share and sympathise with. Are they being irrational? Should they be doing something else?

Here the factors which go into judgements about rationality are many, and based on the context of the individual’s circumstance, including the seriousness of their health concerns, the availability and efficacy of mainstream interventions, opportunity costs, the risks involved, the types of alternative treatments being explored, and the manner in which they are being explored.

For example, we could fault them if we felt that they had formed their beliefs in an epistemic bubble, making no effort to look at conflicting research or experiences. Or if they failed to attempt or even consider different approaches in order to conduct anecdotal longitudinal self-experiments (where possible). Or were subject to obvious cognitive biases and didn't attempt to correct them (insofar as we are rational in thinking that these are biases and that attempts to correct them are effective). We could also look at all those opportunity costs and so forth. We might conclude that they are being rational if we buy Pascal's wager-type arguments: if a treatment is 5% likely to work, and the likelihood of serious side-effects from taking it is <5%, you might as well take it if the payoff is sufficiently high (e.g., curing a disease, preventing an illness, avoiding death, substantially improving quality of life). As with the religious wager, one problem is that the bar for 'buying a ticket' becomes so low that the principle advises you to purchase a great many of them. This is especially true given that you can easily find or run studies showing small positive effects for just about any chemical compound.
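The wager-style arithmetic above can be made explicit. Here is a minimal sketch in Python; every probability, payoff and cost below is an invented illustration, not a clinical estimate.

```python
# Hedged sketch of the wager-style reasoning above.
# Every number here is an invented illustration, not a clinical estimate.

def expected_value(p_benefit, benefit, p_harm, harm, cost):
    """Crude expected value of trying a treatment."""
    return p_benefit * benefit - p_harm * harm - cost

# A 5% chance of a large payoff can outweigh a <5% chance of modest harm.
single = expected_value(p_benefit=0.05, benefit=100.0,
                        p_harm=0.04, harm=20.0, cost=1.0)
print(single > 0)  # True: the wager says buy the ticket

# The catch: with so low a bar, many 'tickets' qualify, and the fixed
# costs (money, time, attention) accumulate across all of them.
fifty_tickets_cost = 50 * 1.0
print(fifty_tickets_cost)  # 50.0
```

The 'many tickets' problem is visible in the last two lines: each individual wager can look positive while the summed costs of taking every such wager become substantial.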

In sum, I think it is not clear that they are ipso facto irrational. And ultimately what we should recognise is that in a given circumstance there is no single rational conclusion (in philosophy we call this view permissivism). I see the rational/irrational distinction as marking a boundary between those positions which are not supported by good reasons or some plausible rationale, and those which are.

But there are many reasons, and we are not compelled to adopt them all, nor to weight them in any particular way; nor are they conclusive in the sense of determining one specific conclusion. We can always question how rational someone is being by pointing to our reasons for a different conclusion, or by undermining their reasons, but at a certain point we must, I think, turn from arguing that our interlocutor is irrational to simply persuading them towards a different position.

I suspect people are inclined to reject this permissivist attitude because they have in mind a geographical metaphor. Rationality is about getting from A to B in the most efficient manner possible. Surely there is one most efficient path, and hence one rational conclusion/strategy.

But this presumes a number of things. First, that our goals and circumstances are identical. Yes, if we had identical goals, characters, bodies, friends, environments, and so on, uniqueness (the view that there is only one rational conclusion) would be more plausible. But this is never the case (otherwise we would be the same person). Insofar as we differ, the 'most efficient' path will differ. The map will be covered in bumps and valleys, objects, obstacles and various textures, and we will have different attitudes about those things and different strengths and weaknesses with respect to navigating them. We try to imagine ourselves in another's position, but this is always imperfect. Moreover, judgements about rationality are not purely descriptive; they are normative – exhortations to transform one's self and one's goals themselves. We can rejigger the complex matrix of values and goals that informs what is rational for an individual.

Pushback: Relativism!

Let's say that applying the scientific method in medicine turns out not to help us do what we want, which is to improve health outcomes at the population level. Suppose we design policies and distribute drugs and other treatments according to the scientific consensus that emerges from applying typical scientific methods, but over time we notice that our health outcomes are declining. Has this form of rationality failed us, instrumentally? If so, should we regard science as irrational?

What do we do? We look for alternative explanations and confounds. In other words, we ask whether it is really medicine letting us down, or if there are other forces that explain the decline in health (forces which would wreak even more havoc in the absence of mainstream medicine).

It's hard to imagine medicine letting us down systematically in this way. After all, if a clinical trial shows a population-level effect, why wouldn't it have an effect outside the trial? It might not, but then the most intelligible reason would be ecological invalidity; i.e., a discrepancy between test conditions and application conditions.

I would argue that this is a further problem solvable within the scientific paradigm of rationality. But I take it seriously. The replication crisis in social psychology suffers from the same problem. The causal networks involved in outputting social behaviours are complex and context-specific such that even population-level effects are not easily generalisable.

For example, even if you persuasively demonstrate in one study that positive-reinforcement basketball coaching works for some team(s), that doesn’t mean it’s going to work for all teams. You can’t write down a piece of coaching advice and bank it just like that. Arguably medicine is less context-sensitive insofar as the relevant part of our physiology is typically more stable across populations and individuals, but as I noted, there is bound to be a significant amount of recalcitrant variation.

The most obvious solution here would be to power up our studies. If your trial population is millions of people living in the same conditions that will (roughly) obtain in the context of eventual application (the real world), then there is little doubt about ecological validity. Of course, the world may change. We should always be on the lookout for that. But this is just to call for ongoing testing and retesting.

But this powering-up approach disguises underlying variation in outcome. To take the coaching example, perhaps if we apply a blanket policy for all basketball teams that they implement regime A, that would result in superior outcomes to a blanket implementation of regime B. Sure. But if subset variation exists – i.e., if some teams respond well to regime A but others do not – that leaves a lot of potential gains on the table.
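The subset-variation point can be made concrete with a toy calculation. The teams and scores below are invented assumptions, not data from any study:

```python
# Invented outcome scores for two coaching regimes across team subgroups.
# The numbers are illustrative assumptions only.
teams = [
    {"subgroup": "veteran", "A": 8.0, "B": 6.0},  # responds well to regime A
    {"subgroup": "veteran", "A": 8.0, "B": 6.0},
    {"subgroup": "rookie",  "A": 5.0, "B": 7.0},  # responds better to regime B
]

mean_A = sum(t["A"] for t in teams) / len(teams)  # 7.0
mean_B = sum(t["B"] for t in teams) / len(teams)  # ~6.33
print(mean_A > mean_B)  # True: a blanket policy of regime A wins on average...

# ...but a subgroup-targeted policy beats both blanket policies,
# recovering the gains the aggregate comparison leaves on the table.
targeted = sum(max(t["A"], t["B"]) for t in teams) / len(teams)  # ~7.67
print(targeted > mean_A)  # True
```

The aggregate comparison (A beats B) is true and yet hides the fact that the rookie subgroup would do better under B; the powered-up blanket study answers the wrong question for that subgroup.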

Looked at from the policy-maker’s point of view, the question becomes a technocratic one: how many statistically significant subpopulations can be identified for population-specific interventions in order to minimise costs relative to gains?

Looked at from the individual's (or team's) point of view, the question is perhaps more complicated. For any intervention targeted at a subpopulation level larger than 1, you can wonder whether that intervention will be 'right for you'. So what is the rational thing to do?

It seems to me that this is not a simple matter of 'going with science'. If you have no good way of identifying whether you are an exception to the rule (or no good reason to think that there is a profound degree of subset variation), then you should go with the scientific generalisation (at whatever level). But this is precisely where rationality as a track-record of experience comes in, and that is a wider notion than scientific rationality as described above.

And it’s not just a matter of individual experimentation going forward. That is, you can’t always say: try A, see if it works! For, sometimes, the stakes are simply too high to experiment on yourself or your team, or workplace, or whatever. You must scrutinise both the extent of variation, and whether you are likely to fall into some category or other. How do you do that? It will depend on the area you are looking at.

But what I am presently convinced of is that ‘go with the science’ is, in the end, not a uniformly wise or even contentful exhortation. Rationality is, again, broader than that. There are intricate tradeoffs here, such as the bias-variance tradeoff famous from AI (see the overfitting problem). Rationality is, in the end, all about strategies for solving problems. And I haven’t even started to talk about the messiness typically associated with ‘the problem’ itself: what is the goal, precisely? Often, we can’t answer that question in any definitive way. There is a recursive interdependence of goals within which rationality emerges, so that although the goal-posts may not be clear, and may shift over time, we generally know whether we are progressing or not. Wisdom is about knowing what progress means, paying attention to when it is occurring, being open to alternative strategies when it is not, and devising such new strategies as we learn.

Final notes

I should note that the mainstream vs. alternative medicine dichotomy I have appealed to is itself blurry. Not only are some areas of medical practice on the borderline (e.g., various drugs and therapies that haven’t received approval from key regulatory bodies and exist in a legal grey area, such as the use of peptides for cosmetic and injury-recovery purposes); we also all know that in practice doctors are not robots who only look at the science. A good GP incorporates this wider rationality or wisdom. They use their intuition and experience (i.e., ‘anecdotal evidence’), all the more effectively the more they know about their patients. This is why long-term relationships with GPs are so valuable.

Artificial Intelligence and medicine

The question going forward in this age of big data and AI is whether sufficiently sophisticated algorithms can bridge the gap between science and intuition through the sheer quantity of data available. The more information is recorded on every aspect of our lives, the more reliable AI health predictions will be. Understanding, theory, and science itself are potentially side-lined in this development. All we need are correlations to generate predictions. Not just about you – your job, your age, height, weight, blood pressure, activity levels, heart rate, disease profile, psychological profile, race, etc. – but about your community, social networks and your recent contacts within that. All of this is potentially highly useful information for diagnostic purposes. Maybe the problem was just that we were forced to choose between a highly fallible attempt to process mass contextual information through intuition (non-science) and the more reliable, but also limited, narrow rationality of science. AI potentially turns this into a false dilemma.
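A minimal sketch of this correlations-without-theory style of prediction is a nearest-neighbour classifier: it diagnoses purely by similarity to past cases, with no medical model at all. The records, features and risk labels below are invented for illustration (and a real system would at least rescale the features so that no one measurement dominates the distance):

```python
import math

# Invented patient records: (age, BMI, resting heart rate, systolic BP)
# paired with a made-up binary risk flag.
RECORDS = [
    ((34, 22.1, 62, 118), 0),
    ((61, 31.4, 81, 152), 1),
    ((45, 27.0, 74, 135), 0),
    ((70, 29.8, 88, 160), 1),
    ((29, 24.5, 66, 121), 0),
    ((58, 33.2, 79, 149), 1),
]

def predict(features, k=3):
    # Pure pattern-matching: vote among the k most similar past cases.
    nearest = sorted(RECORDS, key=lambda r: math.dist(r[0], features))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes > k / 2 else 0

print(predict((66, 30.5, 85, 155)))  # resembles the high-risk cases
print(predict((30, 23.0, 64, 119)))  # resembles the low-risk cases
```

Nothing in the code knows what blood pressure *is*; add enough features and enough records, and the hope is that raw similarity starts to approximate the contextual judgement of an experienced GP.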

Even if it does turn out to fill a certain very useful role in the medical system, I don’t think human judgement and understanding will be displaced by AI. For one thing, the placebo effect, and the effect of knowing that someone cares about and understands your situation (at a human as well as a medical-scientific or theoretical level), will remain important. For placebos to work, we will have to trust AIs, and we’re a way off from plausibly ascribing understanding to them. But there are also political questions here. We could use medical AI simply to bulletproof a sinking ship. Advanced treatment may function as life support for a badly damaged society that needs deeper structural reforms.

Žižek on Ideology

Like most people, I only know Žižek from his talks and interviews on YouTube and his writing in mainstream publications. Also like most people, I find him to be engaging and amusing, but also confounding. Recently, therefore, I have attempted to discern the logic underlying the reliable-but-still-somehow-obscure pattern of his tantalising ejaculations. Here is the result of that attempt. I hope it might help someone.

The part of his work that I want to focus on is his theory of ideology. Žižek made the critique of ideology cool again. But it is important to note that by ‘ideology,’ he does not mean some political doctrine or worldview that we explicitly believe. Instead, he has in mind something like an orientation that guides our practice regardless of what we explicitly believe and serves some larger power structure. This conception of ideology may be called Marxist insofar as it pertains to how some cultural configuration emerges out of and strengthens the dominant (e.g., liberal-capitalist) order, like how a plant binds together the very soil that nourishes it. On the other hand, his conception of ideology may be called psychoanalytic – and specifically Lacanian – insofar as it concerns the operation of ideology at the psychic, and often unconscious, level. In other words, how does ideology function, psychoanalytically? What psychic structures does it exploit?

Very broadly, ideology can be thought of as that obscure part of the social order which helps preserve the whole system by filling in its gaps, taping up its tears and bleaching its stains. Žižek is interested in making the political and social world intelligible by bringing this hidden ideology (and hence the gaps, tears and stains) into the light. The aim is not just clarity, but liberation. For he thinks we are trapped in various ideological contradictions and illusions.

It is not enough, as we shall see, to simply gain knowledge, however. Žižek’s prescription is to think rather than act. But what he means by thinking is a radical thinking that is only possible when we cease to act – or rather, I would say, react. There is too much critique these days that is, according to Žižek, simply knowledge that creates reaction or “pseudo-activity.” This kind of reaction is, perversely and ironically, in fact highly ideological and hence counterproductive. If that doesn’t make sense right now, that’s fine. It will, I hope, become a little clearer by the end of this article.

How does ideology preserve the social order? I think we can talk about at least two distinct, albeit related, mechanisms. First, the accommodation of popular frustration with society through cynicism and “inherent transgression.” Second, the presentation of the social order – its ideas, practices, norms, and so on – as natural, neutral, objective, and unquestionable.

Accommodation

Ideology accommodates and neutralises discontent. To introduce this idea, I will draw a parallel. There is an old socialist worry that capitalism keeps itself chugging along by throwing crumbs to the workers every now and then. The idea is to pacify them just enough so that they don’t rise up and take what’s theirs. In this view, the welfare state is part of the ‘immune system’ of capitalism. The thought is that in granting a little of what we want, the powers that be diminish the likelihood of getting a lot of what we want.

My point here is that we can think of what I am calling Žižek’s thesis of ‘ideological accommodation’ as the cultural-psychic equivalent of welfare. Both are crafty means by which the system protects itself by offering concessions to its subjects, disguising the overall nature and effect of power in the process.

Ideological accommodation includes cynicism and inherent transgression. Cynicism is when you have explicit, negative beliefs about the motives and modus operandi of some system, practice, individual or belief-system. Žižek’s point is that being cynical about capitalism, parliamentary democracy, liberalism, advertising, and so on, is compatible with behaving in a way that is fully consistent with those things. Hence his phrase, ‘they know it [is bad], but they are doing it anyway.’

We see cynicism weaponised as a tool of ideology all the time. You may have noticed how the advertising industry and politicians have started using irony and self-reference as part of their rhetoric. With an obvious, self-assured wink, they admit that they are playing a game with you, but it works anyway. Knowing that you are manipulated does not mean that you are not in fact manipulated – though perhaps not always in precisely the way you thought you were. In fact, such rhetoric may be even more effective than old-fashioned dissimulation. Why? Žižek’s main answer has to do with what he calls “ideological disidentification”, which can be seen as a manifestation of “fetishistic disavowal”, in Lacan’s terminology.

When you proclaim that the political parties you are voting for are self-serving and feckless, you can flatter yourself with the illusion of personal freedom of thought. In Žižek’s language, you disidentify with ideology on a superficial level that doesn’t impact your actual behaviour. You can then say to yourself: ‘I’m no dupe. I know this is a tokenistic gesture in a game that’s basically rigged and sending us all to hell… Still, what are you going to do?’ You do the ideological thing (participate in a limited way in parliamentary democracy) all the while thinking in a way that can pass as transcending that ideology. Our dissatisfaction with the social order is in this way accommodated: there is an abundance of awareness of, and talk about, how bad things are, but this simply allows us to go on as before without worrying that we are being naïve or brainwashed.

Inherent transgression is another (deeply related) ideological tool to accommodate dissatisfaction. Transgression is when you disobey a rule. “Inherent transgression” is Žižek’s name for when a transgression is implicitly solicited or sanctioned by the very system that explicitly endorses the rules it tempts us into breaking. In this way, the transgression is part of the system and its survival, rather than a genuine opposition to it. This sounds paradoxical unless we think, as Žižek does, of a system as containing more than its explicit, surface-level injunctions, ideas and norms.

Note here that Lacan thinks we all have an appetite for the jouissance or enjoyment derived from violating rules. Žižek applies this politically, saying that because we get a kind of ‘enjoyment’ from protesting against the system, the system can safely allow us to protest, knowing that on some level we actually want it to continue so that we can go on drowning in the pleasures of venting and congratulating ourselves on being free-minded radical subversives.

This is why Žižek’s catch-phrase ‘I would prefer not to’ is so powerful. Despite appearances, it should be understood as a kind of revolutionary slogan, albeit a “negative” one. Indeed, Žižek advocates it precisely because it represents “pure negativity” – it allows one to adopt a position outside of the system by not reacting to it. In other words, ‘I would prefer not to’ expresses the refusal to define oneself in opposition to power, or to enjoy actively doing something against it on its terms, and hence to capitulate to it. Nor does it invite the political danger of jouissance – i.e., the danger of becoming invested in power through being invested in the enjoyment of opposing it.

Inherent transgression provides a great example of the way in which, according to Žižek, ideology fills in the gaps, pitfalls, inconsistencies and contradictions in the Law, that is, in the symbolic order which creates, organises and sustains the social order (some system, institution or way of life).

To use one of Zizek’s anecdotes from his time in the Yugoslavian military, the explicit part of the Law (the official rules) forbade consumption of alcohol. This prohibition makes perfect sense from one point of view. It facilitates the operation of the military by improving the safety and effectiveness of troops on duty. It may also help to project the military’s official image as ‘clean, mean and relentlessly professional’.

But this prohibition may also be problematic for the sustainability of the military ‘system’ as a whole, because alcohol is an important part of bonding (hence solidarity and combat effectiveness) and allowing individuals to let off steam in a stressful environment. These things are not trivial. They are part of the system’s healthy functioning. And this is where ideology steps in: it props up the overall system by soliciting the transgression of the official prohibition.

As Žižek relates his experience, it was expected that you would violate the rules by drinking with your mates. Those who didn’t were regarded as foolish outcasts. Seen in this way, getting drunk on the sly is not a protest against the military and its rules. It is an ideological behaviour: one that supports that system and causes you to become more invested in it. The same could be said for all kinds of activism, attempts at cultural critique and faux resistance, according to Žižek.

The second mechanism by which ideology supports the social order is via the rhetoric of neutrality. When everybody is cynical, it might appear as if we have achieved a post-ideological society. It is true that few of us find ourselves waving the flag for liberal democracy, or handing out pamphlets for a socialist alternative, or going to neo-Nazi rallies. But this just shows that we have a limited and arguably ideological conception of ‘ideology’. We are in fact ideological, in the sense that our behaviour is structured around norms and ideas that support the current system. Žižek calls the fantasy that we have transcended ideology the “archideological” fantasy of our age. And this fantasy just makes the ideology stronger. Why? Because there is this idea that if a position, policy, belief or action can be presented as hanging above or outside the fray of politics, detached from any particular viewpoint whatsoever, it gains a special authority. It is then taken to reflect a neutral, common-sense, or objective understanding of the world, unbiased by political affiliation or the passions of ‘tribal’ commitments, etc.

Many claims that get thrown around in politics that are taken to be simply descriptive and neutral are in fact ideological, for Žižek. The clearest, most intuitive examples may be drawn from other times and cultures. For example, when people in medieval England said ‘Charles is the King!’ this could be taken as a statement of fact. Indeed, Charles is (or was) the King. But it is also, and more fundamentally, ideological, in that it is only because people affirm such statements that he is the King. ‘Charles is the King!’ is what we might call a performative utterance in speech act theory – it makes itself true by doing and not merely describing something. Žižek wants to apply the same logic to political discourse today, although this logic is harder to see insofar as we are confronting our own ideological illusions – illusions about what reality is objectively like. Think of federal budgetary constraints, free contracts, nationhood, and so on.

It is only by understanding the scope and power of ideology and refusing it with a pure negation (“I would prefer not to”) that we can break free of its “coordinates.” We are then finally in a position to construct a new symbolic and social order out of that negative space, a space exterior to the system it finally overcomes.