Extra-linguistic facts

I tried to say something uncontroversial, and perhaps it is commonsensical in spirit. But there are few interesting claims which can be articulated in more than a cursory manner without becoming contentious. Even that one, probably.

Here I try to account for extra-linguistic facts. In particular, I ask whether there is good sense to be made of the notion of facts which don’t depend on the way we use words. In the background are various theories such as social constructionism and deconstruction.

What are extra-linguistic facts? Two intuitive responses seem to be in tension. On the one hand, we want to say that they are the things that linguistic symbols point to, refer to, stand for, denote, represent, etc. On the other, we don’t want to say that the entities in question are themselves ‘facts’. Facts are propositional – they involve things like quantification, at a minimum of complexity. For example, a cat isn’t a fact. “A cat exists” is a fact. “Jarred the cat exists” is a fact. But put this way, facts look essentially linguistic. It seems impossible to distinguish a fact and the statement of a fact (and statements are unavoidably linguistic). In order to make such facts extra-linguistic we might conceive of them as structured propositions. Structured propositions are “complex entities, entities having parts or constituents, where the constituents are bound together in a certain way” (King, 2011). They are, in other words, arrangements of things in the world (objects, properties, relations, regions, times… exactly which entities you want to talk about will be determined by your ontology). So, a cat could, apparently, constitute a structured proposition. But is it a fact, too? Perhaps all facts are structured propositions, but not all structured propositions are facts. Or perhaps not! I’m not sure. Maybe there is a one-to-one correspondence between structured propositions and facts.  In that case, the cat would be a fact. That looks weird. Still, it seems to get us the result we want: there are extra-linguistic facts, and this is what they are like.

But doesn’t this approach smuggle in language by describing the entity constituting the proposition (i.e., the cat) in a certain way (i.e., as ‘the cat’)? Whether “there is a cat” is true depends on whether it is a fact that there is a cat. The question is whether the latter is extra-linguistic, i.e., doesn’t depend on linguistic matters (what words mean). I think it is. Changing the meaning of our words doesn’t change whether or not it is a fact that there is a cat. It can only change what ‘cat’ means. If we change the meaning of the word ‘cat’ such that it refers only to dogs, then the sentence ‘there is a cat’ might no longer be true, because while there is a cat (the animal), there may be no dog – and a dog is what the sentence would then require. We have only been given the fact that there is a cat.

Notice that when I use the word ‘cat’ in stating the fact, I am using it with its standard meaning. I could just as well state the fact using the word ‘dog’ given its just-now stipulated meaning. None of this changes what I have stated, which, recall, we are supposing is a structured proposition, i.e., the thing in the world, the cat.

Remember the old story about Abraham Lincoln. Lincoln is said to have asked: “How many legs would a dog have, if we called the dog’s tail a ‘leg’? Five? No, calling a tail a leg don’t make it a leg.” We can distinguish two claims that might be made here. There is the claim “If we called a dog’s tail a ‘leg’, then it would be true that dogs have five legs”. This is the claim that Lincoln denies, and rightly so. The sentence is false. Dogs would not have five legs. Here, the word ‘legs’ is used in the second clause, viz., “dogs have five legs”. But in a variant of the example, the claim might be made: “If we called a dog’s tail a ‘leg’, then it would be true that dogs have five ‘legs’”. This claim is true. Here, ‘legs’ is being mentioned. The sentence is merely claiming that dogs would have five ‘legs’, under the new definition of ‘leg’. One can consistently deny the first sentence while endorsing the second. A dog can have, as it were, five ‘legs’ and four legs, if we keep the use/mention distinction in mind, and remember that when ‘legs’ is mentioned here, it is given the newly stipulated meaning.

The only point I wish to make in concession to the ‘linguistic constructionist’ (a vague label) is that the facts we recognize, or are inclined to see, are those which can be stated in our language and conceptualized in our ‘conceptual scheme’.

What I am suggesting – and this is an empirical claim along the lines of the so-called weak Sapir-Whorf Hypothesis – is that if indeed we called tails ‘legs’ and had no word for tails, we would be less likely to recognise the fact that dogs have only four legs. And if we had no concept of <tail> then such a recognition may not even be possible. Possibly, we would not be able to make that discrimination*. In the life-world of the person without the vocabulary or concept of <leg> as we understand it, legs would not exist in the everyday sense; they would not be salient.

I should quickly emphasize the distinction between words and concepts. Obviously it is possible to make a given discrimination without having the vocabulary for it, otherwise nonhuman animals would have a hard time getting on in the world. But words and concepts appear to be intimately related in human psychology; learning and using language helps shape and facilitate the deployment of various concepts, and how we use words is shaped by our prior ‘conceptual scheme’. So having the words for a given discrimination makes it more likely that we will deploy the relevant concept and hence make that discrimination. Also note that ‘discrimination’ in the relevant sense is not mere perceptual discrimination. Of course in some sense any creature with a comparably powerful visual system will perceive a difference between legs and tails. But I am interested in something stronger: a conceptual discrimination. A discrimination is conceptual if it distinguishes two or more types of thing. To conceptually discriminate between a tail and a leg is to register the existence of two types of entity, associated with (or constituted by) various properties and conditions. Thus, conceptual discriminations are not merely numerical discriminations. An alien visitor might distinguish a chimpanzee and a human standing side-by-side in the minimal sense of observing that there are two animals standing next to each other, but fail to make the conceptual distinction between humans and chimps. They might only have the very coarse notion of <humanoid animal>.

* Thanks to Noon for bringing various issues here to my attention. This is one of the more contentious claims I make. Part of what is problematic about claiming that concepts are necessary to make the relevant kinds of discrimination is that ‘concept’ is ambiguous. I have tried to avoid the difficult matter of giving a theory of concepts – a subject on which there is of course an enormous literature. To do justice to the issues here, such a theory would have to be given. All I would say for the moment is that I am working with a fairly generous concept of concepts.

Noon suggested that a person could notice a Clydesdale in a lineup of horses and register it as of a different type without having the concept <Clydesdale horse> or even <draft horse>. Several points. First, I think lacking such concepts (and/or words) would make this less likely. Second, were it achieved, the ability to discriminate between the horses in the lineup might depend on having a relatively narrow concept of <typical horse> which is used to eliminate the Clydesdale. Third, if we stipulate that the person has never seen horses, and has never developed concepts directly related to horses, and yet manages to make the discrimination, we run up against a sort of paradox. If we insist that the person forms inchoate concepts in reaction to the information they are exposed to, and that is how they are making their discrimination, the question is: how do they form those concepts, if not on the basis of a prior discrimination? We need concepts to make discriminations, but we need discriminations to form concepts in the first place. I’m sure empirical psychology would have a lot to say here. I would only gesture towards the possibility that base-level perceptual discriminations begin a process of concept-construction which draws in various other related but more mature concepts, forming primitive concepts and allowing very rough conceptual discriminations.



King, J. (2011). ‘Structured Propositions’, in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy.


Verbal disputes in metaphysics

In his 2013 article ‘Charity to Charity’, Eli Hirsch defends the claim that many disputes in metaphysics are verbal. He gives the following example of an “absurd” line of argument to illustrate the nature of verbal disputes:

(O): “Ordinary people appear to have a misguided theory of what it is to open something. The basic structure of the problem concerns cases in which an object x is appropriately moved or altered in order to gain some form of access to an object y. When one appropriately manipulates the lid of a box to gain access to the (inside of the) box, folk-theory says that one has opened the box. But when one appropriately manipulates the door of a room in order to gain access to the (inside of the) room, folk-theory says that one has opened the door. This is evidently untenable. Folk theory cannot explain the nature of opening something. Any such explanation must imply either that one has opened both the box and the room, or that one has opened both the lid and the door. So which one is the correct alternative to folk-theory? Perhaps one can do no better than an educated guess. But one intuitive clue might be that even ordinary folk will sometimes be willing to say that the room has been opened. This perhaps favors the theory that both the box and the room are opened, rather than that both the lid and the door are opened. Consider, furthermore, the extreme intuitive oddity of saying that one has opened one’s eyelids. Perhaps the least costly theory is that when an object x is appropriately moved or altered in order to gain some form of access to an object y, it is y that is opened, not x. It follows that when you remove the cover of a bed to gain access to the sheet, you have opened the sheet. Mutatis mutandis, when you remove your hat in order to scratch your head, you have opened your head or your scalp. And when you unzip your fly, …”

Hirsch seems to be suggesting that metaphysicians over-think their way into speaking new variants of ordinary language. As a consequence, when they turn back and argue with those whose language they have left behind (ordinary English speakers, say), or turn aside to those who have developed alternate metaphysical idiolects, they end up arguing “verbally”.  For Hirsch, a verbal dispute is one in which both parties should be interpreted as being correct, according to their way of speaking. Thus, he claims, “perdurantists, four-dimensionalists, mereological essentialists, organicists, nihilists” and so on might all be uttering truths in their own metaphysical idiolects. He insists that this is not to say that there are no substantive disputes in the vicinity. It is just that, in many cases, we have not yet found good reason to think there are, and that present disputes are non-substantive, i.e., do not pertain to some matter of fact over which parties genuinely disagree. If a non-verbal dispute is to be found in such debates, it resides at the meta-level, when rival parties contend that their way of speaking corresponds to ordinary English. When they do so, they are said to be doing ‘revisionary metaphysics’.

In the above example, the advocate of (O) would contend that, in order to speak correctly, we should talk about “opening rooms”, rather than doors, just as we open boxes rather than lids. The rival revisionary metaphysician who thinks that we should speak of opening doors and lids, thinks that (O) gets the truth-conditions of ordinary English wrong. The truth condition for “Joe opened the door” is that Joe manipulated x (the door) in order to gain access to y (the room) – it is not that Joe manipulated the surface of the door in order to gain access to the door, as (O) would presumably maintain.

Others, like Hirsch, who reject (O) claim that (O) misrepresents the truth-conditions of ordinary English in an absurdly overintellectualized fashion, and pit (O) against a ‘non-revisionary’, common-sensical understanding of English.

In his book ‘Quantifier Variance and Realism’, Hirsch aligns himself with ordinary language philosophers a la the later Wittgenstein. He regards his critical attitude towards the ‘mereologist’ (i.e., the compositional universalist or liberal who thinks that there are many more objects in the world than are commonly supposed) as under-girded by a defence of ordinary language, rather than an expression of ‘metaphysical realism’. In terms of the above dialectic, the metaphysical realist would be the one who repudiates (O) in favour of an alternative but equally deviant ‘idiolect’. These types of philosophers are playfully caricatured by Hirsch as bickering lawyers: “They descend upon us as a legion of ontological lawyers, their briefcases overflowing with numerous arguments and counterarguments, a case for one entity, a case against another. Questions that appear to be trivial beyond the pale of conversation are somehow converted by them into occasions for deep theoretical debate. “Metaphysical realists” are afflicted with a kind of hyper-theoreticalness.”

But it is easy to simply ridicule metaphysics. It is harder to come up with a principled rejection of it, and still harder to find some good replacement for it – a way of proceeding to think about the world at a general level in some illuminating manner free of the defects of ‘metaphysics’. Wittgenstein sees the urge to seek any kind of systematic philosophical picture of the world as itself the problem, and so rids us of the need to find a replacement. But does he meet the first challenge: does he offer a principled and consistent rejection of metaphysics? That is too big a question to answer here. I will, however, address Hirsch’s attempt.

Hirsch claims that metaphysicians take themselves to be speaking English; to be uttering sentences that are true (or false) in English. He argues that many of those sentences are false in English; they are only true in variants of English which give variant meanings to the (e.g.,) existential quantifier. That is, they are true only in languages where ‘there exists…’ has different truth conditions than in ordinary language. All the subtle and often compelling arguments advanced in favour of these metaphysical sentences are feckless next to the fact that, in English, they are commonly regarded as false.

This is the point I wish to query. If an ‘ontological’ proposition – one concerning whatever it is that exists – is agreed to be true among English speakers, that apparently implies that those speakers have an ontology. They take some things to exist, and others not to exist. This in turn suggests that there is a substructure of belief giving rise to that ontology – a substructure composed of (some combination of) ‘ideology’, ‘philosophy’, ‘science’, ‘faith’, and so on.

Two points: (1) this substructure need not be shared across all English speakers, and (2) it need not be internally consistent. Indeed, it would be highly surprising if it were uniformly shared, or consistent. As a result of (1), quantifier variance might be present within and not just between languages, so that different speakers of English might ‘verbally disagree’ about matters of ontology. The response here would probably be to fine-grain and formalize the notion of a ‘language’. In that case, standard English as a linguist might categorise it would not be a single language in the philosophical sense. That’s fine. Perhaps more seriously, as a result of (2), it seems like philosophers may not be misguided in questioning the platitudinous ontological claims of ordinary folk, and investigating the underlying logic (or lack of it) within their implicit epistemic ‘substructure’. We are doing this all the time, and that is part of how language (and thus the meaning of such things as the existential quantifier) changes over time and across space.

It is folly to regard ordinary language as orderly and systematic. It is, to be sure, a complex and unruly animal. The late American lexicographer Philip Gove described it thus:

“It may be observed that the English language is not a system of logic, that its vocabulary has not developed in correlation with generations of straight thinkers, that we cannot impose upon it something preconceived as an ideal of scientific method and expect to come out with anything more systematic and more clarifying than what we start with: what we start with is an inchoate heterogeneous conglomerate that retains the indestructible bones of innumerable tries at orderly communication, and our definitions as a body are bound to reflect this situation.”

But it seems just as quick and dirty to regard it as incorrigibly anarchic. Ordinary language evolved as part of the ongoing project to understand and organize the world, or at least our representation of it.

But there is an alternative way of viewing things. Earlier I said that ordinary statements involving existential claims imply an ontology. That might not be right. Perhaps the sentences of ordinary folk are only insights into ‘a confused underlying ontology’ in the sense that they reveal the meaning of words. That is, they reveal something about the widely shared conceptual maps that speakers are walking around with. If the concept of a <hole> appears in a language (as expressed by the word ‘hole’) as the empty space in a perforated object, and the concept of the existential quantifier involves truth-conditions which stipulate that when there are external perforated objects, there are holes, then there are holes. This is revealed by the fact that ordinary folk, perceiving such objects in their environments, assent to the sentence ‘there are holes’. The metaphysician who comes along and rejects the existence of holes is simply mistaken, regardless of what arguments they proffer in favour of their view, according to Hirsch. Is that fair? For example, if there are parts of language and other commonly asserted sentences which appear to be inconsistent with the claim that ‘there are holes’, do we thereby have grounds for saying that in fact, in English, it is not true that ‘there are holes’? Perhaps we would tend to agree that ‘in order for x to exist, x has to be made of stuff’, as indicated by our behaviour concerning other sentences like ‘my bag isn’t in the boot’, ‘Confucius didn’t exist’, ‘fairies aren’t real’, and so on. Does this present a challenge? Do Hirsch and others who follow his approach even have to deal with these problems, i.e., those associated with making various parts of speech consistent? (It is plausible that such consistency could be demonstrated, but even if it couldn’t, is that a problem?).

I don’t think they do have to deal with this problem, since they do not take the ordinary language they wish to cleave to as embedding an ontology in the sense of a set of metaphysical claims about what really exists. If they did, the burden would be on them to maintain that this ‘commonsense ontology’ possesses certain theoretical virtues – consistency, explanatory power, utility, and so on – an attribution which could be disputed. That would be a very unattractive thesis to maintain. Instead, observations about ordinary language serve merely to illuminate semantic facts – facts about the meaning of words – rather than facts about a correct ontology.


Self-deprecating humor: its value and achievement

I see self-deprecating humor as a virtue in most circumstances. Insecurity is one explanation for the inability to regard one’s own failings and standing and life with humor. When someone is truly insecure, there is emotional pressure to avoid looking at the realities that trouble them. A fortiori, there are forces working against their being able to see the potential comedy associated with what troubles them, and to publicly expose it. But taking oneself too seriously due to insecurity is self-defeating, because it only preserves the ego as long as everyone else also takes one seriously, and refrains from exposing those insecurities or the phenomena to which they pertain. Intentionally or not, others often find it hard to avoid producing such exposure; truth has a way of getting in the way. In fact, part of the discomfort felt in being in the company of such people is due to the dangling threat of their ‘exposure’. Conversely, the effect of self-deprecating humor is to help everyone feel more at ease, including the individual exhibiting it. Moreover, a dash of the stuff allows one to get away with quite a lot; various oddities, norm violations and other supposed defects of body or character may be forgiven through candid humor and levity. Or, if the qualities in question are not violations needing forgiveness, they may simply be accepted rather than avoided, as they would be if they continued to produce the aforementioned atmosphere of anxiety.

Of course, there is a balance you have to strike. Too much self-deprecation can be annoying and indicate pathological self-absorption. And even if the latter becomes involved in the self-deprecation, in a self-reflexive manner, it can still be annoying. There is a limit to the exculpation that public honesty and humor grant to one’s defects. But judicious self-deprecation is priceless – not only does it dissolve tension, but is positively attractive.

In this respect, I think of a charming older gentleman I know. His charm is partly due to his capacity for self-deprecation. He refers to himself and friends as “old fuckers” and laughs that “young people don’t want to hear these old stories – you can go if you want!” This contrasts with someone who might also be ‘old’ but is so insecure about it that they refuse to acknowledge it, let alone joke about it. Even if those in the company of such a person are not themselves prejudiced against the elderly, they may become uncomfortable on account of the awkwardness of navigating the issue.

Being aware of one’s flaws and peculiarities is not sufficient for having humor regarding them. One also needs to adopt a certain light touch towards them (which requires acceptance in some sense), as well as a certain comedic perspective, which might simply be the perspective of an outsider looking at oneself more objectively. This actually requires quite a sophisticated leap of imagination. Indeed, to begin with, it may be a cognitive effort. But I think the insight it can bring is invaluable, both intrinsically and insofar as it unlocks all the benefits of self-deprecating humor.

Now for some caveats and second thoughts.

First, although in general I think it is healthy to extend the reach of humor as much as possible, it is not the kind of good whose value is ‘contextually insensitive’. There are two ways in which self-deprecation may lack value.

In the first place, there may be an inner ring of concerns about which one is justifiably unwilling to make fun. The boundaries of this ‘inner ring’ vary by individual and circumstance.

In the second place, self-deprecating humor is not appropriate when the defects it makes light of are disrupting one’s ability to fulfill serious obligations. It is not just that humor sits uneasily with matters of moral or professional importance, it is the fact that this type of banter (if it has any truth in it whatsoever) is self-disqualifying in the most practical and direct way possible. In this sense, self-deprecation is for times when one is free from obligation, or where the defects singled out are clearly irrelevant to one’s obligations.

Second, the above gives a moralistic gloss to self-deprecation. But perhaps we can see it more ‘pragmatically’ as a feature of personality. Some people can be cool with their failings and not take themselves too seriously, but lack ‘the funny bone’ entirely. I think this would be unusual, but possible. If one lacks humor in general, they will lack it with respect to themselves, too. But their incapacity for self-deprecating humor should not be seen as a defect. At most, we can say that they should be able to deprecate themselves humorlessly – which brings up the subjective dimension of humor, since others may then find them funny even if they don’t; a circumstance the clown must be cool with, for consistency’s sake.

Third, perhaps lacking humor regarding certain aspects of oneself is very common, and we should regard the capacity for genuine self-deprecation as a virtue to be praised rather than expected. The only circumstance in which it is to be expected is when somebody teases another person. You should be able to take it before you may dish it out to others. Once you unlock the gates of interpersonal critique, i.e., ‘roasting’, the savages will have at you whether you choose to step inside or not. Having genuine self-deprecating humor is protective in this context.

As well as being prudent, this equality between teaser and teased (that both come under scrutiny) seems to be a low-level moral requirement. If one is genuinely self-deprecating – i.e., able to self-criticize in a humorous rather than self-flagellating way – then teasing actually becomes a joke rather than (or in addition to!) an attack. In fact, in many contexts the teasing can only be a joke if it is delivered by a self-deprecating individual. This may be because, otherwise, it is perceived merely as an attack, whatever the intentions of the would-be comic. Or it may be that in some way having an attitude of humor about oneself is a precondition for having it about others – or that having a sense of humor at all entrains (rather than entails) its indiscriminate ‘distribution’ across subjects. In neither case do we have a necessary condition, however. There are always limits to self-awareness, awareness of others and moral scruple! Some think pure attacks are funny. Some (psychopaths) might have a sense of humor but actually think they are above critique, and thus genuinely see nothing funny about themselves which is available for joking… But in general, I do think there is some connection between having a sense of humor and being self-deprecating.

Fourth, I wonder if there is a correlation between ‘seriousness’ and age. The old proverbially accuse the young of being frivolous, of not giving proper reverence to traditions or received wisdom, etc. I think this attitude expresses some accurate insights – e.g., about how youth doesn’t appreciate what it has and so on. It might also indicate a less obviously ‘good or bad’ age-related discrepancy: the young may be less settled in their views, and correspondingly less committed to what they say, which inclines them towards irony, sarcasm, doubt, cynicism, irreverent humor, insincerity, ambivalence, ambiguity, and what I have elsewhere described as the half-joking way of speaking. The old, by contrast, may be more settled in their views, more committed to what they say, and consequently less inclined toward (have less need of) such ‘non-committal modes of speech’, including humor. Of course, the reverse may sometimes be true: the young may have more conviction, and the experience of age may erode misplaced certainties. Consider the tropes of the naïve revolutionary and the sage mystic who never seems to fall on one side of the fence or the other. And there are bound to be historical-cultural variables which influence how these things play out. It could be that we are in a particularly irony-soaked, ‘hipster’-dominated period, for instance. But the point is that in some cases there may be genuinely age-related explanatory factors at work regarding the things I am interested in. This (partial) explanation (partially) mollifies my negative attitude towards ‘humorlessness’. It may be that I am reacting not to humorlessness per se, but simply the lack of a certain kind of nonchalance associated with a youthful lack of commitment. I want to be free to say things without consequence – to play around and goof off without fear of being condemned or ‘taken too seriously’ – and I want others to be the same way. But perhaps this isn’t fair. Perhaps it is not proper to expect such folks to be this way. Perhaps it is not even desirable. Firm, unambiguous modes of speech may be desirable. There is a place for humor in this view, but it is circumscribed more clearly. You cannot hide in its penumbra.

Fifth, I think some neurotics get a free pass at taking themselves (too) seriously. Here’s my idea. In order to succeed in competitive environments – e.g., in certain careers – you need to be somewhat confident. Confident enough to give it a go. For those who aren’t neurotic, this is compatible with a self-deprecating attitude. Indeed, I have always regarded awareness of one’s weaknesses and shortcomings as a source of strength and invulnerability to embarrassment. There is even a perverse and paradoxical sort of defiant conceit in having an accurately undesirable image of oneself permanently etched onto one’s retina. But perhaps those who lack deep confidence or emotional stability may not be able to sustain such an attitude alongside faith that it’s worth keeping up the effort to compete and play the game. In that case, the neurotic must either be relatively faultless, or maintain a level of delusion about their proficiency.



Overcoming cynicism

As children, we take the world at face value. This is part of what makes children particularly vulnerable to advertising, for instance. As we grow up, we come to see more of the world’s complexity: we begin to perceive the forces at play ‘behind’ and ‘within’ the appearances. When it comes to advertising, this manifests as cynicism. That is, we see the advert as a piece of carefully designed propaganda, whose elements, including the characters that appear within it, do not appear or behave spontaneously, ‘naturally’, authentically, or randomly. They have specific functions, defined within a broader economic and social system. Next in line come mainstream media and pop music, religion, the education system, politicians, ‘patriarchy’, ‘the police white ethno-national state’, one’s very own parents (!)… and before you know it, you have a nihilistic teenager who doesn’t trust what anybody says, sees hidden motives and agendas everywhere, finds conviction and sincerity laughable, and communicates solely in sarcasm and irony. I think this process continues as one gets older, but its revelations often cease to have this cynical effect. I’m not sure why this is. But I’ll hazard a guess: when we wake from naivety into the complexity of the world, we may feel pervasively deceived. Once disillusioned, however, we may come to distinguish genuine (or wrongful) deception from mere disillusion – cases in which nobody claims that the ‘function’ they carry out, or the behaviour they manifest, is what you took it to be. These are cases where we come to see that there are ‘hidden’ forces operating within and on people or institutions that determine their actions, often without those entities realizing it. Even when they do, such entities may be quite helplessly in their thrall, or non-culpably subject to their manipulation.
That is, while resistance may be possible, desirable, and good, it cannot always be expected… I remember it becoming a fashionable insight at one stage in high school that ‘there is no such thing as a selfless act’, since all acts of charity & goodwill have, as a motivating factor, some degree of self-interest, even if this is just the warm feeling of having done the right thing. But even if true, this does not annul the value of the good deed. After all, genuine concern for others is compatible with self-interest.

What I am suggesting is that, after the initial shock that there is no God, the cops aren’t all nice people, your pop idol has a team of marketing professionals crafting their image, charity isn’t purely selfless, and so on, you learn to live with nuance, forgive imperfections and adjust to disappointments by revising expectations. You also see that what seemed to be there often is – just not quite as one was expecting it, and it is often accompanied by other elements as well as a rich web of explanatory factors.

Explanation is often taken to have debunking force. But this is, typically, the conclusion of a fallacy. It is as though once morality, love, rainbows, political orientation (etc.) have been given an illuminating analysis or scientific explanation, they are somehow deflated or invalidated – that once again the world has ‘lied’ to us. This impression, I think, is largely spurious. It might show that things could have been otherwise. But we were foolish to think they were necessary.

If we do not reconcile ourselves to the world with its nuance and complexity, we are liable to drift in one of two directions: toward the comforting simplicities of childhood, or toward nihilism, cynicism and the conspiracy-theory rabbit hole. But choosing to explore more of the ‘reality’ behind the ‘appearances’ has no definite rational conclusion, I think. It takes some towards an almost ‘spiritual’ acquiescence, especially in the moral domain: taking what P. F. Strawson describes as ‘the objective attitude’ towards other moral agents, and suspending what he calls ‘the reactive attitudes’ (resentment etc. on the negative side; gratitude etc. on the positive side), we may end up with a thoroughly forgiving kind of determinism which lends itself to a purely pragmatic consequentialism. Others want to distinguish behaviours for which agents are responsible (or which may be ‘authentic’) from those for which they are not (and which may be ‘inauthentic’), even if all have various causes and explanations pertaining to ‘external factors’ at every conceivable level of analysis. Or we can entertain both perspectives simultaneously: e.g., what a person says expresses that person’s thoughts and feelings – they are responsible for their words, their communication is to be taken in some sense at face value; they mean what they say, if they are being sincere. But what a person says also expresses their upbringing, cultural and social norms, genetic inheritance, their particular neurochemistry at that time of day, a swathe of psychological biases and unconscious motives, and so on. When we turn this insight on ourselves, and begin to piece together how any given thought, sentence or action is the confluence of countless factors outside of our control and awareness, things start to take on a different hue again. We may come to see ourselves as ‘cosmic happenings’ on one level… Or take pop culture. We can view a superhero film as a fun thrill ride with its own rules and mythology etc.
Or we can view it through some analytical prism such as Marxism, or Jungian psychology. The artist, in general, like the rest of us, is both aware and unaware of what they are doing: they have certain intentions, and these reflect some truth about the relevant behaviour, but there are also things driving their creations (or our behaviour) which do not fall within conscious view. The latter fact does not mean that ‘we are being lied to’.

I think that growing populist cynicism about mainstream politics reflects something like a childish craving for the simplicity of a world of appearances. Some of us like Trump because he purports to act like a genuinely free agent. Such people are disillusioned: even Obama turned out to be a con! Look at how he was manipulated, and hence manipulated us! His words meant nothing. They despise pundits who attempt to analyze Trump and his supporters in terms of systemic economic factors and broader cultural-technological trends. They want things to be simple. But of course, Obama might have meant what he said. So too might Trump. But this doesn’t negate the possibility that other forces are at play – some of which we have little to no control over. [I do not wish to say that the way politicians speak and play the game today is inevitable. It is certainly not desirable, I think… I don’t know what to think about that situation, precisely.]

The heart of real-world disagreements?

Many disagreements seem to be based on questions of relative value or probability weightings. Most of the time in ‘reasonable public debate’, a set of basic facts, values and methodological principles are agreed upon. The general parameters are known. But the differences that emerge occur at the blurry edges of those parameters. For example, in a discussion on the role and moral status of ‘forbearance’, Waleed Aly argued that civic and political discourse today is being undermined and polarized by the valorization of righteous anger, which goes along with a failure to appreciate ‘temperate’ virtues like forbearance. Adrienne Martin disagreed, suggesting that telling people not to get angry has figured prominently in historical oppression of minorities like blacks in America, and that this continues today (the ‘angry black man’, the ‘hysterical woman’, etc.). Further, she argued that anger has been an invaluable tool for civil rights campaigns and has helped move society forward, morally speaking. I think that both sides here would recognize the values and dangers at stake. They just assign different weights to them.

These sorts of disagreements are the most difficult to resolve because there is no clear rational or empirical methodology for determining how to rank our concerns (our moral, political and prudential priorities, interests and values). As in so many debates, there is a bedrock divergence of attitudes which can only be straightened out through an almost trans-rational, or quasi-rational exchange of sentiments, stories, perspectives and thoughts. We try to win over our opponents through the cumulative weight of feeling and redirection of attention. Often, we might formally grant the merits of our interlocutor’s position without truly thinking it through or letting it land on an emotional level. This is what good debaters and rhetoricians endeavor to correct.

Shit Writing

My appreciation of what constitutes good writing has widened over the years. The range of literary skill is vaster than I had imagined. The number of variables involved in putting together a good sentence is large, and figuring out how they interact is bewildering. One need not only have a good idea – a novel concept, an accurate metaphor, a keen observation – but sufficient vocabulary, control of syntax, and sensitivity to the musical properties of language to put it well. By ‘the musical properties of language’ I mean the rhythm and ‘sonic contours’ of language – the hardness of some words, the softness of others, and so on. The very length of a sentence, and of course the use of grammar, help to confer some of these ‘musical properties’. For instance, in considering how to describe the process of suddenly deciphering the lyrics of a rap song, the image of ‘sound-vapors’ condensing into discrete droplets of words occurs to me, but I need a way of expressing the metaphor which is not over-labored, and which reflects the metaphor itself in some way. “A puff of sound precipitates into a string of words”? Not quite. There are many metaphors I could choose from which have a similar effect (one could think of crystallization, for example), and countless ways of expressing them. And it all must come together with the appearance of effortlessness. There are general rules that one can try to adhere to, but they are not without exception. One rule I like is to dispense with unnecessary words, hard as that is for an indulgent linguaphile. (Orwell has a good list.)

Another variable pertains to how words are placed next to one another, which changes their effect. The interactions between all of these variables are complex to a magical degree. When I try too hard, I tend to spoil things with a childish surfeit of literary toys and trinkets. It is often much better to let a good word or two, or a gracile turn of phrase, stand alone. There is no need, for example, to add ‘great’ to the adjective ‘celerity’. The latter is singular enough, and shines more brightly outside the shadow of predictable constructions. And this brings me to a final thought.

There is often a trade-off between originality and readability. More familiar, unoriginal expressions, imagery and syntactic structures move frictionlessly through the mind. Unfamiliar and original ones are apt to slow things down. They decelerate progress through the text. A good writer deploys originality strategically. Though generally valuable, highly stylized, artistic, novel or otherwise ‘sophisticated’ writing is not always fit for purpose. Both prose and poetry are quickly overburdened with such qualities, and are crushed into a turgid, opaque surface of letters. But discretionary use of originality is indispensable to good writing, and can serve many functions. It can be used to manipulate the reader’s speed and thus the overall cadence of the text. This might occur by simply raising the difficulty for the reader, or else by prompting reflection. Original or unconventional writing can be used to express unique or complex ideas which would otherwise be deformed by forcing them into familiar but incongruous constructions. And there is also, we mustn’t forget, value in the pure enjoyment of a fresh voice.

Lastly, I think it’s also true that skill can help overcome this trade-off. A skillful author manages to be original without raising difficulty for the reader. I think this may be what it is to have a style. Contrast anybody in the literary canon with a young writer enthralled with the thesaurus. Although the latter might deploy interesting or uncommon words, find quirky ways to use old words, and build up clunky metaphors through brute force, there is often no worthwhile payoff, and the reader is confronted with the kind of compacted surface of letters I just mentioned. In other words, the medium gets in the way of the message.

To have a style is to sculpt the medium to suit one’s message. The consistency that results – though difficult to define – creates the possibility of highly readable originality. As the reader acquires the writer’s palate, the latter’s words become a direct funnel into their world.

But there is no escaping the essential dualism that I am gesturing towards between order and chaos, predictability and surprise. As in all things, we need both, and find joy in their balance. I would draw an analogy here with music, which is even more strongly characterised by this balance. Indeed, it seems to be fundamentally defined by it. That is, not only is it a virtue, but without it, a series of sounds is either mechanical dross (pure order), or mere cacophony (pure chaos). Some repetition is pleasing – e.g., alliteration or the rhythm of a recurring phrase in writing; the steady beat of a dance track – and it can be pushed to extremes with good effect. But without variation, it loses its charm and becomes stifling, inhuman, meaningless. Notice, however, that an entirely predictable language, lacking in ambiguity or opportunity for innovation, seems possible, while an entirely predictable piece of music does not. Though I will have to think more about this and clarify my terms in order to better judge this speculation.

I should say that in this post I have been concerned with something like the intrinsic artistic or literary value of writing. But what constitutes ‘good writing’ is relative to one’s purposes. Sorry to bring up Jordan Peterson again, but I heard him argue that adding too many caveats to the expression of one’s thought compromises its power. This seems to be true. And yet it also seems to be in tension with his claim to be concerned exclusively with Truth. It is the opposite of the philosopher’s approach to writing. The typical analytic philosopher is willing to drain their words of as much rhetorical, aesthetic and propagandistic power as necessary in order to maximize precision. No amount of subtlety or verbal clutter is too much when it comes to articulating how one takes things to be, as long as the message remains clear. All the content of one’s thought should be accessible to the intelligent reader from the words one puts on the page. It may require a bit of work for the reader, but nothing is hidden in obscurity. At least, that is the ideal. This is not to say that elegance, style and concision are not valued. They are. But they are subordinate in this domain.

On this point, I would defend the style of writing in analytic philosophy from two charges: (1) of pretension, and (2) of pure sterility. The latter is more common, but I have come across the former, too. Among those new to it, the style of philosophers can make it all seem like humbug and sophistry. Philosophers are at times accused of hiding emptiness or triviality behind a wall of words. Of giving labels to essentially mysterious notions in order to create a false appearance of understanding. Of using language to invent puzzles and solutions where there are none.

This charge, of course, comes in various forms, with various degrees of merit. The later Wittgenstein, for instance, strikes me as presenting it in a very serious form. But I find that, usually, such an attitude doesn’t do justice to philosophers – to either their character or the reality of their activity. In fact, I think any serious analytic philosopher is quite ready to expose anyone using words carelessly, deceptively or vacuously. Philosophers are always asking one another to explain what they mean. One cannot get away with calling a proposition ‘modally robust’, for example, without being able to articulate precisely what that entails. Indeed, one is scarcely able to use the word ‘proposition’ in a technical context without indicating whether or not one has a particular account of <propositions> in mind, and, if so, which account that is! As a consequence of this culture of intellectual rigor, philosophers are typically not semantic magicians running around slapping labels on things, spooling out elaborate incantations and announcing: ‘problem solved!’ (The claim to have solved anything is in fact rare.)

Regarding the second charge, I think there is much to recommend the concise and precise style of analytic prose. In addition to promoting the intellectual rigor described above, I think it has understated aesthetic value. Each word, phrase and sentence in a paper is a soldier with a carefully assigned set of duties. Idleness and ornamentation are spurned. The effect is not always austerity, but the pleasing order of a well-disciplined army. To be precise is not necessarily to be brief! Compact, perhaps, but not brief. Metaphors like the one I have given must play a particular role in the paper, and have a defined measure of analogical significance. The more one understands, the greater the pleasure one derives from judicious application of a given word or concept. There is also a strange pleasure in using words in unusual ways which suggests their deployment for conceptual purposes rather than rhetoric or the satisfaction of habit. For instance, when discussing moral disagreement in a Q&A, one professor brought up a dilemma: either you defend a view on which concepts are sparse, in which case moral disagreements may often be genuine, or you allow conceptual plenitude, in which case they are liable to be merely putative or verbal. For some reason, the use of the dichotomy between sparse and plentiful accounts of concepts struck me as neat, enabling the conversation to push ahead with a clear structure, if not a clear destination. The use of those words seemed elegant and useful. (This is perhaps not the best illustration of my point; it just came readily to mind.)

Dress codes and sexual harassment

How you dress partly determines how people see you, and thus how they respond to you. That is inevitable. Clothes are a major part of how human beings communicate. Indeed, one can see this ‘sartorial language’ as a sub-species of body language. Given this, it should be fairly uncontroversial to say that people who dress in a way that exposes more of the body, or highlights sexually appealing parts of the body, will tend to attract more sexual attention than those who don’t. Sexualised dress communicates something sexual, although what this is, is often unclear, and not necessarily connected to a person’s intentions. Given that attention strongly influences action, sexual attention will tend to influence the way others behave. Although the boundary between attention and behaviour is blurry, we tend to think that it is behaviour that is properly subject to moralistic or legal regulation. After all, we don’t want to police thought processes. (This does not preclude, incidentally, moral scrutiny and judgement of internal states. Indeed, I would argue that since the root of action is attention, and attention is to some extent subject to control, it makes sense to scrutinize and judge internal psychology).

As far as such behavioural regulation goes, we tend to disconnect our moral judgements of offenders from the actions of victims. I think this is just. In the context of dress codes, those who claim that offensive conduct (sexual harassment etc.) is justified or excused by the behaviour of the victim are often mistaking explanation for exculpation. Rape and sexual harassment are never okay (excluding highly unusual and speculative scenarios). I want to make that clear. We expect moral agents to be able to control their behaviour, if not improve the moral character of their attention in the first place. However, I would say that some feminists are guilty of making the opposite error: of mistaking explanation for blame. To explain how certain ways of dressing and acting increase the chances of attracting certain kinds of attention and behaviour is not to forgive offenders or blame victims (which can range from ‘she deserved it’ to ‘well, what do you expect?’). It is just to give a causal account of events.

But perhaps this is a straw man. Being more charitable, it is when explanations are coupled with advice for future conduct that feminists really object. The issue comes down to what kinds of attention and behaviour are deemed appropriate in response to what kinds of dress and behaviour. If you say, ‘don’t dress in a scandalous fashion and you’ll be less likely to be groped, harassed, bothered and raped’, that is taken to signify a kind of complacency with the status quo – it is to say, or imply, that “we can’t expect men to respond differently. That is the way men are.”

One crude and probably minority feminist position here is that society ought to be such that men are not encouraged to think of women as sexual objects, even when they dress in minimal or, to contemporary eyes, salacious ways. This judgement pertains to attention, and the misogynistic culture that moulds (primarily male) attention in certain ways…

Yet I think more feminists want to focus on the behavioural aspect, and to change norms so that crass, boorish, presumptuous, misogynistic etc. responses to women become entirely unacceptable, however the latter present themselves. Women can dress how they want, and men can get excited all they want, but men are not allowed to lavish their lascivious attention on women if that attention is unwelcome.

The question then becomes how to communicate when it is welcome. No longer is merely dressing a certain way to be taken as a sign that it is welcome. The prescription is, in effect, to change the sartorial semiotics which grant permission to men to impose themselves on whomever dresses in an attractive way. Part of the problem, of course, is that ‘attractive dress’ is a vague and to some degree subjective category. If ‘attractive dress’ is taken as licence to ‘make a move’, then, on a subjectivist reading of ‘attractive dress’, it will be the case that ‘if A finds S’s presentation attractive, A has licence to make a move on S’. That could be considered problematic by some (many?), even if ‘making a move’ is construed in a benign way, since sexual attention can simply be a hassle.

Defenders of the status quo will reject the subjectivist definition of ‘attractive dress’. They will insist that the ‘sartorial semiotics’ of attractive dress are more refined than this – that men only take certain kinds of presentation as constituting something like a licence or invitation to approach.

I think the truth is somewhere in between – it is both a vague category with considerable subjective flexibility and one characterised by certain objective properties (however culturally contingent). The essential problem is then that the language of dress is too imprecise, unreliable and practically inconvenient a way of conveying sexual, romantic etc. invitations. It is liable to be misunderstood and misused, on both sides of the equation: people often do not know what the effects of their mode of presentation are. Young people in particular tend to explore and experiment with semiotics which they do not fully understand, and this is made all the more dangerous because of their relative vulnerability and attractiveness.

With all this in mind, a reasonable conclusion is that we should give up on using the category of ‘attractive dress’ as a way of negotiating that sort of behaviour. We need to replace it with a clearer, more reliable and congenial mechanism – a holistic appreciation of body language and verbal language that is less liable to be confused and generate inconvenience and offence.

From this perspective, offering advice on how to dress ought to be virtually irrelevant to how women experience the world. I say ‘virtually’ because there will always be psychopathic, clueless and/or immoral people who will do what they want regardless of what is communicated. But the perfect world (in this vision) will be one in which most people do not take any dress or fact about appearance, in itself, to be an invitation to any particular sort of response. People will be less presumptuous and, in effect, more respectful of boundaries.

A final thought here: perhaps the difficulties that seem to have ramped up in recent decades are partly due to the fragmented and protean culture we live in – a culture in which communication is much more confused. That is, given that the meanings involved in body language and sartorial language are determined in part by culture, the mixing, splitting, and generational changes of culture generate a cacophonous polysemy which results in miscommunication. This is just one speculative factor, but I think there is something to it. This is not a hugely original thought, but I also think that unclarity and change in norms of sexual and romantic behaviour, particularly between men and women, are making it much more difficult for us to ‘get together’. If awkwardness is the result of uncertainty about how to act, and norms promote certainty, then obscure norms will increase awkwardness. Furthermore, if awkwardness undermines social interaction, then more awkwardness will result in fewer successful romantic social interactions. This gets at the everyday costs that ‘reactionaries’ talk about with respect to the #MeToo movement. I think it is probably just a phase as we negotiate a renovated sexual politics. Eventually, new norms will emerge.