Everyone knows the names Plato and Aristotle. The Ancient Athenian philosophers are widely celebrated as founders of the Western intellectual tradition, and they continue to exert immense influence on our thought and culture today. Yet precisely because they are so revered, far fewer people are aware that they have also saddled academia with some of its most dangerous tendencies and longest-lasting dogmas.
These are the subject matter of Karl Popper’s 1945 masterwork, The Open Society and Its Enemies. Popper first documents and exposes the reactionary political philosophies of Plato and Aristotle, created at a time when the birth of democracy was causing rapid change in Athenian society. This was a development which Plato greatly feared, and which he therefore tried to stop in its tracks. Popper goes on to show how these flawed philosophical ideas were taken up in modernity by Hegel, whose work as an apologist for Prussian absolutism marks him out in ignominy as the intellectual father of modern totalitarianism.
Popper’s book traces a long and complex intellectual history, and this essay is hugely indebted to his work, which I highly recommend. Not everyone has time to read such a long book, however, and it may not be immediately clear how the esoteric ideas of the Ancient Greeks could have any contemporary relevance. But be in no doubt: these peculiar philosophical fallacies, born two and a half millennia ago, then later reinvigorated by Hegel (along with Marx), are manifest in the most destructive dogma of our own time: Wokeism. To better understand the ideas circulating in our present, we must first take a look at their intellectual history.
This essay focuses largely on essentialism, a philosophical method best characterised as the worship of language. An empty scholasticism, essentialist thought has stifled our reason since antiquity through its overwhelming obsession with language itself, failing to recognise that language is simply a tool we use to represent reality, nothing more. However, though important, exposing this Platonic essentialism is only one part of Popper’s thesis. Popper also demonstrates that, despite their highly revered status in our philosophical and cultural canon, the political philosophies of Plato and Aristotle are deeply totalitarian. Their collectivist worship of the state above all else puts them in direct opposition to liberalism and its demand for individual freedom. Their horror at the emerging Athenian democracy reflects their great elitism and contempt for the common man. Popper also debunks their method of historicism: the belief that one can predict the future of humanity by analysing the currents of history, thereby discovering universal laws according to which human history inexorably unfolds. These four ideas – essentialism, collectivism, elitism, and historicism – coalesce in an unholy alliance to form the “intellectual” basis for modern totalitarianism. This in turn revives a bitter, punitive tribalism between those who conform to the totalitarian mindset and those who do not.
Plato’s Essentialism
I’ll now sketch Plato’s doctrine of essentialism, known as his ‘Theory of the Forms’. This idea has been so appealing that, although demonstrably false, it has bewitched, confused, and stultified our thought for millennia – and still does today.
Plato observed that there are many different objects in the world which we nonetheless describe using the same word. He uses the example of a bed. Though there are many different beds in the world, we call them all “beds”, regardless of these differences. Plato wanted to explain why it is that we call these things, which are clearly not the same, by the same name. He posited that these beds must have something in common that makes them similar. That something is their “essential essence”; in this case, we call each bed a “bed” because they each share the essential quality of “bedness”. Thus, Plato reasons, all beds – single, double, queen, twin, lumpy, soft – must share resemblance to some abstract idea of a bed. This is the original, perfect bed, which was created by God. Each bed we see in the world is merely an imperfect copy of this one original bed – the ‘only one real bed-in-itself in nature’ or ideal ‘Form’ of a bed (Plato, 597c).
With his theory of the forms, Plato argues that every object which we give a name must necessarily have an essential nature, in virtue of which it is given this name. Not only is there an original bed, in resemblance to which all subsequent beds acquire their “bedness”; the same is true of goodness, beauty, equality, bigness, likeness, unity, being, sameness, difference, change, and changelessness (Kraut, 2017). Thus, all things which are, for example, “beautiful” must in some way resemble the abstract form of beauty. For Plato, investigating these forms is the principal task of philosophy; ‘[t]o understand which things are good and why they are … we must investigate the form of good.’ (ibid.).
Yet Plato’s essentialist theory is gravely flawed. Plato pins a huge amount of meaning on what is really nothing more than a label – that is, on words themselves. Human beings use language to communicate. For this to work effectively, we use words as labels for objects and concepts so we can talk about them. Yet though they are extremely useful, this does not mean that these labels have any special significance in and of themselves; nor that we can derive meaning from them, since it is us who give words their meaning in the first place.
To best illustrate this, a little thought experiment. Imagine you are the lead scout for a hunter-gatherer band in 10,000 BC. You are exploring virgin forest, and you come across what we today would call an apple tree (not having been domesticated yet, the apples would be tiny and almost inedible, but we’ll ignore this anachronism). You decide to eat one of the apples, and it tastes okay. You gather some, and return to the tribe, pleased about your discovery. Once back, you pass out the apples and the tribe happily eat them together. Everyone now knows what the apple looks and tastes like, and you also explain where the apple tree is, so they can find it in future. But all this information is becoming a bit unwieldy – people keep referring to “the fruit you found on the tree near the river yesterday which was bigger than blackberries but not as sweet”. So, as a tribe, you decide to give the fruit a name: “apple”. Encoded in this name is the information that the tribe already knew about apples; this is just a shorter way of expressing that information, a mental marker. Indeed, the name you give to this definition is quite arbitrary; you might as well have called it a “pear”, or “HMS Belfast”.
And, crucially, there is no new information to be derived from this name. If you wanted to find out more about apples generally – when they ripen, or whether they go well in mammoth stew – you would have to go back into the forest and pick some more. And it would certainly puzzle your tribe if you tried to find out more about apples by sitting inside your tent ruminating on the essential meaning or nature of the word “apple”, instead of investigating the object itself. After all, what’s in a name?
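If it helps to make the point concrete, here is a toy sketch in Python (the names and details are my own illustration, not anything drawn from Popper or Plato): the word is simply an arbitrary key attached to information the tribe already gathered by observation, and inspecting the key itself yields nothing new.

# Toy sketch: a word is an arbitrary key for information we already
# gathered by observation (the details here are invented for the example).
observations = {
    "apple": {
        "found": "on a tree near the river",
        "size": "bigger than blackberries",
        "taste": "not as sweet as blackberries",
    }
}

# Renaming the label changes nothing about the fruit itself...
observations["HMS Belfast"] = observations.pop("apple")

# ...and no amount of inspecting the label yields new knowledge. To learn
# when apples ripen or how they stew, we must go back and observe the object.
print(observations["HMS Belfast"])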
It was Aristotle who took up and furthered his teacher’s essentialist philosophy, continuing Plato’s focus on essences and extending it to definitions. Here is Popper: ‘Like Plato, Aristotle believed that we obtain all knowledge ultimately by an intuitive grasp of the essences of things. “We can know a thing only by knowing its essence”, Aristotle writes, and “to know a thing is to know its essence”.’ (2011, 227). On this view, in order to properly understand a real-world phenomenon, it is necessary for philosophers (for it is only philosophers who have the ability to conduct such investigations) to determine, through “intellectual intuition”, its essential nature, which for Aristotle means its true definition. And it is only through such investigation that one can gain true knowledge, by asking: what does a bed, an apple, or goodness really mean?
There are many problems with this approach. As we noted above, it is futile to search for the essential nature of a word we created to use as a mental note, because we will only end up going in circles – the word only means what we already decided it meant.
It is also a lost cause to search for a perfectly precise definition of any real-world phenomenon. This is because no such “definition” can ever be exact (with the possible exception of pure mathematics). Returning to the first example, Google defines a bed as ‘a piece of furniture for sleep or rest, typically a framework with a mattress.’ Of course, this definition is useful enough in everyday parlance. But its precision (or lack thereof) can only ever depend on the terms that comprise it: what do we mean by ‘furniture’, ‘sleep’ or ‘framework’, for instance? We would have to precisely define these terms as well to really define a bed in a perfectly clear and unambiguous way. Except in order to do so, we would need more terms again. In this way, we enter what logicians call an ‘infinite regress’ – we would go on defining our terms ad infinitum, always forced to define the defining terms with new terms of their own. It’s important to recognise, then, that even those definitions which may seem clear and precise are only ever an approximation, usually only as precise as is necessary to be unambiguous (and sometimes not even that).
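To see that regress in miniature, here is a small Python sketch (the mini-dictionary is invented purely for illustration): defining a word in other words only introduces further words needing definition, so a purely verbal chain of definitions never reaches bedrock and has to be cut off arbitrarily.

# Toy sketch of the regress of definitions (mini-dictionary invented for
# the example). Every defining word would itself need an entry; without
# pointing at an actual bed, the expansion never bottoms out, so we cap it.
dictionary = {
    "bed": ["furniture", "for", "sleep", "or", "rest"],
    "furniture": ["movable", "objects", "used", "in", "a", "room"],
    "sleep": ["a", "natural", "resting", "state"],
    # ...and so on, for every word used above.
}

def expand(word, depth=0, max_depth=3):
    # Replace a word with its defining words, level by level.
    if depth >= max_depth or word not in dictionary:
        return [word]
    parts = []
    for part in dictionary[word]:
        parts += expand(part, depth + 1, max_depth)
    return parts

# Each extra level of "precision" just introduces more undefined terms.
print(expand("bed"))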
Time for a third analogy. Consider the word “bald”. Its definition seems simple enough: ‘having a scalp wholly or partly lacking in hair.’ Yet between a man with a full head of hair and one with none, there are innumerable shades of grey. And it is impossible to provide a precise and clear-cut boundary between “baldness” and “non-baldness”; certainly this is no binary. When we define something, then, what we are really doing is drawing a circle around a set of objects. But this circle will always have blurry edges, because our language simply cannot be infinitely precise (nor need it be, most of the time).
We should think of words, then, simply as mental markers for a definition which is (usually) sufficiently precise that when we use it in everyday language, everyone will know what we’re referring to. Still, we might sometimes wonder what the “real” boundaries of a bed are. For instance, we might ask: is a sofa-bed still a bed? But since there is no essential essence of a bed, this would be a meaningless question. All it asks is: should we still refer to this thing, which I have just called a “sofa-bed”, as a “bed”? The question is trivial, because “bed” is simply an arbitrary sound that we customarily use to denote a particular object. Like driving on the left or right of the road, all that matters is agreement.
Now, I trust that this explanation has been sufficient to disabuse you of the esoteric mysticism of Plato’s theory of the forms. Having had it laid out, it may seem strange that such a renowned philosopher as Plato could endorse such a peculiar philosophy, and stranger still that this same fallacy should have endured over millennia. Yet, as philosopher Sarah-Jane Leslie and psychologist Susan Gelman have shown, our predisposition to generalise from quantified statements – to essentialise from linguistic categories – is in fact hardwired into our cognition, in preschoolers and adults alike (2012). Human beings have always used essentialism as a mental shortcut through which to make sense of the world. This does not mean that we cannot escape essentialism by striving to think clearly. But it is an especially pernicious and persistent logical fallacy, meaning we must always be on our guard against it.
Why does essentialism matter? The immediate answer is that we see essentialism enacted every day in modern identity politics. Before we get to that, however, I will show how Hegel revived this idea in 19th century Prussia, in his work as an apologist for the absolutist Prussian state. Hegel used an essentialist method to great effect to convince his followers of his towering intellect through the magic of empty but high-sounding words.
Essentialism and Hegel
It isn’t hard to spot when someone is thinking about the world through the lens of essentialism. Put simply, it means their thought focuses on language itself, not the reality that we use language to represent. Thus, an essentialist might puzzle deeply over a question like: what is power? or, what does “power” really mean? In the mistaken belief that language is a force in itself – that it has a life of its own, independent of the meaning we give to it – language becomes everything. By idolising language, we usher in at our own peril an ‘age controlled by the magic of high-sounding words, and the power of jargon.’ (Popper, 2011, 243).
To gain a better understanding of this essentialising method, let us see it in action. Here, in his Philosophy of Nature, Hegel describes the relationship between sound and heat (in Popper’s translation, ibid.):
Sound is the change in the specific condition of segregation of the material parts, and in the negation of this condition;—merely an abstract or an ideal ideality, as it were, of that specification. But this change, accordingly, is itself immediately the negation of the material specific subsistence; which is, therefore, real ideality of specific gravity and cohesion, i.e.—heat. The heating up of sounding bodies, just as of beaten or rubbed ones, is the appearance of heat, originating conceptually together with sound.
Here we can see three key features of Hegel’s writing. The first is its painful verbosity. Einstein is often credited with saying that if you can’t explain a concept to a six-year-old, you don’t understand it yourself. With this advice in mind, it’s clear Hegel didn’t have a clue.
The second is his unflinching embrace of contradiction. At the root of Hegel’s philosophy is his strange assertion that ‘all things are contradictory in themselves’ (id., 253). He uses this idea – that opposites are in fact the same – to generate his whole philosophy, through wholly flawed “dialectical” reasoning. Flawed indeed, because at the heart of Hegel’s dialectics is his willingness to gleefully violate the most basic law of logic, science and reason itself – the law of non-contradiction. This law is no mere “social construct” – it is a fundamental axiomatic truth of reality, without which our science and our civilisation could never have been built. Hegel’s willing embrace of the hypnotic power of language, which he uses to bewitch and to mesmerise rather than to explain or to communicate, outs him as no true philosopher but a magician, a conjurer, a mystic. Indeed, Hegel’s method flies in the face of reason itself, dragging our thinking back to a new Dark Age of superstition and blind deference to authority.
Third, we note Hegel’s blatant circularity. If you are brave (or foolish) enough, see if you can translate Hegel’s last sentence above into plain English. You’ll notice that he doesn’t seem to be saying very much at all. That is: “The heating up of sounding bodies … is the appearance of heat … together with sound”. A tremendous insight indeed. The reason essentialising writers need to smuggle in such circularity is because they really have nothing to say – since they cannot gain insight about reality by investigating a word which humans merely made up to represent that reality. Instead, they will often either play around with definitions, or write so densely as to be unintelligible.
Don’t be overcome by the mesmerising bluff of this garbled prose. It is nonsense! Those who employ it give themselves the cowardly advantage of being so vague that their claims can scarcely be pinned down and shown to be wrong. For instance, I cannot prove Hegel is wrong when he contends that ‘sound is the change in the specific condition of segregation of the material parts’, when this itself is simply empty verbiage. Analogously, nor can I prove or disprove that a unicorn smells nice, since it does not exist. To refute Hegel’s reasoning, one needs to openly call his bluff, asserting unabashedly: “this is meaningless, and you are a fraud”. However, at this point, the wily pseud will try to pull off his greatest trick. “If you think I am wrong,” he will magnanimously point out, “you clearly didn’t understand my work.” Fear of looking stupid has allowed this tactic to beguile many an impressionable fool. Yet this is the Wizard of Oz at work – the verbal fireworks present an illusion of astounding insight to awe you into submission and reverence. But if you do peer behind the curtain, you’ll find very little there – save a cowardly man who has nothing to offer but bombastic, high-sounding words, and no intellect, insight or reasoning with which to back them up.
To stand up to circular, essentialising dogmas requires no small amount of courage. One can take heart, however, in the knowledge that, like the Wizard of Oz himself, it is the philosopher-mystic, frantically booming out his Kafka traps from behind his safety curtain, who is really afraid – afraid of the world seeing the shameless fraud he truly is.
Essentialism and Wokeism
Perhaps this picture of the philosopher-mystic is starting to sound familiar. In any case, I’ll now examine some of the ways essentialism shows up in contemporary society, specifically in perhaps the most significant part of Critical Social Justice scholarship, Critical Race Theory.
For a movement that supposedly likes to break down categories, the prevalence of essentialist reasoning in Woke identity politics is staggering. However, to see why it is so undesirable, let us first consider this stridently anti-essentialist passage, from Critical Race Theory: An Introduction:
A third theme of critical race theory, the “social construction” thesis, holds that race and races are products of social thought and relations. Not objective, inherent, or fixed, they correspond to no biological or genetic reality; rather, races are categories that society invents, manipulates, or retires when convenient. People with common origins share certain physical traits, of course, such as skin color, physique, and hair texture. But these constitute only an extremely small portion of their genetic endowment, are dwarfed by that which we have in common, and have little or nothing to do with distinctly human, higher-order traits, such as personality, intelligence, and moral behavior. That society frequently chooses to ignore these scientific facts, creates races, and endows them with pseudo-permanent characteristics is of great interest to critical race theory. (Delgado and Stefancic, 2001, 7–8).
In this passage, Delgado and Stefancic display a remarkable awareness of the problem of essentialising people by their race. Like my thought experiment of the hunter-gatherer, they recognise that the terms we use for races are made up – “socially constructed”, in the jargon – and because of this they are largely arbitrary. As with baldness, there is no clear-cut dividing line between races because, as constructed categories, these are necessarily blurry at the margins. And they rightly recognise that when society puts meaning into such arbitrary categories – when it “creates races” – this is curious (since it reflects poor reasoning), and undesirable. Indeed, they champion instead the “notion of intersectionality and anti-essentialism”, since “[n]o person has a single, easily stated, unitary identity.” (id., 9). They continue: “A white feminist may be Jewish, or working-class, or a single mother. An African American activist may be gay or lesbian. A Latino may be a Democrat, a Republican, or even a black— perhaps because that person’s family hails from the Caribbean. An Asian may be a recently arrived Hmong of rural background and unfamiliar with mercantile life, or a fourth-generation Chinese with a father who is a university professor and a mother who operates a business. Everyone has potentially conflicting, overlapping identities, loyalties, and allegiances.”
So far, so good. Paragons of individualism, Delgado and Stefancic display a touching awareness of the vast overlapping complexities common to all people, whatever their race, and are rightly sceptical of the idea that anyone might be easily, singly defined by any identity characteristic they might happen to have. Of course, everyone falls into some of these categories, but human life is so amazingly diverse because our individual background, interests, and character are far more important and interesting than our immutable characteristics. (I can’t resist pointing out here that in this polemic against essentialism, that delightful phrase “people of color” is strangely absent. It seems that this insidious linguistic dichotomy – which reductively sees society as comprised of two groups, white people and everyone else – doesn’t sit well with the belief that “[n]o person has a single, easily stated, unitary identity”.)
Another brief digression. This passage, recognising as it does the limitless complexity of human existence, demonstrates by itself exactly why “intersectionality” is a flawed concept. At pains not to essentialise people, Delgado and Stefancic remind us of just a handful of the many, indeed innumerable, categories in which everyone finds themselves; or in other words, the fact that no two people are ever wholly alike. But the logical conclusion of trying to analyse the infinity of categories we might put people in – race, gender, class, nationality, religion, height, attractiveness, education, health, favourite ice-cream, opinion on marmite etc. etc. – is to get right back where we started, by treating everyone as an individual. In order to analyse the “intersections” between e.g. race and gender, it is necessary to reductively essentialise both.
In fact, when applied to people, essentialism – placing meaning into categories that are largely arbitrary – literally is racism. It is the idea that one should be judged not by the content of one’s character, but by the colour of one’s skin. Like Plato vainly searching for the essential essence of a bed, it posits that there is some essential essence to each of the (socially constructed!) racial categories extant in society, and that this essence is the principal determinant of who we are, what our experiences are, and what we think. Yet just as there is no “bedness”, so too is there no “whiteness”, “blackness”, or “Asianness” (Asianity?), because we are not defined by our race, we are individuals. When we pin meaning onto these empty racial labels, that is itself racism.
Furthermore, this focus on categories is what makes identity politics so spiteful and dehumanising. If one assumes that things have an essential essence, one also assumes that things differ only because their essential essence is different. This becomes predictably poisonous when applied to race – it implies that “whiteness” and “blackness” are essentially different. And the grave consequences of such thinking do not end there. Because these concepts are wholly imaginary, it is nigh impossible to argue against them – I can no more prove that “whiteness” is not synonymous with oppression and “blackness” with victimhood than I can prove that unicorns smell nice. Yet it is because these concepts are so abstract – so uncoupled from any grounding in reality – that they can become so powerful in people’s minds. Without any way to check if they are true, there also is no good way to show these assertions to be false, meaning activists can rage and rage against something that is nothing more than an idea in their head.
Indeed, it is only by this single-minded focus on the idea of another’s race that someone can stop seeing them as an individual human being. Sadly, millions of years of evolution have made humans very good at being tribal; we have an in-group and an out-group, and we are all too adept at convincing ourselves that the out-group is inherently evil, dangerous or other. Essentialist thinking facilitates this hugely because it encourages us to see people as representing an abstract idea associated with their group rather than as an individual human; one who you might otherwise have had a beer or a cup of tea with. It takes a truly powerful mental image to hate someone that you have never met.
There is a whole culture out there of people essentialising others in terms of their race, gender and sexuality, so to sum up I will furnish you with just one, rather amusing (if not exasperating) example. It comes, not from an obscure extract by an errant acolyte, but right at the core of Delgado and Stefancic’s text – indeed, from the very next paragraph to the anti-essentialist manifesto I quoted above. They write: “A final element [of Critical Race Theory] concerns the notion of a unique voice of color. Coexisting in somewhat uneasy tension with anti-essentialism, the voice-of-color thesis holds that because of their different histories and experiences with oppression, black, Indian, Asian, and Latino/a writers and thinkers may be able to communicate to their white counterparts matters that the whites are unlikely to know. Minority status, in other words, brings with it a presumed competence to speak about race and racism. The “legal storytelling” movement urges black and brown writers to recount their experiences with racism and the legal system and to apply their own unique perspectives to assess law’s master narratives.” (ibid., my italics).
So did you spot it? Fresh from having proclaimed their opposition to essentialism, they posit: a “unique voice of color”; the even broader essentialised category of “[m]inority status”; the claim that all those in this category necessarily have “experiences of oppression”; and the ugly dichotomy of “voice[s]-of-color” and their “white counterparts” (in what way are they counterparts? The only possible answer seems to be that white people, since they are necessarily oppressors, are the counterparts to “people of color”, who are necessarily oppressed – how lovely!). But for writers who explicitly disavow essentialism, this is no mere “uneasy tension”. It is a flagrant and direct contradiction. Having begun their book with the anti-essentialist, humanist sentiment that although human populations may differ in “certain physical traits”, these differences “are dwarfed by that which we have in common”, it takes them barely two pages to completely reverse their position. And be in no doubt: this is no anomaly. The book carries on in much the same fashion, racialising and essentialising, casting whites as inherently oppressive and “people of color” as necessarily oppressed.
If, like most people, you think that a theory that directly contradicts itself is a bad theory, you are probably wondering how any serious philosopher, or thinker of any kind, could ever write something so very puzzling. Of course, we need only turn to the immortal genius of Hegel for our answer, with that timeless nugget of German idealist wisdom: “Alle Dinge sind an sich selbst widersprechend; all things are contradictory in themselves.”
Bibliography
Delgado, R. and Stefancic, J., 2001. Critical Race Theory: An Introduction. New York: New York University Press.
Kraut, Richard, “Plato”, The Stanford Encyclopedia of Philosophy (Fall 2017 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/fall2017/entries/plato/.
Leslie, S. and Gelman, S., 2012. Quantified statements are recalled as generics: Evidence from preschool children and adults. Cognitive Psychology, 64(3), pp.186-214.
Plato, 1974. [Trans. Lee, D.] The Republic. 2nd ed. Harmondsworth: Penguin.
Popper, K., 2011. The Open Society and Its Enemies. 7th ed. Oxford: Routledge.
Comments
Mu!
I find your essay apt in all ways except one – race does not exist, but racism does.
Obliterating racial division in philosophy doesn’t remove its presence and footprint in modern society. Deep wrongs have been done to all groups throughout history on the grounds of arbitrary division. In the face of society’s ‘white’ dominated racism, ‘coloured’ voices have to unite and protect themselves until the reality changes.
Equality legislation recognises the reality of personal experience in discrimination. For example, while a person could be white British, in the UK, if they are perceived as black African and discriminated against for that reason, they are considered a victim of racial discrimination.
Of course someone who conforms to the false essences of ‘white’ and ‘male’, and never perceived otherwise, cannot possibly have any comment on the experience of someone who is ‘black’ and ‘female’, and never perceived otherwise. A ‘white’ person with dark hair and a deep tan may understand something of the experience of someone who is ‘Arabian’, and then be mistaken for ‘muslim’, too, and therefore have some comment, though they wouldn’t have inherited the cultural mindset from being raised in the group, so they cannot comment on the experience of living in a society incongruous with the way they have been taught to act.
These matters must be taken into account. Wokeism developed to help the groups the essentialists created, and so is forced to use their awful tools to identify where this help is needed. Blandly asserting we cannot offer help, as that would be conforming to essentialism, fails to deal with the reality of victims.
“Equality legislation recognises the reality of personal experience in discrimination.”
No it does not, or at least it depends.
Equality legislation as it is being redefined now is increasingly recognising GROUP (systemic) experience in discrimination first and foremost, as a precursor to acknowledging individual experience.
That is why it’s so dangerous.
The group label either allows or disallows “the story” of personal experience, a racism in itself, but in any case, “group” is the decider.
White straight male group is assumed to contain no discrimination against the group so any discrimination against the individual is regarded according to that group’s “rules” (the birth of the term anti-racism).
Consequently all “white” (whatever that is) straight men are considered sexist and racist by default due to the label applied to the group (not them individually as you claim).
It is BECAUSE OF THEIR GROUP that their “personal experience” counts for nothing, or more accurately is considered to not exist (in a victim/perpetrator context).
Conversely, previously marginalised GROUPS are interpreted as having experienced discrimination against them so any discrimination against the individual within any of those groups is regarded according to THAT group experience.
Consequently every non-white, non-male is a victim (of every white male – see above) even if individually they are not.
It is an almost child-like simplification of the world.
The “story” of every individual is unique to themselves. My story and yours are different.
The colour of MY skin may OR MAY NOT have had any bearing in MY life (even historically), irrespective of any bearing it may (or may not) have had on others in my group (or even my biological family).
The above paragraph IS MY story, as I have experienced it.
Who has the moral right to tell me that MY story must be what THEY interpret it to be by averaging a group based on their interpretations?
Welcome to the world of the Woke.
Even within the same group there can be victims of circumstances caused by others in that group.
Conversely every group has wealthy privileged powerful members. Every group!.. irrespective of skin colour and racial background.
This means that in reality any individual from any group CAN be both a victim and a perpetrator at the same time.
Importantly, this upside down woke logic allows for someone from a “victim group” who behaves as a perpetrator to ironically be more privileged than someone from a “privileged group” who is victimised.
This is exactly the thinking that resulted in the horrors of 1940s Germany. One group decided how another group influenced the world and treated every individual by the rules of the group.
It seems that we (and the woke in particular) have learned nothing from that.
And they thought they were right too, at the time. The apologies came later. I’ll wait.
You don’t get into it here, but I was shocked when I started to read Plato for myself, to discover that in his ideal society, there would be no families. Children would be raised in deliberately impersonal group homes. Mating would be directed by the state, so as to eugenically breed the best citizens. People would not be allowed to know who their parents or children were, because supposedly family and clan loyalty is the cause of all human conflict. Also, there shall be no art, music, or theater … Basically all normal human feelings are outlawed. Clearly there has to be something wrong with the understanding of human nature in any philosophy that would recommend this totalitarian nightmare.
This was the premise behind the 2002 film Equilibrium
All of this, along with Hegel and all the liberal enlightenment philosophy, is an attempt to reinvent the philosophical wheel. The existential questions of Plato and Aristotle are asked and answered. It leaves people with some uncomfortable truths, though, which are generally rejected through that notorious “I will not”, resulting in all of this time-wasting debate over what constitutes truth and the essence of the word “truth.”
“In the beginning was the Word and the Word was with God and the Word was God.” God is Essence Itself. Christ is the physical manifestation (the Word) of that Essence, distinct in personhood, and yet essentially the same.
As long as Right Religion (like 2+2=4, there is only one right one) is considered created by man, this futile debate regarding essence and words will continue and the descent into barbarism will continue unabated.
You really have no answers, just conjecture and theory and have to admit that any answers you arrive at are subjective and therefore have no compelling moral authority. A priori godless, subjective answers must carry with them the caveat that they might be wrong, which is why the society founded on them will always descend into barbarism. There is nothing to compel ordered civilization. Flawed human nature without Grace is insufficient. Identity politics is merely another iteration of trying to reinvent the philosophical wheel – only worse because it doesn’t have Christian morality embedded as a framework.
I appreciate everything the author has written, but I still don’t understand what the issue is. From my very science-based perspective, language is a code we humans use to identify objects/things in our environment including abstract concepts like emotions, thoughts, etc. Words don’t exist outside of the things they represent. There would be no need for words if we didn’t need to communicate with each other. What’s the big mystery?
In science, words are used as labels exactly as you describe. They are not confused for the phenomenon they are describing. By contrast, for an ideologue, words and ideas are more real than reality.
A physicist does not seriously imagine that a quark can be ‘charming’ – the term is just an arbitrary pointer to a rigorous definition, but a feminist is deadly serious when they ascribe ‘gender’ to the concept of an atom. Atoms, for them, are just social constructs reflecting the interests and perspective of the Patriarchy. Ideologues really do go that far, for instance in describing mathematics as ‘racist,’ they are genuinely and unironically putting something like the Löwenheim-Skolem theorem on a par with The Turner Diaries as simply the expression of the will to power of some racist group.
The hardest thing to believe for someone used to common sense is that there are people genuinely immersed in complete nonsense, and that those people occupy positions of power and prestige, but it is all too real.
You haven’t given any references or quotes for Plato or Aristotle’s arguments to show they had this view of language or essentialism.
Sorry but this is sloppy concerning these two thinkers. Essentialism concerns the definition of things as received by the senses for Aristotle (form and matter) and the rationally received Form for Plato. Definitions were important, but only insomuch as they confirmed to the things themselves. You could call a chair “bluvat mark” if it worked for the language at the time as far as they were concerned.
The problem of the woke begins with Kant who blocked off all access to the empirical as it is, then with philosophy of language being used by Wittgenstein to describe thought itself as a way to clear up philosophy. I think, then, as this project slowly failed and the whole will to power thing came in and reason was done away with, then, yes, language became a power tool to describe reality not of the mind but of political will.
*conformed to the things themselves/corresponded.
You’re partially blaming Wittgenstein for the woke phenomenon? It’s his arguments that lay the groundwork for definitively rebutting the linguistic manipulations of the likes of Hegel.
Jeez Laweez… We cannot not worship words. Worship of words and images are the oldest religions. It’s ingrained into our DNA. It’s arguably the construct that allowed us to evolve civilization as we know it. Instead of writing an essay apologizing for humanity’s worship of words and trying to avoid it, you should be seeking to understand why we worship words, like a good Philosopher ought to do. There is something primitive and caveman-like about Plato’s/Socrates’ approach to logic. There is something paradoxical about all dialectic. That is the nature of dialectic. It is no criticism of paradox to accuse it of apparent contradiction. I’m a paradox. I know myself. I’m not a contradiction.
Humanity has always worshipped words. You need to understand that, because otherwise you sound like you’re blaming Plato and Aristotle and Hegel for why we worship words today. It’s not their fault. As an example, you cannot blame humanity’s Lust and sexual perversion on the existence of pimps and hookers; jailing pimps and prostitutes treats the symptom, not the disease, of being human; perversion and disease can exist without immediate symptoms, albeit that’s no justification for prejudice. You cannot blame pimps and hookers for sexual perversion, as Sinful as they are; they are exhibiting some basic human nature. Likewise, you cannot blame humanity’s innate characteristic of just naturally worshipping words for all of humanity’s flaws, according to your Perfect Ideals of Humanity, on Plato, even though your Ideals of the Perfect Humanity without Flaw are Platonic Forms, in and of themselves, all in your head. Nice try, Sophist!
Next time before you go committing the misattribution error and playing the blame game as to who started it, who smelt it, who dealt it, etc, first you should always always always ask yourself if the quality or characteristic you are looking to blame on some ancient Philosopher isn’t actually a quality of human fucking nature, and if maybe you’re the one who farted and is farting up a storm while blaming old farts for your new farts.
Thank you for the great article. I also have some reservations about your analysis of Plato and Aristotle’s essentialism, and agree in principle with the view of Hegel. Words and their definitions could spark an entire conference which I would love to attend. I work in architecture, and I keep my nose in the building code book most of the time. I keep telling the junior designers in our firm that we must pay attention to words and meanings, particularly when it comes to code. It’s almost like we treat words as money because it’s a “value” that we can agree on to promote interaction and continued cooperation. The value agreement of words is important and may increasingly be in jeopardy if Critical Theory has its way.
I think you misunderstand Aristotle’s thought.
I think the characterization of Plato is completely wrong. Plato’s dialogues are essentially dramas, much like Shakespeare’s. You can’t understand Plato by imposing some kind of logical deductive mathematical approach for the same reason you can’t understand Shakespeare from a logical deductive approach. If you apply the method you used for discussing Plato to Shakespeare, you’ll never understand what he’s doing either.
The Platonic dialogues are the acting out of different axioms and the exploration of what world, what universe and laws of nature are suggested by any individual or set of axioms, and their consequences. They are essentially thought experiments. Science doesn’t come down to having all the “facts,” it comes down to being able to conceptualize fundamentally new states of nature and new ideas, new causal principles. That whole process does not happen logically, you need creativity. And creativity is not something you can deduce from mathematical laws. Nor is the mind. The mind has no length, breadth or depth, but that doesn’t mean it doesn’t have just as much of an objective existence.
Human evolution over the last few millennia has not been the product of just getting more “information,” it’s the result of fundamental discoveries, ideas, which by their nature have no measurable dimension. And yet, they are causal. The evolution of the human species is the evolution of ideas and our increasing levels of understanding about the universe.
I think if we’re going to talk about truth and the objective existence of ideas, one has to start there.
I think the author has tried to apply a mathematical deductive method to Plato and his arguments, which are not mathematical logical arguments. Plato’s dialogues are about irony and paradox, developing one’s ability to think in terms of ironies and paradoxes, which is the essence of a scientific discovery.
Scientific discoveries are NOT the product of logical deductive steps, they are by their nature discontinuous, they come in “leaps” or in “flashes.”
That’s a fact.
No person who is accustomed to creative thought and rigour can deny that.
PS
If you’re fond of the “Open Society” concept, you might want to look into George Soros and his “Open Society Foundation.” George Soros is a proud disciple of Popper. He’s been doing a pretty good job of subverting the idea of the nation state with precisely this open society concept.
Interesting perspective, but let me provide a few suggestions.
1. Athenian Democracy has nothing to do with modern liberalism. It is a consequence of oligarchy and is as far removed from our representative system as a monarchy. Also, Aristotle’s favored view of the state is reminiscent of a genuine system of representation. Greeks, however, weren’t individualists because they weren’t monotheists. In addition, they aren’t really in opposition to liberalism because liberalism would not exist for another 2000 years after them.
2. Plato’s essentialism isn’t his fear of democracy but the rejection of sophists’ relativism (true is what we say is true, we being those with physis – natural ability, not those equal according to nomos – state law).
3. The idea that words are only labels was put forward in Medieval times to counter the view that general concepts (universals) really exist. It, however, didn’t turn the tide on essentialism because even if words are mere labels, you need an independently strong case for something like anti-essentialism or conventionalism. That will come only with Wittgenstein and ‘Ordinary language’ analytic philosophy in the 50s and 60s.
(3) Hell yes, my friend.
Laurie, your article provoked the kind of conversation that the expression of words should produce. The interaction would be great if we were philosophers sitting around a table in Edinburgh sipping drams of single malt. There is a problem concerning the meaning of words. But that problem is not really the meaning of the words themselves. We have become used to a word taking on different connotations. We know an apple is a fruit. But it is also a record label, and a technology company, and maybe many other things. We understand because the words are used within a cultural context.
However, the real problem is the “utility” of words. How we use words is the problem here. CRT/CSJ alters meaning not because the conventional meaning is imprecise or wrong, but because it doesn’t provide the right utility to acquire power. We are not living in Plato’s and Aristotle’s world. We are living in the Frankfurt School’s, Derrida’s, Foucault’s, and other postmodernists’ world. I am too old to have been presented with their ideas when I was at university. But I’ve been reading them for the past decade. I’m convinced you cannot understand this time by reading critiques of their writings. Particularly critiques that attempt to validate the past for the present. It isn’t that Plato and Aristotle, and all the other Greek and Roman poets and philosophers of ancient times, are irrelevant. They are not. But their perception of the world is alien to the one we are discussing here. We must read the postmodernists themselves to understand them.
My impression is that basically, they are all utilitarians (lower case). They really aren’t that interested in what we think of their ideas. They are interested in distracting us from what is “essential” to them. It is about power. I’m not validating a postmodernist perspective on power. I’m not advocating for them. I am saying that we need to know the postmodernists as well as we know Plato and Aristotle. You need to know why the intellectual splits between the Empiricists and the Continentals, between Marxists and the post-Marxists, between the corporate left and the anarchist left are the ground of CRT and CSJ. If we think of their work as a children’s playground, we’ll see that we have here those who have felt marginalized within their own political traditions, reacting as they have. As I have read and observed what is transpiring, I’d say we are watching the intellectual disintegration of the modern world. That is not a new idea. Many have said so. What is not said is that this represents the inherent fragility of the CRT/CSJ world. This fragility is why it is so extreme and violent.
If there is a strategy needed, I’d recommend Jean Baudrillard’s strategy of reversibility. In effect, you cannot terminate this kind of thought. This is a war that is unwinnable. It is like the Vietnam conflict. The US and its allies sought to win the war. The Communists sought to outlast their enemy. This is what is happening. Fight the skirmishes, but don’t think that some inevitable win will reestablish Platonic/Aristotelian traditionalism in the world. You can only turn their ideas back on them. Take their own approach and reverse it. Baudrillard is a hard read. He is the contrarian within the postmodern circle. I find him making sense. Not on the surface, but through reflection on his ideas of symbolic exchange and simulation/simulacrum.
The key, I believe, is in how we talk about reality. Not as a concept, but as to how we experience it. I’m injecting a bit of phenomenology into this idea. Not for debate, but rather as a means to enhance observation of what we see happening around us. One example, and I’m done. I found several of my friends speak of their hatred of Donald Trump in almost exactly the same words and sequence of ideas. Yes, they were programmed to say it. My response is what do you know by direct experience. Sometimes they had an answer, most of the time they didn’t. I see the same thing happening with CRT/CSJ. I ask about specific examples that they have personally experienced. Once they say something, we have a place to reverse their perspective.
“[Hegel] the intellectual father of modern totalitarianism.”
Totalitarianism began when Hegel was 19 years old: https://en.wikipedia.org/wiki/Reign_of_Terror
Excellent piece. I suspect William of Ockham would have enjoyed reading it, as well.
Possibly typos:
“… demand for of individual freedom.”
to
“… demand for individual freedom.”
“As we noted above, is it futile to search for the essential nature of …”
to
“As we noted above, it is futile to search for the essential nature of …”
Fixed. Thank you.
The notion of “categorization via language” misses some aspects of how we think and use language. Specifically, we do categorize, and we are not as dumb as this suggests.
Take “red”. We can all agree that there is a color “red”. We can also agree that some things which are colored are red, and some are not. However, we also can agree that some “red” things are more “central” or “true” than others. The “red” of a stoplight or fire engine is “true red”. Other reds are “sort of red” and other approximate words. Same with “bed”. We can distinguish between a “central bed” or “real bed”. There are other bed-like things which share some characteristics with beds, but are also approximate. A “bed of nails” is a good example – some would lie on it, others would not. A “bed made of stones” is another example.
Political ‘Science’ is another great example of words used as a hoax (not to denigrate an excellent essay). It always amuses me to contrast such terminology with the more excellent designation ‘Medical Arts’.
Bingo! All ideology – especially Abrahamic religion – is essentialism in practice.
People ARE inherently essentialist thinkers – they categorize and prefer binaries. They don’t understand that language is ambiguous and arbitrary. They are largely unaware that all things exist on multidimensional CONTINUA and are RELATIVE.
The way to an understanding of reality is NOT philosophy. As the author points out, philosophy is based on ARGUMENT rather than testing (experiment). Arguments depend on assumptions, speculations, interpretations, personal biases, hidden agendas, and lies. Science requires hands-on manipulation and demonstration of the accuracy of propositions. Philosophy is nothing more than “navel-gazing” BS that leads to emotion-based conflict. Science yields answers that benefit man through improved engineering.
Humanity does not progress through ideology (believing), but through science (knowing or finding out). Philosophy and science are antithetical to one another.
«Philosophy and science are antithetical to one another.»
I think it’s a bit more complicated than that. Originally what we call «science» was part and parcel of philosophy. Same with mathematics. I think it is the extraordinary beauty of mathematics, a pure intellectual process, which convinced all philosophers, up until now, that philosophy could yield «truths» as solid as the mathematical truth by virtue of mere intellectual reasoning, detached from the unpalatable and always decaying «matter». It did produce some interesting perspectives but failed to produce unifying, compelling consensus of its own, only ungrounded, concurrent, «systems» with limited appeal. Meanwhile science is hacking philosophy, one discipline at a time. Perhaps, in the long run, what will be left of philosophy will be epistemology. Metaphysics is already gone. Rhetoric is soluble in social media analysis, Politics has fallen prey to Cambridge Analytica and History is less and less the narrative of the winner and more and more constrained by physical evidence. I am not even sure Ethics will survive (we are already implementing ethics in self-driving cars). The goal to provide Truth by pure reasoning proved elusive but it remains an idea extraordinarily attractive to many people and we probably will continue to suffer the consequences of this delusion for a long time. And there will always be that beauty of a cute sentence. Hey, we are the talking species!
There is a d missing at the end of “Alle Dinge sind an sich selbst widersprechen”. It’s “widersprechend”. German native speaker here. Takes you right out of the essay and makes the sentence unintelligible, because surely even Hegel couldn’t have been that dense. Otherwise great essay.
Gracias, amigo!
Excellent essay. As one who bastardizes language daily (I have my reasons), two thoughts come to mind:
1) There is a reason Noam Chomsky is beloved by the left: he holds a doctorate in Linguistics. From M.I.T.!!!!
2) Elmer Kelton – one of the finest American writers to ever live; ironically pigeon-holed as just a “Western writer” (i.e. non elite riff raff) wrote “in order to overcome racism, bigotry and hate; you need to have an individual experience”
This is the exact opposite of what is being sold by the race charlatans.
How many amicable relationships have been destroyed by CRT? It’s in the trillions…
Others have made the same comment. The intention is good, but the argument of the post misfires–badly. Essentialism is not the enemy–nominalism (or conceptualism) is. Without essentialism you land precisely in the waters the Woke are advocating. Back to Aristotle is the way forward. Essentialism is the only safeguard against ethical and logical relativism.
tl;dr Arguing for the third position of “conceptualism” with some ranting against (a naive reading of?) Derrida’s “différance” as being rooted in not understanding the point of dictionaries.
If I am not mistaken, the question of essentialism vs nominalism is really just the Problem of Universals. Essentialists and their platonic forms are a form of “realism”, where the universals are thought to be real objects in some metaphysical way. Nominalists say that universals don’t exist. A realist would argue that two chairs are identified as the same due to being flawed instances of the form of chair. A nominalist wouldn’t argue they have some abstract property of “chairness” in common. Instead the mind of the observer groups them based on the definition of a chair (they both fulfill the predicate of “chair”).
So far these are positions that I don’t really understand, so forgive me if I am wrong in these descriptions.
Personally, I hold to a third position that has been described in earlier comments: Conceptualism. Universals are real, but only exist as concepts. Concepts are not arbitrary but reflect real properties of particular objects – in particular, properties useful to humanity or to particular cultures. Ayn Rand’s measurement omission theory of abstraction is (arguably) the most famous modern version of the theory.
Concepts are not just empty words. To convey a concept to someone else it must be understood by the recipient, meaning they understand the underlying logic, pattern, or utility of the concept. The act of communicating works as a check on the correctness of the transmission: you ask the other person to provide examples of the concept or to explain it back to you, if they fail you try to explain another way.
I don’t have the philosophical experience to argue more convincingly, but to me this approach makes the most sense. Ideas are stable over a short time period but drift over generations along with language, which means universals drift as well. Neural networks can learn how to handle visual concepts and identify shapes and objects just like humans can. Did the neural network learn a complex predicate? Did it learn to grasp “appleness”? No, it learned to abstract from visual data (percepts) the concept of what an apple looks like. From that it can’t figure out that apples are fruits, or any other fact about apples. Likewise, children proceed from simple language and concepts to complex language as they grow. Is this from more thoroughly grasping the particular forms they are exposed to? No, they learn simple concepts first, like simple nouns and verbs, then use those concepts to learn more in-depth ideas and nuanced versions of earlier ideas. Which lines up well with what is seen in psychology and not in philosophers’ musings on how ideas are learned / formed.
On this last point, I think it relates to Derrida’s biggest flaw: différance. The infinite deferral of meaning, as words are defined in terms of other words, which leads to an infinite regress. To me this sounds like a failure to grasp the purpose of a dictionary, where instead of being a boot-strapping device to fill in missing concepts to understand other ones (and a tool limited to text by its nature as a book) it is viewed as the ultimate arbiter and imbuer of meaning to words. Instead of language being a power game with no referral to reality, the issue is that dictionaries must use words to define other words, it can’t point to a dog and say “dog”, it’s just a book. The definitions of words in a dictionary must form a closed web: that’s the only option a book has! And it’s the best way to accomplish the goal of a dictionary.
However, compare picture dictionaries, children’s dictionaries, and college dictionaries. Each supposes wider vocabulary and thus concepts it can use to boot-strap your knowledge. Each reflects greater stages of development, and amount of abstraction away from percepts (raw sensory data).
For as much as language is socially constructed and power influences word usage, it is not utterly arbitrary, as implied by (naive?) readings of Derrida. But I suspect that, like some “very smart people”, he may have been too wrapped up in his own magic ideology.
Also, conceptualism like this makes sense to me because it fits in with evolution. Why do we have some universals and not others? Because we evolved to understand the world in a certain way and not in others. Does that mean our universals/concepts are invalid? No, they refer to real things, in particular things that, had our ancestors failed to understand them, they would not have survived (e.g., the “How do you know you aren’t just imagining things right now and in actual reality you are fighting wild animals in a swamp?” style of thought experiment meant to “disprove” the validity of evolutionary epistemology).
Likewise, blind people can’t understand the concept of “red”, but with the right instruments they can identify red objects, because “red” refers to certain wavelengths of light. Just because we can’t perceive the form “red” without using some form of observation doesn’t mean we can never truly understand “true” redness. (Referring to all that Kantian “noumenal” bull Rand loves to rail against, esp. “man is blind, because he has eyes—deaf, because he has ears—deluded, because he has a mind”.)
Finally, conceptualism centers individuals, not abstract collectives. Concepts/universals can only be grasped by individuals; there is no group brain. The validity of a concept is implicitly checked every time it is newly learned. If it doesn’t “click”, then it is not integrated into your thinking. If it is misunderstood, that isn’t due to a failure to grasp some perfect metaphysical “form”. Also, some concepts can be just plain wrong (lots of woke ideas in this area) and deserve to be rejected. No magic authority can declare concepts automatically valid and above scrutiny (e.g., any -ism or -ness in CSJ).
Maybe if someone’s interested and doesn’t find all this to be gibberish I’ll post a follow-up on how object-oriented programming is similar to Ayn Rand’s conceptualism (class = concept, fields = omitted measurements) and how programming languages (indeed all formal languages) are not a solution to ambiguity and communication issues due to (at the very least) the incompleteness problem.
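As a very rough, purely illustrative sketch of that class-as-concept analogy (my own toy Java example, not anything Rand wrote): the class names the concept and declares which characteristics exist, while each particular object supplies the measurements the concept omits.

```java
// Toy illustration only: "Table" plays the role of the concept; the fields are
// the characteristics whose specific measurements the concept omits.
class Table {
    final double heightCm; // every table has *some* height...
    final double widthCm;  // ...and *some* width; the concept doesn't say which

    Table(double heightCm, double widthCm) {
        this.heightCm = heightCm;
        this.widthCm = widthCm;
    }
}

class ConceptDemo {
    public static void main(String[] args) {
        // Particulars differ in their measurements but fall under the same class/concept.
        Table kitchen = new Table(75.0, 120.0);
        Table coffee = new Table(45.0, 90.0);
        System.out.println(kitchen.heightCm + " vs " + coffee.heightCm);
    }
}
```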
Conceptualism isn’t really a third alternative; it simply moves the debate between realists and nominalists into the mind. Presumably, when I employ a concept in thinking, that involves my mind/brain acquiring some properties, e.g., if I’m thinking of the Pythagorean theorem, my mind has acquired whatever properties are involved in directing my thought toward the Pythagorean theorem. If you and I are both thinking of the Pythagorean theorem, it is natural to say that we are employing *the same* concept, in virtue of our minds exemplifying *the same* properties. But that description implies realism about universals. The nominalist will prefer to say that we have “exactly similar” but nonidentical concepts/mental properties. The problem with this move is that similarity usually seems to be analyzable in terms of two or more things being identical in some respects but not in others, with “exact similarity” being a limit-case in which they are identical in all respects. Thus, in my view, realism wins. But that doesn’t mean you have to follow Plato by locating universals in a “timeless heaven” or “third realm.” Better to follow Aristotle by locating them in things as their attributes (essential and accidental).
If communication (through language) accurately conveyed concepts between speaker and listener, we wouldn’t have need of civil courts.
Listeners ASSUME they are understanding the concept being conveyed when, in fact, they are IMPOSING their own meaning on the concept and assuming that that meaning is what the speaker intended.
Likewise, speakers assume that a head-nod in their listeners indicates understanding of what was said – it doesn’t.
First the flippant reply: What other option do we have other than language to convey concepts?
I’m not saying concepts are conveyed through a single utterance of a word/sign; for a concept to be conveyed, one must communicate what one means by the sign referring to the concept. Meaning is checked by both people in the conversation explaining their understanding/version of the concept and debating the correct one. Which sounds a lot like what happens in your court case example. (Yes, I’m aware this gets into “discourse” territory.)
I am not a philosopher at all, but realism, as an abstract Platonic realm or whatever else, just sounds like woo on the level of Deepak Chopra or Kabbalah. So it seems to me the real debate is between hard social constructivist and POMO positions (which end up essentializing anyway, despite their anti-realist roots), and their detractors.
My main objection to the former is to the idea that, since signs/words are utterly arbitrary and there is an “apparent” infinite regression of signs depending on other signs (definitions), concepts themselves must be arbitrary and completely socially constructed (esp. by power dynamics). I believe concepts must at some level reflect reality in a coherent way, and they can be introduced into any discourse (perhaps at great effort to define lower-level concepts). In fact, the person who defined a new concept had to name it and define it, which means (to some extent, however briefly) the concept existed outside any discourse and had to be introduced into one.
I accept the idea that the lowest levels of concepts are formed from basic sensory experience that is abstracted by the mind, and that some are derived from genetic predisposition (evolutionary epistemology). According to very basic intros to the nominalist vs conceptualist vs realist debate, that sounds like conceptualism. Also, nominalism still has the issue of making sure each person’s predicates and names for collections of particulars are consistent and each mean the same thing. So the two schools of anti-realism have the same problem.
To end another wall of text: call it conceptualism or nominalism or whatever, I object to the Derrida-derived (hard) social constructivist idea of the infinite regress of meaning, that the discourse is all there is, that power dynamics rule/define it, and that concepts do not have to be coherent (or even definable, looking at you “post-modernism” and “god”). Mainly due to the misunderstanding of how dictionaries work described above, and the way it ignores how language is acquired (from concretes to abstracts) while somehow simultaneously obsessing over conspiracies of mass indoctrination via language (patriarchy, cis-sexism, “capitalism”, etc.).
Night-
“Social science made a lot of progress in this regard with the move into modest and pragmatic middle-range theory in contrast to the high theorising which seemed to come from the Continent and depended on teetering epistemic edifices.”
Psychology and other social sciences belong in the humanities not the sciences. Psych likes to think it’s a real science like chemistry so it struts around in the clothing and cosmetics of stats and the experimental method.
Exactly correct. Social science is an oxymoron. The mind is neither material nor rational (consistent in principle) and therefore not amenable to scientific investigation. The mind is delusion – the opposite of reality.
You’ve misdiagnosed the problem, which is not essentialism itself, but rather the conflation of essence and accident by CSJ theorists. For classical and medieval thinkers, essence has to do with what the members of individual “species” have in common – with the properties that make all humans human, for instance. It does not include any properties that differ among human beings, like skin, eye, or hair color, height, weight, etc., let alone ethnic and cultural differences. These are “accidents” (to use Aristotle’s term), not part of “essence”. The failure/refusal of certain modern and postmodern thinkers (including CSJ theorists and the “woke”) to apply this distinction properly, and their treatment of accidents as if they were constitutive of essences, is a very significant problem, but not one you can lay at the feet of Plato (whom Popper badly misunderstood, btw) or any other classical or medieval essentialist. In fact, it’s only after the late-medieval nominalists rejected the classical notion of essence that these matters start getting confused – so I’d say the fault belongs to the nominalists, not the essentialists. Alas, your own characterization of essentialism – in terms of language rather than in terms of the properties of things – shows the influence of nominalism and its empiricist descendants (e.g., Locke and his talk of “nominal essences”).
I agree with this comment in general. In particular I think you are right in saying Popper badly misunderstood Plato. Also, that essentialism relates to numerical identity of meaning and has almost nothing to do with language.
You fail to appreciate that all things are the same except in the ways they are different and all things are different except in the ways they are the same. What you are calling “properties” in humans are not properties at all – they’re ALL features or traits that VARY over a wide range. Humans and chimps have a great deal in common, but it’s unlikely you would suggest chimps should vote.
How about: concepts are mental abstractions ultimately derived or developed from things that exist in reality, in which the various characteristics of those things are noted and identified, but with their specific measurements omitted? The data from which concepts are derived is obtained from our senses and processed by our mind into concepts (ideally using rational means). Note that when I refer to things that exist in reality, I also mean intangible but real phenomena like emotions, context, and temporal relationships.
Higher order concepts can be derived from other concepts (abstractions from abstractions) but the ultimate basis still lies within reality.
Definitions are the detailed verbal and written descriptions we use to describe a concept. Words are the arrangements of sounds and the corresponding symbols (in a phonetic system) used to identify specific concepts.
This at least grounds concepts, and thus knowledge, in reality and ties our sense-perceptions to our conceptual knowledge. It also avoids the essentialist issue that fooled Plato, by tying concepts to reality rather than trying to locate them in separate “essences.”
Usually the articles here are quite good, but did nobody fact-check this one?
Your argument has been utterly insufficient to disabuse me of the theory of forms, as anyone with more than a cursory understanding of the theory can see that you don’t really understand it yourself. It has nothing to do with the words themselves. You do realize Plato was Greek, right? He wouldn’t have been philosophizing about the words ‘bed’ and ‘apple’ since he spoke Greek, not English, and those are English words. He was certainly aware of the existence of other languages, so how do you think he would have come up with a theory such as the one you describe? He wasn’t talking about the words themselves, that’s why. He was talking about the conceptual category as abstracted from either the word or the individual objects. That’s why it’s not called the theory of words, it’s the theory of FORMS. The specific word used is irrelevant. The truth of the theory is patently obvious to anyone who understands what a conceptual category is, and why they’re essential to abstract thought. The fact that the word ‘apple’ exists to describe different but similar objects is merely proof that the form exists, but the word is not the basis of the theory. The forms exist beneath the words. We all use these categories whether you agree with Plato or not (which is still proof that he was right), and regardless of what word we use to describe them. We can all use different words (though this makes communication rather difficult) but the form remains the same.
The criticism of Aristotle similarly fails to understand that he wasn’t talking about words. The essence of a thing is just that—its essence, NOT its name. Even something like ‘baldness’ has an essence that allows us to identify it even without the word. We can recognize the pattern of it. It doesn’t have to be a total binary, either. Of course it exists in degrees, but the essence of it is still present. The specific word used to describe it does not matter at all.
Conflating the theory of forms with modern SJW ideology is ignorant at best and disingenuous at worst. There may be some similarities, if for no other reason than because the theory of forms is so fundamental, but that does not make them the same thing. Throwing away the theory of forms means throwing away the entire idea of conceptual categories and pattern recognition (and any half-decent psychologist will tell you that you can’t do that: most of what we know about human cognition would have to be thrown away along with it, and that’s absurd).
Now, I’ll admit that I have not read this entire article. It’s pretty long, and I stopped after the part about Aristotle because it was obvious by that point that you really don’t know what you’re talking about.
At the risk of speaking for the author, it seems reasonable that we are dealing with a translation, hence when Plato speaks of an ‘apple’ it’s the Greek word for ‘apple’ that he’s actually using.
The Theory of Forms is often read as a theory of language’s meaning; Plato anchors meaning in metaphysical/transcendental essences. Simply using categories does not ‘prove’ the Theory of Forms because alternative empirical and subjective theories also exist.
I concede though, that I don’t know if Plato viewed the actual signifier as arbitrary. This sounds like a structuralist influence, however.
I think the concept is that the theory of forms influenced Hegel. Hegel modified it into his particular form of hokum, which was adopted by Marx (Marx would say that he “scientifically developed” Hegel’s ideas into a stronger theory: Communism).
The Frankfurt School were disillusioned interwar Communists, who tried to explain Marx’s failed predictions about the revolution (he said it would happen in Western Europe/the US first, and in places like Russia and China last). They were also blindsided by the growth of Fascism (which had resulted in many in the Frankfurt School being jailed as political prisoners in Italy, Germany, and Hungary in the 1920s-1930s). They didn’t want to ditch their faith in Communism, so they tried to reform Marx by re-embracing Hegel, inventing Critical Theory, and pushing Cultural Marxism, to try and spark a revolution through propaganda and a takeover of popular culture.
Relocated to Columbia U, they intentionally radicalized black activists like Angela Davis, who directly trained Derrick Bell (father of CRT), who taught Kimberlé Crenshaw, originator of intersectionality. Davis, Bell, and Crenshaw’s ideas are directly linked to the Frankfurt School, and are heavily infused with their Hegelian philosophy.
I am thinking “baby” and “bathwater” here.
To me, Plato’s concern with universal truth is indicative of his status as an early adopter — he was after all one of the first Greeks that we know of who set his thoughts down in writing — and I believe that we overthink him when at bottom he was merely envisaging the necessity of a dictionary (the pre-existent notion of a dictionary if I may be so bold) in the roundabout way that visionaries often have when intuiting the new.
Think “horseless carriages” or “dirigibles” even.
After centuries of taking the medium of the printed word for granted, it is easy for us to overlook the grave concerns that other cultures (including our own, if we look back far enough) have had about committing something so personal as our word to paper. Socrates would have none of it. Neither Jesus nor Muhammad did it (the Qur’an was exactly that – recitation – for some one hundred and fifty years before permission was sought to commit it to pen and paper, for academic study only). Plato even gives us his mentor’s dialogue between Thamus and Theuth (Thoth), reprimanding the god for the gift of writing because men would no longer carry wisdom in their hearts.
It seems pretty clear to me that in his theory of the forms Plato was making a case for caution over our very real capacity to confuse the meaning of a word when there is no one about to haggle it out in a dialogue. Anyone in the Twitterverse who has ever been deliberately misquoted, misinterpreted or plain old misunderstood when their back is turned should appreciate his intent.
And you know what? There is a whole bunch of apples in my notion of an apple right now, beginning with the sweet rosy yellow one I ate an hour ago and the tangy green granny smith I wish it had been instead — not to mention other varieties that have sprung to mind when someone else has read this far.
But we must be clear that it is never an orange.
Sadly the Sophists have no problem exploiting the written word in addition to the spoken word.
Boom, tish.
Wonderful response! Analogously, the Thamus and Theuth reference re the written word evoked the state of Job when he is finally personally confronted by God and granted his chance to question Him and, to paraphrase, can only say, “Well, shut my mouth!”
Your argument (like all arguments) is based on cherry-picked premises (of which you obviously are unaware). An orange has much in common with an apple. Is a hyena a cat or a canine? Is a virus alive? Is a grain of sand a small pebble? What is a definitive definition of a species? Is Neanderthalensis extinct if his genes continue in modern man? Since they share 98% of their genome, are chimps and humans the same animal?
Of course I might argue that nit-picking is not at all the same thing as splitting hairs.
Amazing piece. The essay is well-structured, and does a good job of defining the concepts necessary to understand the argumentation. The writing style is engaging and flows naturally, which was obviously critical in a text that disses over-complexified writing. An insightful read; I look forward to seeing more from Laurie.
I think it’s queer to characterize neo-racism as an extension of Platonism and not a perversion of it. Surely Plato would have considered the form of Socrates (his soul) before considering the generalized form of a man or of an Athenian in place of that individualized form.
Abercrombie-
“…confusing mess of unintelligible gibberish.” is probably a great way to describe a lot of the philosophers’ ideas.
Couple of thoughts:
“Karl Popper, who for some reason wanted to limit his demarcation to only science.”
Maybe I’m missing something here, but how does one test opinions? How does one test feelings? How is it possible to experimentally support with hard data an internal, subjective world? Only the person experiencing the feelings knows what they are; and sometimes even they don’t know.
Agree with Popper- draw the line at science.
Heisenberg’s uncertainty principle and Schrödinger’s cat thought experiment demonstrate that absolute certainty about something seems impossible. Both these men were physicists. Science never “proves” anything, BTW.
Also, I suggest people look outside the purview of epistemic tenets and into childhood development for understanding the role/purpose of words/categorization.
Data supports the notion that children universally learn about the world by categorizing objects/ideas. That’s it.
Humans are imperfect and filled with hubris.
Nobody seems to much care about foundationalism these days in the natural sciences. Natural science is more pragmatic; which lever to push or pull to achieve effect x, not what the ultimate truth of the lever is, for example.
Social science made a lot of progress in this regard with the move into modest and pragmatic middle-range theory in contrast to the high theorising which seemed to come from the Continent and depended on teetering epistemic edifices.
I don’t agree with the criticism of essentialism here. I don’t think the goal of Plato and Aristotle in finding the essence, e.g. of an apple, was to find the essence of the *word* “apple” but instead to find the essence of the *thing which we happen to call apple.* It is focused in finding the essences of actual substances in reality.
The article states Plato argues that “every object which we give a name must necessarily have an essential nature, in virtue of which it is given this name.” But the argument here really isn’t about naming; it’s about categorizing. Any time we categorize something, it’s because that thing shares some quality or qualities with everything else existing under that category. For example, if we place two objects, object A and object B, under the category of apple, we are arguing they share some characteristic or set of characteristics with each other, along with everything else we’d also categorize as an apple. Otherwise, we wouldn’t categorize them together.
Regarding racists being essentialists, this is true. But that doesn’t mean that essentialism implies racism. The issue here is that throughout history, and currently for the Critical Social Justice movement, people have created the categories of “race” and *wrongly* attributed particular essences to those groups.
For example, someone could be an essentialist when it came to race, and simply say that “all people of color” share the characteristic of not having white skin (or shared the characteristic of having more pigmented skin). We wouldn’t say that person was a racist. We might say they’ve come up with a somewhat silly category, but it’s a perfectly fine definition that makes sense. Meanwhile, if someone instead said that “all people of color” share the characteristic of being oppressed individuals, we would call that person a racist, not merely because they’re essentializing, but because they’re essentializing something totally wrong about the category.
👍
Hi Brett. You are right to point out the distinction between essentialising and simply categorising – I probably should have made this a little clearer. It is true of course that merely categorising something, e.g. “people of colour”, does not necessarily essentialise that category, as long as we use it very carefully. However, because of our strong cognitive predisposition to (illogically) essentialise from categories – which we see all the time – I think unfortunately this is often what happens. E.g. the use of “counterparts” and “minority status” seems to me to be a broad generalisation from that particular category, which wrongly gives it more meaning than it ought to have.
I would also argue from a constructivist standpoint on this: that is, since there are only a limited number of categorisations you can make of society, the ones that we deem worthy of talking about are the ones that we reify. So by constantly referring to “people of colour” and “white people”, even though taken literally this is just an arbitrary categorisation, we imply that there is some kind of essential difference or dichotomy between these two groups – and this dichotomy becomes real in people’s minds.
The slippery but most interesting question is why we have these categories in the first place (such as POC) as one of an infinite number of possible ways of parcelling up reality. I see it as terribly naive to ground their origin in a disinterested empiricism (as ‘natural’ or ‘arbitrary’). Sadly, this could be an unanswerable question because it goes to the heart of what words mean; even Foucault said there was no getting outside of power. Arguments that ‘power’ shaped these categories are probably just as metaphysical as the Theory of Forms.
I think it has to do with the history of racial discrimination in the US. To justify slavery (and actually characterize it as a good christian practice), slave owners dehumanized africans as inferior to full humans. They also gradually freed European indentured servants, while passing laws to permanently enslave Africans, pitting the two groups against one another after several labor revolts, splitting them up on racially constructed lines.
This split people up into white and black groupings, and most blacks lived in the south.
Blacks had some protection as valuable property (if you murdered a slave, the owner would prosecute you for felony property damage). After slavery ended, civil rights laws were not enforced, and blacks became vulnerable to terrorist attacks, peaking with the resurgence of the KKK in 1900-1930.
Because of this, blacks banded together for protection, and racial lines hardened. If a single black person resisted Jim Crow, this could result in violent retaliation vs many bystanders. This fostered a strong sense of community, solidarity, and pressure on blacks by other blacks to conform so as to not “rock the boat” and trigger indiscriminate violence.
So why “POC” instead of black vs white? Because after the successes of the civil rights movement in the 60s, racism diminished, and civil rights groups became less relevant. Also, many other groups of immigrants came into the country, threatening to make black rights movements fade into the background. So black activist groups set themselves up as the civil rights leaders, and expanded “black” to include “POC”. Instead of many splinter groups, they could unify very different ethnic groups in solidarity to be a stronger political power bloc, preferably (for black leaders) with black people in charge.
But those CSJ thought leaders play with categories, making and breaking connections to serve immediate political ends. Hence BIPOC includes Black & Indigenous but then excludes Asians, and this despite the fact that there are many different Asian groups who generally don’t see themselves as united, except in a certain political context.
Hispanic or Latino or LatinX could be included in BIPOC as Latinos are part indigenous but then there are “white hispanic” who may be categorized as “Spaniard” but these decisions are more based on their acceptance of or role in political beliefs or happenings. So a Hispanic or LatinX can be BIPOC but if he owns a business, shoots an armed robber who isn’t White, or votes Republican, he becomes part of the “Whiteness” camp, supporting normal hierarchy and oppression of BIPOC.
Arab-Muslims are POC, even if they are white, except when a mass shooter was heavily and publicly jeered at as White – both for being a gun-user and for being arrested without dying – and CSJ pointed out that he appeared to have pale skin, so he’s bad and he is in your category, not ours.
Similar for people who are essentially Black but not politically-Black for having adopted an excess amount of Whiteness, and defending that Whiteness as having value.
Hi Laurie, thanks for your response. The distinction I am making in my previous comment is not between essentializing and categorizing. It’s between essentializing and *essentializing incorrectly* (and often in dangerous and malicious manners). I do, however, agree that essentializing at all, when it comes to something like race, can lead to bad consequences, even though there’s no logical necessity there.
“becomes real in people’s minds” = subjectivism which taken collectively = constructivism
Learn genetics. A DNA lab tech can tell you the subspecies (race) of sample donors without any other knowledge of their identities. In fact, the tech can usually tell you the tribe of origin of the donor. Subspecies are subspecies because they are allelically divergent (they have markers unique to their subspecies).
Sapiens instinctively prefer others who are genetically similar to themselves and are cautious or hostile to those who are dissimilar – i.e., they think tribally (if you haven’t noticed this tendency, go to a college football game). From a biological perspective, this instinct is an evolutionary adaptation that increases the likelihood that they will survive and propagate their genes.
Interesting point. Got me thinking about how society has been arguing about what makes someone ‘male’ or ‘female’. It’s entirely reasonable to identify their ‘essence’ with certain biological characteristics, but to call someone ‘male’ because they are brash or insensitive is sexist, though many males are.
If we can’t agree on what an apple or a bed or a male is, we lose the bedrock tool of society – our ability to communicate.
Not that we are going to argue what an apple is. Unless the apple starts self-identifying as an orange.
Yep, exactly John. In fact, intellectual thought based in essentialism can much more easily and forthrightly respond to the arguments of Critical Social Justice that rely on deconstruction. There *is* in fact an essence to a ‘male’ and an essence to a ‘female,’ having to do with their binary role in sexual reproduction for sexual organisms. While throwing out essentialism might be a cheap way to score a point against Critical Race Theory (since Critical Race Theorists want to reinfuse racial distinctions with meaning again), it hurts us when we argue against CSJ in many other areas.
Although I liked this essay, a lot, it really needs to be better proofread & edited, for “boo boos” (e.g., extra words) abound, marring it. That said, content-wise, logic-wise, argument-wise, this essay rocks.
That’s very kind, and yeah sorry for those, it was just sloppiness on my part.
“For this to work effectively, we use words as labels for and concepts so we can talk about them.” ???
I can’t make sense of this. Is it a writing error?
The Author is appealing to a taken-for-granted or ‘common sense’ understanding of language that apparently, and unproblematically, ‘shows’ Plato’s Theory of the Forms as being wrong.
In actual fact, it just begs more questions and the Author is slinking away from having to formalise an alternative theory of language.
“begs more questions”
Yet more language confusion.
This is writing of exceptional clarity and accessibility. To see Hegel, Aristotle, and CSJ addressed so coherently in a single essay with such persuasive force is extremely rare. Thank you for this illuminating read. I learned some interesting ideas and useful approaches to thinking here today. I look forward to more from this author.
Thank you!
Alas, the supposed “clarity” has been achieved at the cost of misrepresenting Plato, Aristotle, and the classical doctrine of “essentialism.” (Whether Hegel is also misrepresented, I’ll leave to others to judge – he’s not in my wheelhouse.)
I always thought that Popper was harsh on Plato, and perhaps Hegel too (although I’m much less familiar with the latter), which may be understandable, however, given the wartime conditions under which The Open Society was written.
I do agree with Laurie that there is a great deal of essentialising (and reification) of identity categories by the Left today, but why lay the blame at Plato’s feet when, as Laurie says, this seems to be a cognitive predisposition that we have to be trained out of? Furthermore, Laurie seems to believe that simply using ‘people of colour’, for example, is ipso facto to essentialise, but this makes no more sense than saying that someone using the word ‘bed’ is ‘doing’ essentialism too.
These identity categories – in the context of intersectionality – should be seen as generalisations (which are quite permitted by nominalism) of broad power dynamics that operate in society, a bit like any statistical truth. One could even consider them like Weber’s ideal types. What most Critical Social Justice activists seem to do, however, is fail to comprehend the fallibility, partiality, historical contingency, and aforementioned generality packed into intersectionality, and thus they essentialise/reify the categories, creating a kind of metaphysics of power. I think of this tendency as a kind of constructionism-in-theory but an essentialism-in-practice, which in turn essentialises Theory, unfortunately. It may be that intersectional ideas about power are too crude, general or reductive, etc., to have any empirical use or validity, especially when it comes to saying anything predictive about a given individual (a jurisdiction they do not have in my opinion), but this is not what Laurie is addressing, per se.
Alternatively, we can blame Critical epistemologies for placing their foundation (or centre of meaning) in the Lived Experience (of oppression), rather than the metaphysics (behind language) of Plato’s Theory of Forms. This, in my view, is the main reason for Critical Social Justice ideas being so slippery from the point of view of pinning them down for validation/falsification; they are de facto metaphysics!?
Not that I can, in all honesty, point to any methodology (whether empirical or not) in the humanities which I believe offers an absolute test of any theory. Much mental toil has gone into epistemology over the centuries but the ‘true centre’ or ‘purity of presence’ of language (as Derrida might say) remains forever elusive. This lack of cognitive closure can be difficult for some to handle.
Hi Night. On the categories, you raised a similar point to Brett above which (I hope) I have addressed there.
It’s not that I think categorisation necessarily implies essentialising, but rather that essentialising is very often the consequence of categorising, especially given our cognitive predisposition to generalise from categories.
Super interesting. Right to the root of this. Really great piece!
Seems like we’ll have to confront this all over again. Open Society is from 1945!
Beauty of democracy: you have to earn it.
In the same direction you might want to check this one out.
I got it from an architect friend who’s into this prof’s thinking.
Links beautifully with the piece James wrote a couple of months ago, foreseeing a renaissance in art.
(Yes, I am paying attention!)
KUTGW, everyone!
All the philosophers like Hegel and the Postmodernists seem to want to intentionally make language loose and sloppy, when we should strive to do the opposite.
I believe the ideal solution would be to get linguists and computer language designers working together to develop a human language based on a strongly typed object-oriented computer programming language. Words would have to be split up into a massive number of different words at every opportunity to decrease potential for ambiguity and word games.
Imagine an extremely precise, rigidly defined, unambiguous language with a crude and inflexible grammar modelled on a computer programming language. Designing it to be perfectly phonetic would be ideal as well. It would solve so many problems in communication.
That sounds a bit like what the Logical Positivists tried with ‘logical atomism’? Wittgenstein eventually rejected ‘ostensive definition’ in The Philosophical Investigations. Sadly, empiricism does not seem to have any chance of a certain direct access to reality through language. The centre of language escapes us.
I looked up Wittgenstein and Carnap and it seems they were before the time of computers, although Wittgenstein invented truth tables! They didn’t have computer programming languages to work with as a model, which is probably why they failed. I started reading Wittgenstein’s Tractatus and it actually reminds me of object-oriented programming. The abstractions are obviously not the real world, which Wittgenstein points out right off the bat, except that they are useful abstractions for modelling the world. It is very useful for data-modelling and designing algorithms. Most video games, complete with their fantastical imaginary worlds, are primarily written in C++, one of the first mainstream object-oriented programming languages. If you had sufficient hardware and you really put your mind to it, you could theoretically model anything, even the entire world, in an object-oriented programming language. You could of course model everything in an old-style procedural language as well, except it would be harder. I would not even try it in one of those loose, sloppy “dynamic” languages that seem to be the craze lately, because you would likely wind up with a tangled, confusing mess of unintelligible gibberish.
One could even take an idea’s or concept’s suitability for being modelled as the demarcation criterion for assessing its substance or meaning. I have ordered a book by Carnap. So far, I like Francis Bacon and the Logical Positivists a lot more than I liked Karl Popper, who for some reason wanted to limit his demarcation to only science.
You’re still deferring the question of what those objects are if your language is going to ground its meaning in them. Plato chose his metaphysical Forms for that purpose.
My point was that the older Wittgenstein eventually renounces the Tractatus by ‘showing’ – through the Philosophical Investigations – that a correspondence theory of natural language is inadequate. He uses the metaphors of ‘language games’ for that purpose.
The value of abstractions can only be evaluated by assessing their usefulness in solving problems, which is the only way any abstraction, concept, or thought can ever be legitimately evaluated.
Many computer programming languages are very precise, rigidly defined, and unambiguous. Not only do they say something, they explain it in intricate detail to the point that even a computer can understand it sufficiently to execute the task. They don’t rely on the computer making an untold number of assumptions, inferences and guesses.
You know how we all wish every single word and phrase could only have one meaning, that only that word or phrase could have that exact meaning, and that the meaning of a word or phrase could never change over time or shift during the context of a conversation?
Also, we would no longer need loud volume, tone of voice, facial expressions, body language, or hand gestures as a sloppy crutch, and communication would be more precise, more consistent, and less emotional.
This is a strawman or misreading of Plato and Aristotle and glazes over their work. I’m not sure you understand them.
Neither were doing philosophy of language as the analytic philosophers were doing.
I think you’re reading Plato and Aristotle through a sort of Popperian logical positivism that states we must be able to verify the meanings of words empirically, as if Plato and Aristotle were trying to do what Popper was doing.
Plato and Aristotle were trying to define the essence of things using words not as essential themselves but merely as labels to understand what a thing is. Language for them wasn’t essential, the thing it describes was. The form and matter of the bed were essential for Aristotle, words merely described that. The form of bedness was essential for Plato, not the word bedness as the form of the bed isn’t the word bedness, it’s the perfect bed!
It was Popper and the analytical philosophers who were essentialising language not Plato and Aristotle.
Seems like you’re describing a dictionary.
Your wish is not my wish: ambiguity (intentional figures of speech) exists and can be as or more desirable/useful than straightforwardness, especially where there are complexities that defy ready explanation.
If you look a word up in a dictionary, it typically has more than one meaning. I’m talking about not allowing the word bat, or whatever, to have different meanings. Here’s an example of a silly thing that could easily be cleaned up.
The captain told me to lead the men to the lead mine, so I led them through the street and had to duck a duck that flew over on the way.
On a significantly more important note, we should tighten up even slightly different versions of the same meaning to minimize word play.
If you look up a word in a dictionary it will always be defined by other words, and eventually you will come back around in a circle to the first word you looked up. Moreover, computer languages only have to carry an internally valid meaning. Natural languages, on the other hand, have to carry meaning about external reality.
I don’t believe that is the cause of a word being allowed to have two entirely different meanings, such as bat or even the word imply, which can mean anything from suggests to logically entails. Equivocation becomes much easier when words are flat out allowed to have different meanings. We allow it for some reason when we could clear it up. If we aren’t careful, people might start intentionally abusing it.
As far as computer programming languages not being externally valid, I completely disagree. The English language is used to express thoughts, and other people interpret the output, whether it be in the form of writing or speech. The output of computer programs is used to inform business decisions, medical decisions, and even to guide government policy. Additionally, computer code is used to directly control the operation of things ranging from stop lights and pacemakers to air-traffic control systems, and much more. Even our brains are computers that run on the firing of neurons and must use some type of language, which I can only assume breaks down to binary at the most basic level.
With computers, we use idealized abstractions so we don’t have to write everything in bare-bones languages, except the abstractions are imperfect and result in less optimal code even though they are far more productive and easier to work with. However, even a string of zeros and ones is no less a language than any programming language or human language. It is a code used for communication. In fact, it is more pure, exactly as our minds must think in a pure language at some level.
The abstractions we create may not be the real world, except they can be useful and productive when used appropriately and in the right situation. It might take me a month to write something in Assembly Language that I can write in a higher level language in a day. If the program is of any considerable size at all, I would never finish if I tried to write it in machine language. On the other hand, if you go too far off the deep end with the abstractions, you get lost and wind up with nonsense because you wind up outsmarting yourself.
If you tighten up word play, you eliminate puns and 3/4 of British humor, and I won’t stand for that. 🙂
Then, it gets even worse – the same word can have opposite meanings (a contronym). Faith is one such word. It can mean “belief in the absence of any evidence or direct experience” or it can mean “belief based on extensive past experience.” Theists fallaciously use this contronym to make the spurious argument that everyone, even atheists, has “faith” in something.
Lojban, the language you are looking for is called Lojban.
Also, programming languages are not devoid of ambiguities. They are introduced in a controlled way to make programming easier, for example method overloading/overriding, operator overloading, scoping rules and namespaces (for recycling names), and abstract base types (and so on).
The trick with this is a rigid algorithm for deciding what is meant in each context (normally by using data types and scope). For example, + in Java on two ints results in the addition operation, whereas + on two Strings results in a new String that is the concatenation of the originals.
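A minimal sketch of that overload-resolution point (illustrative only):

```java
// The compiler picks the meaning of "+" from the static types of its operands.
public class OverloadDemo {
    public static void main(String[] args) {
        int a = 2, b = 3;
        String s = "2", t = "3";
        System.out.println(a + b); // ints: arithmetic addition, prints 5
        System.out.println(s + t); // Strings: concatenation, prints "23"
    }
}
```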
Major frameworks like Spring Boot take this to an entirely new level, where it will detect the presence of certain libraries and automatically produce entire classes and invoke custom lifecycle methods to make your application work using only your provided config and application code. For example, automatically generating database access code for queries based only on the underlying storage, the repository data type (used to represent results of queries), and the name of the query. All with zero ambiguity, due to conventions and minimal configuration.
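Roughly like this, if I recall the Spring Data style correctly (the entity and repository names here are hypothetical, just to show the convention-over-configuration idea):

```java
// Hypothetical example: Spring Data derives the query from the method name alone;
// no SQL and no query configuration are written by hand.
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import java.util.List;

@Entity
class Person {
    @Id
    Long id;
    String lastName;
}

interface PersonRepository extends JpaRepository<Person, Long> {
    List<Person> findByLastName(String lastName); // "find by last name" inferred from the method name
}
```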
Another trick is where something is not properly defined and its use is therefore discouraged by being labeled “undefined behavior”. These warnings are typically only ignored when dealing with a particular operating system/hardware/compiler combination; the code won’t work as expected elsewhere. This is common in C/C++, where many minor things are undefined, esp. when errors occur or multi-threading is involved.
I probably will blabber on with some more philosophy related content in another top level comment. In particular how similar classes are to Ayn Rand’s theory of concepts (measurement omission based abstraction, concepts are names given to such abstractions), and the idea that programs are proofs (which leads to “computational trinitarianism”)
Thanks for the info. I had never heard of Lojban before.
One thing I didn’t see mentioned in the videos I watched is whether the language is designed to be spoken in a smooth, flowing manner or with abrupt, distinct stops between syllables and words. I’m reminded of the old joke about a termite walking into a bar and asking ambiguously “Is the bartender here?” or “Is the bar tender here?”.
Or we should learn something from modelling the real world with computers.
There, our concepts grew from precise mathematical notions (Turing machines, functional programming, predicate logic) to emulating human languages and categorization, with objects and classes mimicking Aristotelian categories and, for example, prototypes in JS referencing some more modern linguistic theories of categorization – always moving towards less strict boundaries and towards the ambiguity needed to describe the real world. The sofa-bed is a good example: it might be difficult to model in single-hierarchy OO languages, so we invented new (complex to implement) features like multiple inheritance, and digressions like interfaces, to work around the categorization restrictions.
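A toy sketch of that sofa-bed case (my own illustration, in Java, where class inheritance is single but interface implementation is not):

```java
// Java forbids extending two classes, so the hybrid "sofa-bed" category is
// modelled by implementing two interfaces instead of inheriting from two parents.
interface Sofa {
    void seat(int people);
}

interface Bed {
    void sleepOn();
}

class SofaBed implements Sofa, Bed {
    public void seat(int people) {
        System.out.println("Seating " + people + " people");
    }

    public void sleepOn() {
        System.out.println("Folded out for sleeping");
    }
}
```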
All that, however, does not help us with machine learning, where those categorization theories “don’t apply”. Statistics and information theory do, however, and with learning, everything is “epistemological”; nothing is “real” (apart from the training data) or a priori.
Information theory is there to save the day, because it can measure what we treasure in categories, namely the excess entropy (which we like) and the (small) own information that helps us process an abundance of information. Quite a few ML models are used for complex classification/categorization.
There is a lot to process, for our heads or our computers, so we approximate as much as we can; complete consistency is in fact impossible, so with increasing complexity we necessarily introduce approximations and therefore also errors.
I predict composing different ML models will be even messier than anything we have done so far with computers; we’ll try to manage by composing training data and retraining for as long as possible. Then we’ll see a theory of bridging the states produced by different ML models (or ML “categories”), if such a theory is possible at all.
I have near-zero experience with philosophy, but I grasp that many abstract concepts defy tight categorization and definition, and can evoke endless argument when people have different ideas about expanding or contracting a category of abstract meaning.
Tangible nouns and verbs are easy. Liberty, slavery, race, gender-orientation, similar things are harder to pin down. Then (as stated) people can bicker about why something or someone got excluded and other people can bicker about why some things should be included.
“Wigger” can be informally included in Black culture in some friend circles (thinking of a video of an argument in a store between an obvious “wigger” who had adopted a lot of Black idiom almost certainly from his friend, vs an actual Black man who decked him), but CRT scholars (and the general public) saw him as out-of-line and White and someone who deserved the beating.
I enjoy some of this in Twitter arguments, such as when Joe Rosenbaum was throwing around the N-word and when Kyle Rittenhouse was providing first aid to the ostensibly-Black BLM protesters. Who was on which “side” vs who the media and activists claim was on which “side” (oppressor vs oppressed).
Likewise, when Daunte Wright was shot, or when George Floyd died, they represented Black people, yet their histories involved violently oppressing Black people, albeit in robbery more than racism. One Black channel video pointed out that such criminals have harmed and killed more Black people every year than all the Black people killed by the KKK since the 1950s. If harming Black people is associated with Whiteness, were they not essentially White? Or is it that “Whiteness” isn’t a valid category as now defined by a history of being engaged in Oppression of PoC?
Any time a gangsta like King Von or others murdered someone for an alleged “crime” beyond disrespectful rap lyrics (and therefore assassination of character?), were they therefore White instead of Black since they weren’t upholding Black solidarity? Are those events not similar to LYNCHINGS of sorts?
We already have that language – it’s called MATH.
Another form of that language is LEGALESE. Lawyers assign specific, precise meanings to terms that exclude other definitions in order to avoid ambiguity and misunderstanding in laws and contracts.
Mr Abercrombie Dorfen, have you ever encountered the Lojban constructed language?