Get your Rapture hats ready, kiddies! The sky is falling, and our wise gift of nuclear winter will propel us all into the loving arms of the all-knowing and all-everywhere G-d.

2006-08-06

Mircea Eliade, Relativism Existentialism God Jung Maybe


 Four Score and seven years ago --
okay, just half that -- I bought an Elaine Pagels U. of C. Press book
about Mircea Eliade -- simply due to a fascinating back-cover synopsis
--oops and the shrink-wrap is still intact, methinks, if it ever is unearthed
in the eternally ballyhooed 'Rapture' that will coincide with my 3 realms of stuff becoming
organized. ha3



http://www.friesian.com/vocab.htm

Terms used in Mircea Eliade's
The Sacred and the Profane,
The Nature of Religion


Mircea Eliade uses many terms in and from several languages in his classic book, The Sacred and the Profane, The Nature of Religion [Harvest/HBJ, 1959], which applied Rudolf Otto's theory of numinosity to a variety of religious phenomena. It deals with aspects of religion and thus is really rather narrower than the title, "The Nature of Religion," might suggest. Here is a selection of his vocabulary that may be unfamiliar to students (who no longer routinely take Classical languages like Latin and Greek, let alone Arabic). This is based on a shorter list compiled by my colleagues Lisa Raskind and Gunar Freibergs at Los Angeles Valley College. Words used as foreign words are in italics, English words and coinages are not.

ab initio from the beginning (Latin)
ab origine at/from the beginning (Latin)
aiones, aeva eons, ages (Greek & Latin)
anthropo-cosmic human-universal (Greek)
anthropophagy eating people, cannibalism (Greek)
autochthony origin or birth in the land itself (Greek)
axis mundi center of the world, cosmic pillar (Latin)
cipher a code (cf. decipher)
coincidentia oppositorum coincidence of opposites (Latin)
conjugal having to do with marriage, especially the sexual relationship of marriage
consecrate/sanctify ritually render sacred
cosmogony the mythic creation (birth) of the world (Greek)
cosmological having to do with the universe, its structure
demiurge ("people's worker") a creator deity or a subordinate, non-ultimate creator deity or being (Greek, from Plato's Timaeus)
desacralize render or become unsacred (/profane)
epiphany appearance, manifestation of anything, but especially something divine, as of the infant Jesus on January 6 (Greek)
existential a subjective sense of reality or existence, having to do with Existentialism
fecundator one who makes someone/thing else fertile (Latin)
ganz andere wholly other (German)
gesta exploits, deeds (Latin)
Hellenistic the history, civilization, etc. of the Greek states and rulers from Alexander the Great (d. 323 BC) to the Roman conquest of Egypt (30 BC)
hic et nunc here and now (Latin)
hierogamy sacred marriage (Greek)
hierophany appearance of the sacred (Greek)
historicism the unfolding of history interpreted to reveal the meaning of human life, reality, good and evil, etc. and to provide a (relativistic) standard for value judgments
homogeneous the same everywhere or throughout (Greek)
homologous alike or parallel in function or origin (Greek)
homo religiosus religious man (Latin)
hydrogeny birth from water (Greek)
illud tempus that time (Latin)
imago mundi image, model, microcosm of the world (Latin)
imitatio dei in imitation of the gods/God (Latin)
immolate burn up, cremate
in aeternum in eternity (Latin)
incommensurability when two quantities, ideas, values, etc. cannot be interpreted or compared in terms of each other or in terms of anything else
in illo tempore in that time (Latin)
in principio in the beginning -- the first words of the Vulgate Bible (Latin)
in statu nascendi in the process of being born (Latin)
irruption breaking into (compare "eruption")
macrocosm a large image, model, or counterpart (Greek)
marabout North African dervish/mystic/holy man (Arabic)
microcosm a small image, model, or counterpart, corresponding to the macrocosm (Greek)
numen the might of a deity, majesty, divinity (Latin), especially as interpreted by Rudolf Otto
ontology the theory of existence or reality (Greek)
ontological having to do with existence or reality (Greek)
ontophany appearance of Being (Greek)
orbis terrarum the circle of the lands (Latin)
paradigmatic in the manner of an authoritative example (Greek)
Parmenidean having to do with Being/existence (Parmenides)
parthenogenesis virgin birth (Greek)
phenomenology the description of appearances/phenomena/facts (Greek)
plenitude fullness (Latin plenum)
post mortem after death, sometimes a synonym for "autopsy" (Latin)
retrogression movement backwards
sacrality sacredness (Latin sacer)
sacralize render or become sacred
sidereal having to do with the stars (Latin sidus)
soteriological having to do with salvation or a savior (Greek, sôtêr)
tellurian having to do with the earth (Latin tellus)
templum space, place, sacred place, temple (Latin)
terra mater mother earth (Latin)
transmundane beyond the world (Latin)
theophany appearance of the divine, gods, or God (Greek)
uranian having to do with the sky/heaven (Uranus in Latin, from Ouranos in Greek)
valence degree of value or power
valorization attribute or endow with value
Weltanschauung idea of the world, world view (German)
ziggurat Sumerian/Babylonian temple pyramid



Ugh!   beware of wordz--

Relativism


The first clear statement of relativism comes with the Sophist Protagoras, as quoted by Plato, "The way things appear to me, in that way they exist for me; and the way things appear to you, in that way they exist for you" (Theaetetus 152a). Thus, however I see things, that is actually true -- for me. If you see things differently, then that is true -- for you. There is no separate or objective truth apart from how each individual happens to see things. Consequently, Protagoras says that there is no such thing as falsehood. Unfortunately, this would make Protagoras's own profession meaningless, since his business is to teach people how to persuade others of their own beliefs. It would be strange to tell others that what they believe is true but that they should accept what you say nevertheless. So Protagoras qualified his doctrine: while whatever anyone believes is true, things that some people believe may be better than what others believe.

Plato thought that such a qualification reveals the inconsistency of the whole doctrine. His basic argument against relativism is called the "Turning the Tables" (Peritropé, "turning around") argument, and it goes something like this: "If the way things appear to me, in that way they exist for me, and the way things appear to you, in that way they exist for you, then it appears to me that your whole doctrine is false." Since anything that appears to me is true, then it must be true that Protagoras is wrong [1]. Relativism thus has the strange logical property of not being able to deny the truth of its own contradiction. Indeed, if Protagoras says that there is no falsehood, then he cannot say that the opposite, the contradiction, of his own doctrine is false. Protagoras wants to have it both ways -- that there is no falsehood but that the denial of what he says is false -- and that is typical of relativism. And if we say that relativism simply means that whatever I believe is nobody else's business, then there is no reason why I should tell anybody else what I believe, since it is then none of my business to influence their beliefs.
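For readers who want the logic spelled out, the Peritropé can be put in quasi-formal dress. What follows is a minimal sketch in LaTeX notation; the predicate letters are my own labels for the argument above, not anything in Plato:

\[
\begin{array}{ll}
\text{Protagoras's thesis } P: & \forall x\,\forall p\,\bigl(A(x,p) \rightarrow T(x,p)\bigr)\\
\text{Plato's premise:} & A(\mathrm{Plato},\ \neg P)\\
\text{Hence, by } P \text{ itself:} & T(\mathrm{Plato},\ \neg P)
\end{array}
\]

Here A(x,p) reads "p appears true to x" and T(x,p) reads "p is true for x." By its own lights, relativism thus certifies the truth (for Plato) of its own denial, which is exactly the self-undermining feature just described.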

So then, why bother even stating relativism if it cannot be used to deny opposing views? Protagoras's own way out that his view must be "better" doesn't make any sense either: better than what? Better than opposing views? But there are no opposing views, by relativism's own principle. And even if we can identify opposing views -- taking contradiction and falsehood seriously -- what is "better" supposed to mean? Saying that one thing is "better" than another is always going to involve some claim about what is actually good, desirable, worthy, beneficial, etc. What is "better" is supposed to produce more of what is a good, desirable, worthy, beneficial, etc.; but no such claims make any sense unless it is claimed that the views expressed about what is actually good, desirable, worthy, beneficial, etc. are true. If the claims about value are not supposed to be true, then it makes no difference what the claims are: they cannot exclude their opposites.

It is characteristic of all forms of relativism that they wish to preserve for themselves the very principles that they seek to deny to others. Thus, relativism basically presents itself as a true doctrine, which means that it will logically exclude its opposites (absolutism or objectivism), but what it actually says is that no doctrines can logically exclude their opposites. It wants for itself the very thing (objectivity) that it denies exists. Logically this is called "self-referential inconsistency," which means that you are inconsistent when it comes to considering what you are actually doing yourself. More familiarly, that is called wanting to "have your cake and eat it too." Someone who advocates relativism, then, may just have a problem recognizing how their doctrine applies to themselves.

"Here again we see the contrast
between a long history of struggling
with difficult logical issues and the
assertion by the race-gender-class
critics of a logically unsophisticated
position that is immediately contradicted
by their own actions. Although
theoretically against judgments of
literary value, they are, in practice,
perfectly content with their own;
having argued that hierarchies are
elitist, they nonetheless create one by
adding Alice Walker or Rigoberta
Menchu to their course reading lists.
They vacillate between the rejection of
all value judgments and the rejection of
one specific set of them -- that which
created the Western canon."

John M. Ellis, Literature Lost
[Yale University Press, 1997], p. 197

This problem turns up in many areas of dishonest intellectual or political argument, as in the box quote.

Modern relativists in philosophy, of course, can hardly fail at some point to have this brought to their attention. The strongest logical response was from Bertrand Russell, who tried to argue that nothing could logically refer to itself (called his "Theory of Logical Types" [2]). That was a move that defeated itself, since in presenting the Theory of Types, Russell can hardly avoid referring to the Theory of Types, which is to do something that he is in the act of saying can't be done or that doesn't make any sense [3]. In general, one need merely consider the word "word" and ask whether it refers to itself. Of course it does. The word "word" is a word. Other modern relativists in philosophy (e.g. Richard Rorty) try to pursue Protagoras's own strategy that their views are "better" rather than "true." Rorty sees this as a kind of Pragmatism, which is not concerned with what is true but just with what "works."

Pragmatism is really just a kind of relativism; and, as with Protagoras's own strategy, it is a smoke screen for the questions that ultimately must be asked about what it means that something is "better," or now that something "works." Something "works," indeed, if it gets us what we want -- or what Richard Rorty wants. But why should we want that? Again, the smoke screen puts off the fatal moment when we have to consider what is true about what is actually good, desirable, worthy, beneficial, etc. All these responses are diversions that attempt to obscure and prevent the examination of the assumptions that stand behind the views of people like Rorty. It is easier to believe what you believe if it is never even called into question, and that is just as true of academic philosophers like Rorty as it is for anybody else. Being intelligent or well educated does not mean that you are necessarily more aware of yourself, what you do, or the implications of what you believe. That is why the Delphic Precept, "Know Thyself" (Gnôthi seautón) is just as important now as ever.

Relativism turns up in many guises. Generally, we can distinguish cognitive relativism, which is about all kinds of knowledge, from moral relativism, which is just about matters of value. Protagoras's principle is one of cognitive relativism. This gives rise to the most conspicuous paradoxes, but despite that there are several important forms of cognitive relativism today: historicism is the idea that truth is relative to a given moment in history and that truth actually changes as history does. This derives from G.W.F. Hegel, although Hegel himself thought there was an absolute truth, which would come at the "end of history" -- where he happened to be himself, curiously. This kind of historicism was taken up by Karl Marx, who thought that every kind of intellectual structure -- religion, philosophy, ethics, art, etc. -- was determined by the economic system, the "mode of production," of a particular historical period. A claim to truth about anything in any area could therefore be simply dismissed once its economic basis was identified: labeling something "bourgeois ideology" means that we don't have to address its content. Like Hegel, however, Marx did think there was an absolute truth at the "end of history," when the economic basis of society permanently becomes communism. Modern Marxists, who don't seem to have noticed the miserable and terrible failure of every attempt to bring about Marx's communism, can hardly do without their absolutizing "end of history" [4]; but modern Hegelians (e.g. Robert Solomon) can create a more complete relativism by removing Hegel's idea that there is an "end" to history. Unfortunately, that creates for them the typical relativistic paradox, for their own theory of history no longer has any basis for its claim to be true for all of history. Hegel didn't make that kind of mistake.

Another modern kind of cognitive relativism is linguistic relativism, the view that truth is created by the grammar and semantic system of a particular language. This idea in philosophy comes from Ludwig Wittgenstein, but it turns up independently in linguistics in the theory of Benjamin Lee Whorf. On this view the world really has no structure of its own, but that structure is entirely imposed by the structure of language. Learning a different language thus means in effect creating a new world, where absolutely everything can be completely different from the world as we know it. Wittgenstein called the rules established by a particular language a "game" that we play as we speak the language. As we "play" a "language game," we indulge in a certain "form of life."

In linguistics, Whorf's theory has mostly been superseded by the views of Noam Chomsky that there are "linguistic universals," i.e. structures that are common to all languages. That would mean that even if language creates reality, reality is going to contain certain universal constants. In philosophy, on the other hand, Wittgenstein is still regarded by many as the greatest philosopher of the 20th century. But his theory cannot avoid stumbling into an obvious breach of self-referential consistency, for the nature of language would clearly be part of the structure of the world that is supposedly created by the structure of language. Wittgenstein's theory is just a theory about the nature of language, and as such it is merely the creation of his own language game. We don't have to play his language game if we don't want to. By his own principles, we can play a language game where the world has an independent structure, and whatever we say will be just as true as whatever Wittgenstein says. Thus, like every kind of relativism, Wittgenstein's theory cannot protect itself from its own contradiction. Nor can it avoid giving the impression of claiming for itself the very quality, objective truth, that it denies exists. If it does not make that claim, there is no reason why we need pay any attention to it.

Although Protagoras gives us a principle of cognitive relativism, his own main interest was for its consequences in matters of value. Relativism applied to value -- that truths of right and wrong, good and evil, and the beautiful and the ugly, are relative -- is usually called moral relativism. This is inherently a more plausible theory than a general cognitive relativism, for people disagree much more about matters of value than they do about matters of fact. And if we are talking about something like justice or goodness, it is much more difficult even to say what we are talking about than it is when we are talking about things like tables and chairs. We can point to the tables and chairs and assume that other people can perceive them, but we have a much tougher time pointing to justice and goodness. Nevertheless, moral relativism suffers from the same kinds of self-referential paradoxes as cognitive relativism, even if we divorce it from cognitive relativism and place it in a world of objective factual truths. We can see this happen in the most important modern form of moral relativism: cultural relativism.

Cultural relativism is based on the undoubted truth that human cultures are very different from each other and often embody very different values. If Italians and Arabs value female chastity and Tahitians and Californians don't, it is hard to see how we are going to decide between these alternatives, especially if we are Californians. A classic and formative moment in this kind of debate came when a young Margaret Mead went to Sâmoa and discovered that casual sex, non-violence, and an easygoing attitude in general made adolescence in Sâmoa very much different from adolescence back in the United States. Her conclusions are still widely read in her book Coming of Age in Samoa. These discoveries simply confirmed the views of Mead's teacher, Franz Boas, that a culture could institute pretty much any system of values and that no culture could claim access to any absolute system of values beyond that. Since Boas and Mead were anthropologists, this gave cultural relativism the dignity, not just of a philosophical theory, but of a scientific discovery. Strong statements about cultural relativism are also associated with another famous anthropologist, and friend of Mead's, Ruth Benedict. Today the anthropological empirical evidence that cultures are different is usually regarded as the strongest support for cultural relativism, and so for moral relativism.

There are several things wrong with this. First of all, Mead's own "discoveries" in Sâmoa were profoundly flawed. What Sâmoans have always known is that Mead was deceived by her teasing adolescent informants and failed to perceive that female chastity was actually highly prized in Sâmoa and that there was very little of anything like "casual sex" going on there in Mead's day. Even in her book there are strange aspects, as when Mead characterizes a certain kind of casual sex as "clandestine rape." That has an odd ring -- until we discover that it really is a kind of rape, not a kind of casual sex. It also turns out that Sâmoan culture is rather far from being non-violent or easygoing [5]. The anthropological world has had a tough time coming to grips with this, because of Mead's prestige and because of the weight of ideological conclusions that has rested on it; but the whole story is now out in a book, Margaret Mead and Samoa, by an anthropologist from New Zealand named Derek Freeman. Now, there actually are other Polynesian cultures, such as in Tahiti, where attitudes about sex seem to be rather freer than they are in Sâmoa or even in the United States [6]. So it might be possible to reargue Mead's case with different data. But the point of this episode is that it shows us how easy it is for an anthropologist with ideological presuppositions to see what they want to see. This kind of "scientific evidence" is a slippery thing and it is too easy to draw the kinds of sweeping conclusions that were drawn about it. If an anthropological study is going to prove a fundamental point about the nature of value, we must be careful about what the point is supposed to be and how such a thing can be supported by evidence.

The great problem with the logic of something like Mead's "discoveries" is that even if we accept that cultures can have some very different values, this still doesn't prove cultural relativism: for while cultural relativism must say that all values are relative to a particular culture, a cultural absolutism merely needs to deny that, saying that not all values are relative to a particular culture, i.e. that some values are cultural universals. Thus, Margaret Mead could have visited a hundred Sâmoas and found all kinds of values that were different; but if there is even one value that is common to all those cultures, cultural relativism is refuted. That would be a matter for an empirical study too, although a much more arduous one.

But the deepest problem with cultural relativism and its anthropological vindication, whether by Mead or others, comes when we consider what it is supposed to be. As a methodological principle for anthropology, we might even say that cultural relativism is unobjectionable: anthropologists are basically supposed to describe what a culture is like, and it really doesn't fit in with that purpose to spend any time judging the culture or trying to change it. Those jobs can be left to other people. The anthropologist just does the description and then moves on to the next culture, all for the sake of scientific knowledge. Unfortunately, it is not always possible for an anthropologist to be so detached. Even in Coming of Age in Samoa, Mead clearly means to give us the impression that easygoing Sâmoan ways are better than those of her own culture (or ours). Since, as it turns out, Sâmoan culture wasn't that way after all, we end up with Mead in the curious position of making her own a priori claim about what kinds of cultural values in general are valuable, regardless of who might have them. She didn't just see what she wanted to see, but she saw the better world that she wanted to see. More importantly, cultural relativism, as many anthropologists end up talking about it, gets raised from a methodological principle for a scientific discipline into a moral principle that is supposed to apply to everyone: That since all values are specific to a given culture, then nobody has the right to impose the values from their culture on to any other culture or to tell any culture that their traditional values should be different.

However, with such a moral principle, we have the familiar problem of self-referential consistency: for as a moral value from what culture does cultural relativism come? And as a way of telling people how to treat cultures, does cultural relativism actually impose alien values on traditional cultures? The answer to the first question, of course, is that cultural relativism is initially the value of American and European anthropologists, or Western cultural relativists in general. The answer to the second question is that virtually no traditional cultures have anything like a sense of cultural relativism. The ancient Egyptians referred to their neighbors with unfriendly epithets like "accursed sand-farers" and "wretched Asiatics." In the objects from Tutankhamon's tomb, we can see the king slaughtering various enemies of Egypt, African and Asiatic. The Greeks actually gave us the word "barbarians," which was freely used by the Romans and which we use to translate comparable terms in Chinese, Japanese, etc. Traditional cultures tend to regard themselves as "the people," the "real people," or the "human beings," while everyone else is wicked, miserable, treacherous, sub-human, etc [7].

The result of this is that if we want to establish a moral principle to respect the values of other cultures, we cannot do so on the basis of cultural relativism; for our own principle would then mean that we cannot respect all the values of other cultures. There are going to be exceptions; and it actually isn't too difficult to make a list of other exceptions we might like to make: slavery, human sacrifice, torture, infanticide, female circumcision, and other bodily mutilations of children or criminals. Those are the easy ones. But once given those things, the task before us is clearly a more difficult and sobering one than what we contemplated through the easy out of cultural relativism. On the other hand, we might try to save cultural relativism by denying that it is a moral principle. Of course, if so, nobody would care about it, and there wouldn't be anything wrong with one culture conquering and exterminating another, especially since that has actually been the traditional practice of countless cultures through the ages. Instead, a principle of cultural relativism never enters public debate without being used as a moral principle to forbid someone from altering or even from criticizing some or all the values of specific cultures. As a practical matter, then, it is meaningless to try and save cultural relativism by erasing the moral content that is usually claimed for it.

Cognitive relativisms, of course, will always imply some kind of moral or cultural relativism. Historicism always does that, and, for linguistic relativism, Wittgenstein actually provides us with a nice term for relative systems of value: "forms of life." The hard part is when we then ask if Hitler and Stalin simply had their own "forms of life," which were different from, but not better or worse than, ours. Only an ideologue, infatuated with relativism, would answer, "yes." But if we answer "yes," there is, of course, nothing wrong with us defeating and killing Hitler or Stalin. But neither would there be anything wrong with them defeating and killing us. We would have no moral right to try and stop them, but then they would have no moral right to complain about us trying to stop them -- except in terms of their own "form of life," which we don't have to care about. On the other hand, people who talk about "forms of life," and who even might answer "yes" to this kind of question, inevitably make the same move as Protagoras and try to start claiming that their "form of life" is "better" than Hitler's, or ours. So the whole cycle of paradox begins again.

The problem with recognizing the self-contradictory and self-defeating character of relativism is that it does remove the easy out. We may know thereby that there are absolute and objective truths and values, but this doesn't tell us what they are, how they exist, or how we can know them. In our day, it often seems that we are still not one iota closer to having the answers to those questions. Thus, the burden of proof in the history of philosophy is to provide those answers for any claims that might be made in matters of fact or value. Socrates and Plato got off to a good start, but the defects in Plato's theory, misunderstood by his student Aristotle, immediately tangled up the issues in a way that still has never been properly untangled. Most philosophers would probably say today that there has been progress in understanding all these issues, but then the embarrassment is that they mostly would not agree about just in what the progress consists. The relativists still think that progress is to return to what Protagoras thought in the first place. What they really want is that easy out, so as not to need to face the awesome task of justifying or discovering the true nature of being and value.



Copyright (c) 1996, 1998, 1999, 2000 Kelley L. Ross, Ph.D. All Rights Reserved

Relativism, Note 1


Protagoras, for his part, admitting as he does that everybody's opinion is true, must acknowledge the truth of his opponents' belief about his own belief, where they think he is wrong.

Theaetetus 171a. F.M. Cornford translation.



Relativism, Note 2


Russell was originally trying to resolve paradoxes of self-reference in Set Theory. It was just a happy added benefit for Russell that his theory could be used to save Relativism. But the consensus now is that Set Theory is better off without the Theory of Types.
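The paradox in question is the one that now bears Russell's name; stated in modern notation (my formulation, not part of the text above), it concerns the set of all sets that are not members of themselves:

\[
R = \{\, x \mid x \notin x \,\} \qquad\Longrightarrow\qquad R \in R \;\longleftrightarrow\; R \notin R
\]

Whichever answer is given to "Is R a member of itself?" entails the opposite. The Theory of Types was designed to make the question ungrammatical rather than to answer it.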



Relativism, Note 3


On a more technical level, there is the question of which "type" the Theory of Types itself belongs to. Each "type" of expression can only refer to the next lower order type of thing (and never to itself), but the Theory of Types obviously refers to all types, and this violates the fundamental principle of the theory.
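Schematically (my gloss, not Russell's own formulation), the hierarchy runs: type 0 contains individuals; type 1, predicates of individuals; type 2, predicates of type 1 predicates; and so on, governed by the rule

\[
\forall n \,:\; \text{an expression of type } n+1 \text{ may apply only to items of type } n.
\]

But this rule itself quantifies over all the types at once, so it can be assigned to none of them -- which is precisely the sort of self-reference it forbids.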



Relativism, Note 4


But notice that Marx and Marxists must fall into a paradox of self-referential consistency anyway: there may be a cognitively absolute standpoint for knowledge, but Marx is not in it. Marx's own consciousness did not depend on a communist, proletarian mode of production; so he cannot really claim to be producing absolute knowledge, much as he would like to. Marx's own "mode of production" was actually to sponge off his relatives and friends, including his friend Engels, who derived his money from the family business--a factory: Engels was himself a capitalist.



Relativism, Note 5


Although, outside of Sâmoa, Sâmoans themselves don't always like to admit this. On a personal note, the first thing I ever heard about Sâmoan behavior was from my first wife, who was Part Hawaiian (Hapa Haole) and had lived all her life in Hawaii. Once she happened to mention that Sâmoans in Honolulu had a reputation for violence--e.g. beating up sailors with baseball bats. Years later I saw some Sâmoans on television in Los Angeles, after an incident with the Los Angeles County Sheriff's Department, complaining about the "stereotype" of Sâmoans being violent, when I had never heard any such thing in Los Angeles. I suspect that most Angelenos would be surprised even to know that Sâmoans lived among them, much less have any ideas about what they are like. I only knew the "stereotype" because of my life in Hawaii.

While the crime rate in Sâmoa is a matter for police records, this all seems a matter of one stereotype against another: of Polynesia as a peaceful place of love, beaches, and hulas, as against harsher versions. The reality certainly was harsher: All of Polynesia was ruled by a warrior nobility, the ali'i in Hawai'i, ariki in New Zealand, etc. In Hawai'i some chiefs were so sacred (kapu) that commoners could be killed just for looking at them. War was familiar, though only the introduction of firearms made it possible for someone like Kamehameha I to actually unify so extensive a domain as Hawai'i, the extraordinary final battle of which was Kamehameha driving the army of the King of O'ahu over the spectacular cliff of the Nu'uanu Pali. So no one should be surprised, or ashamed either, that such a heritage could produce a certain ferocity even now, whether in Sâmoa or elsewhere. As the title of a recent movie about the Mâori of New Zealand puts it: Once Were Warriors.



Relativism, Note 6


The details of sex in Tahiti can be gathered from Robert I. Levy, Tahitians, Mind and Experience in the Society Islands [University of Chicago Press, 1973]. There are also, of course, the famous stories of Hawaiian girls swimming out naked to Captain Cook's ship, or to the later whalers, willing to bestow their charms for as little in return as an iron nail. Captain Cook began posting guards to repel such tender boarders, both out of concern for spreading venereal disease among them and out of worry that the ship might fall apart from all the extracted nails.

With so much free love, we might wonder, how did the inevitable children get supported? And didn't the Polynesians have any concern about parentage? Well, the whole picture may not add up to anything as free, open, and irresponsible as it might seem at first. For one thing, there was a considerable difference between commoners and the nobility (the ali'i in Hawai'i, ari'i in Tahiti, ariki in New Zealand, etc.). The nobility definitely were very concerned about parentage, since their status depended on their genealogies, which were remembered and chanted with care and detail. It is unlikely that there were any naked ali'i girls swimming out to the sailors.

In the second place, there are reports from various parts of the Pacific that an out of wedlock child, as evidence of fertility and health, enhanced a girl's marriage prospects. A girl only began to be considered "loose" if she had more than one premarital child. At a time when people did not live long, and it was common for women to die in childbirth, it is reasonable to suppose that marriageable girls would really not have much time for extra premarital pregnancies, and that few would want to risk continued pregnancies without the social connection that marriage would bestow.

At the same time, the care and status of any extramarital, or even marital, children was assured for other reasons. If Hawai'i is at all representative of the rest of Polynesia and the Pacific, then the institution of adoption or fosterage was fully capable of absorbing any children, premarital or otherwise, that a woman might not want to raise herself. In Hawaiian, "hânai" means (as a verb) "to raise, feed, nourish, etc." and (as a noun or adjective) "foster/adopted child." There hardly seems to be a difference between hânai fosterage and adoption, since the children were usually fully informed and aware of their natural parents, and reckoned their descent from them. Thus, Queen Lili'uokalani (1838-1917) was not raised by her natural parents but knew who they were and was fully conscious of her royal descent. Thus, there was no shame or secrecy about adoption, and any inconvenience occasioned by out of wedlock birth could be accommodated without stigma or disruption.

While it is tempting to praise these arrangements as humane and sensible, which they certainly seem to be, the viability of the institution really depended on a couple of factors that may no longer be possible:  One was the absence, as far as we can tell, of venereal disease. Today, much extra-marital sex runs the risk, not only of catching and passing fatal disease, but of courting sterility through less serious, but nevertheless damaging, infections. Also, the ease of hânai adoption depended on the casualness with which children could be circulated -- implying too a reciprocity among people who basically all knew each other. This becomes emotionally and legally rather more difficult in a larger, more impersonal, and legalistic society. Nevertheless, we might say that the modern prudent use of birth control, which limits unwanted pregnancies, and the restrained and prudent conduct of a small number of premarital sexual relationships, with an eye to avoiding disease, now has tended to reproduce the more restrained version of Polynesian sexual activity, rather more restricted than Mead's Sâmoa, but somewhat more open than the actual Sâmoa (where a victim of "clandestine rape" could only preserve her prospects in life by marrying the rapist).

All these considerations, of course, speak rather more for the universality of human nature, which adapts to circumstances, than for cultural relativism.

Return to "Relativism"

Return to "Gender Stereogypes and Sexual Archetypes"


Relativism, Note 7


The German word for "German" is Deutsch, which meant "of the people" and is related to theoda in Old English, to "Dutch" in Modern English, and to another Roman word for Germans, "Teutons." In the movie Little Big Man, considerable humor is derived from Chief Dan George speaking of his own people as the "human beings" and of others being adopted into the tribe as "becoming human beings." Islâm traditionally divides the whole world into the Dâru l'Islâm, "the House of Islâm," and the Dâru l-Ḥarb, "the House of War," which means the realm of everybody else, where Islâm is ready to carry on the holy war (Jihâd). Every single one of these peoples and traditions regarded their ways and their values as best and everyone else's as deficient or terrible.




.../...

Presumably, Sisyphus is unable to escape his condition through suicide. So if we can, why not? Arguably, there is no reason why not. But suicide is not the typical Existentialist answer. What can Sisyphus do to make his life endurable? Well, he can just decide that it is meaningful. The value and purpose that objectively don't exist in the world can be restored by an act of will. Again, this is what has struck people as liberating about Existentialism. To live one's life, one must exercise the freedom to create a life. Just going along with conventional values and forgetting about the absurdity of the world is not authentic. Authenticity is to exercise one's free will and to choose the activities and goals that will be meaningful for one's self. With this approach, even Sisyphus can be engaged and satisfied with what he is doing.

Now we can answer the question why "hell is other people." If we live our lives just because of the completely free and autonomous decisions that we make, this creates nothing that is common with others. If we adopt something that comes from someone else, which could give us a common basis to make a connection with them, this is inauthentic. If it just happens, by chance, that our own decisions produce something that matches those of someone else, well then we have a connection, but it is likely to be volatile. As we make new decisions, the probability of our connection with others continuing is going to decline. We are isolated by our own autonomy. The values and decisions of others, whether authentic or inauthentic, will be foreign and irritating.

This sense of estrangement from others is found in another classic of Existentialism, the novel The Stranger (1942), by Camus. Like many of Camus' stories, this one is set in Algeria. It is about a fellow whose mother dies but who can't stand sitting up at her wake. He leaves, and offends the community by his evident disrespect. Later, he kills a local Arab. This is not something that the French colonial judicial system would ordinarily take very seriously, but local French opinion is so unsympathetic with our "stranger," just because he left his mother's wake, that he is condemned for the killing of the Arab. The absurdity of all this is the point of the story. An Existentialist is always a stranger to others and is certainly going to have no patience with conventions like wakes for the dead or, for that matter, laws about murder.
.../...
.../...
Dostoyevsky, although articulating the ideas, did not believe them; but there were real Existentialists-before-their-time. The most important was certainly Friedrich Nietzsche (1844-1900). There are at least three ways in which Nietzsche qualifies as a classic Existentialist, all of which we can see in what may have been his magnum opus, Thus Spoke Zarathustra (1883-1885).

The title itself is a bit of a puzzle. "Zarathustra" is a German rendering of Zarathushtra, the name in the language of the Avesta (Avestan), the sacred scripture of Zoroastrianism, of the founder of that religion, the Prophet Zoroaster (his name in Greek). Since Zoroaster preached a great cosmic conflict between Good and Evil, this is perplexing:  Nietzsche denies the reality of good and evil. But that may be the point. What Zoroaster started, he has now been brought back to end.

  1. Sartre's thought was founded on the non-existence of God as implying the non-existence of all value. Nietzsche expressed precisely this same thing in one of the most famous sayings in the history of philosophy, "God is dead" (a popular bumper-sticker back in the '60's said, "'God is Dead,' Nietzsche; 'Nietzsche is dead,' God"). Since Nietzsche did not believe that there ever was a God, this expresses his view that the effective belief in God was dead, but he has a bit of fun with the metaphor of dying, decay, smell, etc. Unlike Sartre, he is a bit clearer that this is a catastrophe, since it leaves nothing; it leaves, indeed, Nihilism (Latin nihil="nothing"), which is the condition of not believing anything and having nothing to live for. Life cannot be lived like this and it is intolerable. Thus, if Existentialism in general is more profound than the thoughtless souls who think that an absurd world is fun, Nietzsche is a more profound thinker than the Existentialists who think that we can do without a God. Nietzsche's replacement for God is the Übermensch. This was originally translated "Superman" since the Latin super means "over," as does German über. In the 30's, however, a comic strip was started about "Superman," who could leap tall buildings in a single bound, etc. This made the philosophers and intellectuals uncomfortable, so later translators of Nietzsche, like the Existentialist Karl Jaspers (1883-1969), started translating Übermensch as "Overman." This does not, however, have nearly the same punch or ring to it. The Superman, indeed, is supposed to be the next evolutionary step beyond mere man -- where we really must say "man," and not "humanity" or any of the politically correct alternatives, since Nietzsche was not very interested in women and clearly despised the sort of liberal culture where equality for women was coming to hand. When Nietzsche says "man" (Mensch), he means it -- someone egotistical, brawling, aggressive, arrogant, insensitive. The Superman is not vulnerable to taming and domesticity. He has broken free of it entirely.

  2. The Superman is free because all his own values flow from his own will. This is the second thing that makes Nietzsche an Existentialist-before-his-time. Value is a matter of decision, a matter of will. Because the Superman is free, he takes what he wants and does what he likes. He is authentic. And since what everyone really wants, if they could have their way, is power, the Superman will seize power without remorse, regret, or apology. The Superman, indeed, is like the Sophist Thrasymachus in Plato's Republic:  Justice is what he wants, and he will take it. The "slave morality" of altruism and self-denial, which the weak, miserable, crippled, envious, and resentful have formulated into Judeo-Christian ethics, in an attempt to deceive the strong into being weak like themselves, is contemptuously rejected and ignored by the Superman, in whom we find the triumphant "will to power."

    It is astonishing that this nasty and contemptuous philosophy has become the darling of the Left, who actually want a society very precisely of the "slave morality" of altruism and self-denial. Perhaps it is because (1) leftist intellectuals know that ordinary people don't actually read Nietzsche, and (2) they see everyone else as slaves to them, where the masters' duty, noblesse oblige, is to arrange everyone else's lives in the proper way. This is certainly the most common use of Nietzsche, from Adolf Hitler to Garry Wills (cf. his recent authoritarian paean, A Necessary Evil: A History of American Distrust of Government, saying that the Constitutional principle of limited government "...is a tradition that belittles America, that asks us to love our country by hating our government"), to imagine one's self as the Superman, floating above others, dispensing justice, or wrathful punishment, to them. Nietzsche's own critique of Christianity, that the doctrine of love of others actually translates into resentful hatred of others, applies with full force to his most ardent devotees, whose talk about freedom and creativity translates into constant assaults on the freedom and preferences of others, and deep resentment for those, the industrialists and inventors (as Ayn Rand understood), who have created the modern world and a better life for all.

    What Nietzsche's Superman gets is a little more durable than the decisions of Sisyphus, since Nietzsche always saw systems of value, like traditional religions, as persistent and living, endowing things with real value, if only for a time. The Superman thus need not suffer from the nausea and dread that are characteristic of later Existentialists, who are always poised on the edge of oblivion. But this is really less honest than the later fears. Making up values doesn't make them so, and Nietzsche himself made it possible for this to be felt so intensely later. After the Superman has "transvalued" his own values a few times, he may begin to detect an arbitrariness and emptiness in them. As Nietzsche himself said, you stare into the Void long enough and the Void begins to stare back. Thus, by the time we get to Camus, we get the Stranger, not the Superman.

  3. The third point, which is advanced as the greatest teaching of the Zarathustra, does the same job as Sartre's redefinition of "responsibility." This is the "Eternal Recurrence." The doctrine is based on a kind of metaphysical parable, that in an eternity of time, all possible things will have happened, which means that in the present, with an eternity of time behind us, everything has already happened, including what is happening now. Since every point where a time like the present has happened, or will happen, itself also has an eternity of time before it, then what is happening now has already happened an infinite number of times and will happen an infinite number of times again. How seriously Nietzsche takes the actual metaphysics of this is a good question, since it implies a fatalism that is otherwise contrary to Nietzsche's view of will. But the metaphysics is secondary. Since actions to Nietzsche are no longer good or evil, he feels the same loss of weight as does Sartre and wants some way to make actions seem more serious than they would be for your ordinary Nihilist. With the Eternal Recurrence, actions become weightier because one must be prepared to do them over and over again for eternity (like, indeed, Sisyphus). This still doesn't, after all, mean that they are right or wrong; it simply means that before you do something, you must determine that you really want to do it. Woody Allen jokes about this, that Nietzsche's Eternal Recurrence means that he will have to see the Ice-Capades over and over again. Unfortunately, it is not hard to imagine that the greatest criminals of history, from Jack the Ripper to Adolf Hitler, would be perfectly happy to repeat their crimes endlessly. So, as with Sartre again, Nietzsche's doctrine does little to make up for the loss of real morality, and the Eternal Recurrence has never been as sexy or popular a doctrine as the Superman or the Will to Power.
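As it happens, the "metaphysical parable" has a defensible formal skeleton, though only under strong assumptions. A minimal sketch (my gloss, not Nietzsche's own argument): suppose the world has only finitely many possible total states, drawn from a set S, and evolves deterministically. Then:

\[
|S| < \infty,\quad s_{t+1} = f(s_t) \;\Longrightarrow\; \exists\, p > 0,\ T \,:\, s_{t+p} = s_t \ \text{for all } t \geq T.
\]

A deterministic process on finitely many states must eventually revisit a state, after which the entire history cycles forever -- an Eternal Recurrence of sorts. Note that the determinism assumption is doing the work here, and that is just the fatalism which, as observed above, sits uneasily with Nietzsche's doctrine of will.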

So far I have been considering atheistic Existentialists, like Sartre and Nietzsche; and the way they formulate their doctrines, it might seem that atheism would be intrinsic to Existentialist ideas. The absence of God implies the loss of value. However, that is not quite right, and as we continue into Existentialists-before-their-time, we cannot avoid encountering such a one, one of the earliest, who also happens to be a theistic Existentialist. Thus, in a sense Existentialism begins as a form of theism and only later appears in atheistic form.

Our theistic Existentialist is Søren Kierkegaard (1813-1855). Kierkegaard is an Existentialist because he accepts, as fully as Sartre or Camus, the absurdity of the world. But he does not begin with the postulate of the non-existence of God, but with the principle that nothing in the world, nothing available to sense or reason, provides any knowledge or reason to believe in God. While traditional Christian theologians, like St. Thomas Aquinas, saw the world as providing evidence of God's existence, and also thought that rational arguments a priori could establish the existence of God, Kierkegaard does not think that this is the case. But Kierkegaard's conclusion about this could just as easily be derived from Sartre's premises. After all, if the world is absurd, and everything we do is absurd anyway, why not do the most absurd thing imaginable? And what could be more absurd than to believe in God? So why not? The atheists don't have any reason to believe in anything else, or really even to disbelieve in that, so we may as well go for it!

This is sometimes compared to Blaise Pascal (1623-1662), who said, "The heart has reasons that the mind cannot understand"; but really, if the heart has reasons, then, indeed, there are reasons, and the world is not an absurd place. Pascal is a mystic (like some other mathematicians), not an Existentialist. The precedent for Kierkegaard is really more like the Latin Church Father Tertullian (c.160-220), who, when taunted about the absurdity of Christian doctrine, retorted that he believed it because it was absurd. Without reasons of heart or mind, Kierkegaard can only get to God by a "leap of faith." This is the equivalent of the acts of will in the classic Existentialists, and equally fragile. A leap of faith attains no reasons it did not have before, and so the position of faith remains irrational. But it does achieve something a little different. A position of faith, however it is attained, does bring with it certain responsibilities. Belief in a real God is going to bring with it the Law, as the moral teachings of one's religion, whichever it is, cannot then just be ignored. This returns one to the complications of traditional morality, the kind of thing that Nietzsche or Sartre or Heidegger would just as soon ignore. In retrospect, however, the three of them should have taken traditional morality a bit, or a lot, more seriously than they did. The project of dumping the whole business did not have edifying results, either personally or politically.

Kierkegaard's moral and religious seriousness offered a more promising basis for the development of Existentialist themes than the basically nihilistic, egocentric, and hopeless approach of Nietzsche, Sartre, and the others. Philosophers who make their own leap of faith to Marxism or Naziism have really discredited their own source of inspiration. Thus, while Sartre achieved for a time a higher profile in the fashionable literary world, theistic Existentialists, like Nikolay Berdyayev (1874-1948), Paul Tillich (1886-1965), and Martin Buber (1878-1965) continued Kierkegaard's work with updated approaches to traditional religions. Atheistic Existentialism really exhausted itself:  The effort of will required for Sisyphus to maintain his enthusiasm is really beyond most human capacity, and better the solace of traditional religion than the vicious pseudo-religions of communism or fascism.

The personal failures of Sartre or Heidegger, however, do demonstrate their seriousness, and the fact that the absurdity of the world for them was not a joke, was not fun, but a terror. Their failure was in the direction of the solution they sought, a solution that could not be bound by some fairly simple and fundamental moral considerations. It wasn't just that they couldn't bring themselves to believe in God. They couldn't bring themselves to believe in right and wrong. But the principle of Dostoyevsky's nihilist, that "without God, all is permitted," really represents an impoverished reading of the history of philosophy, and of religion also. Plato's Forms did not depend on God, nor Schopenhauer's sense of justice and compassion (of which Nietzsche cannot have been unaware), while the Buddha Dharma is the moral teaching of a religion that explicitly rejects the existence of a God. Thus, Nietzsche and Sartre base their thought on a false inference. It simply does not follow that if there is no God, then all is permitted. It doesn't even follow that there is no religion. Nor does it follow that everything is without meaning. When Beethoven faced his own growing deafness, he knew that he could still create music, create beauty, even if completely deaf. That is what happened. But Plato already pointed out that beauty is a tangible kind of value, something we can see and touch (or hear), and a clue to the reality of all value, even the kind that we cannot see. The Existentialists, even the theistic ones, seem to have overlooked that.

The final word on Existentialism may come from the immortal detective novel The Maltese Falcon by Dashiell Hammett [1929]. In Chapter 7, "G in the Air," the detective Sam Spade tells his client, Brigid O'Shaughnessy, a story while they are waiting to meet Joel Cairo. This was left out of the 1941 movie by John Huston. The story was about "A man named Flitcraft" who "had left his real-estate-office, in Tacoma, to go to luncheon one day and had never returned." The man vanished completely, and there was never even a hint that there had been any financial trouble, lovers, kidnapping, or anything else understandable. He was just gone. Seven years later, Spade was sent out to check on a man who had been seen in Spokane and identified as Flitcraft. It was him, living as he had in Tacoma, with a family and a successful business. Flitcraft explained that as he was going to lunch years earlier, he had passed a building under construction. A beam had fallen from the building "and smacked the sidewalk alongside him." He was unhurt except for a cut from a bit of concrete that the beam had chipped off on impact. What bothered him was the senselessness of it all. His nice orderly life might have been destroyed by a totally random event. "He felt like somebody had taken the lid off life and let him look at the works." He had always felt "in step with his surroundings." Now he saw that "in sensibly ordering his affairs he had got out of step, and not into step, with life." So he adjusted. He took a boat to San Francisco and began to live a life as random as the falling beam. This lasted for a while, but eventually he drifted back to Washington State and settled down in Spokane.

I don't think he even knew he had settled back naturally into the same groove he had jumped out of in Tacoma. But that's the part of it I always liked. He adjusted himself to beams falling, and then no more of them fell, and he adjusted himself to them not falling.

This seems to be what happened to many Existentialists. The empty, meaningless world is ultimately intolerable, and major figures of Existentialism seem to drift towards some tangible source of value and meaning. Thus, Sartre took up Marxism, and Heidegger Naziism. Camus may have been spared this by an early death, at 47. The most venerable and durable form of Existentialism, the theistic, obviously gains the most substantial referent for value and meaning.

There ended up being more than a little of Flitcraft in Dashiell Hammett himself, who was living a conventional life with a family and a job with the Pinkerton Detective Agency. When he was diagnosed with tuberculosis, he abandoned his wife to devote his last energies to writing. The world of his "Continental Op" detective, and then Sam Spade, is a bleak one, where the only real value seems to be the determination of the detective to do his job. Hammett was unable to sustain this vision in his own life. He got religion by joining the Communist Party. In this company he crossed paths with Lillian Hellman. Their relationship inspired his last published novel, The Thin Man, which featured the only sympathetic woman in his writings. Unfortunately, Hammett's fame and the sophisticated literary circles in which he then lived ruined his career. Hellman and her friends expected him to begin writing "serious" literature, not just popular detective stories. He never made the transition, but also lost his inspiration even for the detective stories. And his devotion to the Party got him some jail time, when he was found in contempt of Congress for refusing to testify to the House Un-American Activities Committee (HUAC) [note].

This was a sad end for Hammett, almost worse than Heidegger's Naziism. Neither defeat nor disgrace ever slowed down Heidegger's productivity, while both Sartre and Heidegger continued to be celebrated, regardless of the folly or viciousness of their politics. Hammett, in gaining a lover and a reputation, lost his Muse and only managed to martyr himself for Stalinism. Talk about an absurd universe.


Friedrich Nietzsche (1844-1900)

On Heidegger's Nazism and Philosophy, by Tom Rockmore

WOODY ALLEN:  That's quite a lovely Jackson Pollock, isn't it?

GIRL IN MUSEUM:  Yes it is.

WOODY ALLEN:  What does it say to you?

GIRL IN MUSEUM:  It restates the negativeness of the universe, the hideous lonely emptiness of existence, nothingness, the predicament of man forced to live in a barren, godless eternity, like a tiny flame flickering in an immense void, with nothing but waste, horror, and degradation, forming a useless bleak straightjacket in a black absurd cosmos.

WOODY ALLEN:  What are you doing Saturday night?

GIRL IN MUSEUM:  Committing suicide.

WOODY ALLEN:  What about Friday night?

GIRL IN MUSEUM: [leaves silently]

"Play It Again, Sam", Paramount Pictures, 1972






http://www.friesian.com/numinos.htm

The Kant-Friesian Theory of Religion
and Religious Value

including Kant, Fries, Schopenhauer, Nelson, Otto, Jung, & Eliade


Nil sine Numine

"Nothing without Divine Will," Motto of the State of Colorado

Dei sub Numine Viget

"It/She Flourishes under the Might of God," Motto of Princeton University


The notion of the sacred or the numinous as a category for understanding religion was substantially launched by Rudolf Otto (1869-1937) in his classic Idea of the Holy (Das Heilige) in 1917. Otto, indeed, coined the term "numinous," which has now become part of common usage. Otto's influence on thought about religion extends from C.G. Jung (1875-1961) to the "Chicago School" of history of religion founded by Mircea Eliade (1907-1986). On the other hand, Otto's influence on the philosophy of religion has been less strong, perhaps because he was professionally more of a theologian (not rigorous enough for philosophers) and is too easily misunderstood and dismissed as describing some kind of mysticism. Even in the history of religion, Otto's own analysis often does not persuade because of his clear preferences for Christianity and his devaluation of religions that do not measure up to Christian paradigms.

The history of Rudolf Otto's theory of the sacred begins, however, more than a century earlier, with the great philosopher Immanuel Kant (1724-1804). Otto's confidence in his ability to talk about God has its origin in Kant's theory about the basis of the concept of God in human reason. In the Critique of Pure Reason (1781) Kant had reworked the traditional distinction between the immanent (within the world) and the transcendent (outside the world) by distinguishing between phenomena and things-in-themselves. "Phenomena" are the way objects appear in our own conscious minds. We do not have access to the world outside of the experiences we enjoy through our own consciousness, and Kant believed that consciousness itself, or the possibility of conscious experience, imposes certain conditions on the manner in which phenomenal objects appear to us. Among those conditions are the forms of space and time and the abstract forms of connections between events and objects, such as the concept of substance and the relation between cause and effect. David Hume (1711-1776) had challenged philosophers to show why it is that we believe in principles such as the one that every event must have a cause. Kant's answer, then, was that the mind itself constructs a phenomenal reality according to just such a rule.

Things-in-themselves, in turn, are the way that reality exists apart from our experience, our consciousness, our minds, and all the conditions that our minds might impose on phenomenal objects. The question occurs, then, whether concepts like substance and cause and effect apply to things-in-themselves the same way that they do to phenomena. Kant did not think that we could know. However, he did notice something very curious: it isn't just that we apply the principle of cause and effect to phenomena, it is that we apply it in a certain way. In phenomenal reality, cause and effect are applied in a continuous series. Every effect has a cause, but every cause also has a cause, and so forth, ad infinitum. This adds up to a philosophical principle of determinism, that everything is causally determined to act in a certain way. Kant believed that science sees things that way, but we do not, for the idea of free will contradicts determinism. Free will involves a free cause, i.e. a cause that is not determined by some prior cause. We can also call that an unconditioned cause, since it is free of a prior causal condition, and it occurred to Kant that a characteristic of phenomenal reality was that everything was conditioned by something else. In that, Kant hit upon the same point that had been made earlier in Buddhist philosophy: in the reality that we see, everything is conditioned by everything else. One example of this in Buddhist thought is the doctrine of Relative Existence or No Self Nature: nothing has an essence, nature, or character by itself. Things in isolation are shûnya, "empty." The nature of things only exists in relation to everything else that exists. Existence as we know it is thus completely relative and conditioned by everything else. Only Nirvâna would be unconditioned, although we cannot know what it is like.

If an unconditioned cause cannot occur in phenomenal reality, then it could only occur among things-in-themselves. But for all we know, even if cause and effect do apply among things-in-themselves, determinism may be true there also. Other kinds of unconditioned objects, like God or the soul -- God is not conditioned by anything, and the soul is free of most of the conditions of phenomenal reality, like corruptibility -- might also exist among things-in-themselves, but we cannot be sure about that either. Thus, Kant did not believe that it was possible to prove things about things-in-themselves. If we try to do so, we create what Kant called "dialectical illusion," involving contradictions in reason itself, e.g. between determinism and free will, which Kant called Antinomies. Kant's Fourth Antinomy lays out equally compelling arguments for and against the idea of a Necessary Being, i.e. a God. Nevertheless, Kant believed that the existence of things like God, freedom, and the soul could not be disproved; and in the Critique of Practical Reason (1788), he decides that the Moral Law provides us with a basis for making certain decisions about transcendent objects that mere theoretical reason could not justify. Thus, we believe in free will because we must if we are to use moral concepts like responsibility, guilt, praise, blame, retribution, punishment, etc.; for according to determinism, no one is actually responsible for their actions, and scientific explanations will always reduce people to creatures of remote causes, e.g. genetics, childhood, society, drugs, disease, etc. In the Second Critique, Kant thus comes to believe that all three of what he had called the "Ideas" of pure reason in the First Critique -- God, freedom, and immortality -- are motivated as objects of rational belief on the basis of moral considerations.

The next step towards Otto comes with an obscure post-Kantian philosopher, Jakob Fries (1773-1843). Friesian theory is little known today, but Fries does rate honorable mention by perhaps the greatest recent philosopher, Sir Karl Popper (1902-1994), who says, apropos of G.W.F. Hegel's dialectical method: "For the truth is, I think, that it was not at first taken really seriously by serious men (such as Schopenhauer, or J.F. Fries)..." [Karl Popper, The Open Society and Its Enemies Vol II, Princeton University Press, 1966 (1945), p. 27]. Popper himself, in his seminal The Logic of Scientific Discovery (1934), says that he considers his own system of thought a successor to the Friesian pattern, probably because Fries revived the Aristotelian principle that not every proposition needs to be proven. Popper believed that scientific theories do not need to be proven because they are actually falsified instead. As with Schopenhauer, Fries was attempting to come to terms with the problems in the Critical Philosophy of Immanuel Kant.

Fries was not impressed by Kant's arguments for belief based on practical reason. Like Kant, however, he did believe that the notions of God, freedom, and immortality are necessitated by reason. He concluded, in his Wissen, Glaube und Ahndung [1805; available in English as Knowledge, Belief, and Aesthetic Sense, translated by Kent Richter, Jürgen Dinter Verlag, 1989], that we simply must accept that these concepts spring from theoretical reason directly as, indeed, a kind of rational belief (Glaube), which we will not be able to understand in the way that we understand science or the world of experience (what Fries called Wissen, "knowledge"). On the other hand, Fries was put off by the bloodless rationalism and moralism of Kant's theory. Kant had provided a place for feeling in his system, in the Third Critique, the Critique of Judgment (1790); but his view was that the feelings of the beautiful and the sublime did not arise from any direct relationship to external reality but only from a subjective harmony of our own mental faculties. Fries did not think such a theory was good enough. He thought that aesthetic and religious feelings were real cognitions of their objects but that they existed in dissociation from any concepts that would make them real matters of Wissen or understanding. Kant himself had earlier believed in such an aesthetic realism, but he later decided that only morality related directly to things-in-themselves.

Thus, after a fashion, Fries extended Kant's own theory of the mind: Kant had thought that experience arose from the synthesis, the active unification, of sensations according to rules provided by pure concepts of the understanding. Kant then figured that reason had some concepts, the Ideas of God, freedom, and immortality, for which we have no corresponding sensations and so no corresponding experience or understanding. Fries merely added that there are corresponding sensations, aesthetic and religious feelings, but that synthesis, experience, and understanding still do not actually occur between the sensations and concepts. Fries calls these feelings that we have independent of reason and understanding Ahndung, or "intimation." They are intimations of the transcendent.

Where Kant had thought that it was only through reason and morality that we are related to things-in-themselves, Fries adds that there is a component of feeling to this relationship as well. Fries was thus bound to see religion differently from Kant. In Religion within the Limits of Reason Alone (1793), Kant had reduced religion to a phenomenon of reason and morality. Kant believed, indeed, that morality was what religion was all about and that it provided a basis for rational belief in concepts like God, freedom, and immortality; but this provided no ground for any other aspects of traditional religious practice, belief, or experience. Fries was able to add an important component, that the central aspect of religion was not so much reason as feeling. But Fries still provided little room for most of the traditional contents of religion. Even though both Kant and Fries were, in some general and cultural sense, Christians, there was nevertheless no reason in their systems of philosophy to believe anything more than that Jesus Christ was a particularly good moral teacher. Fries might make Jesus some kind of poet in addition, but there was still no way that he could admit anything like traditional Christian views about the status and function of Jesus in the nature of reality or the scheme of salvation. Indeed, Fries had a moral objection to the idea that Jesus might have suffered for our sins and redeemed us from damnation. Salvation itself could only remain an alien concept to both Kant and Fries, but it is hard to see what something like Christianity (or Islam or Hinduism or Buddhism) could possibly mean as a religion without the idea of salvation.

Kant and Fries thus both represent a strong sort of philosophical rationalism, albeit one with much more room for something like religion than the reductionistic materialism that became common in the 19th century (and continuing among many in the 20th). Fries himself was simply forgotten until rediscovered by a later German philosopher, Leonard Nelson (1882-1927). Nelson added little beyond lucid exposition and restatement to Fries's view of religion, but Nelson did introduce Fries to a colleague of his at the University of Göttingen: Rudolf Otto. Otto had a clear sense that there was much more to religious feeling than what his philosopher friend allowed through a sense of the beautiful and the sublime. But he also thought that there was no reason not to add that extra feeling into the very fine metaphysical and epistemological framework, the theory of Ahndung, that Kant and Fries had actually provided. Thus, as a purely descriptive matter, Otto believed that we are related to the transcendent, not just through morality, and not just through the beautiful and the sublime, but through a sense of the holy and the sacred, categories of value that are unique and characteristic of religion.

Otto takes the Latin word numen, "the might of a deity, majesty, divinity," and coins the term "numinous" to describe either religious feelings or the religious aspect attributed by those feelings to experiences and objects. He characterizes the feelings as involving 1) ultimacy, 2) mystery (mysterium), 3) awe (tremendum), 4) fascination (fascinans), and 5) satisfaction. Unassociated with any objects, the sense of the numinous is a feeling of "daemonic dread," a sense of the uncanny, frightful, eerie, weird, or supernatural. These feelings make us feel vulnerable and overpowered, what Otto calls "creature feeling." A lot of this now sounds like it would go with a horror movie and be associated more with the "forces of evil" than with the God of Judaism, Christianity, or Islam, let alone with Jesus or the Buddha. However, the "forces of evil," if taken seriously enough, as Satan, demons, etc., actually are supernatural and numinous; at one time most religions did not clearly distinguish between benevolent powers (Osiris) and malevolent ones (Seth) as such; and, finally, the God of the Old Testament and the Qur'ân really is a terrifying, overpowering, awesome, even dreadful being--not because He is at all evil, but just because He is genuinely supernatural and uncanny. The expression "fear of God" is thus not inappropriate, not because wrongful harm is necessarily to be feared from God, but because the kind of reality God represents is superlatively awesome and frightening just because of what it is. Even in Buddhism, this sense turns up in the "Wrathful Deities" who are particular manifestations of Buddhas and Bodhisattvas in the Vajrayana form of Buddhism found, for instance, in Tibet.

Taking Otto to be a mystic, which is typical in philosophy of religion, involves a serious misunderstanding or distortion of his theory. Mysticism might be defined as some kind of direct, immediate, or perceptual knowledge of transcendent objects, e.g. God, angels, etc. That might have been the experience of Abraham, Moses, Job, etc.; but it is not the experience of most ordinary religious believers, and it is not what numinosity is particularly about, although Otto's language may suggest that at times, and Otto was interested in mysticism. Instead, Otto clearly distinguishes our concepts of the ultimate transcendent objects of religion from the ordinary rites and experiences common to most religious believers, which contain the numinous feelings about non-supernatural objects. The concepts, as far as he is concerned, come from reason, just as Kant or Fries would have thought. A mystic claims more than that, and Otto does not seem particularly inclined to credit this as real except as an enthusiastic overinterpretation of numinous feelings, or in extraordinary moments of religious revelation about which we may have to suspend judgment.

Indeed, it is the importance that Otto assigns to reason that creates the greatest difficulties for this theory. Otto accounts for the difference between historical religions in two ways: 1) religions reflect different degrees to which ethical questions have been assimilated into religious consciousness. He calls this the ethical "schematization" of religion, and he seems quite justified in regarding this as a historical innovation. Greek philosophers, the Jewish prophets, the Iranian prophet Zoroaster, and the Buddha all introduce strong moralizing tendencies into their religions. Now it is hard to imagine religion without a moral aspect, but that really had little to do with early Greek, Egyptian, Babylonian, or any other ancient religion or, for that matter, modern Shintoism. Otto also believed that, 2) religions reflect different degrees to which the Kant-Friesian religious Ideas have been assimilated into religion. Thus, not all religions have a single creator God, and Otto is willing to dismiss the sophistication of Buddhism, along with ancient polytheisms, as insufficiently developed compared with Christianity. Since he believes that Judaism and Islam are insufficiently developed morally compared with Christianity, Otto comes to the, for him, comfortable conclusion that Christianity is the supreme religion.

The force of these views, however, rests on the credibility of Kant's and Fries's arguments for their rational faith in Ideas like God, freedom, and immortality. Fries's confidence in rational Glaube seems unwarranted because of the serious rationality of a Buddhist philosophical tradition that contains nothing like what he regards as "natural" to reason. Kant was somewhat more agnostic and careful than Fries, but he produces only the lamest of arguments for Ideas like God and immortality. Even Kant's argument for free will, which seems more credible than the others, is confused and defused by the traditional Islamic doctrine that there is free will but that God, as the only cause of everything that happens, including our own actions, is the only being who has free will. Since the important philosophers Baruch Spinoza (1632-1677) and Arthur Schopenhauer (1788-1860) both propose very similar theories, it is hard to see how their disagreements do not demonstrate the very "dialectical illusion" that Kant himself describes.

Even the ethical "schematization" of religion creates a problem for Otto. The traditional Problem of Evil does not even arise when the gods are both good and evil. Nor does it arise very much for Islam, where the Qur'ân plainly says that God does what He pleases and that it is not our business to question Him. But where an omnipotent, omniscient God is ethicized to the extent that He is supposed to be perfectly moral, then the difficulty of the existence of evil in the world, and its evident toleration by God, becomes acute. For many modern Christians, Jews, and others, the moral reproach to God of the world, especially after the horrors of the 20th century, has become so acute that it destroys faith and denuminizes God and religion altogether. Buddhism is certainly in better shape when the presence of suffering is simply taken as a given, no attempt is even made to explain why the world is structured so as to allow such a thing, and the Buddha can be charged with no responsibility for a situation to which he only offers the solution, not the explanation. The result, of course, is that there is no explanation whatsoever for the ultimate nature of reality. That may be the most modest and wisest position, but it is also one that tempts even Buddhists into occasional speculations. As Kant would certainly say, this is not a situation that our reason has an easy time leaving alone; and most sophisticated religions, apart from Buddhism, attempt some sort of explanation, however much those must become part of "dialectical illusion."

Stripping away the confident positive "rational" side of Kantian and Friesian theory would leave Otto with a much more credible theory. However religious concepts are related to reason, historical religions present us with very different views of ultimate reality and the purposes of human life. Kant's own theory of the Antinomies describes this situation better than anything else, and it is in turn suggestively conformable to the Buddhist doctrine of the Four Fold Negation: the Buddha had affirmed, for instance, that the person who attains Nirvâna neither 1) exists, nor 2) does not exist, nor 3) both exists and does not exist, nor 4) neither exists nor does not exist. While this powerfully expresses the magnitude of our inability to say anything positive about the transcendent, its logical force is simply to posit a contradiction, which is thus equivalent to the contradictions in Kant's theory of the Antinomies. These considerations, however, carry us well beyond Otto's own theory and the historical form of the Kant-Friesian tradition. Further development will therefore be handled separately.
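To see the logical point concretely, the joint denial can be given a minimal formalization in classical propositional logic -- a sketch only, where the letter N is introduced here for illustration (it appears in none of the original texts) as shorthand for "the person who attains Nirvâna exists." Denying all four lemmas at once asserts:

\[
\neg N \;\wedge\; \neg(\neg N) \;\wedge\; \neg(N \wedge \neg N) \;\wedge\; \neg\bigl(\neg N \wedge \neg(\neg N)\bigr)
\]

The first two conjuncts alone, \(\neg N\) and \(\neg(\neg N)\), are already jointly unsatisfiable in classical logic, so the fourfold denial does, as just said, simply posit a contradiction -- unless the lemmas are read non-classically, which is one way of putting the claim that reason here outruns any possible object.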

Finally, however, a couple of additions to Otto's theory should be noted. The first is made by Mircea Eliade. Eliade claimed that one of the most important senses of a hierophany, an appearance of the holy, was as an ontophany, an appearance of Being. Sacred realities thus represent real existence while profane or mundane realities are in some ultimate sense mere non-existence. In terms of space, this means that the creation of the cosmos, which is accomplished by numinous beings (but recapitulated in the founding rituals of cities, buildings, tombs, etc.), sets it off as true existence from the chaos which preexisted it and which remains, perhaps, outside its boundaries. The chaos is thus, in a profound sense, non-existence. We might say now that the chaos is the empty, absurd, horrible, and meaningless merely mundane and factual world so honestly represented by the Existentialist philosophers. Since we do not really want to say that mundane reality does not exist, we could regard chaos as something like Mâyâ in the Advaita Vedanta theory of Shankara: neither existing nor non-existing nor both nor neither, as opposed to the existence of Brahman. Sacred space thus reverses the situation in Buddhism, where the visible world has a prima facie existence, while Nirvâna involves the Four Fold Negation. On the other hand, in terms of time, we face the Antinomy-like paradox of cyclical time versus linear time. Eliade himself speaks of the terror of history, and he seems to be right that sacred time always involves a return to a paradigmatic mythic time in the past, the time of the creation, the Exodus, the Last Supper, etc. On the other hand, cyclical time contains its own terrors, as may be well perceived in Nietzsche's theory of the Eternal Recurrence, or in the endless and futile cycles within cycles of Hindu Deep Time. Thus, it should be clear that sacred time in religion is rather like a synthesis of the eternal and sacred in illo tempore ("in that time"--Eliade loves his Latin) with the actual historical linearity of the present. The Antinomy allows us neither simple linearity nor simple cyclicity.

A second important addition to Otto's theory may be seen in C.G. Jung with his theory of "synchronicity," which he calls "an acausal connecting principle." The word "synchronicity" can simply mean "together in time." Jung proposes the theory of synchronicity to deal with the occurrence of "meaningful coincidences." Events have always been meaningfully associated with each other, e.g. Halley's Comet appearing at the time of William the Conqueror's invasion of England, when it is now obvious that there can be no causal connection between them and when the slightest bit of scientific sophistication leads us to dismiss any such connections as superstition. Jung can take such connections seriously, not just because he is a psychologist who is interested in whatever appears "meaningful" to people, but also because he is actually a rather faithful Kantian who understands that causal connections themselves are problematic among things-in-themselves. The "meaning" of such connections, of course, is the same kind of meaning that Jung's Archetypes of the Collective Unconscious have, for which Jung self-consciously uses Otto's own term, numinosity. Synchronicity, therefore, coupled with Eliade's ontophany, is about the manner in which connections between events can strike us as real and meaningful, especially religiously meaningful, when there is no sensible, causal, and phenomenal reason for believing that there is a connection at all. This holds off, not the terror of history, but the terror of the arbitrary, random, pointless, and meaningless.

An example of synchronicity is recounted by the great physicist Richard Feynman (1918-1988) [in his quasi-autobiographical book "Surely You're Joking, Mr. Feynman!"], who is the kind of person who clearly and self-consciously lived in a mundane world of science and nothing else. During World War II Feynman's first wife, Arlene, died of tuberculosis. She had been living in Albuquerque, New Mexico, while Feynman was working on the atomic bomb, not far away, at Los Alamos. Feynman was present when she died. Later he noticed that the clock by her bed, a rare digital clock (for that era) which had been a special gift from him, had stopped at the precise moment, according to the death certificate, that she had died. That coincidence impressed him; but he comforted himself, after a fashion, by recalling that the nurse had moved the clock to check the time of death, which could have stopped its sensitive works. For the rest of his life he never for a moment doubted that the clock had stopped either from a very mundane cause or by nothing more than an extraordinary coincidence. To be sure, it was a coincidence--but truly a meaningful coincidence. Feynman's universe did not contain Jung's category for him to speak about it.

A more intriguing example of synchronicity is from an area that was of interest to Jung, astrology. Liz Greene is both a Jungian psychoanalyst and an astrologer, often using horoscopes as guides to psychoanalysis. In her 1983 book The Outer Planets & Their Cycles, The Astrology of the Collective [CRCS Publications, Reno, Nevada], Greene gives examples of "birth charts," not just of several historically famous persons, but of some countries, including the Soviet Union. While discussing the Soviet chart, she says, "I think it's worth considering now the conjunction which is approaching toward the end of the decade [i.e. the 80's], because the Russian chart of all the national charts we have looked at is the most strongly affected by it... I would therefore expect that, although the conjunction represents many other things on a deeper level, one of its effects is to produce concrete changes in Russia... It's very possible that the Russian regime may topple..." [p. 122]. Since this was written, or at least published, eight years before the fall of the Soviet Union, it stands as a fairly impressive bit of astrological forecasting, however qualified as merely "possible." Now, it is the only bit of astrology I have ever seen that is quite that impressive, so we can hardly say that this verifies astrological forecasting, in the face of all its falsification by failed predictions. On the other hand, this is the only forecast of trouble for the Soviet Union in the late 80's that I have ever seen from any source, including more presumptively scientific ones in economics, political science, sociology, or history. About the only forecasts for the failure of the Soviet economy that were ever made were those of Ludwig von Mises and F.A. Hayek. While Hayek did live to see the collapse of communist command economies and the fall of the Soviet Union (1989-1991), even he does not seem to have predicted prior to the event just when this would happen. Thus, while we don't want to say that Liz Greene has vindicated astrology, it is definitely a meaningful coincidence that she is about the only person in either science or para-science to have made so specific a forecast so much in advance, especially when respected economists were still writing, at the same time, that the Soviet economy worked and was successful. Certainly Jung would have been pleased and intrigued.


Editorial Note

This was delivered as a paper, "The Roots of Rudolf Otto's Theory of Numinosity in Immanuel Kant, Jakob Fries, and Leonard Nelson," to the Philosophy of Religion section of The Southern California Philosophy Conference at the University of California, Irvine, on Saturday, October 26, 1996. Some additions have been made to the essay in 1999.


The New Friesian Theory of Religious Value


Copyright (c) 1996, 1999, 2004 Kelley L. Ross, Ph.D. All Rights Reserved, except as hereby granted:
All rights are reserved, but fair and good faith use with attribution may be made of all contents for any non-commercial educational, scholarly, and personal purposes, including reposting, with links to the original page, on the internet. It is not necessary to obtain copyright release for such uses, but the Proceedings would be grateful to be voluntarily informed, for informational purposes only, of the use of its materials. Commercial use of these materials may not be made without written permission.


