The Role of Philosophy in Daily Life

One might read the previous chapter and question whether philosophy is more than esoteric navel-gazing.  Admittedly, I didn’t do a very good job of presenting it in a manner that would appeal to “Plumber Joe”.  Why should one concern oneself with trying to figure out all the little details about how the universe operates and why?  Shouldn’t it be sufficient to figure out how these more concrete tools at my disposal can contribute to my quality of life?  I can make more money, get better employee benefits, and have more self-satisfaction if I simply tend my garden[1] and work on much more real things.  Besides: lifting weights, buying cars, and playing guitar are easier activities than questioning fundamental assumptions about reality and considerably increase my value in the sexual market by comparison.

I, myself, feed my growing family by way of more practical considerations than discussing the specific ontological status of contracts.  I’m a facilities manager by trade and a philosopher by vocation.  Given that practical considerations generally have more market value than philosophical ones, why would one choose to engage philosophy?  There are a number of answers that, cumulatively, make a compelling case for such activity.  For now, I will focus on the more practical aspects and save the more psychological and ephemeral ones for later in this book.

One of the key aspects of the philosophical exercise is epistemology.  What epistemology effectively boils down to is the study of knowledge: what it means to know something and by what mechanism one comes to know something.  At first, it may seem like a dumb line of inquiry.  One knows something if one believes it and it happens to be true; one knows these things because experience leads one to believe them with accuracy.

As anyone who has had experience with mind-altering substances, mental illness, or living with a pathological liar will attest, sometimes knowing things isn’t as easy as people initially think.  This has been the case throughout history, as well.  If I see an omen or an angel comes down and tells me something will happen at an appointed time, could that belief rightly be called knowledge?  What if an authority figure tells me something?  Hell, even my senses are suspect; how many times has someone looked at an object and misjudged its size or distance, witnessed a mirage, heard or felt something that didn’t correspond to anyone else’s experience, or fallen for any number of other illusions?

Descartes[2] wondered whether he was the only mind in existence and whether there might be a spirit of some sort causing him to have a vision of all the other phenomena he experienced.  This line of reasoning is called solipsism[3].  This solipsistic reasoning has been extended to “Matrix”-like brain-in-a-vat thought experiments and universe-simulation theories.  One doesn’t need to get as involved as Descartes, though; a quick trip on drugs or a bout of mental instability will give one sufficient experience of “seeing things that aren’t really there” to begin doubting one’s senses.

Epistemic problems don’t even need to be that far-reaching, either.  For example, inexplicably, there is a growing number of people who believe that the Earth is flat, that crystals have magic healing powers, that children should be encouraged to undergo irreversible, unhealthy, and life-altering plastic surgery, and so many more absurdities.  Just yesterday, I was led to believe that I had to be somewhere at a certain time… and both the time and location were incorrect.

Understanding the nature of knowledge in a deeper and more reflective manner has, however, been quite useful in preventing situations such as the one that occurred yesterday.  For example, exploring common occurrences of human fallibility in theory helps to identify instances in reality and navigate people through them.  When attempting to coordinate multiple contractors, administrators, and customers, heightened awareness of epistemic difficulties and solutions has been invaluable.

Something related to epistemology and equal in utility is the study of ontology.  Ontology is the study of existence: what things exist and in what manner.  Again, this may seem to be as obviously superfluous as epistemology at first, and one could just as easily be surprised.  The earlier epistemic examples of “experiencing things that aren’t really there” apply to ontology as well, of course.  But what if I told you that a great many things we take for granted as existants[4] are of dubious ontological status?

There are the obvious things like God, space aliens, astrological energies, political authority, true love… and some less obvious things like consciousness, free will, fundamental particles, or that fortune that Nigerian prince still owes you[5].  One can’t be certain of the existence (or non-existence) of these things if one doesn’t have a firm grasp on one’s methods of knowing things but, even then, it can be difficult to prove or disprove the existence of such things.

This is where the bottom-up approach of philosophy I mentioned in the previous chapter becomes pertinent.  If one can secure knowledge of or, at least, confidence in the existence of some things, it becomes easier to bring other things into that sphere of knowledge by way of understanding the relationships between the two.  Since Descartes’s famous cogito[6], philosophers have largely attempted to prove their own existence or the existence of the phenomena experienced by themselves and used that as a starting place by which to prove the existence of the other furniture of the world that we all take for granted.

I’m sure that this doesn’t seem practical just yet.  “I know I’m hungry because I feel hungry and I know that this bacon cheeseburger I’m about to eat is real because I can see, smell, touch, and taste it.”  Fair enough.  But what if there is a God and he hates people who eat cheeseburgers?  Alternatively, what if that meat isn’t real meat but is some science experiment grown in a vat and happens to be riddled with prions[7]?  Knowing either of those circumstances may give one sufficient reason to modify one’s behavior.

The same goes for whether or not the cow and pig that were, ostensibly, butchered to produce one’s meal possess consciousness and are capable of experiencing meaningful mental events.  If one were convinced that were the case, one would likely become a vegetarian, posthaste.  Otherwise, why wouldn’t one eat baby-burgers with dolphin sauce?

That took a dark turn, but the question still stands.  There is a great deal of human suffering that one can witness, assuming one believes that other humans exist and are capable of mental faculties comparable to one’s own.  A good portion of this suffering is, directly or indirectly, a result of epistemic or ontological mistakes made by either those that are suffering or by others who have those unfortunate individuals within their sphere of influence.

This is why ethics is the oldest and most-engaged field of study throughout the history of philosophy.  The pre-Socratics[8] were primarily concerned with “how does one live the good life?” and secondarily concerned with “how does the world work?”  Socrates, Plato, and Aristotle had similar priorities.  Medieval thinkers in Europe and the Middle East alike were primarily concerned with “how does one become holy?” and secondarily concerned with “how does God work?”  Enlightenment-era and modern thinkers have been primarily concerned with “what is justice?” and secondarily concerned with political institutions such as monarchy and various forms of socialism (such as democracy, republicanism, communism, etc.).  Only recently has postmodernism shifted the focus from “how does one live the good life?” to “how can we best undermine all of the institutions which were built by Europeans of bygone eras?”, with living the good life becoming a secondary philosophical pursuit.

Of course, one can’t know how one ought to act without first knowing at least a little bit about the world one is trying to navigate, hence my initial focus on epistemology and ontology.  For example, one cannot determine that one ought to act to minimize the suffering of others if one does not first establish that there are others who can suffer and that suffering is undesirable.  The same dilemma applies when determining that one ought to live by the prescriptions of a book written thousands of years ago or that one ought to refrain from eating a delicious and juicy steak.

A quick survey of ethical theories will present so many varieties of premises and conclusions that one is liable to despair at the outset of such an investigation.  Do not worry; I hope that, by the end of this book, you will have a firm enough grasp of philosophical methodology and (possibly) the reality of the matter which philosophy engages that you will be well on your way to making sense of ethics.

For now, I think it should suffice to say that ethics is the most practically applicable area of philosophy because its primary focus is influencing how one acts.  Ethics takes into account the various circumstances an actor finds himself in and applies a rubric by which he can or should act.  As the ancient Greeks phrased it, the problem is “how does one live the good life?”  Such an inquiry is obviously directed at happiness and, hey, who doesn’t like being genuinely happy?

Admittedly, this rubric must take into account objective facts about the world, such as what things exist and in what manner, as well as subjective matters, such as the ends of the individual actor, and that process is where things get hairy.  The methodology one uses to sort through the furniture of the world and the subjective goals of the individual actor is the source of the plethora of divergent ethical theories[9].

Ultimately, this introduction to the basics of philosophy is directed at establishing in your mind the plausibility of philosophy having practical utility in daily life.  I do not know you, the reader, personally but I am confident that it is a rare exception to find an individual completely lacking in ethical awareness.  How often does one encounter phrases like “that’s just wrong,” “people should just,” “such-and-such are as bad as Hitler,” “you really should go vegan/to church/vote/to college” or other variations of statements directed at modifying or justifying one’s behavior?  Whether those claims relate to a consistent and expansive network of ethical calculations and value judgements or not, those are ethical frameworks in action.

Even if one isn’t aware of the genealogy of those ethical compunctions, I can guarantee that they are derived from some philosophical work or another.  It is important to be aware of that genealogy, though; without the ability to critically examine the consistency of ethical claims, one can fall victim to con artists and well-meaning do-gooders alike.  How many political campaigns have stemmed from undeserved patriotism or outrage-generating lies?  How many people donate money to charities that simply show a sad image and ask for money, only to line the pockets of fraudsters?  Philosophy can help prevent such things.

[1] This is a barely-veiled allusion to “Candide” by Voltaire.  It’s an exceptional work of scathing philosophical satire.  It’s not as much fun if one hasn’t familiarized oneself with Leibniz’s optimism.

[2] Rene Descartes: French philosopher from the turn of the 17th century; began a series of inquiries in modern philosophy named “Cartesian” which center on mind-body dualism and problems of knowledge.

[3] Solipsism: The belief that one’s self is the only thing that can be known to exist as such.

[4] Existants (n): Things that exist.

[5] If you don’t get the reference, just look up “Nigerian Prince scam” on the internet.

[6] “Cogito ergo sum.” translated as “I think, therefore I am.”

[7] A prion is a unique vector of disease: a misfolded protein that propagates through a host organism by inducing normal proteins to misfold in turn, spreading much like a virus does.

[8] Pre-Socratics (n): The philosophers who lived in the Mediterranean region before the time of Socrates (the end of the 5th century BC).

[9] This dilemma is made strikingly clear by the observation of David Hume in “A Treatise of Human Nature” wherein he indicates that moral obligation is a concept of a different category than facts about the world.  This is commonly called the is-ought divide.  I will address this particular issue in the chapter on human action.

The Nature of Philosophy

As is the case with most cultural pursuits which hearken back into the dark recesses of history, philosophy has no universally agreed-upon definition.  Even in academic circles, the definitions of the enterprise called “philosophy” are likely to be as numerous as the number of philosophy department chairs one asks.  This is a phenomenon[1] that vexes many analytic-minded[2] philosophers, given their obsession with necessary and sufficient conditions[3].

While I write and think very much like an analytic, I do not feel that it should be absolutely crucial to assign a definition to philosophy which outlines necessary and sufficient conditions.  At the same time, however, I am not inclined to do as postmodern[4] and continental[5] thinkers tend to do and simply hand-wave the issue, saying “it’s a family of activities that generally resemble each other”.  The only remaining option, then, is to make an attempt at crafting a heuristic[6] for identifying philosophical activities as opposed to any other activities within the scope of human intellectual experience.

Looking at the historical context of philosophy, one may get a feel for the “family resemblance” of philosophical activities.  This helps one create a genealogy of philosophy.  This genealogy begins with ancient thinkers, who were predominantly concerned with “living the good life” as well as understanding how the world worked.  One of the tools that was of utmost importance to the ancient thinkers and has maintained its utility (at least, up until the point where the postmodernists have taken over) is logic.  In the middle ages of Europe and comparable periods of time in locales such as India and Japan, there was a burgeoning attempt to ascertain the fundamental qualities of existence; admittedly, this was universally in a religious or theistic context of some form or another, but that does not negate the contributions made.

In the more modern eras, from the enlightenment[7] to today, the philosophical enterprise has been predominantly directed at understanding the manner in which man interacts with reality, from the nature of sense experience to the nature of knowledge and its acquisition.  Additionally, there has been a great deal of emphasis on the manner in which the individual interacts with mankind at large and how that interaction ought to be conducted.

Depending on one’s definitions and motivations for constructing a narrative, philosophy can be seen as the progenitor of, handmaid to, or companion of nearly every other activity in human intellectual life.  Modern scientific methods are the product of ancient natural studies and enlightenment-era epistemology[8].  Computer science is predicated on mathematical principles and linguistic theories which have been formed through philosophical discourse.  Theology is, by and large, the application of philosophical tools to puzzles related to spiritual revelations and religious doctrines.  Economics[9] is the result of a priori[10] reasoning in conjunction with philosophical tools of introspection and observation.  These relationships cannot be ignored, but the exact nature of these relationships is at the heart of many lively debates.

I could go (and have gone) on a much more rigorous exploration of the necessary and sufficient conditions for something to be considered philosophy, but that sort of exercise is better suited for a longer, more exhaustive, procedural work.  For now, I think it would be most prudent to do a quick breakdown of the etymology[11] of the word “philosophy”.  The word, itself, hails from ancient Greek and effectively means “love of wisdom”.

Of course, nothing in Greek translates so directly into English.  For example, ancient Greek has at least four words for love (arguably, there are a few more).  This particular root, “-philia”, would be most appropriately used in the context of a dispassionate desire for (non-sexual) intimacy, such as that of close friends.  Additionally, “sophos” is a Greek word that denotes a wide array of practical and virtuous skills and habits regarding wisdom, rather than just the sterile modern English concept of knowing a lot or having advanced experience.

The best I can do to describe the Greek root of the term is to say that it is “an actionable desire to develop intellectual virtue and put it into practice in the world at large”.  This takes many different forms, as demonstrated by Socrates and Diogenes relentlessly badgering their neighbors concerning how wrong their ideas of how the world worked really were, while Aristotle, Pythagoras, Epicurus, and Zeno started schools and lectured ad nauseam.  Later in history, the general attitude of the philosopher largely homogenized into academic bookishness and the writing of essays and long-form treatises.  The exact nature of each essay and treatise may be radically divergent with regard to content, method, and end, though.

Ultimately, taking into account all these diverse enterprises and the influence of postmodern thought, I believe that any human enterprise directed at creating an internally consistent, logically sound, empirically viable, and universal worldview which possesses ethical actionability, utility, and (ultimately) Truth can be rightly considered to be “philosophy”.[12]

In order to attempt to construct a worldview that correlates to reality, there are a great many prerequisites that must first be met.  For example, there is the assumption that there is a reality to which a worldview can correlate.  Another example would be establishing the fundamentals of logic in such a way so as to be certain of their utility[13].  Yet another assumption would be that one is capable of constructing a worldview at all.

Rather than dragging my readers through the most meticulous and technical aspects of post-enlightenment thought, I’d like to discuss the general methodology of philosophy and, if my readers are so inclined so as to investigate these problems in their fullness, I can recommend some starting places.[14]  These problems of philosophy are quite significant, and I believe that these issues ought to be examined, but they are not issues for beginners or the faint of heart.

Instead, I recommend familiarizing oneself with the fundamentals of philosophical methodology and beginning to explore this new way of perceiving reality, first.  Even though it has taken many different forms throughout history and our contemporary academic landscape, the fundamental methodology of philosophy has found no better expression than that of the trivium and quadrivium of the middle ages in Europe.  Although these fields of study were crafted in a theistic environment and are, therefore, often ignored or denigrated by modern (leftist) scholars, the methodology they present is still quite valid, even if it may have been used to reach illicit conclusions.

The trivium consists of three stages of thought: the logic, the grammar, and the rhetoric.  Initially, these stages of thought were applied exclusively to language (hence their names).  The logic was the basis of linguistic thought; it contained the a priori principles such as the law of identity[15], the principle of non-contradiction[16], and the resultant laws of induction.  The grammar demonstrated the rules of language which reflected the logical principles outlined earlier; subject-object relations and other syntax relationships are important to maintaining fidelity to the logical principles underlying that communication[17].  The rhetoric refined the above skill sets so as to aid a thinker[18] in convincing others of the facts which he had uncovered through the application of logic and grammar.

Since its inception as a linguistic methodology, the trivium quickly expanded into a philosophical methodology.  This is partly due to the close relationship that language and philosophy have always held and partly due to the axiomatic nature of the trivium lending itself to the inquiries of philosophy.  In essence, a thinker must first establish the furniture of the world (the fundamental principles and objects of those principles), then explore the relationships between those objects, and then find a means by which to express those relationships.  For example, the “Socrates is a man” syllogism I referenced in the footnote on this page contains material that isn’t merely linguistic.  The categories “Socrates”, “man”, and “being” are assumed to correlate to realities in the observable world.  Additionally, the grammar of the statement establishes a relationship between those categories which is assumed to correlate to the observable world.  This trend is maintained through the rest of the syllogism:

Socrates is a man,

All men are mortal,

∴ Socrates is mortal.

At each level of the syllogism, new categories and relationships are assumed or established.  On a linguistic level, logic serves as the structural framework for the grammar to populate with the symbols for Socrates, man, etc., and the rhetoric is the manner in which one would express this syllogism to others and defend its validity.  On a philosophical level, the logic serves as the source for the objects Socrates, man, etc., the grammar denotes the relationships between those symbols, and the rhetoric serves as the means by which these ideas move from my mind to the page for your mind to reassemble[19].
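For readers who like to see the skeleton beneath the prose, the same syllogism can be rendered in modern predicate notation (my own gloss, not part of the medieval trivium): letting $s$ stand for Socrates, $M(x)$ for “$x$ is a man”, and $D(x)$ for “$x$ is mortal”,

$$M(s),\qquad \forall x\,\big(M(x) \rightarrow D(x)\big)\ \vdash\ D(s)$$

Here, the logic supplies the symbols and the inference rule, the grammar supplies the relationships (predication and quantification), and the rhetoric is whatever natural-language dress the argument wears on the page.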

This quick introduction into the methodology of philosophy will be expounded upon in the next chapter, as we explore the role of philosophy in daily life or, as the ancient Greeks put it, “how does one live the good life?”

[1] Phenomenon (n): The object of a person’s perception or discussion; an event of which the senses or the mind are aware.

[2] Analytic Philosophy (n): A school or tradition of philosophical thought predominantly populated by English-speaking philosophers which emphasizes procedural methodology and strict definitions and application of logic.

[3] Necessary and Sufficient Conditions (n):  The requirements of any given subject to meet a definition; necessary qualities are qualities which, if absent, preclude subjects from being defined as such and sufficient qualities are qualities that, if present, allow a subject to be defined as such.

[4] Postmodern (adj): Relating to a school of thought which maintains certain attitudes such as indefinability, plurality of reality, and subjective narrative ontologically trumping objective reality.

[5] Continental (adj): Relating to a school or tradition of philosophical thought predominantly populated by thinkers from mainland Europe which emphasizes meta-philosophical influences on philosophy such as culture and economics.

[6] Heuristic (n): A practical, rule-of-thumb method or system for identifying or interpreting ideas as they are presented; useful without claiming to be exact or exhaustive.

[7] Enlightenment Era (n): A period in European philosophical history, commonly accepted to be from as early as the 16th century to the end of the 18th century; the era is marked by a sudden surge in scientific advance, political upheaval, and sheer number of philosophical schools of thought.

[8] Epistemology (n): The study of knowledge, the manner and mechanisms by which one knows.

[9] Austrian Economics.  This will be discussed in Chapter 4: Political Philosophy and its Discontents.

[10] A priori (adj): Describing justification or reasoning which proceeds syllogistically from given premises to their necessary conclusions, without appeal to observation.  This is often set in opposition to a posteriori or “empirical” reasoning.

[11] Etymology (n): The study of the origins of words and the changes of their meanings throughout history.

[12] There is a good amount of jargon in this proposed definition; as these terms appear later in this book, they will be defined in more detail.

[13] Utility (n): The capacity for a thing to provide or contribute to accomplishing one’s end, usually in the context of alleviating discomfort.

[14] “The Problems of Philosophy” by Bertrand Russell, “Cartesian Meditations” by Edmund Husserl, and (for the preeminent masochist) “Critique of Pure Reason” by Immanuel Kant.

[15] Law of Identity (logic): A=A (A equals A), A≠¬A (A does not equal not-A)

[16] Principle of Non-Contradiction (logic): The logical principle that something cannot both be and not be in the same mode at the same time. (Abbreviated as PNC)

[17] For example, in the over-used case of the “Socrates is a man” syllogism, if you were to mistake the subject-object relationship, you could end up with things like “Man is a Socrates”, which is not only incorrect but nonsensical.

[18] i.e. The philosopher

[19] There are deeper epistemic realities hidden in this discussion of the trivium method, but those will be addressed in the coming chapters of this book.

Language Barrier

Pod-and-blog-fade seems to be running rampant in the post-election libertarian and philosophy circles. I can’t help but wonder if it’s a combination of political hangover and something like a sigh of relief as certain existential threats have been postponed. Everywhere else, lefty entertainment and philosophy podcasts and blogs have begun their four-to-eight-year pity-party, wherein they cry about the president to the exclusion of any other form of content. Technically, that’s why I voted for Trump: to make these people cry… but I’ve got a bit of buyer’s remorse, now.

Anyway, I’m back on the content-producing bandwagon. Today, I’m talking about words.

 

I expect most of my readers will be well aware of the rules of grammar and have a decently expansive vocabulary. I’m not going to make a “top ten” list of fun punctuation marks… I mean, who hasn’t heard of an interrobang‽ I’m not going to share my fun story about arguing about ancient Greek grammar with Jehovah’s Witnesses (subject-object relationships are more important when you haven’t discovered punctuation yet). Instead, I’m discussing the philosophy of language in broad strokes.

As far as I can tell, most people haven’t critically examined the relationship between language and the world around them (unless they’ve smoked a lot of weed or have suffered severe concussions). As such, most people have intuitively just assumed one of two paradigms concerning the operation of language. If this describes you, understand that I’m not talking down to you, as this is something esoteric enough in the realm of philosophy so as to be compared to particle physics or studying neolithic attitudes towards one’s in-laws. It is, however, an important issue to address when engaging in philosophical discussions.

Now that the disclaimers are out of the way, what are these two paradigms of language people assume? The first is what could be called “linguistic realism”: the belief that words and sentences directly correlate to reality (in some cases, one could even say that words and reality are commensurate). In the case of thinkers like Plato and Aristotle, the word “justice” is an actual expression of some form or concept. When a poor soul makes the mistake of using the word “justice” near Socrates, Socrates assumes that the man must know the platonic form of justice so thoroughly as to be able to utter the word, itself. Aristotle is a little more grounded, but he still assumes a sort of direct correlation between the word “justice” and manifestations in meatspace of someone “giving that which is owed”. In the modern age, that attitude is usually expressed by people who really enjoy Rhonda Byrne, people who think that bad words are bad words due to some innate quality of the word itself, and people who deride the idea of words changing meaning over time as well as the creation of new words. I used to be a linguistic realist.

The second paradigm of language could be called “postmodern nominalism” or “naive nominalism”. This position holds that words have very little correlation to reality; as a matter of fact, the best way to describe the position would be “the belief that words exist as nothing more than a game between individuals wherein rules are made up concerning the meaning and use of words, with little to no relation to the world outside of said game.” In the case of thinkers like Peter Abelard and Ludwig Wittgenstein, the meaning of a word depends on something along the lines of social consensus and common usage. When I say “tree”, it only means “that thing growing out of the ground, made out of wood, and bearing leaves” if I am speaking to someone who comprehends English and understands the botanical context of the statement. In a different context, the term “tree” could refer to a shape, such as that of a propane tree, a family tree, or a decision tree. To a non-English-speaker, it may as well be any other set of phonemes: it’s pure gibberish. In the modern age, that attitude is usually expressed by people who really enjoy saying “a rose by any other name…”, people who think that bad words are bad because of some historical or class-related context, and people who live-tweet their netflix-and-chill experience with their cis-gendered binary life-partner.

One of the clearest ways to delineate between these two positions is to inquire as to the nature of dictionaries. For example, if I hear or read a word I do not recognize, I obviously go to the dictionary… well… to Google’s dictionary, at least. When I read the definition of the word, I am reading one of two things: I’m either reading the common context for the use of the particular term at the time of publication, or I am reading the “actual meaning” of the word. For example, if I were given the word “obacerate”, I would obviously have to google it or look it up in a century-old edition of the OED. When I get the definition “to interrupt one’s speech”, is that what the word means in some innate sense, or is that simply a description of how the word has been used in the past? If I were to begin using the word in colloquial conversation, would it mean “to interrupt one’s speech”, or could it take on a new meaning based on the context in which I use it or the context in which others understand it? If I only ever used the word “obacerate” when referencing covering someone’s mouth or punching them in the jaw, could the word take on that connotation?

If one says “the word means what the word means, regardless of context”, one is likely a linguistic realist. If one says “the word hasn’t been used for almost a hundred years; it can mean whatever society begins to use it as”, one is likely a naive nominalist. A more apparent but less cut-and-dried example would be the use of words like “tweet”, wherein it could either be onomatopoeia for bird sounds or an activity which takes place on the website Twitter. If the word were to fall out of common parlance concerning birds, would the meaning of the word have changed once Webster cuts out the atavistic use of the word?

As is typically the case, I get the feeling that most people who bother to read this far are asking themselves “Why do I care about this hair-splitting over words?” If you are, you are right to do so. In day-to-day conversation, words just mean what they mean. If there is a misunderstanding, we need merely exchange one word for a synonym or offer a definition to contextualize the use of a particular word. In philosophy (and, therefore, any sufficiently advanced field of thought), though, these sorts of distinctions become important.

For example, if I assume that words have innate meanings and are either direct representations of something or a sort of manifestation of the thing, itself, then when I start talking about something like colors, thoughts, phenomena, property norms… you know, abstractions… it can get hairy if I’m speaking to someone with a different set of preconceptions about language. I’m a sort of compatibilist nominalist: I greatly appreciate Peter Abelard’s contributions to the philosophy of language, and I’m a recovering linguistic realist. As I will eventually get to in the 95 Theses, and as I have already covered in the Patreon subscribers-only content, the human experience appears to be one which takes place entirely within one’s mind.

Whoa. Hit the brakes. That likely seems either patently obvious or totally insane, depending on who’s reading it. It’s either obvious that one has a consciousness which navigates a never-ending stream of sense-data and never grasps a “thing-in-itself” beyond those sense-inputs, or it’s insane to start talking like a Cartesian or Kantian solipsist: of course one sees, touches, tastes, smells, and hears the world around them and discusses these things with others…

…Which is a similar divide to the one between the linguistic realists and the postmodern nominalists. As far as I’m concerned, though, my mind is locked away from the world and only sees it as mediated through sense organs, nerve connections, chemical emulsions, brain wrinkles, and more. The only way I can make sense of all those inputs is to pick out regularities and assign concepts to those regularities. Through this systematic approach to those sense inputs, one can create a noetic and epistemic framework by which one can interact (albeit through similar mediation as the senses) with the world outside of one’s mind.

After all that fancy noesis and epistemology is underway, it becomes useful to apply language to this framework. If I consistently see a woody creature growing from the earth and bearing leaves and fruit, and I wish to express that set of concepts to someone else (who is, to me, merely another bundle of sense perceptions, but whom I assume to be someone like myself), it helps to have a name, a sound, a mark, etc. to signify that set of concepts. And thus the basis for the word “tree” is created. The intuitive concepts such as causality, correlation, etc. also exist in that bundle of sense inputs and later receive names. If trees, causality, or even a world beyond the phenomena don’t actually exist, the sense inputs I have mistaken for these things still do. The reason I bring up abstractions of relationships, such as causality, is that they seem to relate to certain aspects of grammar. For example, subject-object relationships and prepositions seem to presuppose these causal and abstracted relationships.

Now, of course, there are hundreds of years of philosophy of language at work, and I couldn’t hope to offer a thorough examination of my particular flavor of the philosophy of language here. The reason I tried to give this 2,000-word summary of the idea is twofold. First, I think that this is an issue that underlies a lot of misunderstandings and disagreements on the more superficial levels of human interaction. From the comical dilemmas over who’s allowed to say “faggot” or “nigger” to the more fundamental issues of whether or not “rights” or “norms” exist and in what manner, these conflicting theories of language are at play. The 95 Theses will go into the idea in more depth and, if the Patreon subscribers demand it, I’ll explore the idea further.

Second, I want to announce the upcoming glossary page on the website. I am often accused of mutilating language or using words in a way that only I can understand them. Less often, I’m accused of using too many technical words for people to keep up. I hope to remedy some of these issues by providing a cheat sheet of sorts to help people keep up with me and to understand what I am saying when I use words in a more precise way than they are commonly presented in dictionary definitions and colloquial use. Of course, I need feedback on which words should go in said glossary so, please, do comment on this post and send me emails about my abuses of language.

TL;DR: Philosophy of language is a very involved field of study, but nearly everyone is a philosopher of language, provided they speak a language. Even if one hasn’t critically analyzed their understanding of how language relates to the world, they are walking around with a bundle of assumptions as to what they mean when they speak certain words, and whether or not those words have some innate quality to them or whether they are just some sort of social game being played with other speakers of that same dialect. Most of those assumptions can be categorized as being that of “linguistic realism” (words are directly related to things and act as an avatar of the things they relate) or that of “postmodern nominalism” (words don’t mean anything in and of themselves and only vaguely gesture at socially agreed upon concepts). There are other, more nuanced positions that people can hold, but usually only as a result of actively engaging in the philosophy of language, an exercise I strongly recommend for those that are able.

A Frank Discussion of Rights

Previously, I have written on my blog and on social media concerning rights and all the things surrounding rights in common discourse. As far as I can tell, I have not written the word “right” in quite a while… and I’ve only mentioned it a few times out loud, in private conversations, as I explored the ideas I am planning to write on today.

Today, I want to begin a frank discussion of rights. Given my self-imposed word limit and general mental constraints, I want to ask and contextualize three questions and make one (potentially controversial) follow-up statement. One may be able to trace the evolution of the ideas alluded to in previous posts to where I am now by reading through my published posts and the book-exclusive material, and one certainly could do so if one knows me on social media or in person; regardless, this is where I am at in my exploration of the concept of rights. So now, some questions:

  1. What function does the concept of rights serve?
  2. What is the ontology or metaphysics concerning rights?
  3. Are there more philosophically resilient alternatives to the concept of rights?

I will save my statement for later.

Rights seem to be a shorthand for ethical and moral reasoning. In the classical texts I’m familiar with, “rights” are less of a concern than they tend to be in modern and postmodern texts. As a matter of fact, when the Greeks and Romans addressed concepts that look like “rights”, they tended to focus more on what the term “privileges” covers in the modern age: a liberty granted to an individual or group by the guy(s) in charge. In a lot of ways, moral and ethical argumentation either had everything to do with virtue and ignored rights entirely, or centered entirely on one’s responsibilities as derived from one’s privileges. In the Middle Ages, the concept had evolved slightly so as to include what amounts to “privileges granted by God”; a prime example would be the so-called “divine right of kings” or the liberties taken by the Church.

In the 1700s, there was a major shift in popular philosophy. With the sudden explosion of productive technologies (such as the printing press and general industry), the subsequent decentralization of cultural production and consumption, and the sub-subsequent weakening of governmental power, certain theories that were only whispered about in the Middle Ages became widely popular. One such set of theories would be those of classical liberalism; another would be social contract theory; and one more example would be the rise of secular humanism.

One theme that was central to all three of those sets of theories was this niggling question: “If our rights aren’t derived from the king’s (or God’s) permission, how can morality exist?” The answer that seems to have won out in the marketplace of ideas is the straightforward, “People have rights because they are people, just because. Rights are something intrinsic instead of some contingent set of permissions.” Given how liberalism, democracy, and humanism have played out over the last few centuries, I doubt anyone with a basic understanding of modern history could honestly deny that the answer provided above is fraught with pitfalls. Even the SJWs demanding that free college, getting paid just for existing, and having permission to murder one’s offspring are intrinsic rights, just because, will tell you that people are misapplying the concept.

Ultimately, every application of rights I am familiar with revolves around the essential question(s): “What can I get away with and what am I entitled to?” This is the reason I say it seems to be the case that rights are used as shorthand for ethical and moral reasoning; the focus of the rights discussion seems to be largely the same focus of ethical argumentation in general. If I have a negative right (the moral claim to be exempt from some obligation or another), such as the right to be left alone, that would mean that I “can’t get away with” harassing others (because they have the same right). If I have a positive right (the moral claim to be served by others), such as medical care, that would mean that anyone who can provide me with medical care is obligated to do so.

Depending on the theory, rights derive their ontology from different underpinnings. Some theories posit that rights are God-given, others posit that rights are brute facts, yet other theories posit that rights are derived from the general acceptance of society, and on and on. I think this diversity of suggestions is a result of the above-discussed function of rights. Ethics and morality are, by their nature, abstract. Ethics and morality don’t make things happen in the world, at least not directly; they are descriptions of how one ought to act, but they don’t make someone act in a particular way. Rights, as a shorthand for the parameters of acceptable human action, are at least equally abstract. Where one can observe an apple falling in the orchard and posit a theory as to the mechanisms by which such an event occurs and the regularity with which such an occurrence is likely, one does not have the opportunity to observe a right and speculate as to the mechanisms by which the right accomplishes its end.

Instead, more often than not, a philosopher or political activist will ask themselves, “What do I want to achieve? By what mechanism can I empower people to give me what I want and disenfranchise those who would get in the way of my goals?” This may sound like a very cynical take on Locke, Montesquieu, Smith… but one must remember that “What I want to achieve” may in fact be “peace on Earth and goodwill towards (wo)men” or some other fruitcake ideal. Upon answering these questions, the strong zeitgeist of rights becomes a valuable tool in accomplishing those ends. One need only come up with a source of rights that is compatible with one’s pre-existing ontological commitments and promotes one’s agenda.

Of course, this cynical reading of the history of philosophy presents a series of arguments concerning rights that have more to do with sophistry and political theory than with a genuine pursuit of Truth. If one were to make a genuine attempt to ground rights in a reliable ontological or metaphysical framework, I imagine it would look a lot like the cases made by a number of Rothbardian philosophers. Unfortunately, the level of abstraction required to make a case for the existence and nature of rights rivals the cases for the existence and nature of God. I only have enough bandwidth for one God-level case at a time, and people should know by now which one I’ve taken on. Instead, I just want to point out two things: a theory of rights which anchors itself in some moral or ontological case needs something metaphysical which lacks direct interaction with the physical world, some sort of platonic realism; and a theory of rights which anchors itself in utilitarian or sociological cases results in a utilitarian ethical framework which is sufficient to replace a similar doctrine of rights altogether.

So, what if a grounded theory of rights is better left as an ethical framework without the concept of rights? Well, for one, doing so effectively neuters the ongoing social justice commentary as well as the general statist narratives wherein people claim positive rights which must be produced by state slavery. Additionally, it expedites certain discussions within and without my particular school of thought when one focuses on the principles and facts available which concern themselves with issues most people refer to as “rights issues”. What I mean to say is that the rhetoric and traditions of rights may only muddy the waters if there is an equally or more philosophically resilient alternative.

Despite the likelihood of being accused of all manner of character flaws, such as being a materialist, being a nominalist, or being some sort of pagan or atheist, I think we can ground any discussion of “rights issues” in a far more easily defined and effective set of terms and principles. For example, I believe Hans-Hermann Hoppe’s premises for argumentation ethics obtain nicely. One such premise is that private property is an inescapable feature of the human condition; the very fact that one has access to and control over one’s body demonstrates the principle of self-ownership in a way that cannot be abrogated by any instance or degree of criminal trespass or chemical interference.

So, ever the quintessential AnCap, I think that exploration of the logical, physical, and metaphysical features of property will sort out all of the issues commonly presented as “rights issues” and will, more often than not, produce results that jibe with rational intuition. For example, a good portion of the classical liberal “negative rights” are the immediate logical consequence of the nature of property: the right to secure oneself against coercion, murder, and theft is less a “right” and more a natural result of the nature of self-ownership. If I own my body (and, by extension, that which my body produces), given the definitive quality of property that is “exclusivity”, I may exclude others from use of that property by whatever means do not involve trespass on my part. There: without “rights”, I’ve established the justifiability of self-defense and, due to the universal nature of property, have also denied the justifiability of trespasses such as murder, coercion, and theft.

If there were any rationally defensible claim to what is often called a positive right, an argument for such a claim could be made stronger by avoiding a discussion of rights, itself, and focusing on the reality of property, instead. Perhaps the most defensible claim of positive rights is that of the Catholics: the “right to life”. Even so, a “right to life” cannot be taken seriously without resulting in absurdity, given the above-alluded-to relationship between positive rights and state slavery. Death is inevitable, so to have a right to escape such an inevitable phenomenon would require that mankind collectively devote every resource available to the discovery of immortality, which would, itself, result in the deaths of everyone involved.

Instead, acknowledging the unborn human’s ownership of its body, the propertarian obligations of a landlord (or, in this case, a mother), the degree of action either is able to engage in, and other features of property and the human condition would result in positions which directly parallel the traditional positions of the Catholic Church concerning abortion, evictionism, self-defense, euthanasia, and care for the elderly. As an added bonus, such an activity would demonstrate the absurdity of the “right to choose”, the “right to birth control”, etc.

The time has come for my controversial claim (as if this hasn’t been controversial so far). The Catholic Church made a grave error in adopting the enlightenment era’s rhetoric concerning rights. I kinda already alluded to that claim in the last section of the post, but I think it is important enough to warrant explicit attention. In engaging a secular humanist agenda on its own flawed terms instead of continuing its pursuits in determining the truth of the matter, the Church made itself more popular in an adversarial world. In the process, though, it laid the groundwork for the current social and ethical battles it finds itself buried under. That is not to say that the doctrinal positions of the Church, or even the moral and ethical teachings of the Church as a whole, are inaccurate, but it is to say that the use of flawed theories and terminology obfuscates the veracity of those teachings. Because of this obfuscation, it is not an unfair accusation to blame the SJWs on the Church and to point out that the Church has backed itself into a corner concerning the pursuit of knowledge of creation (the most noticeable example being economics). This mistake can be rectified if teachers and clergy make a concerted effort to pursue truth as opposed to political expedience… but how long it will take to do so is very much a live question.

TL;DR: Rights, in their most resilient formulation, can best be described as “temporary privileges granted by the guys in charge” or, alternatively, “an ethical or moral shorthand for determining the justification of actions”. There are a number of frameworks in which people try to ground rights and accomplish the ends for which they have created those rights; some are more reasonable than others, but they all present issues I do not believe can be resolved. Additionally, there is far too much baggage and theory in the realm of discourse concerning rights to expect calm, rational debate. Property, and the logical and material consequences of property, provide a resilient alternative to the discussion of rights which also achieves intuitive outcomes. For these and other reasons, I think that it would be a better rhetorical move to simply deny the existence of rights altogether and demonstrate the efficacy and utility of property in dispute resolution and moral or ethical dilemmas.



Liberty Classroom: an Invaluable Tool

If you are reading this near the end of November in 2016, you can get some major discounts and provide a great deal of support to the Mad Philosopher project by going to Tom Woods Liberty Classroom and subscribing.  If you are reading this at any other time, you can still provide a great amount of value to the project by doing so.

Tom Woods Liberty Classroom is easily one of the most undervalued resources available on the internet, as it provides a legitimate PhD-level resource on a number of crucial subjects such as history and economics.  The term “legitimate” is important, here, as what most universities provide is only half-true and full of leftist propaganda.  This resource is as close to comprehensive and as close to unbiased as can be found.

Click Here to get some coupon codes and subscribe.  This affiliate program is definitely one of the best ways to support the Mad Philosopher project, second only to just sending me Bitcoin directly.

 


The best way to fulfill the maxim “Carpe Veritas” is to subscribe to Liberty Classroom and take advantage of everything such a subscription provides.


Chapter 3: Orders of Knowledge

We have thus far introduced ratio and intellectus. As a quick refresher: intellectus (or intellect) is the inborn faculty which experiences the self and is the predecessor to reason, and reason (or ratio) is the development of said faculty. However, in addressing the human epistemic experience and briefly examining the manner in which our mind operates, we have completely overlooked the primary concern of modern epistemology. Knowledge, in all of its complexity, still haunts our exploration of our epistemic assumptions.

While the exact definition and importance of knowledge is hotly contested in this postmodern environment, one definition tends to maintain its resilience. Knowledge, in my mind, is limited to what is called “propositional knowledge”; the experiential basis of propositional knowledge we have already discussed ought simply to be called “experience”. I define propositional knowledge as “justified true belief”. Now, as the contentious discussion that rages on will demonstrate, this definition is not flawless and self-sufficient, but that should not overshadow its usefulness or accuracy.

A brief examination of the Stanford Encyclopedia of Philosophy’s page on knowledge[1] illustrates the key issues with the above definition, drawing on the works of those such as Gettier. No matter how complex and detailed the discussion becomes, the utility of the above definition is undeniable. Much like Russell’s discussion of our knowledge of universals,[2] we already have an intuitive understanding of what knowledge is. As a matter of fact, we use that intuitive understanding to critique our proposed definitions, the chief example of this being the Gettier problems. A brief explanation is in order: the Gettier problems are a series of hypothetical instances contrived such that the definitive requirements for knowledge are met, but the conclusion flies in the face of our intuitive understanding of knowledge. A workable solution to such a dilemma is simple: we must accommodate for such an intuitive element in our definition. For now, “a justified true belief in which the justification is factual and sufficiently related to the truth at hand” will suffice, as that is, more or less, our intuitive understanding of knowledge (ignoring the verbosity of the definition). “Justified true belief” is a good shorthand for this definition. More work clearly ought to be done to develop a rigorous and categorical definition for knowledge, but that is not the intent of this work. Besides, I am confident that whatever rigorous categorical definition is found will simply be a more detailed and explicit form of the one I have given.
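For those who prefer symbols, the classical analysis and my amendment to it can be sketched in conventional epistemic shorthand (a gloss, not a rigorous formalization): for a subject $S$ and proposition $p$,

$$K_S(p) \iff B_S(p) \wedge p \wedge J_S(p)$$

where $K$ is knowledge, $B$ is belief, and $J$ is justification. The Gettier problems attack the $J$ clause; the amendment above amounts to strengthening it to some $J^{*}_S(p)$, a justification which is itself factual and sufficiently related to the truth of $p$.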

Now why, at the beginning of chapter three, do I suddenly launch into definitions, qualifications, and disclaimers with nary a mention of the next thesis in the sequence of ninety-five? Simply put, the next several theses operate with this definition of knowledge in mind, and the mere definition of a word does not justify the use of a thesis when I am limited to a mere ninety-five. One more minor but crucial point must first be made, however: our intuitive use for knowledge is the formation of a reliable worldview, predicated on the reliability of the mind. As with my explanation of experiential knowledge, man is a habitual creature; our understanding, use of, and reliance on propositional knowledge is no exception. With this tedium out of the way, we may now proceed.

Thesis #7: One gains first-order knowledge by the exploration of logic as pertains to “self-apparent” principles and facts…

As I explicated in the first two chapters, “self-apparent” principles and facts are experiential in nature. Even the existence of a “self” is derived from the experience of reflecting on one’s experiences; this knowledge is not inherent to the mind, brain, man, whatever. Even the definitive and logical truths we find to be “self-apparent” are derived from a more primary experience. The easiest example would be that of a triangle. A triangle is a closed two-dimensional polygon with three angles and three sides, the angles of which total one hundred eighty degrees. We can identify triangles by these factors, but before we can discover or express these attributes of triangles, we must first have an experiential knowledge of spatial relationships and basic math/geometry.
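To make the definitional content explicit (a standard statement, and one which holds specifically in flat, Euclidean space): for any triangle with interior angles $\alpha$, $\beta$, and $\gamma$,

$$\alpha + \beta + \gamma = 180^{\circ}$$

The equation is definitive in the sense given above but, per the point at hand, one must first have the experience of space and magnitude before any of its symbols can mean anything.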

In the last chapter, we established certain epistemic tools through our mental experiences. While it is quite productive and enlightening to turn these tools on themselves, in a manner similar to that which Hegel discusses in his Introduction to the Philosophical Encyclopedia,[3] it is not required in order to begin observing and acknowledging the world at large. We can establish undeniable matters of truth and fact using syllogistic reasoning coupled with experience (most especially self-apparent facts). Our definitions of knowledge and triangles are prime examples of such a practice. This method is simple enough: one first states a definitive fact derived from experience, then, through the use of the PNC, explores the implications of such a fact; so long as nothing is self-contradictory or contrary to experience, it can be assumed to be first-order knowledge (or, knowledge proper). If the logical exploration results in a contradiction, one must first check one’s logic before throwing out the initial premise. This work is, itself, an example of such a practice; our first chapter begins with three assumptions made due to their self-apparent nature, and here we are, two chapters later, still exploring the logical ramifications of such assumptions.
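Schematically (my own compression of the method, not an additional claim): take an experientially grounded premise $P$ and derive its consequences; if, for some $Q$,

$$P \vdash Q \wedge \neg Q$$

then the PNC has been violated, and one must re-check the derivation before rejecting $P$ itself. If no such contradiction, and no conflict with experience, arises, then $P$ and its consequences may be held as first-order knowledge.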

My current experience, aside from self-apparent principles, is my only source of immediate knowledge. If our friend Mike, from the first chapter, is experiencing a particular event, say the fateful day he shot himself in the leg, he has a whole array of experiential facts at his disposal, as well as deductive reasoning, to assist him in knowing certain facts. He has the experience of a raw coldness in his thigh as well as a ringing in his ears, both of which are undeniable. Mike calls such an experience “pain” or “injury”. Also, he experiences recalling memories of having dropped the handgun and attempting to recover it on its descent.4 Deductive reasoning may not be able to establish with certainty who or what is at fault for his current circumstance, but it is sufficient in analyzing the circumstance itself, which, to be frank, is far more important when faced with a circumstance such as:

  • I am experiencing phenomena congruent with severe injury

  • If one wishes not to die, when faced with serious injury, one ought to pursue medical assistance

  • I do not want to die

  • I should seek out medical assistance

rather than pursuing the line of inquiry consistent with “why?”

Syllogistic, or deductive, reasoning is ultimately a practice in exploring the ramifications of the PNC as it applies to a particular claim. In the above example, it pertains to one’s particular experiences of pains and desires. As an astute logician will note, the above syllogism cleverly cheated; it introduced a non-immediate experience or a non-deductive inference. The premise, “if one wishes not to die, when faced with serious injury, one ought to pursue medical assistance,” is not necessarily an experiential fact or a deductively ascertained claim. However, herein lie two details which require attention: intuition and second-order knowledge. The latter will be discussed soon; all we need note now is that one can make legitimate first-order claims which are informed by second-order knowledge, so long as one is cognizant that they are doing so and verifies its congruence with the paradigm5 established by one’s first-order knowledge. The case of intuition, though, is slightly more complex. As discussed earlier6, there is a distinctly observable reality that the human mind inherently possesses certain faculties, the ones addressed so far being intelligence and instinct. The exact cause of these inherent faculties is beside our current line of investigation. We will simply play the pragmatist for now; we will treat intuition as a brute fact and discuss its causes and specifics later. In the case of Mike, he would likely have an intuitive response to his gunshot wound: attempting to staunch the blood flow and the like; a shorthand for this series of responses would be “to pursue medical assistance”.

…it is highly falsifiable, and applies to physical and metaphysical fact as well as matters of truth

The above is a particular instance of what is essentially the only true type of knowledge: the only circumstance of a “justified true belief”. Nothing beyond the definitive and falsifiable justification of immediate experience and deductive reasoning can provide a greater degree of certainty. This certainty is not, however, absolute. It qualifies to be called certain due to its immediacy and falsifiability. Falsifiability is the circumstance and burden of proof one would have in disproving a particular claim.7

Karl Popper, having posited falsifiability as crucial to epistemological study and having built an entire body of work on such a principle, is a valuable asset to one such as myself. Anchoring an entire philosophical worldview on a few epistemic assumptions, I must be diligent in exploring these assumptions and securing them as best I can. Unfortunately for me, Popper is simultaneously more pessimistic and optimistic than myself; making use of his work will require diligence. We both agree that knowledge is always suspect. It is always subject to criticism and correction. In his ardent desire to avoid supporting authoritarianism8, he seems to fall into a trap of epistemological absurdity in which “all knowledge is human… it is mixed with our errors, our prejudices, our dreams, and our hopes… all we can do is grope for truth even though it be beyond our reach.”9 As the previous chapters10 show, I agree that our knowledge is limited and influenced by the human condition, but to assert (unfalsifiably, I might add) that truth is unobtainable due to that reality undermines the very premise of such a claim. Besides, to strive for the admittedly impossible is to waste one’s time. One’s energy would be better spent, at a minimum, on more practical asymptotic activities instead (like curing disease or pursuing pleasure or enlightenment).

Given how jealously I withhold the title of “knowledge”, the degree of confidence one can have in their beliefs hinges on falsifiability. In order to claim something as knowledge11, one must be making a claim which is immediately apparent and clearly falsifiable. Falsification of this (and every other) form of knowledge is, in truth, a good thing. Falsification provides an opportunity for better refinement and correction of an otherwise flawed worldview.12 One should always open oneself to rational and rigorous criticisms, so as to avoid becoming a relic-bearer of Lady Philosophy’s garment.13

This isn’t to say that the first time something unpredictable or inconsistent emerges one ought to throw out their entire worldview and sequester themselves in a mire of Cartesian doubt. Quite the opposite is the case: one ought to defend such a claim until such a time as it is sufficiently disproven or falsified. We will explore this more later. For now, it will suffice to point out that single incidents of inaccuracy in one’s beliefs may in fact be flukes; only cumulative or consistent error is sufficient cause for radical reevaluation.

Now, many may mistake this epistemic framework for some Kantian a priori reasoning or some assertion of continental brute facts. Neither is the case at hand. These self-apparent facts are, in fact, theory-laden. Even the most fundamental facts one can select, such as the Cartesian cogito,14 still contain some degree of implicit theory. In the case of the cogito, there is at least the predicate assumption that there is a causal relationship between actions and existents (that the experience of thought must be attributed to a thinker) and that the PNC obtains. The issue is not one of selecting a brute fact or discovering an a priori truth, but rather of finding a sufficient fact on which to vest one’s philosophy, because all self-apparent facts are, without exception, theory-laden.15

Of all the things we have allowed into our ontology thus far, this theory-ladenness itself must either be a form of brute fact, an inherent fact that there is no fundamental starting-place to understanding the world,16 or be an inextricable attribute of man’s mind. I am in favor of both of the proposed options, actually. I believe that the universe is an elegant and logically constituted entity which has no one logical predicate on which all else hinges, but rather is an intricate and interdependent network of logically constituted laws in which the absence of any one would equally cause a total collapse. Because of that holistic nature of reality, our minds are constituted in the same fashion, in order to accurately form a conception of the universe. This inherent holism, then, is an aspect of one’s intellect.

As mentioned, this knowledge pertains to physical and metaphysical fact, as well as truth claims. So far, in this work, the most prominent first-order claim pertaining to physical fact I have made is that one has embodied experiences. Falsifying such a claim may be somewhat difficult to do experientially with our current technological limitations. However, it could be quite easy to locate a logical inconsistency with the claim. For example, one could at least cast doubt on such a claim by finding an inconsistency between the epistemic claim that one is capable of abstract thought and the insistence on the primacy of the material senses. I clearly have not found one, else I would have asserted otherwise, but the purpose of publishing a work such as this is to allow others to double-check my claims.

In similar fashion, we have made first-order metaphysical claims. Chief among them would be that one’s understanding dictates one’s behavior. Or rather, a more specific case of that assertion would be that man operates with an intermediary function between stimulus and response. The easiest manner in which one could falsify such a claim, as far as I can tell, would be to demonstrate that it is superfluous to forming a sufficient paradigm for all second- and third-order reasoning. I have not yet addressed the framework in which one would do so, but we will get to it shortly.

This naturally brings us to truth claims. Technically, either everything or nothing we have discussed thus far qualifies as a truth claim, given the common usage of the term “truth claim”. As far as I am concerned, a “truth claim” is distinguished from a factual claim (such as the two we discussed above) with regards to its subject matter. A factual claim has to do with a state of affairs in specific or categorical situations, whereas a truth claim regards matters of transcendental reality. This will be addressed in more detail in the next chapter, but for now, we can refer to the PNC as one such claim. While I believe it to be impossible, one can falsify the PNC simply by illustrating a logically cogent circumstance in which something both is and is not in the same mode at the same time.

Thesis #8: Through the marrying of multiple first-order concepts and further introduction of experience, one gains second-order knowledge…

As the thesis indicates, second-order knowledge17 is predicated on first-order knowledge. The sum total of one’s first-order knowledge creates a paradigm on which one’s second-order knowledge can be built. Having already shown themselves to be self-apparent, rationally cogent, and non-contradictory, first-order claims can be relied upon to fact-check one’s second-order claims. Whenever one encounters or forms a second-order claim, one must critically assess its validity against the paradigm in which one is operating.

Through the application of deductive reasoning, one takes self-apparent logical principles and analyzes their relationships. By analyzing the relationships between their conclusions, one removes oneself from the self-apparent by a minor degree. This line of reasoning has few applications outside of mathematics without the added element of experience. Practically speaking, the marrying of multiple first-order concepts and the addition of experiential data is fairly straightforward.

Mike, now medically stabilized, can effortlessly begin to assess what happened from the perspective of strong belief. He has already ascertained that he is injured and that he dropped a loaded gun. By drawing from experience, he knows it is incredibly likely that, in fumbling to catch the gun, he pulled the trigger. He also has a strong belief that the other two people who had possession of a handgun at the time were executing proper gun safety and were not in such a position so as to fire a gun at an angle corresponding to his wound. All of this evidence, along with the deductive arsenal provided by his first-order paradigm, can (rightly) lead him to the conclusion that he did, in fact, shoot himself in the leg.

The belief he has that his companions were executing proper gun safety is primarily due to experience and collaboration. He has witnessed them demonstrate their skill, knowledge, and conscientiousness many times before while shooting. Additionally, they are responsible for his knowledge of the rules and basics of gun safety and use. Adding to his certainty that he did in fact shoot himself would be one of his companions serving as a witness to the event: “Dude, you just shot yourself!” In their own way, collaboration and communication are a form of experience which is useable in the development of second-order knowledge. Any one stranger can present a claim to another; without a well-developed discourse between the two, in addition to the critical thinking skills required to assess that discourse, such an interaction is meaningless. If some stranger (or even a friend) simply walks up to you and makes a claim, anything from “the sky is blue” to “Elvis lives”, and leaves promptly thereafter, there was no opportunity to expand one’s knowledge base. However, as will be explored later in this chapter and especially in the next chapter, someone can make an argument for a second-order belief, and that allows for the opportunity to expand one’s knowledge base or at least reassess one’s existing knowledge base.

To one familiar with logic, this thesis essentially concerns itself with induction. While Russell explores induction quite thoroughly in chapter six of his “Problems of Philosophy”, he fails to provide a concise definition for quick reference. I will suggest a definition and then recommend that the more ambitious of my readers read Russell for more detail. I would define induction as, “the rational function by which one forms a strong belief by repeated experience and logical inference.”
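To make that definition concrete, here is one crude formalization of my own, borrowing Laplace’s rule of succession; it is certainly not Russell’s account, and the numbers are purely illustrative:

    # Toy model of induction: confidence in a prediction grows with
    # repeated experience but never reaches certainty.
    def inductive_confidence(successes, trials):
        # Laplace's rule of succession: (s + 1) / (n + 2)
        return (successes + 1) / (trials + 2)

    # After observing 10,000 consecutive sunrises, one's strong belief
    # that the sun will rise tomorrow:
    print(inductive_confidence(10_000, 10_000))  # ~0.9998, strong but fallible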

Clearly, the study of physics18 lands solidly in this category. The empirical and observational study of the world which makes use of logic, mathematics, and repeated experimentation has been developed with the intent and end19 of forming a cohesive and reliable framework of second-order knowledge. Physics has proven invaluable in expanding our knowledge and providing for vast improvements in our quality of life and shows no signs of slowing in pursuit of that end. However, some have fallen victim to the ideology of scientism, believing that this material study of the world must be predicated on a purely material ontology and is the alpha and omega of knowledge. As I have already illustrated, science is predicated on a first-order paradigm and is part of a larger framework of philosophy. I am reminded again of Russell:

“The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken”20

As an aside that my broader ideology and disposition will not allow me to leave unaddressed, who is crazier, the chicken who distrusts the farmer and awaits and prepares for such a time that the common belief in the farmer’s benevolence is falsified, or the chickens who are content with the utility of daily meals?

… this order of knowledge is less falsifiable than the first.

Like first-order claims, second-order claims cannot contradict each other. In the popular case of science, it is easy to claim that this is not so. For example, Newtonian gravity is still used universally for most every day-to-day practical application of physics, such as architecture or demolition, while Einstein’s theories on relativity have effectively falsified Newton’s theories. That claim, though, is naive; certain aspects of Newtonian mechanics have been shown inaccurate and ineffective, but that does not mean that there were not accurate observations, predictions, and knowledge claims contained therein.21 In less esoteric knowledge bases, this reality is more evident. One cannot simultaneously claim that the sun will rise tomorrow and claim that it will not. Mike cannot claim that he had shot himself in the leg and that he did not, nor can the chicken claim that the farmer will wring its neck and that he will refrain from doing so.

In reality, if any two second-order claims are found to be contradictory, they are likely inconsistent with the first-order paradigm one established prior to making such second-order claims. This is because no second-order claim can be made without first assuming the accuracy of one’s first-order paradigm and verifying that second-order claim against it. In such a circumstance that there is a true contradiction between two second-order claims (as opposed to a merely apparent contradiction) which are both supported or necessitated by one’s first-order paradigm, one must reassess their first-order paradigm in order to ensure that some mistake was not made which would result in such a contradiction.

If there is no flaw in the first-order paradigm, one must move on to pitting the contradictory strong beliefs against each other and attempting to falsify them. In most cases, second-order claims are experientially falsifiable. The primary use of induction is to make predictions about the world and about certain logical results. In these cases, one needs only to seek out instances in which the predictions made are consistently or severely inaccurate.

Thesis #9: Through the extension of trends in the aforementioned orders of knowledge and the marrying of multiple second-order concepts, one can gain third-order knowledge: this order is rarely falsifiable by any means other than proving logical inconsistencies concerning the first- and second-order paradigms and between third-order knowledge claims

While it may not be clear, in what I have written thus far I have attempted to remain as politically correct and uncontroversial as possible while still saying what is necessary to convey my point. Unfortunately, this is the point at which I must descend into touchy material. Mike may have a weak belief that he shot himself because of karma or divine punishment. He may believe that he was predestined to shoot himself or that the CIA had implanted a microchip in his ass that made him do so. Any or all of these beliefs may be true. So long as they do not contradict the paradigms established by the first- and second-order knowledge sets or each other, it is justifiable to believe such things22. Those examples are clearly a bit extreme, but it wouldn’t be out of line to say that Mike’s justifications for these claims may be better reasoned and more defensible than many claims that people at large take to be determined matters of fact. We will address that in the next section of this chapter.

Typically, third-order knowledge claims reside in the realm of such things as esoteric sciences, religious discussions, conspiracy theories, and (especially) politics. These realms are not always populated solely by third-order claims, but they do tend towards that in the common man’s mind. Other than by showing a logical inconsistency with the pre-existing paradigms, it is difficult to establish a falsifying element in third-order claims, which is likely part of the reason why the average man tends to vest so much of his mental narrative in the realm of weak beliefs: to the logically illiterate, they have the illusion of being bulletproof.

This is not a dismissal of weak belief. While this type of knowledge is frequently abused, it does have its utility. Sufficient practical reliability and utility can secure third-order concepts against ridicule. Many times throughout history, some person or organization has made a third-order claim which, by way of abductive reasoning or by advances in the rational or technological tools at man’s disposal, has since established itself as second-order knowledge. Abductive reasoning can best be described as an appeal to a compelling explanation for an otherwise unintelligible or gratuitous circumstance. In the words of C.S. Peirce, “The surprising fact, C, is observed. But if A were true, C would be a matter of course. Hence, there is reason to suspect that A is true.”23 This abductive reasoning is easily third-order knowledge, and can even see itself promoted to the second order, given sufficient supporting evidence.
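Purely as a caricature of my own (not Peirce’s), one can picture abduction as ranking candidate explanations by how thoroughly each would render the surprising fact “a matter of course”; the hypotheses and scores below are invented for illustration:

    # Toy abduction: given a surprising fact C, suspect whichever
    # hypothesis would best make C "a matter of course" (made-up scores).
    explanations = {
        "A: the fumbled gun discharged": 0.9,
        "B: a companion fired at that exact angle": 0.1,
    }
    best = max(explanations, key=explanations.get)
    print("there is reason to suspect:", best)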

In the case of scientific and religious discussion, one ought to be diligent in first securing their claims well within the realm of second-order knowledge. Many times, a great deal of cultural upheaval and unnecessary suffering result from people aggressively supporting and advancing weak beliefs in such a way as to make them mandatory for all. Two easy, controversial, opposed, and equally ridiculous examples are those of six-day creationism and Neo-Darwinism. Both stand on weak paradigms and contradict matters of scientific and metaphysical fact which are quite cemented as second-order knowledge. It is acceptable to hold religious or scientific beliefs which are third-order, but only so long as one remembers that they are beholden to the standards established by their preceding paradigms.

Thesis #10: Through the collaboration of certain philosophers (and philosophy’s constituents) throughout history, there have been established a series of compelling arguments and traditions as they apply to the truth and meaning of the universe; one must be willing to adopt certain elements from these traditions, but not without first assessing the validity of and categorizing such elements

All of this chapter thus far likely appears to be a matter of stating the obvious. It is possible that one or another of my readers will claim that this model in no way resembles the actual process of knowing and knowledge. I challenge such a reader to provide a more practical, reliable, and accurate model so that I may adopt it. For now, I will extoll the cash value24 of this model.

An interesting concept introduced by the sophists in the “new atheism” movement is meme theory.25 A grossly oversimplified view of meme theory goes as follows: individuals create and transmit memes betwixt one another much like viruses, only instead of deadly illnesses, memes are ideas held in the mind. The memes that survive are those which provide the most utility or are in some other way given opportunity to spread. This theory was created with the express purpose of attempting to discredit religions as some sort of “meme engineering scheme” in which religious leaders, over the course of centuries and millennia, create and finely tune memes which grant the leaders control over those infected by the memes. If true, this would make religions some sort of mental terrorist organizations.

All sentient creatures, in communicating, are meme engineers. When I form a thought and pass it on to another, I am a meme engineer. When taking ideas in and deciding which to share, which to disregard, and which to modify, I am also participating in meme engineering. All of philosophy, including science and theology both, is party to meme engineering. This does not mean that philosophy is some evil organization creating zombies from a careful application of a trade millennia old, but rather the opposite. While there are bad actors who do attempt to abuse ideology and reason to bend the weak-minded to their devices26, meme engineering is the primary engine of progress.

It is important to note that memes are more like sound bites than full-fledged ideas. Certain images, affectations, or catchphrases are good representations of memes. Where one can easily remember, recite, or recognize a phrase like, “Form follows function,” they may have no concept of its point of origin or even what it means. Only through some form of learning or education does one come to know that it is a principle that is key to the architectural field, and too often forgotten.

Many people, for any number of possible reasons, do not critically assess their belief structures. Our culture has engendered a distinctly emotional and anti-reason attitude. Many insist that “people need to learn to think” when what they really mean is that “they ought to learn to think like me.” The social understanding of the term “critical thought” has been swapped for dogmatic neoliberal belief. Our political, religious, educational, and economic landscape clearly illustrates this attitude. Additionally, a popular activity that has emerged is asking elementary questions concerning these subjects of a random selection of people off the street and sharing their absolutely incoherent answers.

Ultimately, this unwillingness to critically assess one’s beliefs in the manner I have thus far outlined has been so widespread for so long that many cultures of intolerance to reason have developed. It is, quite literally, impossible to speak cogently, intelligently, and civilly with a large swath of the population. Neoliberalism, fundamentalism, scientism, fideism, and any number more “-ism”s have evolved from their origins as mere theories or rubrics for action into monstrous, insular, intolerant, and aggressive codes of dogma which cannot coexist in a world with rational actors capable of critical thought. This does not mean that all who subscribe to “-ism”s are mindless warrior drones ever ready to jihad in the name of science, faith, or civil rights; some are quite intelligent, if mistaken. Likewise, some number of “-ism”s have managed to maintain their proper mindset, application, and scope in an otherwise irrational environment.

If one is careful to examine both their own and others’ belief structures, one can inoculate oneself against bad memes and avoid being misdirected. Nearly every individual is rational to some degree. As a result, even the most unintelligent or mistaken individual tends to utter claims which bear some degree of truth. I hope that, through this work and those to follow, I may be successful in distilling said truths from the many, many ideologies and theories to which I have been exposed and arranging them in such a fashion as to be accurate enough to piss absolutely everyone off. I believe that with proper education or training in logical thought, many will be able to make use of this model of knowing and believing in such a way that, even if they are unsuccessful in forming an accurate worldview, they may at least be able to behave and discuss in a civil and intelligent manner.

As can be inferred from the discussion of this framework, the order in which a particular piece of knowledge falls is contingent on the knower, not the meme (or claim). The argument for the concept establishes its order, not the idea itself. A clear example would be in the realm of ethics, in which one can make a particular claim (murder is wrong) and, depending on one’s method of determining the claim, it can land in any particular category. Kant can claim “Murder is wrong because blah, blah, categorical imperative, blah, blah,” and it would at least qualify as a strong belief. “Murder is wrong,” says the local minister, “because I have a strong abductive argument for the existence of God and the Bible as a moral authority,” and his claim would be, at a minimum, third-order knowledge. When you ask the first person you see at the supermarket (as I have) and get the response, “Murder is wrong because… what are you, a psycho? It just is!” you have just encountered a claim with no knowledge content worth consideration.

One cannot possibly double-check every claim that they encounter, especially in this era of information overload. Categorization of ideas can help. Our current society sees an instinctive application of this solution; when presenting an idea (especially concerning a political issue) to one’s acquaintances, one is frequently faced with a dismissive response coupled with a particular categorization (“Oh, this is just that liberal/republican crap”). This can be done in a conscious and responsible manner. After assessing a claim one encounters, one can categorize the claim based on its premises, its subject matter, and the stances its proponents tend to take on issues other than the claim at hand. In doing this, the next time one encounters the same or related claims, one can expediently determine whether said claims operate in an acceptable and cogent framework. Admittedly, this process can result in one overlooking valuable information due to the manner in which it is presented. For this reason, I find that it would be ideal for one to maintain a stoic agnosticism when overwhelmed and explore one claim at a time, remembering always the larger picture.

The necessity and importance of collaboration cannot be overshadowed by the pitfalls of the human condition. In interacting with others in the philosophical space, one is able to expand their knowledge base, refine and correct mistakes, and increase the number of creative minds working on any given problem. Also, this interaction tends to leave a record. Once upon a time, letters, books, and diaries left a record for later philosophers to engage. In today’s era, those technologies certainly persist, but we have the additional technologies of the internet and all it has to offer. The most notable of these is the permanence and accessibility of data, attributes that will likely increase in scope as cryptography and open-source technologies become a cultural mainstay.

Many ideas which have survived the ravages of human history have been passed down generationally, being improved, corrected, and reassessed with each passing century. Not all, but likely some, of these ideas and worldviews contain a series of compelling arguments and methodological traditions, hence their survival. It would be a missed opportunity if one did not make an earnest attempt to analyze and selectively accept the accurate and useful from these traditions. As long as one’s first-order claims are factual and true, it ultimately doesn’t matter which first-order claims are made; a properly formed reason has the capacity to derive the type of worldview pursued by the philosophers: one that is internally consistent, logically sound, empirically viable and universal, and possessed of ethical agency, utility, and Truth.


1 http://plato.stanford.edu/entries/knowledge-analysis/

2 Russell, “Problems of Philosophy”, Chapter 9

3 Hegel, “Encyclopaedia of the Philosophical Sciences”, p. 10

4 Gun safety protip: don’t do that.

5 Which will be discussed later in this chapter as well

6 Ch. 2: The Embodied Mind

7 Falsifiability is a concept I have shamelessly stolen from Karl Popper and turned to my own uses. I will point the curious reader to his “Conjectures and Refutations”.

8 A desire I share as an anarchist.

9 Karl Popper, “Conjectures and Refutations”, p. 39

10 As well as thesis 95

11 First-order knowledge

12 Popper, p. 35

13 Boethius, “Consolation of Philosophy”, p. 2

14 Descartes, “Meditations on First Philosophy”, Chapter 2

15 An idea that, while appearing to be simple, contains implicit meanings and beliefs within it.

16 Holistic theory of knowledge

17 Also called “strong belief”

18 The branch of philosophy which concerns itself with what our modern culture calls science, namely, a study of the material world

19 Greek: telos, “that for the sake of which”

20 Russell, “Problems of Philosophy”, Chapter 6

21 For a more thorough exploration of both this specific example and the principles which underlie it, I refer the reader to Thomas Kuhn’s “Structure of Scientific Revolutions”.

22 I seriously wonder what paradigms he would have to establish in order to simultaneously believe all four claims. If he has reliable second-order knowledge on which to base his accusations against the CIA, I want to hear it.

23 Groothuis, “Christian Apologetics”, p. 434

24 The practical results of embracing a particular idea

25 Richard Dawkins, “The Selfish Gene”

26 We will call these people “sophists” or “government officials”.

From Value to Voting

Today’s post is a far cry from my original podcast episode (and most popular post to date). As far as I can tell, all of the points I raised on both sides of that dialogue still apply, but I have had about four years to think about it and have some more ideas to throw around.

Earlier this year, I had a surprising revelation which was earth-shattering for me, but would probably come across to my readers as obvious as the revelation I had in my post concerning surprises themselves. That revelation is that not only is value subjective, but value is ordinal, not cardinal. Half of you are probably saying “I don’t even know what that means” and the other half are saying “Well, duh.” Cardinality, with regards to numbers, is essentially numbering: “one, two, three…” Ordinality essentially means that something is ordered; with regards to lists of things, it would mean that rather than using numbers, one would use superlatives and relationships: “this more than that, that more than the other thing, etc.”
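A minimal sketch of the difference, with goods and orderings invented purely for illustration:

    # Cardinal value pretends preferences are numbers, licensing arithmetic:
    cardinal = {"blogging": 9.1, "sleep": 7.2, "food": 3.5}  # fictitious "utils"

    # Ordinal value licenses only a ranking: more-preferred before less-preferred.
    ordinal = ["blogging", "sleep", "food"]  # my ordering at this moment

    # "I prefer blogging to sleep" makes sense under both; "blogging is 1.9
    # utils better than sleep" makes sense only if value is cardinal.
    assert ordinal.index("blogging") < ordinal.index("sleep")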

This is one of those things that usually goes unexamined by just about everyone, myself included. The reason this comes as a surprise to me is a result of my Marxist and Classical roots. One of the pipe-dreams of the communists is the idea of a scientifically-engineered economy; for a prime example of this pipe-dream, one need only look as far as Keynesian (or mainstream) economics and the arch-Keynesian, Paul Krugman. The only way this fiction could appear remotely possible is if one is capable of empirically evaluating individuals’ subjective preferences. Empirical studies require numbers and raw data, which one cannot acquire if value is ordinal, not cardinal. Therefore cardinal value is taken by Marxists as a given, and usually only unconsciously.

If anyone has worked in engineering in any capacity, they can understand that if one changes something even very minor and unobserved in the design of a building, machine, or piece of software, one of two possibilities is likely to occur: either the general design can continue operation unaffected, or the whole system will fail horribly and unexpectedly, resulting in all sorts of confusion and hair-pulling. In this case, I knew intuitively that, as I realized this minor difference, it would impact my philosophical comprehension concerning all sorts of things, including but not limited to my reductivist understanding of reality, the psychology of man, linguistic quirks, and the ethics of voting.

I have been careful in my use of language concerning preferences already: pointing out that certain options were “not preferable” or “least bad” in order not to leave the impression that I would endorse such an option. If I recall correctly, a good example of this quirk is lurking in my post on crime and vice, but I could be mistaken. Upon examination, though, I’m not so sure that such a linguistic turn is appropriate. In reality, with value being subjective and ordinal, there really is no such thing as “not preferable” or even “less bad”; instead, there are simply varying degrees of preference, relative between the options that are available. At this moment, I prefer sleep to food and working on this blog post to sleep. When one looks at action in the context of consequences, I generally prefer working my job and getting paid to sleeping at my desk and getting fired. When one looks at general principles, I prefer verisimilitude to fantasy and moral action to immoral action.

I’ve thus far demonstrated a preference for living over dying, pleasure over pain, quality over quantity, etc. At any given moment, given a particular context, I may act in contradistinction to these general preferences: acting in such a way so as to cause pain in the immediate future for pleasure in the long run, for example. If I were starving to death in a desert and the only prospect for food in any redemptive amount of time were a bowl of cyanide-laced curry, I may choose to act against my preference for remaining alive, given the morbid prospects on all sides. These are just examples, but I think you get the point.

These examples are not examples of a violation of some sort of principle or character trait but are, instead, examples of the subjectivity of human action. Action requires an assessment of the facts at hand, a desire for a particular outcome, and the possibility of that outcome being achieved; it’s a uniquely human activity. As such, even though I have a general preference for such things, the facts on the ground may disallow certain possible outcomes, limiting the opportunities for action to options that are, in the abstract, less preferable than the options usually available.

This, in a way, is informed by my description of ethics. If ethics is the rational investigation of actionable goals, ethics is really the source of a framework by which to determine preferences and actions to be taken to achieve said preferences. It is also informed by my description of responsibilities in my discussion of intellectual property. If one cannot be responsible for the ideas that others concoct from available sense experience, one is not endorsing a particular course of action on a moral basis by expressing a preference by way of action or word. In other words, I would not be endorsing suicide as a moral maxim in the case of a desert with poisoned curry; I would merely be acting on a preference specific to myself and the particular context in which I found myself. Sorry Kant, Aquinas, and other positivists, you’re wrong in this case.

I’m sure most of my readers have played some variation of “would you rather?” In most variations of this game, there is a set of options (usually two) offered with no context. “Would you rather die of exposure to heat or exposure to cold?” or, “Would you rather make out with a movie star or drive a sweet car?” are good examples of such options. Most normal people simply weigh the options based either on immediate circumstances: “Well, right now I’m hot, so it would be a sort of relief and cruel irony all at once to die of cold…” or they weigh the options based on a self-assessment of character, “Well, one set of lips is more or less the same as any other (to me), but I’m never gonna get to drive something like a Formula 1 if I don’t take this chance…” The sophomoric philosophical types (myself included) more often answer with nonsense responses which try to contextualize the options or point out that “Neither option is preferable, so I’d just let whichever one happens first to happen.” I’ve since learned the error of my ways and I’m trying to navigate this new understanding of subjective value.

So, today, I find myself in a convoluted and Kafkaesque context for certain actions and opportunities (or lack thereof) to express my preferences. My readers are likely aware of my default list of complaints, so I don’t need to rehash them today. The reason that list of complaints becomes pertinent today is this: when one is faced with a hyper-inclusive mass-democracy which possesses a monopoly on violence and perceived legitimacy, one is forced to either roll over and take whatever abuse comes one’s way, engage in one-tenth measures to perform damage control, or fight or flee.

There are several popular analogies and limit-cases that anarchists and statists alike appeal to in order to demonstrate some aspect or another of voting. There are also a lot of cases people throw around concerning whether one has an obligation to vote, whether voting is a violation of the NAP, whether a vote is an endorsement of a particular candidate and everything he will do, whether voting is an act of self-defense or an act of legitimizing the crimes of the state, and so much more; it’s an insane rabbit-hole that I’ve been spelunking in for a while, now.

At the end of the day, though, only individuals act, and one doesn’t bear responsibility for the actions of other individuals. As such, the moral and ethical status of voting relies entirely on the nature of communication and preferences. Is voting a means by which one endorses another individual or delegates authority? Or, alternatively, is voting nothing more than a voicing of a preference? If it is voicing a preference, is it voicing a preference in the context of availability, like in a game of “would you rather”, where you have only choice A or choice B? Or is it voicing a preference in the abstract, where you’re offered choice A or B, but you could just say “I’m gonna look for better options”?

For four years, I have been a principled anarchist non-voter. For those four years, my conscience has been clean. This has probably been for a number of reasons, the most important of which is that, given the ontological framework I was working with, voting was both unethical and immoral. This position was best described, in writing, in my initial post on voting. During that time, I still had a lot of Marxist predispositions I hadn’t yet analyzed or even come to be aware of, most notable of which were my expressivism (as opposed to realism) and my unexamined assumption that value is cardinal, not ordinal.

I would love to take my time and sort out all of the answers in as long a timeline as is needed, but this year’s ballot is coming due in a matter of days and I am doing what I can to be as virtuous and as moral as I can be despite my limited access to the truth of the matter. It doesn’t help that previous elections have been presented as a choice between socialism and socialism-lite, while this election, if my understanding is accurate, can easily play out to be the choice between real war versus proxy war, full-blown self-destruction versus merely bad economic choices, and socialists propagating versus socialists killing themselves or moving away. Really, I’d almost sell my soul just to see the Clintons in prison, anyway.

The way I see it right now, if I fill out a ballot and turn it in, all I have done is draw some lines on paper and send that paper to some socialist who’s going to pretend to interpret those lines in accordance with my preferences. If I’m doing so to voice a preference between one candidate or another, or raising versus maintaining taxes, or using the violent apparatus of the state to force people to buy things they don’t want and sell to people they don’t like or to let people mind their own business, I’m simply playing a game of “would you rather” in the context of a world in which there is a violent gang that is going to pretend to be acting on my preferences.

If they actually did act on my preferences in the abstract, they would systematically shut down all operations and auction off assets to make bankruptcy payments to those that own US Federal debt. In more contextualized circumstances, I’d rather use tax dollars to build walls and reduce the flood of welfare-seekers as opposed to subsidizing the importation of the same, and I’d rather use the bully-pulpit of the presidency to promote masculinity, productivity, and competitiveness as opposed to death, destruction, terrorism, and weakness.

Admittedly, this looks more like a personal aesthetic choice to me than a moral one. The current opportunity-cost associated with filling out a ballot, for me, is the 45 minutes it would take to consider the options, google a few judges and local representatives, and drop it off on my way to work. Seeing as how those 45 minutes would probably be spent playing DOOM or watching anime, I think I can spare them. I hope, in the future, to be so productive as to be unable to afford that cost. Then I can go back to being a non-voter because I’m going the ethically superior route for expressing my preferences, à la Assange.

Yes, I know that the rampant voter and election fraud swamps my singular vote and that the electoral college doesn’t give a damn about the popular vote. Yes, I know that democracy is the least legitimate of all the forms of government (of which, all are illegitimate) and that I’ve said in the past that killing voters might not be a violation of the NAP. Yes, I know that the group of individuals calling themselves “the state” will continue to murder and rape at more-or-less the same rate. All this considered, it doesn’t change the fact that the one-tenth measure of simply saying “I’d rather you rape me a little more gently” would be preferable to just rolling over and taking it.


TL;DR: I’ve recently discovered the fact that value is ordinal, not cardinal. Where that would normally mean very little to most people, it has altered my ontology sufficiently so as to make me reconsider a great many things. Most pertinent to this fall is the moral status of voting. I’m writing this blog post to follow up on one of my first posts concerning voting and to kick around some newer considerations I have concerning moral, ethical, and aesthetically appealing action. As always, this is intended to be a setpiece for conversation, not some doctrine to which anyone must hold fast.

Oh, and P.S. I’m going to try and actually make a follow-up post showing exactly how I’m going to vote and to encourage you to do as I do. Spoiler alert: Hillary is evil incarnate and all of the third-party candidates are almost as bad for various reasons.

P.P.S. Don’t forget to support this project on Patreon!

Just Another Friendly Argument #2: Contracts and the NAP

If you couldn’t tell, I came into this conversation with a little bit of a cavalier attitude.  James, however, was very well-prepared and had a number of notes he was going to send me in an email, but we both thought it would be more fun to do an argument episode of the podcast.

We discuss property rights, contracts, and the NAP.  I was already coming into a newer and more nuanced position on contracts since the last conversation James and I had concerning the matter, so this episode was less an argument than it was an interview, but we had a lot of fun and I think listeners can get a lot of good material from it.

 


 

Contracts and the NAP

A while back, I mentioned that I think contracts are bullshit. Some day, I hope to get into a full ontology of contracts, but I doubt many of my readers really have much interest in such things. Instead, I’m going to start a conversation with a few people I know in real life concerning the nuances of the NAP with regards to contracts.

 

Would breach of contract be a violation of the non-aggression principle? What about scheduled payments in the future, non-compete, and nondisclosure agreements?

Given that I think contracts are bullshit, I bet most people would assume that the answer I have is simple and straightforward: “no”. Of course, I can never let something be simple. For the sake of this discussion, we’ll just assume the definition I expect to use for the full post on the ontology of contracts and say, “a contract is merely an external explication of an agreement between two or more parties”. In other words, Bruce and Alfred come to an agreement concerning their affairs, say a nondisclosure agreement. That agreement exists as a relationship between the two but, for the sake of clarity (given the human condition), they decide to write the entire thing down and, content that the written document explicates the agreement sufficiently, sign the document to signify their provisional assent to the agreement and the accuracy of the document written to reflect that agreement. Then Bruce and Alfred put the document somewhere where it can be referenced but not altered by either Bruce or Alfred.
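As a side note of my own (anticipating the mention of cryptography elsewhere on this blog): one modern way to leave a document “referenced but not altered” is a cryptographic digest; the filename below is hypothetical:

    # Minimal sketch: fingerprint the signed agreement so any later
    # alteration is detectable by either party.
    import hashlib

    with open("bruce_alfred_agreement.txt", "rb") as f:  # hypothetical file
        digest = hashlib.sha256(f.read()).hexdigest()

    print(digest)  # store this fingerprint somewhere both parties can see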

That’s a contract, right? It sounds pretty similar to a previous discussion we’ve had. So, let’s say the agreement is that Bruce will pay Alfred for services rendered at a certain rate so long as Alfred does not let anyone know some secret Bruce is trying to keep, either by actively communicating that information to someone or by letting them figure it out on their own through some form of neglect. Would Alfred be aggressing against Bruce by telling the secret? We can certainly agree that doing so would be dishonorable and vicious, but would it be criminal? Another way to ask would be to say, “Can Bruce justifiably kill Alfred if he does so?”

I haven’t gone into that issue in full detail yet, either, but the easy way to put it is that I stand by Cantwell’s philosophy of paperclips; it is theoretically justifiable to shoot someone over stealing a paperclip. Admittedly, the odds of encountering someone who would both steal a paperclip and allow the situation to escalate to the point of lethal force are statistically negligible, and the odds of encountering someone who values the sanctity of one’s ownership of paperclips over the exorbitant cost of a bullet are equally negligible. However, the moral reasoning remains sound, even if the tactical choice would be tolerance.

Why am I talking about lethal force and paperclips when I should be talking about contracts? Well, is Alfred committing a crime against Bruce if he violates the contract? Can Bruce justifiably kill Alfred for doing so? Surely, the cost of the secret is greater than that of a paperclip. Even so, I argue that the secret is of a different category than that of the paperclip. Whereas a paperclip is property, a secret is nothing more than an abstraction of an individual’s ideas. The primary historical role of contracts such as nondisclosure agreements is an attempt to use the law to transmute mental things into material things, which can then be treated as property. So, even though Alfred may be dishonorable and breach his agreement with Bruce, he isn’t “stealing” anything from him.

What recourse would Bruce have in such a circumstance? Under the legal fictions currently in place, contracts are largely treated as laws are: if one violates a contract and then continues to refuse to play by the rules of the contract concerning breach of contract, eventually the issue escalates to an encounter with law enforcement and, if the dishonorable man still refuses to comply, he will be killed by law enforcement. Because of this, the current state of contract law is that every contract follows the formula “We agree to do these things. If we don’t do these things, someone’s gonna fucking die.” Just like a law.

The same is the case if Bruce does not pay Alfred for his services, just for the sake of clarification.

I am obviously not impressed with this formula. As such, I have been exploring contract theories and trying to figure out the exact relationship between the ontology of contracts and the nature of the NAP. Thus far, I have found two possible answers to the question above, and they are mutually exclusive. As such, I’m presenting this post as a conversation-starter (as is the custom at this point).

Option #1: Contracts are 100% bullshit. In this case, the reality of the situation is straightforward: caveat emptor. If Bruce and Alfred make an agreement that Alfred will do butler stuff and Bruce will pay him at the end of the month and either one fails to do so, it renders the agreement void. If Alfred fails to do butler stuff, Bruce doesn’t have to pay him and if Bruce doesn’t pay Alfred, he doesn’t have to do butler stuff. The reality is that all that exists is the agreement between the two with their honor and social standing at stake.

While this solution is simple, it does have some complications. For example, the agreement is temporal in nature: Alfred spends a month of his life performing a service for Bruce before not receiving payment or, if paid in advance, Bruce pays a month’s salary before not receiving the agreed-upon service. There are a few technologies which can be employed to prevent such instances, but in the words of Sov Tsu: “If you create a technology to solve a moral problem, you didn’t actually solve the problem.” So, instead, I will simply point out the obvious circumstance surrounding contract-violators: if one is living in a society of a reasonable size, there will be little opportunity to violate agreements without destroying one’s reputation and being dishonored or declared an outlaw. These extenuating circumstances are enough to keep a majority of potential frauds at bay, even in our overpopulated cities and towns.

Among the technologies available to increase the effectiveness of social accountability is that of reputation systems (which I generally dislike); one can have an Angie’s List or a Yelp which operates much like a credit score: if one doesn’t have enough honor points, you probably don’t want to get into a contract with them. Another is that of outlaw status; if someone violates fundamental social mores, they can be declared an outlaw by the offended parties, which basically puts them outside of the general functioning of society: if you breach a contract without making proper amends, you are refused service at many businesses and won’t be defended if someone were to try to rob or kill you.

Or, alternatively, we can look to the free (black) markets that have existed outside of normal contract law since forever and see what technologies exist there. The one that comes to mind right away is that of escrow holdings: Bruce puts Alfred’s payment into an escrow account at the start of the month, to be paid out to Alfred after a month of service, and they place a third party in charge of that account. Another free market device is that of word-of-mouth; someone trusted would have to vouch for the trustworthiness of each party. In this case, Thomas, Bruce’s father, vouched for Alfred and so Bruce trusts him (and vice versa).
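A bare-bones sketch of the escrow arrangement described above; the class, names, and amount are my own illustration, not a real service:

    # Toy escrow: Bruce funds it up front; a third party releases payment to
    # Alfred only after the month of service, or refunds Bruce otherwise.
    class Escrow:
        def __init__(self, payer, payee, amount):
            self.payer, self.payee, self.amount = payer, payee, amount
            self.funded = False

        def fund(self):
            self.funded = True  # Bruce deposits at the start of the month

        def release(self, service_rendered):
            if not self.funded:
                raise RuntimeError("escrow was never funded")
            # The third party, not Bruce or Alfred, decides based on the facts.
            return self.payee if service_rendered else self.payer

    account = Escrow("Bruce", "Alfred", 5000)
    account.fund()
    print(account.release(service_rendered=True))  # pays Alfred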

There is opportunity for abuse in this resolution, as with any. Reputation systems can be gamed, are open to corruption, and can become oppressive forms of governance as opposed to useful tools for self-actualization. Public shaming is only as effective as a society is homogeneous, culturally speaking. Escrow services work great for payment plans and such, but do nothing with regards to agreements which do not concern direct exchange of goods. This is why self-empowerment, social cohesion, and populations within the Dunbar number are crucial to a truly prosperous society: the natural market functions of such a society drastically mitigate the harm caused by fraudsters and indolence without resorting to the criminal activities of the state.

Option #2: Contracts have a social function and are therefore not 100% bullshit. In this formulation, contracts have impetus insofar as they can be enforced without violation of the NAP. So, unlike laws, I don’t think one could pretend a contract is valid if it were enforced with the same mechanism (“do X, or we’ll fucking kill you.”). If one agrees to arbitration by a third party and consequences for breach-of-contract as part of the agreement, it is conceivable that polycentric legal systems could manage to serve as a lubricant for commerce in societies, both big and small.

This polycentric system of agreed-upon contractual obligations (and punishments) and arbitrators is certainly preferable to the monopolized and criminal system currently in place throughout the developed world. Between the competitive nature of the market for “justice” and the voluntary nature of contracts (in theory, at least), this system would likely produce something resembling courts which maintain a reasonably high level of satisfaction with legal arbitration. Given the versatility of anarcho-capitalist theory concerning polycentric law, I imagine that such competition would demonstrate the forms of contract theory which produce the most utility over time, independent of their truth-value, of course. If I were to venture a guess as to what that would look like, the theories of Stephan Kinsella will likely produce the most utility as well as most closely reflect the facts of the matter, even if he has more faith in contracts than I do.

There are two problems I see with this position, though. First, the issue of honor still plays an inescapable role in this dilemma: a dishonorable person who will not honor an agreement will be equally unlikely to honor the specific clause concerning retribution or the presumed authority of the courts. Ultimately, then, we find ourselves in the initial situation presented in option #1. Second, I believe the harm-reduction and forward-thinking contained in standard financial and interpersonal practices far outperform any sort of contract and arbitration service layered on top of them. What I mean is that putting lenders in charge of their own interest rates and application processes will enable market functions to sort the honorable from the dishonorable, as does actually knowing one’s customers, etc.

This obviously didn’t cover all the nuances of contracts and such, but it is a starting place for a discussion. I need to do more research into the old tort systems and read more Stephan Kinsella. In the meanwhile, I propose that contracts are bullshit and one ought to strive to be honorable and surround oneself with honorable people. It couldn’t hurt to keep records of one’s agreements and obligations, though. Really, the approach one ought to take to contracts is the same as one ought to take to any service that is currently monopolized by government: ask, “Can this service be provided without the intrinsic threat of murder, AND does this service have any necessity in a free society?”

TL;DR: Contracts are bullshit, but they are still an important area of discussion to AnCaps and normies alike. Insofar as that discussion applies to my project, I guess I’m halfway obligated to write about them. Contracts really seem to exist simply as an external point of reference for agreements, which are relational between two or more parties. As such, whether or not violating a contract or agreement (fraud, essentially) is a violation of the NAP is what is really at the heart of the discussion. I argue that most, if not all, cases of fraud are not actually violations of the NAP and that the old adage of “caveat emptor” ought to be kept in mind. As such, the initiation of force against a fraudster is, itself, a violation of the NAP. However, all the finer points of contract theory are currently beyond my expertise and, from what I know of Stephan Kinsella, he would be the guy to read for ideas.


Expression Theory vs Realism

About a month ago, I came to a realization concerning something that has been confusing me for years. As is typically the case, I have no easy way to express it in terms most people can understand. In the easy, precise technical terminology I use, the barrier to communication between me and most “normal” people about crime and punishment is that I’ve been assuming people are reductive realists when they are, in fact, expressivists.

According to expression theory, feelings and ideas can exist independent of the mind experiencing them, which allows for direct communication of ideas and feelings. One widely known application of expression theory is Leo Tolstoy’s expression theory of art, which I will use as a paradigm example of expression theory at large. Tolstoy argues that the definitive quality of art is the communication of feeling from the artist to the audience. The ontology (and/or metaphysics) built around such a definition is the concept that an idea or feeling can exist independent of an agent which could be called a knower or a feeler.

In order for such an ontology to hold, it would require an even more intense version of substance dualism/pluralism than that to which I subscribe. Where I have argued that there must be a substance independent of the material substance which constitutes one’s brain (or anything else that physics looks at), which could be called a “mental substance”, that argument is limited to the existence of a “knowing/thinking thing” which is not fully explained by the interaction of matter with itself. An expressivist must allow for the existence of such a mental substance, but must also argue that the thing known is, itself, made up of that substance, independent of any mind that may be knowing it. In essence, to an expressivist, the idea of expressivism is somehow currently contained in this set of black and white pixels on your computer screen.

In such a case, a painting or song could be imbued with the artist’s sadness or joy. When one hears the Haffner Symphony and feels happiness, that’s because Mozart imbued his sheet music with his happiness, and every copy of that sheet music made and, later, the orchestra’s playing from that sheet music have all been imbued with that happiness secondhand. So when one listens to said symphony and feels happy, it’s actually Mozart’s happiness infecting the listener. (Example shamelessly lifted from Douglas Groothuis.) I promise I tried to make that example sound as charitable as I could…

What this means, in the case of “crime and punishment”, is that an expressivist, on some level, believes that a criminal is expressing “crime” by committing said crime. They are imbuing the scene of the crime with “criminality” which may infect the minds of others (causing them to commit crimes, as well). “Society’s” response to that crime, then, is itself an expression, imbuing “Society’s” environment with whatever that response communicates, which may likewise infect others.

It took me far too long to realize that this is what people mean when they say such absurd things as “We can’t rehabilitate drug offenders with medical science, we must lock them in rape cages… we don’t want to send the wrong message!” What such an individual believes is that a criminal is infected with an idea of criminality which could have been transmitted to them by another individual, by coming into contact with a thing imbued with “criminality”, or by a criminal idea that simply happened to float by at that given moment. I’m not certain whether the belief is that the criminal lacks any free will, such that they are merely a slave to whichever ideas and feelings they are exposed to, or whether one would have free will, but only insofar as one could fight off the infection of an idea or feeling in the same way one fights off a cold or flu virus… the literature is murky in that regard.

If I had to venture a guess, though, I would point out that Tolstoy is a proto-Marxist and sympathetic to anarcho-communism. Because of this, I think his cultural influences would lead him to argue that individuals only have free will insofar as they can overcome the influence of capitalist marketing and join something akin to the communist revolution, which would mean that most people are merely slaves to the ideas foisted upon them and only the great men of history can rise above mere servitude. In full disclosure, though, Tolstoy was not a fan of revolution; he was too much a fan of Buddhism for that. For example:

“The anarchists are right in everything: in the rejection of the current state of affairs and in the assertion that under contemporary moral conditions there can be nothing worse than governmental violence. However, they are profoundly mistaken in believing that anarchy can be established through a revolution. Anarchy can only be established by the process of people becoming less and less reliant upon governmental authority and by people becoming more and more ashamed of participating in this authority.”

To get back on subject, though, I am convinced that despite Tolstoy’s positive contributions to philosophy and culture, expression theory is riddled with absurdities which could not be reconciled with any ideology other than a naive platonic idealism: one which claims that the only things that exist are ideas, existing independent of any particular medium which may contain them… that everything which exists is nothing more than a perception of some ideal divine form beyond direct human apprehension. This is, conceivably, self-consistent, but it requires an incredibly complex ontological and metaphysical framework to be constructed around each individual aspect of the human experience, which could more elegantly and directly be explained by simply allowing the material things with which one interacts to be real. Instead of reifying (making real) ideas and feelings, instead of making them exist as non-contingent and independent entities, would it not make more sense to apply Occam’s Razor and ask if ideas and feelings are not merely phenomenological experiences contingent upon the sense-perceptions and brain-states of the experiencer?

A (reductive) realist will restrain their ontology to include only that which must necessarily exist and/or observably exists. To such a realist, ideas and emotions are phenomenological events confined to individual minds, derived from stimuli. Meanwhile, a realist will look at actions, incentives, and outcomes with regard to individual actors, or “communities” by way of statistical aggregate. So a criminal, then, is choosing to commit crime based on whatever phenomenological event is occurring within her own mind, and is expressing nothing. Subsequently, any individual or institution punishing a criminal is not expressing anything, but merely attempting to accomplish an end by physical means (reform, punishment, removal from the general population, sending a market signal that “crime doesn’t pay”…). What little explanatory power the expressivists have concerning crime or social stigma being “contagious” can better be accounted for by what amounts to “market signals”.

For clarification, what a signal amounts to is a discrete physical phenomenon (such as black and white pixels on the screen) which lends itself to individuals observing and constructing an idea from that stimulus, which then informs their action (such as decoding the sentence constructed from these pixels and understanding, to some degree, the idea in my head). In the case of market signals, prior events provide stimuli for constructing ideas which inform market functions such as risk-assessment, cost-benefit analysis, and value acquisition.
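
To make that mechanism concrete, here is a toy sketch of a lender decoding a “market signal”; this illustration is mine, not anything from the post, and every name and threshold in it is hypothetical:

    # Toy sketch: a "market signal" as nothing more than prior events informing
    # risk-assessment. All names and thresholds here are hypothetical.

    def repayment_rate(prior_events):
        """Aggregate prior observations (True = repaid, False = defaulted)."""
        if not prior_events:
            return None  # no signal yet: nothing to decode
        return sum(prior_events) / len(prior_events)

    def decide_loan(prior_events, risk_tolerance=0.8):
        """The lender 'decodes' the signal and acts on the idea it informs."""
        rate = repayment_rate(prior_events)
        if rate is None:
            return "require collateral"  # unknown counterparty: caveat emptor
        return "lend" if rate >= risk_tolerance else "decline"

    # A borrower with a poor record is weeded out by ordinary market function,
    # with no arbitrator or threat of force involved.
    print(decide_loan([True, True, False, True]))  # 0.75 < 0.8 -> "decline"
    print(decide_loan([True] * 9 + [False]))       # 0.90 >= 0.8 -> "lend"

The point of the sketch is only that the signal is ordinary physical data plus an interpreting mind; nothing is “expressed” or “imbued” anywhere in the process.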

I didn’t really set out on this blog post to argue with Tolstoy and his unknowing inheritors, though. I am writing this post to bring attention to a language barrier I’ve discovered between myself and a great number of people. I believe this language barrier is derived from a distinctly separate and unaddressed ontology. This post is really just a call for feedback so that I can come to a better understanding of how my audience sees the world and to increase the dialogue between me and my readers. This issue, I think, is surprisingly central to all of the disagreements between statists and anarchists as well as between AnComs and AnCaps, and I therefore feel I need to come to a better understanding of all sides of the issue… if for no other reason than to secure my paradigmatic awareness for future discussions.

TL;DR: This post is short enough that I don’t think it really needs a “too long; didn’t read” section. Instead, I want to take this portion of the post to express my gratitude to those of you readers who have provided support for this project by way of donations, getting things from my Amazon wish list, using my affiliate links, and sharing this content on social media. I also want to give the readers/listeners an update. A few of you have noticed that the site has been getting a little less attention of late, with a lack of podcast episodes and the timing of blog post releases. I’m honored that you noticed and felt that you should let me know. I recently switched jobs, moving from a low-level grunt to management. My new workload and schedule preclude writing blog posts while at work, and we are still trying to get family life back into a regimen we can survive with the new schedule. Hopefully, by the end of this month we will be operating at full capacity again. Thank you.


Thus Spake Zarathustra

This weekend, I hosted one of my philosophy club sessions for the summer. The discussion was on Nietzsche’s magnum opus: Thus Spake Zarathustra. A reader of this blog was recently kind enough to purchase a copy of the text for me from my wishlist, and I couldn’t let that act of charity go unpunished. Today, I am doing a “teaching from the text” post.

For a bit of context: Friedrich Wilhelm Nietzsche was born in the mid-19th century. He was a very clever Prussian/German child, quickly grasping academics and rising through the social and official ranks in university. His main focus was as a cultural critic and philologist, both of which naturally lend themselves to philosophical activity as well. When he was relatively young, he started to suffer from a mental illness which has never been fully diagnosed. Many believe it to have been syphilis, but there is considerable reason to doubt that diagnosis.

During his time as a productive member of the continental philosophical culture, the western world was reveling in its own greatness. Between the ongoing rise of industry, the new form of nationalism that was emerging, and the social fallout from the Enlightenment era, mainstream culture was very self-satisfied. Nietzsche, however, was largely unimpressed. He found the post-Enlightenment culture to be hypocritical and could sense the looming prospect of the century of total war to come.

His philosophical writings themselves, due to the political climate in his later life and after his death, in conjunction with his continental style of writing, generally serve as a sort of ink-blot test for his readers; a punky young college freshman will read “Beyond Good and Evil” and immediately become a nihilist, whereas a more well-read individual may read “The Gay Science” and hold a deep discussion with someone over the nature of science and the indispensable role of levity and partying in one’s pursuit of virtue. Many who have been educated in modern American colleges and universities, when they read “Thus Spake Zarathustra”, see Nazi propaganda and elitist nonsense…

Fortunately, enough scholarship has been done on the original writings of Nietzsche and the later editions and translations that one can actually see beyond the veil of history and get to know the actual philosophy of the man… with a little bit of effort. An important historical fact that puts things into context is that Nietzsche is the Aristotle to Schopenhauer’s Plato. Arthur Schopenhauer was a German idealist from the early 19th century who had a very distinct philosophy. He drew heavily on the material available from eastern philosophy, most especially Buddhism, and mixed it with German Idealism as well as his own curmudgeonly intuitions. The most famous of his works, and the basis of his ontology, is “The World as Will and Representation”; spanning three volumes, Schopenhauer builds a world that consists of a creative force which simply wells up out of nothingness: namely, will.

Nietzsche discovered philosophy through reading Schopenhauer, but he spent a good portion of his time arguing against things that Schopenhauer had said, most especially the idea of the universe as will. Nietzsche argued that will alone is inert, that it must be coupled with power (the ability to execute one’s will), and that the world would therefore have to be, at the least, the “will to power”. This will to power is at the heart of the rest of Nietzsche’s project, and it’s one that I, myself, am sympathetic to.

Thus Spake Zarathustra is a sort of novel wherein the main character preaches Nietzsche’s worldview to the masses of modernists in the German countryside, to varying effect. Zarathustra is, at the same time, both an avatar for the author and a manifestation of his philosophy. The general plot is fairly straightforward: Zarathustra lives alone on top of a mountain, generally being awesome and waiting for the coming of the Übermensch (Superman). He then decides to go down from the mountain to preach to the peasants of Germany. While down there, he preaches “the truth” and some people start following him, but most would rather mock and avoid him. So Zarathustra takes on a few disciples, leaving “the rabble” to their own devices. After a while, he can’t stand being around lesser men anymore and he returns to the mountaintop.

A while later, he has a vision which tells him that people are perverting and ruining his teachings, so Zarathustra has to condescend again to the rabble and try to sort things out. He makes a couple more friends and preaches some more, sings some songs, goes to some parties, laments that he is so awesome he can’t help it, and bemoans how he can’t help but bestow his awesomeness on everyone else… Then he starts showing everyone how to really have a good time and cut loose. All in all, for all of Zarathustra’s solemnity when dealing with the rabble and the false prophets (that is, all of them) of the modern world, his exhortation is always to be joyous and celebratory, because that is all that makes life worthwhile in a world wherein God is dead for grief of his love of man.

Despite how reductionist and flippant I am in describing the plot of the story, there is a lot of great fodder for discussion and examination in the text. Zarathustra’s words and actions are pointed and weighty; he brings to bear a striking series of accusations against the hypocrisy of post-enlightenment culture, the solemnity with which people address the absurd (in a pre-existentialist way), and the futility of attempting to enjoy a life divorced from one’s own personal virtue. Zarathustra takes social conventions, such as friendship, and professes that everyone has the idea backwards. Where modern culture would insist that a friend is one who will support you in every endeavor and turn against those who do not, Zarathustra reminds his audience that one can only become greater than one is by being made aware of one’s faults and weaknesses. One can only achieve power by keeping those close who would remind one of one’s errors and shortcomings. A true friendship, one rooted in will to power, is one wherein a friend desires greatness for his friends, even at his own expense. For example: “If a friend doeth thee wrong, then say: ‘I forgive thee what thou hast done unto me; that thou hast done it to thyself, however, I could not forgive that!’”, because in doing ill to one’s friend, one is behaving viciously and injuring oneself.

Ideas like solidarity in the state are also turned upside-down.

“Somewhere there are still peoples and herds, but not with us, my brethren: here there are states.
A state? What is that? Well! open now your ears unto me, for now will I say unto you my word concerning the death of peoples.
A state, is called the coldest of all cold monsters. Coldly lieth it also; and this lie creepeth from its mouth: “I, the state, am the people.”
It is a lie! Creators were they who created peoples, and hung a faith and a love over them: thus they served life.
Destroyers, are they who lay snares for many, and call it the state: they hang a sword and a hundred cravings over them.
Where there is still a people, there the state is not understood, but hated as the evil eye, and as sin against laws and customs… This sign I give unto you: every people speaketh its language of good and evil: this its neighbour understandeth not. Its language hath it devised for itself in laws and customs.
But the state lieth in all languages of good and evil; and whatever it saith it lieth; and whatever it hath it hath stolen.
False is everything in it; with stolen teeth it biteth, the biting one. False are even its bowels… Everything will it give you, if ye worship it, the new idol: thus it purchaseth the lustre of your virtue, and the glance of your proud eyes… The state, I call it, where all are poison-drinkers, the good and the bad: the state, where all lose themselves, the good and the bad: the state, where the slow suicide of all—is called “life.”…
Do go out of the way of the bad odour! Withdraw from the idolatry of the superfluous!
Do go out of the way of the bad odour! Withdraw from the steam of these human sacrifices!
Open still remaineth the earth for great souls. Empty are still many sites for lone ones and twain ones, around which floateth the odour of tranquil seas.
Open still remaineth a free life for great souls. Verily, he who possesseth little is so much the less possessed: blessed be moderate poverty!
There, where the state ceaseth—there only commenceth the man who is not superfluous: there commenceth the song of the necessary ones, the single and irreplaceable melody.
There, where the state ceaseth—pray look thither, my brethren! Do ye not see it, the rainbow and the bridges of the Superman?”

He has harsher words, still, for those he calls “tarantulas”.

“Welcome, tarantula! Black on thy back is thy triangle and symbol; and I know also what is in thy soul…
Revenge is in thy soul: wherever thou bitest, there ariseth black scab; with revenge, thy poison maketh the soul giddy!
Thus do I speak unto you in parable, ye who make the soul giddy, ye preachers of equality! Tarantulas are ye unto me, and secretly revengeful ones!
Therefore do I tear at your web, that your rage may lure you out of your den of lies, and that your revenge may leap forth from behind your word “justice.”
Because, for man to be redeemed from revenge—that is for me the bridge to the highest hope, and a rainbow after long storms…
Ye preachers of equality, the tyrant-frenzy of impotence crieth thus in you for “equality”: your most secret tyrant-longings disguise themselves thus in virtue-words!
But thus do I counsel you, my friends: distrust all in whom the impulse to punish is powerful!
They are people of bad race and lineage; out of their countenances peer the hangman and the sleuth-hound.
And when they call themselves “the good and just,” forget not, that for them to be Pharisees, nothing is lacking but—power!
My friends, I will not be mixed up and confounded with others.
There are those who preach my doctrine of life, and are at the same time preachers of equality, and tarantulas…
With these preachers of equality will I not be mixed up and confounded. For thus speaketh justice unto me: “Men are not equal.”
And neither shall they become so! What would be my love to the Superman, if I spake otherwise?”

If you couldn’t tell by the couple of selections I chose to share with you, there are at least a few things Nietzsche has to say to which I am very sympathetic. I used to bristle when people would call him an elitist, because that word was a pejorative in my Marxist vocabulary. As time has gone on, though, I’ve learned that, in fact, both Nietzsche and I are elitists of a sort: those who can be great ought to do so, and not everyone has that ability or will bother to follow through with such an exercise. In that way, both Zarathustra and I share a certain attitude: “Not to the people is Zarathustra to speak, but to companions!… I am not to be a herdsman or a grave-digger. Not any more will I discourse unto the people; for the last time have I spoken to the dead.” This wasn’t always my attitude and, reading Nietzsche’s works in chronological order, I get the feeling it wasn’t his original attitude, either.

There is a lot in Zarathustra that certainly isn’t as truthful or as poignant as the other parts… his discourses on the nature of women and on religious sentiments themselves somewhat miss the mark, but they still ought to be read, so as to better inform one’s position nonetheless. There are a fair number of people one will run into in the course of daily life, at work, school, the grocery store parking lot, etc. who are unwitting disciples of halfwit Nietzschean professors. So, when someone cuts you off in the parking lot screaming racist obscenities before getting out of his car and sauntering up to the water-cooler next to your cubicle and going on and on about how women’s sole virtue is their love of men, you can understand, “Oh, this guy must have had a Nietzschean professor back in college and he never grew out of being a frat boy…”, and you can decide whether to lay some real Nietzsche on him or to smugly await the Superman with the knowledge that rabble like your coworker will soon be obsolete.

Some translations of the work are better than others, as well. There are some so far removed from the original German as to render a totally different ideology from that originally espoused in the text. That is why my favorite edition of the text is the JiaHu Books German/English edition; the translation is pretty solid and the original German is on full display, so one can double-check the translators’ work if one so desires.

This work only just barely missed my Suggested Reading Lists, but it is an excellent companion to either of the Nietzsche works that did make the lists, as it explores them in a more poetic and novel way.



Chapter 2: The Embodied Mind

Thesis #5: One’s experience is phenomenological in nature and derived from the senses; the development of the mind and our understanding of the universe are therefore derived from sense experience and the interpretation of said experience

In the previous chapter1, I established that all knowledge is experiential. Even matters of “divine revelation”, ESP, or any other alleged spontaneous acquisitions of knowledge are still experiential in nature, as one is still experiencing such an event within one’s own mind, regardless of whether or not it is actually happening in a manner consistent with how one perceives it taking place. When we first addressed this state of affairs, it was in the context of one being solely informed by experience. In this instance, we are approaching it from an incrementally more nuanced position: that one’s experience is phenomenological in nature and derived from the senses.

Man has an inborn faculty of intellect. The intellect is a complex and frustratingly mysterious thing; I will describe it in as concrete and simple terms as possible. In the words of medieval philosophers, the intellect is the capacity to which matters of fact make themselves apparent, “like a landscape to the eye”2. This is the primary faculty by which one experiences the world, providing man with direct apprehension of the things around him. Essentially, intellect is the seed containing the mind, the ratio3 within man. This is seen in an infant as he begins to focus on various elements within his environment and as he gathers rudimentary sense data.

With sufficient time and experience, the capacity (seed) of intellectus can grow into the faculty of reason. Again, quoting the medieval scholars: “Ratio is the power of discursive, logical thought, of examination, of definition and drawing conclusions.”4 A more modern and specific definition would be “possessing the qualities of, or capacities for, self-awareness and a fundamental potential to learn and think logically”. The manner in which the intellect receives those experiences is sensational5; an infant may have a very basic set of instinctual “programs” by which they “know” how to feed, breathe, cry, and squirm, but they do not even have control over the movements of their own limbs, let alone any cognitive faculties. The intellect allows the infant to begin gaining control of their movements through the repeated cycle of stimulus and response in each of its limbs. Through prolonged exposure to patterns in environmental stimuli, the infant begins to expect the patterns to continue in the same manner: the first fledgling sparks of reason.

Before continuing to analyze the relationship between intellect and reason, it would be prudent to expand on thesis number two: “Reason dictates one’s understanding of the universe.” Reason, or the ratio we defined above, is a uniquely human experience. As mentioned previously, animal “experience” is nothing more than a perpetual cycle of stimulus and response. Conversely, humans have the experience of experiencing; or rather, the intellect serves as an intermediary step between stimulus and response. The intellect, as it develops into reason, begins to identify apparent patterns and categories. This pattern recognition is not infallible6, but it is the basis of all human experience. While reluctant to abandon his skepticism, Bertrand Russell expresses a very similar and more detailed opinion to this in his Problems of Philosophy7. His term for this process, which I will borrow, is “induction”.

Following induction, both Russell and I approach “deduction”. Deductive reasoning, also called syllogistic reasoning, is a matter of logical calculation. Through induction, one can begin to assume patterns, and can even express them syllogistically. “If the stove top is red, it is hot” is a simple premise which can be derived from simple experiences. Upon witnessing that the stove top is in fact red, one can assert, “If the stove is red, it is hot. The stove is red. Therefore, it is hot.” This is an assertion derived from a combination of experience and reason. However, no degree of experience can account for the initial element of reason that dictates that such a syllogism is possible, let alone reliable. Modern research into early human development, though, has discovered strong indications of innate mathematical reasoning within infants. I assert that these mathematical operations are an example of that very intellectus mentioned earlier. Ultimately, mathematics is an expression of logic in its purest form8, meaning that logic is something more than a mere brute fact9: it is a faculty inherent to man.
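
For those who prefer their syllogisms in symbols, the stove-top example can be rendered in standard propositional notation (my rendering, not Russell’s):

    % The stove-top syllogism as modus ponens, with
    % R = "the stove top is red" and H = "the stove top is hot".
    \[
      \frac{R \rightarrow H \qquad R}{\therefore\ H}
    \]
    % Induction supplies the premise R -> H from repeated experience;
    % only deduction licenses the step from the two premises to H.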

Deduction can express hypotheses beyond the realm of immediate experience. While our first example was purely experiential and practical, a brief survey of the philosophical tradition will show that deductive reasoning can be (and is) applied to every imaginable circumstance. The accuracy of these deductions is wholly contingent on two virtues: the accuracy of the premises as they relate to reality, and adherence to what Russell calls the “Laws of Thought”10. They are as follows:

  • The law of identity: ‘Whatever is, is.’
  • The law of contradiction: ‘Nothing can both be and not be.’
  • The law of excluded middle: ‘Everything must either be or not be.’

In other words, the “Laws of Thought” are another manner of describing the principle of non-contradiction. The best formulation I have seen of the PNC to date is, “The logical principle that something cannot both be and not be in the same mode at the same time.”
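
Rendered in the same propositional shorthand (again, my notation rather than Russell’s own), the three laws come out as follows, with the second being the PNC itself:

    % Russell's three laws, symbolized for an arbitrary proposition P:
    \[ P \rightarrow P \qquad \text{(identity: whatever is, is)} \]
    \[ \neg(P \land \neg P) \qquad \text{(contradiction: nothing can both be and not be)} \]
    \[ P \lor \neg P \qquad \text{(excluded middle: everything must either be or not be)} \]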

We are fortunate to be inherently conditioned such that these principles are immediately apparent to us, for they are, themselves, unprovable. Our experiences can serve to reinforce these principles and, through their applications, prove their utility even if one cannot prove them in themselves. Through experiences of particular instances, we can come to a greater understanding of the nuances of such a simple and self-apparent set of principles. All the laws of reason, which will be explained and elaborated as they become pertinent in this work, are simply expressions of the particular nuances of the PNC.

The more abstract or complex lines of deductive and inductive thought are no doubt somewhat removed from immediate experience, either by way of their conceptual nature setting them apart from the physical world or by speaking of physical events that are not within proximate vicinity of the one deducing. This does not make the reasoning any less valid. For example, one can engage in mathematical exercises concerning triangles without referring to any actually existing instance of a triangle. Another instance would be a deduction that determines all kangaroos are mammals, even if one has never seen one before (and isn’t likely to… how many people go to Australia, really?). Both are valid regardless of whether the one doing the deducing is immediately, experientially present to the subject matter or not.

These rules of logic and their applications obtain in such a manner as to render relativism (in all but its softest forms) impossible. Something is said to “obtain” if it is necessarily true in every instance, such as triangles having three sides or the PNC. I say that these obtain in such a way as to render relativism impossible because relativism is, fundamentally, a denial of objective truth. Extreme relativism denies all objective truths, whereas softer forms deny only particular categories of truth, such as moral truths. This denial necessarily results in violations or denials of the PNC. Any instance in which one says, “there is no objective truth,” is an instance in which one is categorically denying categorical statements. This is an age-old objection to relativist thinking11 which has simply been hand-waved away by the proponents of relativism. Admittedly, there are more refined and delicate relativist arguments, but they all fall prey to this fallacy at some point or another.
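
That self-refutation can be made explicit with a minimal formalization; this sketch is my own, assuming only the PNC and a predicate O(p) read as “p is objectively true”:

    % The extreme relativist thesis, symbolized:
    \[ R := \forall p\, \neg O(p) \]
    % Asserting R as a categorical truth is claiming O(R); but instantiating
    % p := R within R itself yields \neg O(R). Together:
    \[ O(R) \land \neg O(R) \]
    % which violates the PNC; R refutes itself as soon as it is asserted.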

Thesis #6: The mind is an embodied entity; all language and imagining is clearly based in bodily experience and all imaginable entities outside the immediate physical world are conceptualized in a sensational metaphor.

This experiential and embodied basis of our knowledge is clearly evident in our language. Every aspect of our imagination is physical in nature. It is fitting that, when discussing material circumstances, one should use material language. For instance, “that dog is sitting under the tree.” That statement can be a literal expression of a matter of fact. However, while it may feel intuitive, the same material language is used to express abstract concepts. For instance, “The prospect of war weighs heavy on my heart.” In this case, “the prospect of war” is immaterial and, as a result, possesses no weight. Additionally, one’s heart is unaffected by some immaterial state of affairs external to the person in whose chest it resides. I do not mean to claim that the above statement is devoid of meaning or veracity, but wish to illustrate the metaphorical manner in which we express immaterial concepts. While I lack the space and attention span to enumerate the various metaphorical uses of material language in the style of Wittgenstein, I contend that there is no instance of using language in a literal and comprehensible manner when expressing an immaterial state of affairs.

Upon brief inspection, I see three common uses of embodied language to reference phenomena metaphorically. Firstly, it is used with regard to invisible material things, many of which we see the effects of but never the things themselves. Secondly, it is used with regard to metaphysical or spiritual12 entities. Thirdly, we employ embodied language with regard to ontological, or divine, concepts13. It would be prudent to at least exemplify each of these categories and the relationships between them.

Many will object to my asserting that we use embodied language metaphorically with regard to material objects. “Of course we use material language when speaking of material things!” they say, “why would it be a metaphorical use?” With some invisible material things, like most gasses or electrical currents, metaphorical language is unnecessary; it is literally the case that air can push, pull, heat, or cool things, as can electric currents14 and the like. However, in the case of more esoteric fields such as particle physics or quantum mechanics, we do use physical language metaphorically. A couple of easy examples would be the “color” of quarks or the “spin” of particles. Quarks are too small to be directly perceived by way of light and color, but the choice of “colors” provides certain useful conceptual assumptions based on our knowledge of actual colors. The same type of metaphor applies to the “spin” of particles, providing those who study and discuss these things with applicable and comprehensible language for doing so, even if the terms are literally meaningless in such a context.

Admittedly, I have not yet allowed metaphysical or ontological existants15 into this framework, but that doesn’t disallow this analysis of language from entering our discussion. Even if such immaterial things do not actually exist, we still speak of them, and the manner in which we speak of them is indicative of the point I am presently making. Metaphysical entities, such as the principles of logic discussed earlier or the fundamental laws of physics, are frequently discussed in the language of math or logic; however, they are also frequently expressed in physical language in order to make them practically useful. In the case of a particle’s “spin”, quantum particles travel along vectors as if they have angular momentum, like a spinning object, despite not necessarily spinning. Additionally, non-physical narratives, whether fictional or real, such as dreams, out-of-body experiences, revelations, ghosts, angels, etc., are expressed in physical metaphor. An easy example would be the common narrative which occurs in reports of out-of-body experiences: “I was outside my body, kind of floating above it. I was there, but I wasn’t; I could see everything, but not like one does with their eyes. I was also in the next room over and still inside my body at the same time. I could see a long, dark tunnel, but it wasn’t really there, with a light at the end.” The only intelligible manner in which we embodied creatures can describe a circumstance which was clearly non-spatial and non-bodily is by use of spatial and visual language in an approximate metaphor.

Before we discuss ontological language, we must first define “ontology”. Ontology, as it is frequently used, is typically assumed to mean “the philosophy of hierarchy” or “the study of existants”. In my usage, ontology is best defined as “the philosophy of that which precedes physics and metaphysics”. This means that there are ontological commitments inherent within the fields of physics and metaphysics which, themselves, require investigation. These commitments typically involve the status of things as either existing or not, and the relationships and nature of substances and logical principles.

As one can assume from the above definition, ontological language tends to be complex and at times ambiguous. This area of study tends to involve purely mathematical concepts, the nature of eternity/infinity, discussions pertaining to God, and ideas16. Not one of the things on that list is a material or sensory thing. Typically, in the case of God, anthropomorphic language has become so prevalent as to make caricatures of the actual concepts themselves (i.e. God is a bearded, angry old man in the sky who smites people for petty acts of impoliteness). Not one of those terms is easily applicable to ontology, let alone accurate metaphorical language for ontological concepts. However, this gross abuse of language does not detract from the fact that the only way a human can grasp such concepts as infinity, especially when attempting to avoid instantiating an infinite17, is through metaphorical use of embodied language.

Additionally, we, as (apparently) willing creatures, tend to use mindful language to express the behavior of non-willing and/or necessary beings. While we may have refined our language in physics since Empedocles (“Things fall because like things desire to be proximate to like things”18), we certainly still fall into this trap. Again, it is most common in the more esoteric areas of physics and in ontological discussions, such as particles “seeking out each other” or being “entangled” despite lacking a will or an actual entangling medium. That doesn’t change the fact that we use a language limited to embodied experience as a metaphor for more advanced concepts.

There is yet another likely mistake one can make in reading this chapter: assuming that I am conflating the mind with the body (or the brain). I will not make a case for materialism, idealism, or substance dualism here. Instead, I intend to explore the manner in which we express such concepts linguistically.

One of the most interesting cases of language operating in an unexpected manner is with regard to the self. For example, commonly used phrases are “my body”, “my mind”, “my soul”, and “my self”. We speak of certain aspects of ourselves in the same manner we would speak of our property: “my car”, “my robot slave”, etc. This linguistic phenomenon implies two things. Firstly, it implies that one’s mind, body, soul, self, and property are each distinct entities which are not reducible to one another. Secondly, it implies that what exactly an individual is is either an amalgamation of the above listed possessives or something radically distinct from them.

We will address the question of what exact relationship the mind and body have, whether they are the same thing, one reduced to the other, or two distinct and intermarried elements, later in this book19. The additional question of what, precisely, the individual is will be addressed briefly, but it will require far more space and time to reach a meaningful answer than I have available in this work. It will also require more intermediary steps than the mere twenty needed to discuss the mind-body problem. For now, it will suffice merely to express the manner in which our mind is embodied, practically speaking.

For fear of being accused of making the same mistake that Nietzsche made,20 I feel compelled to leave a disclaimer at the end of this chapter. I recognize that, being a young American, my sole focus in this chapter has been the way an individual thinks and speaks in American English. However, I believe, based on my limited grasp of Latin and Japanese as well as my exposure to Hebrew, Greek, and Spanish, that this argument still obtains in some manner or another in every human language, with some slight modifications.

95 Theses

1 Ch. 1, “Epistemic Assumptions”

2 Pieper, pg. 139

3 Reason

4 Pieper, pg. 139; also, Thesis #22

5 Pertaining to the senses

6 A state of epistemic affairs in which one is incapable of being wrong

7 Russell, Problems of Philosophy, chapter 6

8 citation

9 Something that simply exists without the possibility of explanation

10 Russell, Problems of Philosophy, chapter 7

11 The discussion between Thrasymachus and Socrates in Plato’s Republic (Book One, Chapter One) is an easy example.

12 I am not equating the two, mind you

13 In this case, the two may at times be equated

14 Also, electromagnetism

15 Simply defined, “a thing which exists”

16 Not to mention imaginary things like unicorns and free national healthcare

17 For an introductory example of this type of reasoning, I recommend reading “The Cambridge Companion to Arabic Philosophy”.

18 Aristotle attributes such a claim to Empedocles in his work De Anima

19 Chapters 8 & 9

20 Namely, being a philologist instead of something a little more… real.

Just Another Friendly Argument 1: Dan

 

Discussing:

Water rights, the tragedy of the commons, cost-benefit analysis, (im)migration, how I may very well be incorrect, muh roads/highways, competition between railroads and highways, ethics vs. economic utility and government vs. individuals, cardinal vs. ordinal values, ethics vs. morals and “thou shalt not murder”, evolutionary biology/psychology, sustainability in human action, Zomia and the nature of history, transgender restrooms and democracy, the psychology of voting, the housing crisis, Keynesian economics and my communist roots, Trump-flavored cancer, mass extinction, labor prices and economic growth, minimum wage and education.

This is an audio-only post, and I expect that (provided this becomes a recurring segment) it will remain audio-only.  It’s a little bit longer than most podcasts, but I hope you enjoy it.  As always, I crave feedback, so let me know what you think, so I can do a better job.

Carpe Veritas,

Mad Philosopher

Chapter 1: Epistemic Assumptions

Thesis #1: One is solely informed by experience

“We must, as in all other cases, set the apparent facts before us and, after first discussing the difficulties, go on to prove, if possible, the truth of all the common opinions about these affections of the mind, or, failing this, of the greater number and the most authoritative; for if we both resolve the difficulties and leave the common opinions undisturbed, we shall have proved the case sufficiently.”1 As a read through the canon of philosophy2 will evidence, there is a long-standing tradition of beginning with and stating atomic, self-apparent facts, followed by exploring the ramifications of accepting those facts. While some philosophers may begin with assumptions more apparent and verifiable than others, it remains the case that all worldviews are predicated on basic assertions which are made by the one (or group) which crafted said worldview.

This assertion is, itself, a self-apparent truth. There is no real way to prove that all reason is derived from immediate facts, only to disprove it. The principle of non-contradiction is one such principle: a thing cannot both be and not be in the same mode at the same time3. There is no way to conclusively prove this to be the case, but it is the foundation of all our reasoning. I assert that any example presented contrary to this claim is either simply a convoluted example of my assertion or an exercise in irrationality and absurdity4. I will arbitrarily select one of the available examples of a beginning paradigm which attempts to circumvent this reality. A common line of reasoning in modern American society is the claim that “There exist, among men, a large percentage of bad actors who harm others. We wish to be protected from bad actors. Therefore we must place men in positions of authority over other men in order to protect them from bad actors.”5 Of course, in this case, there will undoubtedly be bad actors introduced into the aforementioned positions of authority, amplifying rather than mitigating the negative effects of bad actors in society.6 This is one of innumerable examples which demonstrate the impossibility of escaping the paradigm I have presented.

As can be assumed, these self-apparent facts are apparent only through the experience of the one to which the fact is apparent. Each of these (and all subsequent) experiential facts are, themselves, informed solely by experience. Even the most extremely outlandish claims to the reception of knowledge, like divine revelation or telepathy, are in their own way experiential. Ignoring whether or not it is possible or likely that one can have a vision or spontaneously altered awareness which is factual or true, what is guaranteed to be the case is that those who honestly make this claim have had an experience of such which has informed their worldview.

Reason, then, as the faculty by which one can analyze and make judgments about one’s environment, is ultimately derived from experience7. The experience of fundamental principles, like the PNC, allows one to generate the praxis8 of reason. By using the tools and flexing the muscles of the mind, one can begin to develop the faculty of reason.

Thesis #2: Reason dictates one’s understanding of the universe

One without reason, like an animal, exists in a perpetual cycle of stimulus and response. No different than a complex computer program, the sum of all an animal’s behaviors is dictated by a genetic, instinctual rubric by which an animal eats when it is hungry, mates when it is fertile, and flees predators when threatened. Every nuance in their behavior is simply a property of their programming. This can lead to amusing circumstances when an animal’s conditioning is no longer appropriate for its environment, such as dogs refusing to walk through doorways due to certain cues which lead them to believe the door is closed, or Andrew Jackson’s parrot swearing so profusely it had to be removed from its owner’s funeral9. These amusing behaviors, though, are prime indicators of the lack of a key characteristic which makes man unique from the animals: reason.

Both man and animals have experiences: certain events as perceived through the senses. However, man has the unique experience of experiencing that he is experiencing. In other words, “We are not only aware of things, but we are often aware of being aware of them. When I see the sun, I am often aware of my seeing the sun; thus ‘my seeing the sun’ is an object with which I have acquaintance.”10 Experience, itself, is clearly not sufficient, then, to be considered reason or a source of reason. Experience, as the animals have it (animal experience as I will refer to it), is little more than a sensational input to an organic calculator which produces a result. That result, even, is no more than an action of the body which, in turn, generates further sensational input. This cycle simply repeats itself thousands of times per minute, millions of minutes in succession, until the animal dies. The experience of man (or just “experience”, as I will call it), however, is different.

Man still experiences via the senses, but there is a slightly more complex process in operation after that initial sense experience. If a man is still in his infancy, is drunk, caught sufficiently off-guard, is mentally disabled, or is one of my critics (or is any combination of the above), it is incredibly likely that they will have a form of animal experience by which reason doesn’t enter the picture until some time after an instinctual and automatic response takes place. Even though that may be the case, there will be an opportunity later to reflect on the experience and interpret it as one wishes (though, at times, that opportunity is ignored). More commonly, an individual has the opportunity to process sense perceptions with a rational mindset, deliberating whether he should say a particular sentence or another while on a date, for example.

In this example of a date, one, whom we will name Mike, can draw on experiences from the past to inform the present choice. Upon reflecting on how poorly his last date went, Mike may opt to avoid describing in graphic detail what it feels like to shoot oneself in the leg over a veal entree… at least on the first date. This is an example of how one’s understanding is a direct result of one’s internal narrative. After experiencing the horror and disappointment of a first date ending abruptly and with no prospects of a second, Mike would have the rational faculty to reminisce over the experience in order to find a way to succeed in the future. Having reached an understanding that such behavior is not conducive to a successful date, he can choose to avoid that behavior in the future. This applies in all circumstances besides the aforementioned date. If, say, Mike were to decide to read this book, after reading a miserable and arrogant introduction he may come to an understanding that this book is not worth it and return to watching football, never to read philosophy again (that sorry bastard).

Of course, it is possible that one’s interpretation of an experience can be flawed. In the case of Mike, it’s possible that his earlier failed date had less to do with his choice of conversation and more to do with the fact that his would-be girlfriend was a vegan with a touch of Ebola. In the case of his current date, it is distinctly possible that his current would-be girlfriend is a red-blooded anarchist meat-eater who listens to Cannibal Corpse songs when she eats dinner at home. By misinterpreting previous experiences, Mike is going to spoil his chances with a real keeper. For this reason, I find it necessary to distinguish between one’s subjective understanding of particular instances, which may or may not be accurate, and one’s faculty of understanding.

Thesis #3: One’s understanding of the universe dictates one’s behavior

As we addressed when discussing the differences between animal experience and actual experience, man behaves in a manner distinct from animals. Due to man’s faculty of reason, understanding and justification interject themselves between the phenomena of stimulus and response. In any instance of stimulus, a man must choose to assent to the stimulus and choose to respond. In the case of Mike, while reading my book, he would be exposed to the stimuli of mind-expansion, intellectual challenge, existential intrigue, and more. Being unaccustomed to such stimuli, our exemplar, while incredulous of the stimuli, assents and then chooses to cease reading and retreat to the comforts of the familiar, simulated manhood of football. In the case of a dog, however, whatever new stimuli it is exposed to are immediately either perceived through the filter of instinct or disregarded outright, much like a blind man being the recipient of a silent and rude gesture. As that stimuli is perceived, the dog’s instinct causes it to behave in one manner or another. For instance, being of domesticated genetic stock and trained to assist his blind owner in particular ways, he may maul the one performing the rude gesture, with no rational process involved, merely organic calculation.

This difference, however, does not mean that man is devoid of animal experience or instinct. As mentioned before, under certain circumstances man can behave in a manner consistent with animal experience. As a matter of fact, instinct may play at least half of the role in man’s experience and understanding. Man is clearly not the “tabula rasa” of Avicenna and Locke11. As I have asserted, the faculty of reason is inborn. Evidence exists to support my claim in that infants instinctively act on stimuli in order to feed, cry, swim, and flail their limbs; there are also contemporary scientific claims that the brain operates as an organic calculator, the evidence of which also exists in the behavior and brain structure of infants. Additionally, evolutionary psychologists have observed similar phenomena in grown adults concerning phobias, pain reactions, sexual attraction, and many other areas of the human experience. As will be addressed later in this book, it is even possible that this rational faculty my argument hinges so heavily on is, in fact, nothing more than a uniquely complex form of animal experience12. Until such a time as I address those claims, though, we will continue to operate under the belief that rationality exists per se.

Understanding and habituation, then, drastically impact one’s behavior because they are the medium by which one’s experience informs and dictates one’s behavior. Through experience of particular sensations, and the application of reason to those sensations, man can come to understand his environment. Through application of reason to any given circumstance of stimuli, he can then choose an action understood to be most appropriate in any circumstance. Habituation, additionally, impacts man through the instinctual inclination to maintain a certain consistency in one’s actions. In the case of Mike, this would result in choosing to watch sports over reading philosophy.

Thesis #4: The epistemic and phenomenological endeavors of philosophy (and, by extension, certain areas of physics which pertain to the human experience) are crucial to one’s understanding of the universe and one’s resultant behavior.

In choosing to watch sports rather than read philosophy, Mike is attempting to avoid the discomfort of a new experience for which he is ill-equipped. However, in avoiding that experience, Mike is attempting to shirk his need to engage in public discourse and exposure to culture. Whether or not he succeeds in such an endeavor is less important to us now than what such an experience represents. The experiences of public discourse and culture are key experiences which inform one’s understanding and behavior. Our example in the introduction to this book concerning the need for communication and language is a prime illustration of the fundamentals of public discourse and culture. “This mushroom bad,” clearly establishes certain cultural norms as well as informing one’s attitudes towards certain concepts. In the case of Mike, it could be a friend coaching him with dating advice or beer commercials during the football game altering his expectations of his date. If he had read my book, Mike would be more likely to succeed in his date, having better equipped himself with a tool set for working with the human condition.

These tools have been graciously provided for us through the long-standing traditions of philosophy; most notable in this instance are epistemology and phenomenology. Through the study of what man knows and how he acquires knowledge13, and of what man experiences and how he feels what he does,14 philosophy can aid significantly in one’s quest to understand what and how one knows, and how to influence those around one. Most of what has been written in this chapter is lifted directly from discussions I have had regarding various works in epistemology and phenomenology. In this regard, I believe this work is a paradigm example of the assertion made: that one of the most crucial kinds of experience for the formation of one’s understanding is one of a social and philosophical nature.

A strong cultural and public formation of one’s understanding is crucial because a well-informed understanding can ultimately provide maximal utility to an individual and society15, whereas a poorly-informed understanding can effectively cripple one’s ability to develop one’s rational faculties or provide much utility to oneself or others. As was mentioned earlier, one’s subjective, personal understanding can be flawed. Some merely make a small error in their reasoning, while others may be mentally disabled, either by material means or due to a cripplingly misinformed understanding. The strongest influence on whether one arrives at an accurate understanding or at mental disability is that public influence on the individual. As discussed in the intro, when done correctly, philosophy creates the circumstances most conducive to a well-informed worldview.

In this way, we see that one is informed solely by personal experience. That experience allows one to develop inherent faculties such as reason. Reason, in turn, allows one to analyze one’s experiences and engage one’s culture. This analysis generates an understanding and worldview within the individual, which has a bearing on one’s habits as well. This understanding is the premise on which one makes a decision regarding how to behave in any given circumstance. As forming an accurate worldview is crucial to one’s successes, philosophy (the strongest candidate in this regard) is crucial to forming said worldview.

95 Theses

1Aristotle’s Nicomachean Ethics (Oxford World’s Classics) p.118

2The widely accepted list of “most significant philosophers to-date”.

3We will explore the Principle of Non-Contradiction, or the PNC, more thoroughly in chapter 3: Orders of Knowledge.

4A claim which is logically self-defeating, whose conclusions deny the very premises on which it is built.

5This is an example of how philosophies written in the mid-17th century (Hobbes’ Leviathan) have percolated through the social consciousness for centuries and are no longer questioned.

6Additional examples and further exploration of absurdity can be found in Hobbes’ Leviathan, chapter 5.

7The next chapter will explore this concept more fully.

8The method by which one, through either experience or theoretical knowledge (“knowledge that”), can develop practical, active knowledge (“knowledge how”).

9Volume 3 of Samuel G. Heiskell’s Andrew Jackson and Early Tennessee History

10“Problems of Philosophy” Bertrand Russell ch.5

11“Tabula rasa” refers to a “scraped tablet” or “blank slate”, evoking a description of the mind in which there is initially no knowledge or activity whatsoever.

12In Chapter 2: “The Embodied Mind”

13epistemology

14phenomenology

15In this case, I’m using the term “utility” in a very loose way. The best definition of “utility”, though, would be, “the capacity for a thing to provide or contribute to one’s flourishing.”

Abstract of the 95 Theses

Assumptions and their descendants:

From Aristotle1 to Zeno, every man who has claimed the title “philosopher” has made basic assumptions from which all his later works (if rigorously done) are derived. Even those who demand a priori proof of even the most atomic basis for argumentation (such as those in the Cartesian tradition2) make assumptions somewhere, no matter how well disguised or hidden they may be. There is nothing wrong with doing so, though; being an experiential creature, man can only begin to reason from some given truth of which he has experience. “The pre-existent knowledge required is of two kinds. In some cases admission of the fact must be assumed, in others comprehension of the meaning of the term used, and sometimes both assumptions are essential… Recognition of a truth may in some cases contain as factors both previous knowledge and also knowledge acquired simultaneously with that recognition-knowledge, this latter, of the particulars actually falling under the universal and therein already virtually known.”3

Because one must begin from assumptions, it is in one’s best interest to select the most fundamental and apparent assumptions and to build up from there with the assistance of reason and observation. When one follows these assumptions to their logical conclusions, one will likely see the errors of one’s assumptions if the results are absurd or impossible. At that point, one must select an improved set of assumptions and move forward, repeating this process as many times as is necessary. I use epistemic assumptions here, as my childhood experiences in Cartesianism have shown me the impossibility of accurately describing the universe if one is an epistemic skeptic or nihilist.

In addition to selecting a certain type of assumption, one must be deliberate about the quantity of assumptions one makes. If too few assumptions are made, there will be insufficient material from which to derive cogent syllogisms or conclusions, trapping one in the tiny cell of skepticism. Choosing too many or too advanced assumptions will short-circuit the philosophical process of discovering where the assumptions lead and will necessarily result in the conclusions the author desired (and likely incorrect ones at that). Also, too many or too complex assumptions place one’s work beyond the reach of critics, in that no critic can hope to verify one’s claims based on one’s assumptions if the assumptions themselves are opaque, obscurantist, or simply a secret to all but the author.

As was implied by an earlier paragraph, and as would logically follow from this conversation concerning the quantity and quality of assumptions, certain enlightenment-era questions and practices ought to be bracketed4 for later discussion. If each philosopher were forced to synthesize his own version of the Cogito, or the world of noumena, the practice of philosophy would have halted midway through the enlightenment, with each new philosopher attempting to invent a square wheel. That is not to say that skepticism should not be addressed; only that it doesn’t necessarily have to be the starting point. Nor does it mean that one’s assumptions suffice on their own; they ought to result in an empirically falsifiable claim by which one could determine the validity of one’s assumptions.

The physical world and our understanding:

Why would my project run straight from epistemological assumptions into physics? The physical sciences are the first source of certitude after the basic epistemological claims are made. It is far simpler to state that we can know things, that the primary engine for any knowledge is our experience, and then to discuss that experience, than it is to make such an epistemological claim and immediately begin attempting to discuss experience or knowledge of some transcendent or ethical claim, as such claims are themselves often derived from some manner of physical experience to begin with.

This is because philosophy, like reason, operates from the ground up: first building a foundation, then building arguments atop that foundation. “…If a house has been built, then blocks must have been quarried and shaped. The reason is that a house having been built necessitates a foundation having been laid, and if a foundation has been laid blocks must have been shaped beforehand.”5 As our immediate experiences are derived from our bodily senses, which are confined to matters of a physical nature, so too must our immediate foundations be. Even universal and unavoidable principles, like the principle of non-contradiction or many ethical principles, are made known to one by way of physical sense experience (with assistance from reason, of course). In addition to the foundation which physics provides on an experiential level, it also provides a conceptual basis. One cannot properly ask “why?” without first asking “what?” and “how?” Physics, when done properly, effectively shows one what happens in our physical universe and how it does so.

Metaphysics6, as the name would imply, can also be appropriately appealed to at this stage of development. As a counterpart to the physical studies of how our universe operates, metaphysics applies a method very similar to that of physics, though slightly less experiential and more rational, to immaterial questions regarding our experience. Metaphysics and I have had a very rocky on-again-off-again relationship throughout my life. As a confessed former adherent of scientism, for quite some time I denied that metaphysics could even rightly be said to exist. I am sure that by the time my life ends, I will have left and returned to metaphysics at least once more, but each time such an event occurs, our understanding and appreciation of each other grows.

Ontology as derived from experience:

Why ontology? If ontology is to be understood as the study of existence or existents, then it would naturally follow from our study of our experience to move on to the study of the things we are experiencing, namely, that which exists. There is a question more likely to be asked by a modern readership, though. That is, “why theism?” I have long struggled with the discussion of theism or atheism in the realm of philosophy. Even as a “scientist”, I was agnostic as to whether there existed some being beyond the physical realm, primarily because both positive and negative claims as to theism are empirically unfalsifiable.

However, that was at a period when I was still immature, both biologically and philosophically. I have come to realize (as will be discussed in the Theses7) that the assumptions on which one builds one’s philosophy necessarily result in either a positive or negative claim concerning theism. In the case of any teleological philosophy, it must result in a positive claim and, conversely, in the case of any nihilist philosophy, it must result in a negative claim.

Also, after physics establishes an empirical validation of one’s assertions, it must be relegated to the role of double-checker, simply checking all later claims against man’s experiences and ensuring that no claims made by other fields of study run contrary to that experience. Naturally, after physics establishes what happens and how, the philosopher must ask why it happens; another way of phrasing “why” would be, “what is the practical universal significance of such an event?”

Although the question asks for the practical universal significance, and despite the claims made by postmodernists, it is not in any way untoward or egotistical to presume that the universal significance of such an event must, in some way, be centered upon ourselves. The reason is twofold. Firstly, the nature of man is such that he feels a compelling need to search for meaning in his existence; any teleological philosophy would rightly assign an end to that compulsion. Secondly, our definition of philosophy is predicated on the assumption that man is capable of discerning a relevant place in the cosmos for himself. Ultimately, in this case, the absurdist is right: it matters not whether there is a significant place for man in the universal sense; man can always make one.

In knowing man’s role and significance in the cosmos, one possesses a tool set which one can use to determine what one ought to do. Now, many will refer to Hume at this point and insist that “One cannot derive an ought from an is,”8 but rather than conclusively disproving my point, they merely indicate their lack of understanding of Hume. The prohibition of deriving an ought from an is assumes that the realm of “is” consists merely of objective, impersonal, atomic facts. If one allows value claims into one’s ontology, or one’s category of “is”, it becomes inevitable that the is/ought distinction collapses. These value claims are clearly not empirical, but that brings us back to our earlier discussion about the relationship between the sciences and philosophy: the moment certain supplementary matters of fact are allowed into the realm of discourse, such as metaphysical, psychological, teleological, or ontological assertions, it stands to reason that one can derive an ought from an is.
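The point can be put schematically (a toy rendering of my own, not Hume’s formulation or anyone else’s). From descriptive premises alone, no prescriptive conclusion validly follows; admit a single value claim among the premises, however, and the inference goes through:

\[
\text{Is: act } A \text{ causes needless suffering} \ \not\vdash\ \text{Ought: one ought not do } A
\]
\[
\text{Is: act } A \text{ causes needless suffering},\quad \text{Value: one ought not cause needless suffering} \ \vdash\ \text{Ought: one ought not do } A
\]

The second inference is an ordinary syllogism; the only question is whether the value premise may be counted among the “is” statements, which is precisely what the paragraph above asserts.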

Even in the event that objective values do not exist, the subjective values of individuals must be informed by a proper understanding of physics, metaphysics, and ontology. If one values a particular activity or outcome, one’s ability to achieve such a result is dependent on properly navigating reality. Many would-be “oughts” are simply impossible or absurd and are beyond the human capacity for comprehension, let alone accomplishment; thus, the realm of values to which one can assent is limited by the same factors which have confined our definition of the philosophical activity thus far. Even after one assents to a rationally consistent and metaphysically possible value, the methods by which one achieves such an outcome are dependent on the nature of reality and the actor’s ability to navigate it. With these strictures in place, it is eminently defensible to claim that one can derive an “ought” from an “is”.

The problem of evil and subsequent ethical prescriptions:

All philosophers are eventually faced with the question which plagues all men: “Why does life suck?” It finds itself phrased in many different ways but, since the time of Epicurus, the problem of evil has remained central to the discourse of philosophy. The most common phrasing would be something akin to, “If there exists an omnipotent, omniscient, and omnibenevolent god, how can he allow innocent people to suffer as horribly as they do?”9 Usually, citations of disease and natural disasters killing small children follow to this effect.10
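Stated formally, the objection amounts to the claim that the following propositions form a jointly inconsistent set (a standard schematic rendering of the argument, not a quotation from Hospers):

\[
\{\ \text{God is omnipotent},\ \ \text{God is omniscient},\ \ \text{God is omnibenevolent},\ \ \text{evil exists}\ \}
\]

The inconsistency only follows given the bridge premise that such a being would know of, be able to, and wish to prevent all evil; the classical responses attack either that bridge premise or one of the four members of the set.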

Different philosophers and traditions provide different answers, some more radical than others. Some, such as Epicurus, would say that the problem of evil is sufficient cause for a practical atheistic hedonism. Others, such as Pascal, argue quite the opposite. Not the least of the responses, while more or less outside the theistic spectrum, would be the approach popular in the ancient East (and the answer I once held myself): “Life simply sucks.” While my answer now is slightly more refined, the practical application of it remains mostly the same. So, what to do about the problem of evil? This is, again, more clearly and articulately discussed in the Theses11 than I could hope to write here. It will suffice to say, for now, that our understanding of man’s telos must accommodate the problem of evil.

What can one do about the problem of evil? I believe that the answer is twofold. The philosopher, at least, is obligated to address and accommodate it and move on with his reasoning. Each man, however, must be able to address and accommodate the problem in his daily life. While these two courses of action appear very similar, I believe that each requires individual attention. The problem of evil serves as a strong device for proofreading philosophical assertions; insofar as one’s philosophy can or cannot address the problem, one can quickly assess the practical viability of said philosophy. The personal approach, while strongly tied to the philosophical one, need not be as rigorous or well-reasoned. The great acts of kindness displayed by those such as Blessed Teresa of Calcutta or Saint Nicholas are no less great a response to the problem of evil for any lack of philosophical argumentation behind them. In this work, I hope to articulate the philosophical side of the problem, and in a later work I hope to provide practical tools for living in accordance with that philosophical approach.

As will be discussed in this work, the problem of evil exists as a problem only because of the innate desires of man. Man bears in his heart the desire and freedom to excel. Whether one is aware of it or not, a majority of his actions are caused or strongly influenced by that desire. Despite the common formulation of the problem of evil, it is less an ontological statement of “How can this thing possibly exist?” and more a plaintive cry of “Why do I want this, if the universe conspires such that I cannot have it?” One must be able and willing to address the problem and either overcome or circumvent it in order to achieve the self-fulfillment sought after by all men.

Conclusion

My aforementioned salon discussions have operated as a club of sorts, with the working title of Lucaf Fits, which is an acronym for “Let us create a foundation For it to stand.” As the basis of logic, reason, philosophy, and ultimately all human endeavors, a solid rational foundation is required for all meaningful discourse and progress. “Lucaf Fits” serves well as both a goal and a mantra for my group and myself. With this work, I hope to begin setting forth a foundation on which my other discourses may stand.

This work, as I have already said, is to be a starting place, not an exhaustive foundation or even an introductory work like the Summa or Prolegomenon. In sharing this work, I am exposing the beginnings of my internal discourse to the harsh elements of the social world. I hope to be met with great amounts of constructive criticism and support from my peers and superiors, but I am not so confident as to expect it.

Regardless of the social and financial success or failure of “A Philosopher’s 95 Theses”, I intend to continue this line of work, exploring and expanding the 95 Theses, following them to their logical conclusions, and modifying the foundation as needed to most successfully pursue the goal of philosophy. I also hope that with sufficient time, effort, and experience, I can one day move beyond such foundational works and into a more practical style of discourse and argumentation. I believe that foundations such as those outlined here will necessarily lead to the conclusions that I so frequently argue and strive to engender in social media and day-to-day life; I hope one day to have outlined from this foundation those points so that others may see the validity of my position and actions. If, however, my conclusions are invalid and do not follow from the premises I am currently laying out, then just as well: the discovery will guide me to the Truth, which is far more valuable to a philosopher than public affirmation.

Because such discussion is directed at the revision of one’s arguments and beliefs, I will likely revise and correct this work over time. I have already, in the writing of this introduction, revised a few of the theses contained within this book, and have since edited each one a number of times, so as to more appropriately maintain their cohesion and logical validity. While I hope that such causes for revision will appear less and less frequently until, one day, I have acquired Truth, I am skeptical that such a time or event will occur in my lifetime, or even in this world at all.

The ideas contained herein are the product of nearly two decades of oral discussion12 and revision, as well as excessive reading of philosophers across time and traditions. I am simultaneously both encouraged and discouraged by the genealogy of my current position. Having run the gamut of political, economic, religious, and philosophical stances in my short lifetime, I am emboldened in saying that I have recognized my own mistakes and intellectual frailty enough times now to be more willing and able to admit such mistakes when they are made. At the same time, however, I find myself skeptical of any truth claims I make now, given my long list of fallacious stances in the past.

With luck and a fair degree of self-control, God willing, I will be able to make use of another seven or eight decades in this endeavor. That, I would hope, will be sufficient time to complete the revisions to this and my later works. Perhaps, one day, my ideas will be perpetuated in the traditions of philosophy. Perhaps commentaries on my work will be required reading in some institutions.

After all, the entire tradition of philosophy consists of free ideas. I do not mean “free” as in without cost, for many of the greatest and worst of the world’s philosophies have been crafted at great price. I mean “free” in the sense that the ideas, granted an appropriate environment, will spread and flourish like wildflowers. As I mentioned before, these ideas are as much a part of the intellectual atmosphere as any other cultural trend or idea. In many cases, these ideas are so liberated from the moorings of their original author that they are falsely attributed to one who was unwittingly synthesizing an already existing work.

It is an obligation of the philosopher to give credit where it is due. One ought especially to give citations to one’s contemporaries, as they are still present to take advantage of what approbations and criticisms come their way. To only a marginally lesser degree, one ought also to give credit to those who have come before and laid the foundations on which one now builds, both so that one is not falsely assumed to be the progenitor of another’s work and so that one’s readership may be able to find the primary sources for their own edification. That being said, one must not be so averse to inadvertent plagiarism as to hinder actual progress. A healthy balance must be struck between progress and citation.

In addition to the intellectual and social coin of credit given where it is due, actual coin ought to be given as well. Being merely human, a philosopher still needs food and shelter and time. When one works full-time performing menial and self-debasing labor (as is common in this age), it can be difficult or impossible to set aside sufficient time, resources, and motivation for such an undertaking as philosophy. Even if the ideas and art of philosophy, like all other intellectual or artistic works, ought to be unbound by financial constraints, the one producing the work is bound by them. I can justify selling this work as opposed to making it freely available to all only because it is being sold at an affordable price and because I am willing to donate copies and excerpts to those who can and will benefit from it but cannot possibly afford it13.

I make this financial case for philosophers with a caveat: no man should be solely a philosopher. If one is not working some form of job at least part-time, or arranging for one’s own self-sufficiency, to supplement both one’s wallet and one’s mind, then one must be working in some capacity either for survival or for art. A man’s mind can stagnate on outdated and fallacious thought if he is not careful to keep both his body and his social life healthy and active. Even if one makes enough money from teaching or publication (which, I understand, is rare), one must at least volunteer for a local, personal charity in which one works with other people and worldviews.

To this effect, I intend to continue this course my life has taken and see where it leads. I hope you, my reader, are willing and able to make use of this work and to aid me in my quest for Truth.

95 Theses

1Technically, Albertus de Saxonia is alphabetically prior to Aristotle, but he is much less known.

2The philosophers who followed in Descartes’ footsteps, maintaining a skeptical stance towards all facts that are not entirely doubt-free.

3Aristotle “Posterior Analytics” book one

4To bracket is to set a concept aside with the intent to explore it more thoroughly at a later time; it is a technique to be used only on concepts that are not crucial to the discussion at hand.

5Aristotle “Posterior Analytics” book two

6From Greek: “after physics”. While the name denotes only that it was the subject Aristotle would teach after physics, it can be said to deal with the non-material aspects of physical inquiry.

7Chapters 5 and 13

8Hume “A Treatise of Human Nature” book 3

9Hospers “An Introduction to Philosophical Analysis” p.310

10Dostoevsky “Brothers Karamazov” is an excellent example of such descriptions.

11Book 5

12In this case, I consider social media a form of oral discussion.

13Ironically, I qualify under my own rubric for a free copy.

Philosophy in Seven Sentences

I’ve previously presented a brief review of Christian Apologetics (which seems to have vanished… I will have to write a second one or re-publish it). InterVarsity Press has recently published another book by the same author, Philosophy in Seven Sentences. Now that I’ve read the book (twice), I feel compelled to share it with my readers.

I love teaching and tutoring, especially audiences yet uncorrupted by academic ignorance and apathy. A few years ago, I taught a series of philosophy classes to a local homeschool group. It was well-received, it paid the bills, and it gave both me and my audience a newfound appreciation for the science and art that is philosophy.

The average age of the class was somewhere in the vicinity of thirteen or fourteen, so the students were largely unaware of philosophy altogether (which is a shame). I had four lectures with which to cover all the bases of “Philosophy 101” in a manner amenable to a young audience. Ultimately, I decided on pulling four themes/philosophers from history and simply walking the class through a philosophical exercise of exploring those themes. Almost the entirety of my preparation time was spent choosing the four themes. In the end, I think I chose Plato’s (Socrates’) Apology, Aristotle’s categories (basic logic), Descartes’ cogito, and Kant’s categorical imperative. Of course, each philosopher served as a foil for the history of philosophy contemporary to him and for his inheritors, thereby covering the bases of philosophy’s history. Having taken two Philosophy 101 classes (from two different schools, long story), I get the feeling this is a popular way to teach such courses.

All this dry nostalgia is to set the stage for a brief overview of “Philosophy in Seven Sentences”. Typically, this would be a full-on “teaching from the text” post, but this book is literally fresh off the presses and both you and Douglas Groothuis would be better served if you ponied up the small amount of money required to acquire the text itself. That said, I do intend to give the text its due justice.

In eight short chapters, averaging about sixteen pages each, Groothuis takes one sentence per chapter (plus a short challenge at the end) and gives an excellent introduction to both the tools and traditions of philosophy. Typically, such a text will either attempt to impress its readers with technical terms, obscure references, and complicated methods of presentation, or it will be written so casually and simplistically as to render a rich and beautiful tradition banal and empty. Groothuis manages to walk a fine line between condescension and elitism, speaking plainly and straightforwardly but also challenging even seasoned readers to step up to his level of mastery concerning the material at hand.

I genuinely enjoy reading primary sources, which, I guess, makes me weird; secondary and tertiary sources are generally less appealing to me, but I will read any material with a sufficient insight-to-page-count ratio. As a case in point, I’ve already read many of the texts referenced in “Philosophy in Seven Sentences”. Even so, Groothuis manages to take a broad array of information, presumably acquired through extensive reading, discussion, and lecturing, and distill it down to one of the highest insight-to-page-count concentrations I have seen, even for someone with reasonable familiarity with the material presented.

The seven sentences in question are well-selected, spanning history and traditions from ancient Greece with Protagoras, Socrates, and Aristotle, to the early Church with Augustine, to the enlightenment with Descartes and Pascal, to modern existentialism with Kierkegaard. While I may have selected a couple of different sentences (exchanging Pascal for Nietzsche and Kierkegaard for Camus or Sartre), Groothuis tells a progressive narrative which begins, dialectically and historically, with Protagoras’ “Man is the measure of all things,” and concludes with Kierkegaard’s pointed and melancholy “The greatest hazard of all, losing one’s self, can occur very quietly in the world, as if it were nothing at all.”

Readers who have no prior exposure to philosophy proper should, at least, recognize three or more of these quotes, as they have become memes referenced and repeated throughout popular culture. “Man is the measure of all things,” “I think, therefore I am,” and “The unexamined life is not worth living,” are referenced in popular films, shows, books, and songs. Descartes’ contribution, in particular, is the subject of a great many common jokes. I once owned a t-shirt which read “I drink, therefore I am.” Groothuis does an excellent job of correcting misconceptions concerning these sentences without becoming a party-pooper.

Usually, a book I enjoy reading is full of highlights, annotations, and sticky notes. Every page of “Human Action” and “Existentialism is a Humanism” has some sort of mark on it. One would expect, then, that an unmarked book would be a sign of disinterest and, typically, one would be correct. In the case of “Philosophy in Seven Sentences”, though, nearly every line would be highlighted (defeating the purpose of highlighting), and there is no need for annotating the text; it is clear, concise, and wastes no time or space in exploring, if not the history of philosophy, a powerful narrative through the tradition of philosophy.

I have never before encountered a book better suited to serve as a textbook for an intro to philosophy class. Admittedly, this book would likely be better received in a Christian institution than elsewhere but, even elsewhere, it far outstrips any conspicuously secular text as far as both demonstrating the techniques of the philosophical exercise and exploring the philosophical tradition. I guess I’ve been salivating over this book long enough and ought to move on to “teaching”.

The general plot of the book begins with Protagoras’ exploration of subjectivity. Given that the pre-Socratics are the progenitors of western philosophy, it makes perfect sense that one would start the narrative there. With a quick glance over extant pre-Socratic works, one largely has a choice between the Zenos’ contributions of stoicism and obnoxious math problems, Pythagoras’ trigonometry, Heraclitus’ almost Buddhist sense of impermanence and meaninglessness, or Protagoras’ relativism. While Zeno (either one), Pythagoras, Heraclitus, et al. each contributed quite a lot to philosophy as a whole, Protagoras sets a particular stage for Plato and Aristotle to really get the show going.

“Man is the measure of all things,” could easily be the opening line of a stage play concerning the history of philosophy. I know from firsthand witness that this phrase has hung on the wall of many dorm rooms that have borne witness to activities often reserved for cheap motel rooms outside of town; it has also, quite contrarily, remained very near the heart of philosophical discourse for over two millennia.

Such a mentality is easy for the philosophically-minded to slip into. As the exercise of philosophizing often consists of comparing and contrasting (AKA “measuring”) experiences, narratives, and ideas, it’s a natural temptation to declare oneself (or one’s kind) “the measure of all things”. Given the absence of an immediately apparent alternative to man, as far as measuring is concerned, Protagoras can’t really be blamed for making such a claim. Groothuis does an excellent job of exploring Protagoras’ position, the rationale behind it, what such a position means, and the ultimate results of such a position. I don’t have the ability or word count to do so.

Moving on, a younger and arguably more famous contemporary of Protagoras is reported to have said, “The unexamined life is not worth living.” Of course, if man is the measure of all things, then such an examination is likely to be very short in duration. Groothuis shows the tension between Socrates’/Plato’s views on the transcendental nature of reality and Protagoras’ more materialist understanding of reality. While setting up an opposition between Protagoras’ camp and the Socratic camp (which remains in the narrative all the way through Kierkegaard), he also describes Socrates, and the basis for so extreme a statement as “The unexamined life is not worth living,” in his own right. Admittedly, I feel that, despite explicitly addressing the key issue in interpreting Socrates (he didn’t write anything down, so all we have is other people’s accounts of what he said), Groothuis blurs the line between Socrates and Plato as far as their ideas are concerned.

Regardless of whether Plato or Socrates ought to get the credit allotted by Groothuis, they effectively prepare the stage for Aristotle, who begins the discussion of man’s nature. Ultimately, the issue of man’s nature is what Augustine, Descartes, Pascal, and Kierkegaard are called to opine upon. Each one comes from a particular philosophical school and era in history and, therefore, has something unique to contribute to the discussion, and Groothuis demonstrates a depth and breadth of knowledge of both the philosophers and their ideas.

This book is a must-read and must-have for anyone who is even fleetingly interested in matters beyond dinner, dates, and this week’s sportsball game. This goes for the engineer who did everything in his power to avoid the liberal arts as well as the philosophy master’s student who may need a reminder on the basics, a reminder of where Philosophy 101 students stand, or a textbook from which to teach. This book is one of the few secondary sources I will suggest, and I plan on snagging a few of the books listed in the bibliography for my personal extra credit.

TL;DR: “Philosophy in Seven Sentences”, by Douglas Groothuis, is a paradigm example of how the more knowledgeable one is concerning a particular subject, the better one ought to be at explaining it in terms everyone can understand and, hopefully, enjoy. Derived from a popular introductory lecture style, Groothuis’ work takes seven deep, meaningful, and crucial sentences from the history of philosophy. While I may have chosen sentences from Nietzsche, Rousseau, or Sartre instead, I would not have been even remotely capable of laying out so much information in so concise and readable a narrative. If anyone has a hard time keeping up with the terminology or argumentation in this blog, “Philosophy in Seven Sentences” is my most highly recommended starting place (followed by Liberty Classroom).

Introduction to the 95 Theses

Introduction

“A Philosopher’s 95 Theses”: a silly and audacious title for a work by a college dropout with little to no substantive endorsements. What is this work even supposed to be? This work is primarily an attempt to begin a systematized and traceable discussion concerning my particular brand of philosophy. Having spoken in various public forums, from the classroom, to hosting salon discussions (thank you, Voltaire), to water cooler discussions, to arguing on Facebook (a noble means of communication, to be sure), to teaching and tutoring homeschoolers, to managing a blog, I have found that many people in my generation and social stratum lack even rudimentary exposure to true philosophy or even formal logic. This isn’t the case for everyone, but it is for a majority. Many times, people disagree with my statements or beliefs, not because of any logical or ideological error on my part, but rather because of a lack of understanding of how conclusions follow from premises. Ultimately, the discussions betray no understanding of the objective material at hand, but merely emotional attachments to already-existing prejudices as well as a fundamental lack of foundation from which to argue. When presented with this fact, others are wont to accuse me of the same. In this work, I hope both to soundly establish a defense against such accusations and to begin to spread a culture of “lower-class intellectualism”: a culture of self-education and intellectual progress compatible with and available to “the lower class”, economically speaking. The first step of doing so would be to make something accessible and affordable available to what I call “my social stratum”, as well as simply raising awareness of alternatives to the current institutions which are fueled by big money and political agendas.

Clearly, as a starting place, this work is merely the beginning of what I hope to be an expansive and pervasive body of work. I hope to one day move beyond this project of establishing my foundations to making these concepts concrete and practical, providing a certain utility to all who would be open to a paradigm shift from our current postmodern sensibilities. From this body of work, I intend to expand and build on these ninety-five theses using the same style and methods contained herein, as well as writing a series of philosophically weighted articles concerning how one ought to live from day to day.

As most anyone who reads it can tell, there is nothing groundbreaking or even original in this work, other than the arrangement of these ideas pulled from the atmosphere of the philosophical tradition. As a foundational work, I would expect this piece to be fairly conventional. Besides, as one prone to taking things too far and stating the outrageous, I want to give myself a moderate baseline from which to work, in order to lend some credence to the more extreme assertions which I have begun to publish already, alongside this work.

Despite the conventional content, I chose a particularly evocative title (if I do say so myself). The title “A Philosopher’s 95 Theses” is an unabashed attempt to cash in on the fairy tale of Martin Luther’s dramatic secession from the Church. There is a narrative in which Luther made official his secession through the posting of the 95 Theses on the church doors as an overt “Eff-You” to the Church. While evidential support for this re-telling of history is nonexistent, the actual format and concept of the work itself is worthy of emulation. This is certainly the case if this is to be the beginning of a break from the status quo of contemporary philosophy.

To be honest, the suggestion for the title and style of this work was presented to me by a friend who seemed quite earnest in wanting me to write my thoughts for his own edification. The suggestion was made primarily from a religious awareness of the Theses as a work of philosophy which could be easily adapted to a social media format; the concise nature of each thesis makes it easily tweeted in ninety-five segments. He leveled a challenge to me to post ninety-five philosophical theses in ninety-five days on Twitter and Facebook in order to encourage me to begin writing my ideas in a codified and discussion-friendly format. After a hilariously disorganized and epistemically infuriating four months, I had ninety-five theses, a ton of notes from the discussions that were sparked (mostly by the early theses; I think many friends and loved ones lost interest around #35 or so), and a new-found energy for attempting to publish something of worth.

The name and format of the original “95 Theses” have been lifted, but much of the argumentation and content has been abandoned, as Luther and I have very different intentions and circumstances concerning our respective works. Where Luther simultaneously affirmed and protested various Church doctrines and principles of theology, I intend to do the same for the philosophical doctrines which many contemporary philosophers have professed. As such, rather than explicitly arguing the finer points of revelation and redemption, I intend to establish a solid foundation for later arguments in the philosophical realms.

As I will address in detail later, philosophy is a historical and holistic entity. Due to the nature of philosophy, I don’t expect to have come up with any original material, even if I know not where it has been written before. In the words of Descartes, drawing on Cicero, “One cannot conceive anything so strange and implausible that it has not already been said by one philosopher or another.”1 The ideas and truths of philosophy are simply “in the air”, as it were. One of the marks of truth in the philosophical world is its longevity. Many ideas that emerge in these theses, as well as my other works, are strongly rooted in classical philosophy as it has survived to this day.

I borrow heavily from existing works, as all philosophers do. I give credit where I can recall or research the original source, but it would be impossible to trace the genealogy of every idea which springs from my mind. This arrangement of concepts and their relationships is likely to be original, but the ideas themselves are old and deep-rooted. It is the perennial duty of the philosopher to water, trim and tend to the tree of knowledge which is philosophy: to hold the ideas in one’s mind, to criticize and correct errors, and generally allow the Truth to become known. Not a bonsai tree, but a veritable orchard of delicious and ripe fruits.

This work, hopefully, will establish a faux a priori2 foundation from which I can assert all of my later reasoning. Now is your chance, critics. Now is the time, in this work, to correct my premises, my errors, my moments of weakness, before I attempt to plumb the depths of truth in this vessel I have cobbled together. When I one day arrive at a premise incomprehensible and flawed, it will be too late, I am sure, to point out that I had overlooked a basic truth here and now.

I have grandstanded long enough on what philosophy is, without giving an appropriate definition and description of it. One should not assume that one’s use of terms is identical to that of one’s readers or opponents.

What is philosophy and why bother?

I believe that all who can rightly claim to be philosophers will recognize certain fundamental characteristics which I believe to be necessary conditions for philosophy. It must be rational, as even the most blasé and stale philosophy assumes the basic precepts of logic, non-contradiction, and the ability of the mind to grasp truth. It must be consistent, as rationality simply cannot allow for the possibility that the principle of non-contradiction is invalid; therefore, all rational things are self-consistent. It must be empirically viable, as our experiences determine our understanding of the universe and, subsequently, the truth (the theses themselves will discuss this3); we cannot hold a belief which predicts or necessitates an experience divergent from what we actually experience. It must be universal, as any truth which is contingent upon circumstance is not a truth, but merely a fact.

In addition to these necessary attributes of the practice itself, I believe it must also produce certain results, fruits if you will, lest it be nothing but a mental exercise. Without ethical agency, this exercise would have no bearing on our lives as a prescriptive measure which, in the absence of an equivalent authority for prescription, would result in aimless and irrational lives, driven simply by the reptilian and hedonistic pleasures of our own genome. Without utility, this exercise would be superfluous to any other activity man would undertake; very few (and no sane) men would choose an impotent and laborious endeavor at the expense of something enjoyable and productive. Ultimately, without truth, there would be no rhyme or reason to the philosophical endeavor; if it is to be self-consistent and pursue truth, it must actually be capable of, and ultimately accomplish, the task of acquiring Truth. For these reasons, I assert with a fair degree of certitude that the purpose and goal of philosophy (and, therefore, of its constituent elements, such as theology, physics, etc.), as well as its necessary and sufficient conditions, is to create an internally consistent, logically sound, empirically viable, and universal worldview which possesses ethical agency, utility, and (ultimately) Truth.
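That definition can be compressed into a single schematic biconditional (my own shorthand for the prose above, not a formula taken from the theses), where W is a candidate worldview:

\[
\mathrm{Phil}(W) \iff R(W) \land C(W) \land E(W) \land U(W) \land A(W) \land Ut(W) \land T(W)
\]

Here R, C, E, and U abbreviate the necessary conditions (rational, consistent, empirically viable, universal), while A, Ut, and T abbreviate the required fruits (ethical agency, utility, and Truth).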

As mentioned in the above definition, philosophy possesses many constituent elements and tools of which it avails itself. As a reading of Aristotle or many of the enlightenment philosophers will support, I find that it is most natural to begin the philosophical journey in the realm of epistemology or phenomenology. A definition of each is in order, I believe, before addressing the practicality of such a method. Epistemology, taken from the Greeks, can simply be considered “the philosophy of knowledge and thinking, an explanation for how one thinks and knows”. Similarly, phenomenology would be “the philosophy of experience, an explanation for how one experiences and interprets those experiences”, also from the Greeks.

An approach starting from the angle of the philosophy of thought and experience does present some inherent issues, as in Hegel’s infamous dispute with Kant:

“We ought, says Kant, to become acquainted with the instrument, before we undertake the work for which it is to be employed; for if the instrument be insufficient, all our trouble will be spent in vain… But the examination of knowledge can only be carried out by an act of knowledge. To examine this so-called instrument is the same thing as to know it. But to seek to know before we know is as absurd as the wise resolution of Scholasticus, not to venture into the water until he had learned to swim.”4

Hegel presents a very pragmatic alternative approach, which was quite popular with later Hegelian philosophers, like Marx. Essentially, he asserts that one ought to simply begin thinking and doing philosophy, and that one will learn how one learns by witnessing one’s own experiences, much like how one learns to swim. As one can see in reading the first ten or so theses, my assumptions and their descendants take a very Hegelian approach to early epistemology.

Amongst the historical traditions of philosophy, a debate as old as the pre-Socratic philosophies rages to this day: the theists vs. the atheists. Despite the greatest attempts of the moralist atheists, though, the arguments between theism and atheism ultimately deal with a more fundamental question. Whether or not there is a God is ultimately an argument as to whether there is any Truth at all. Again, as the theses address, either the universe is nihilist (devoid of any fundamental or objective meaning and purpose) or it is teleological (purposeful and directed)5. The most common theistic argument made is one concerning teleology, “What’s the point, if there’s no point?” Conversely, the atheist makes an absurd or existential (presenting logically inconsistent facts, or asserting that the universe itself is logically inconsistent) argument, “If there is no point, I can make one.” These arguments will be addressed in the theses6.

Ultimately, all forms of science and pseudo-science (assuming that they are rational and logically rigorous) are constituent elements of philosophy. If our definition of philosophy is accurate, then all rational activities which are directed at the goal of achieving ethics, utility, or Truth are elements of the grand attempt that is philosophy. The scientific endeavors are all part of the philosophical school of physics, by which one establishes the empirical viability of any particular philosophical view. The pseudo-sciences, ranging from sociology, to psychology, to astrology, to magic (again, assuming that they are rational and logically rigorous), can sometimes be appropriated into either physics or metaphysics. Some rare cases may even wander further from physics into epistemology or phenomenology, but all intellectual pursuits are ultimately an element of philosophy. Many of the individuals who pursue these endeavors lose sight of the forest for the trees, but that does not make their work any less valuable to the philosopher.

Bertrand Russell asserts, in chapter fifteen of “Problems of Philosophy”, that science becomes science by divorcing itself from philosophy once it becomes useful. Joseph Pieper similarly contends that scientific inquiry is capable of achieving conclusions which are resolute and unyielding, whereas the philosophical endeavor cannot.7 Both Russell and Pieper have a distinctly post-enlightenment flavor in this regard, which is unfortunate. They both fail to see that science is but a tool and a field contained within philosophy. Science may try to distinguish itself from its mother, with such cultural figureheads as Neil deGrasse Tyson outright ridiculing her, but it can never truly extricate itself from the frameworks from which it came. Instead, it would be more appropriate for the specialists to concern themselves with their specialty and for the philosophers to draw on them when needed.

Above all, reason is the driving force of man and his works. Above all rational pursuits, philosophy reigns. While not all men may have the ability to be great philosophers, all men are called to be philosophers nonetheless. If in no other way, one must examine one’s choices and one’s life in such a manner as to achieve the best outcome available. Unfortunately, in this day and age, I fear that even this minor task proves to be too much for most.

It is no surprise, really, that this task has proven too much for my generation. The heart of philosophy is discourse and my generation is illiterate and disjointed in this regard. Rather than bemoaning our state of affairs, however, I ought to concern myself presently with the discursive nature of philosophy. Whether the discussion be oral debate in the city square, essays and books written in the solitude of a cave or study, or a college dropout’s ramblings on social media, philosophy only flourishes when an idea is shared, tested, refined, and put into practice. The manner in which this discourse and implementation takes shape is varied and veiled, but it is very real, even today.

The ideas and themes in popular philosophy pervade every area of our society, especially in the United States of America. They are boiled down to aphorisms and images and spread like a plague or meme through the cultural ether. I say “especially in America” as our nation was founded on a social experiment derived from the popular philosophies of the time (social contract theory), and that is a tradition that has continued for two centuries. Those that participate in the creation and sharing of art in society play a crucial role in the spread of these ideas.

Literature has been a long-suffering companion to philosophy. As far back as Homer and Gilgamesh, we see philosophical themes and musings riddle the characters and narratives of the culture. In more modern times, with the rise of the printing press, we saw an emergence of overtly philosophical fiction and some less overtly philosophical fiction. There was such literature before the press, to be sure; just look at the classics. However, I find it unlikely that “Candide” or “Thus Spake Zarathustra” would have lasted the way the “Iliad” or “The Divine Comedy” have in the absence of the press. Even popular works of both fiction and nonfiction, whether intentionally or not, are rife with philosophical commitments.

These commitments are equally prevalent in film. While film is a fairly recent advancement in technology, it shares a common lineage with literature. We can easily trace its heritage from screenplay to stage play to the oral traditions which stand as the forebears of ancient literature. For the sake of this discussion, I will consider video games and television shows as film, as their storytelling devices and methods are more-or-less identical. In addition to the words and language used in literature, film presents ideas and commitments through the visual medium as well: certain images or arrangements can, consciously or unconsciously, link certain ideologies and characters together. The same holds true for music, sculpture, painting, any artistic or cultural endeavor, really, even dance.

Through the public discourse and permeation of cultural works, philosophy drives a society’s zeitgeist8. Any of the uninterested or uneducated who participate in cultural events, from watching movies, to going to school, to being subjected to advertising, have their minds and views molded by the underlying philosophy. Through exposure and osmosis, ideas that were once held in contempt have become mainstream and vice versa. This is the natural cycle of philosophy, and it is always made possible by the liberty of the minds of true philosophers. Even if the zeitgeist demands that the world be one way or another, the free thinkers are always at liberty to pursue the truth and share that quest with others through discourse.

Philosophical Schools, the Good and the Bad

Philosophies, taken in their historical and cultural context, ultimately tend to land in two categories: that of “the man” and that of “the rebel”. Whatever cultural or institutional norm may exist for a culture, it exists because of the philosophers who have brought those concepts to light and shared them via the public discourse. Those ideas that find themselves in favor with the ruling class or establishment naturally become the driving force of a society or state. Those ideas which are newer and less conformed tend to become popular amongst the counter-culture. It is important to note: this observation does not lend any judgment to the truth value of any one or another idea, simply its cultural impetus. It is the duty of the free-thinking philosopher to sort through these ideas, regardless of the cultural context, and to ascertain the objective truth value of each respective idea. This often makes their philosophies unpalatable to both “the man” and his reactionaries. (C’est la vie.)

This cultural presence and impetus of popular ideas is revealed in every cultural work. From little nuances in color choice, sentence structure, and musical tonality, to overt themes and statements, certain ideologies become manifest to an audience. Some of these manifestations are analytical and conscious; others are more insidious or subconscious. The two most prominent contemporary examples are in the mainstream news and popular film, where phrasing and imagery are specifically designed to impart a worldview and philosophy on the unwashed masses.

It is no mistake or coincidence that the more authoritarian a state becomes, the more strictly social discourse and cultural works are censored. It is always in the best interest of the establishment to engender in their subjects conformity of thought and philosophy. The most intuitive and frequently used methods towards that end are limiting the subjects of discourse and subverting the thoughts of the masses. I believe that now, like any other time in history, the people of the world are having their thoughts and philosophies subverted and censored by the social and political establishments around the globe. An easy example of this phenomenon would be the blind adherence to material reductionism, Neo-Darwinism, and cultural relativism which is strictly enforced in academia as well as by societal pressure, despite the lack of compelling rational evidence to support any of the three.

It is possible, however, that the prevalence of “bad philosophy” in popular culture is less a conspiracy of idiocy and more a benign zeitgeist of an uneducated time. Regardless of whether it is intentional or incidental, there is a silver lining in this situation. Philosophy, when misused, can be a powerful tool for subjugation, but it is also, by its fundamental nature, liberating. Philosophy, as the pursuit of truth by rational means, necessarily drives its earnest adherents to freedom. By questioning the reasoning behind the social structures and institutional norms one encounters, one comes to understand where the truth lies and liberates oneself from the lies perpetuated by a society devoid of reason. Because of this, we see a dichotomy emerge: popular culture and its discontents. Now, this doesn’t mean that philosophers cannot enjoy and partake in the fruits of popular culture; it simply means that one ought to be aware of what is being imparted upon oneself, especially when there is a surplus of material available.

There are several misconceptions and maligned concepts in the realm of contemporary philosophy. One of the popular misconceptions concerning philosophy and intellectualism is that it is a domain primarily inhabited by out-of-touch nerds arguing about stupid questions. “Which would win in a skirmish, the Enterprise or the Executor?” While the answer is obvious after a short bit of reflection (the Enterprise), it is a dilemma that only a specific and small demographic will ever face. It is also a question of questionable practical significance. I have witnessed, in both the media and the general public, a rising belief that those who contemplate such questions are to be considered intellectual and philosophical, at the expense of those who are deserving of the titles.

Of course, those that are deserving of the title have long been plagued by equally absurd-sounding puzzles. “When removing stones from a pile of stones, at which point is it no longer a pile?” While the answer may appear to be obvious to a mathematician or engineer (the pile is a designated set, it remains a pile even if there are no units in the set), it has far-reaching implications in the way man thinks and knows, or in other words, in the realm of epistemology.
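For readers who want the puzzle in formal dress, this is the classical sorites paradox. A standard schematic rendering (not original to this discussion), where Pile(n) means “n stones constitute a pile”, runs:

\[
\mathrm{Pile}(10{,}000), \qquad \forall n > 0 \; \big(\mathrm{Pile}(n) \rightarrow \mathrm{Pile}(n-1)\big) \;\; \vdash \;\; \mathrm{Pile}(0)
\]

Each premise is individually plausible, since removing a single stone never seems to unmake a pile, yet together they entail that zero stones still form a pile. The engineer’s “designated set” answer simply accepts Pile(0); the epistemologist instead asks what our inability to locate the failing step reveals about vague predicates and the limits of what we know.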

Without philosophy, man would lack a crucial tool of introspection and rationality. The very question “What is knowledge?” does not have a satisfactory categorical answer. Through our pursuits in philosophy, man has made great strides in addressing such fundamental questions, which have evolved from “What is justice?” to “How can I be certain I exist?” and on to a wider, more complex assortment of queries. The fact remains, we must always ask, “How do I know this?”

These questions form our culture and our ethos. Or, rather, the pursuit of answers to this class of questions drives the popular zeitgeist. Even banal entertainment, like prime time television and late night talk shows, touches on the questions which plague all sentient beings. “Why am I here?”, “Why am I unhappy?”, “What’s for lunch?”9 are all questions which people are desperately trying to answer, whether they are aware of it or not. Philosophy attempts to codify and rationalize the pursuit of these answers, to make it accessible to our contemporaries and future generations, not only for our own sakes, but for the sake of man as well. These attempts frequently proceed by taking our common assumptions and putting them to the test.

In each age and culture, there are certain ideas that become popular and omnipresent: polytheism in ancient Greece, for example, or Christianity in 13th century Europe, or social Darwinism in the early 20th century. As these examples show, many of the common assumptions of a time fall by the wayside as a culture’s awareness evolves. In the words of Pascal: “Whatever the weight of antiquity, truth should always have the advantage, even when newly discovered, since it is always older than every opinion men have held about it, and only ignorance of its nature could [cause one to] imagine it began to be at the time it began to be known.”10 Some of those changes are for the better (the shift from superstition to reason) and some for the worse (the social ideology which fostered Nazism) at the time the change occurs. In the long run, however, philosophy always allows the individual and his culture to learn from the past. Typically, though (as I indicated above), this puts the individual at odds with his culture until the culture can catch up with him. This is often why the more notable philosophers were considered nonconformists.

A popular postmodern mindset in today’s philosophical landscape attempts to artificially generate that notoriety through philosophical non-conformity. What I mean is, its adherents attempt to protest even philosophy itself. This is a trend which began in the Enlightenment and found its perfection in the existentialist movement. Where Enlightenment philosophers tended either to decry the philosophical mindset as some form of mental illness or to feel the need to announce that philosophy isn’t a “real” science, existentialists were (and are) wont to denounce not just the rationale of philosophy, but the very existence of logic altogether.

Absurdity is, fundamentally, simply denying or violating the principle of noncontradiction: asserting that something both is and is not in the same mode at the same time. Absurdism is a whole realm of postmodern philosophy in which a thinker, such as Jean-Paul Sartre, attempts to use the tools of philosophy without following the rule of logic. While such attempts are entertaining and mind-expanding, they are just what the name says: absurd. As the 95 Theses (like all philosophy) assume the existence and necessity of logic and rationality, this treatment of absurdism will be short and off-handed. Even so, Sartre, Camus, and other existentialists manage to contribute observations and arguments of value to those pursuing truth. I hope, in other works, to address the good and the bad of absurdist philosophy, but not today. The assumption of logic will be explicitly outlined in the theses themselves11, but this brief treatment should help prepare a novice for the oncoming vocabulary contained in this work.

Nihilism is not a new concept in philosophy, but it has recently found a surge in popularity after the World War and all of its continuations. It is tempting to deny the existence of meaning when witnessing the most inhumane behaviors being perpetrated by man. “What is the meaning in millions of men killed by other men?” can easily become “What is the meaning?” However, for a being capable of asking such a question, the answer literally precedes the question. If one is able to witness and analyze whether or not something has meaning, there is, at a minimum, meaning in the production of that question. In the case of an absurdist, he looks no further than the mind of the inquirer, asserting that the inquirer/philosopher must give meaning to an otherwise meaningless world (and ultimately violating the PNC to do so). In this way, nihilism, in using a meaningful discourse to establish that there is no meaning besides the absurd, is, itself, absurd. In the case of a philosopher, one asks, “Whence comes that desire for meaning?”

In order to make sense of the universe at large, philosophy must be logical: it takes the evidence available to the philosopher and arranges it into a coherent narrative which is both satisfying and capable of producing utility and accurate predictions of cosmic behavior. From the fact that our minds and our philosophical endeavors operate in such a way, and the fact that they succeed in doing so, we conclude that the universe itself must follow a form of logic. While the human intellect may be limited to codifying and adapting a series of laws to describe the universe’s behavior, distinct from that behavior itself, the universe’s behavior is quite clearly consistent and logical, regardless of our perception of it.

This, of course, brings us to the subject of relativism. Relativism, in all but its softest forms, asserts and assumes the absence of objective existence, whether in the form of moral reality or of physical and ontological reality. Moral relativism and its twin, cultural relativism, assert that, because of the diversity of contradictory perceptions of ethical truth, there can be no absolute moral truth. Naïve relativism follows this logic to its inevitable conclusion: anything about which there can be contradictory observations or beliefs does not exist objectively; therefore reality itself does not objectively exist. While some form of scientific study is at times used in an attempt to justify such an assertion, it is typically an extreme reaction to scientism.

As objectionable as relativism is, it is at least identifiable and easily refuted. Scientism, however, is a beast of a different nature. Scientism is a strict adherence to the scientific method predicated on the philosophy of materialism: a union of empirical positivism and material reductivism. Anything not immediately falsifiable12 is of no consequence and ought to be done away with. Not all elements of scientism are bad (this coming from a former adherent); a strict adherence to the methods of reason and empirical observation is what has elevated the school of physics to become the driving force of modern society that it is today.

In recent centuries, most noticeably the twentieth, there was a sudden surge in scientific thought and progress throughout the civilized world. There were innumerable factors that contributed to this phenomenon and, thankfully, I have no intention of going into detail concerning them. At the moment, I am far more concerned with the fruits of this technological renaissance than its causes. In the nineteenth century, the perpetual swell of knowledge and increasing standards of living appeared to be infinitely sustainable. This led to an optimism in the whole of society, but most especially in philosophy and its constituent sciences.

Confidence in science’s ability to cure all of humanity’s ills was joined by a popular trend in science known as reductionism. It was widely believed that science’s messianic qualities were a result of its perceived ability to reduce the most complex psychological or biological ailments to some simple alchemical formula (female hysteria and electroshock therapy come to mind), and that even the darkest and most troubling metaphysical questions could be exorcized with a simple application of mystical scientific hand-waving. Reductionism isn’t a modern invention; even the pre-Socratics strove to reduce all things to one atomic principle (the world is air/water/fire/flux/love/whatever), but never before was it so widespread and influential as during the rise of modernism and postmodernism.

Unfortunately, in all their excitement over the leaps and bounds that were being made in their discoveries, true scientists (those who study the physical sciences) became “scientists” (those who adhere to the philosophy of scientism). Subsequently, some bad science was introduced into the realm of scientism without sufficient criticism. A handful of non-falsifiable theories, like Neo-Darwinism and String Theory, have managed to masquerade their way into the cult of scientism and are now defended with a fervor and blindness rivaled only by the most ridiculous of religions. While it is not currently my goal to write a full-fledged indictment of scientism and other instances of bad science, I am compelled to at least demonstrate that materialism is insufficient and to direct my readers to a work that more than adequately shows that materialism and Neo-Darwinism are incomplete and illogical worldviews13. Just as many wage jihad in favor of misguided science, many are equally prone to do so in favor of bad philosophy (i.e. relativism and consequentialism14). Some of these people have legitimate excuses for doing so (public education and the demographics of their upbringing come to mind); ultimately, their excuses can be reduced to the defense of, “I didn’t know any better.” Some despicable men, however, are quite aware of the logical fallacies they commit in the name of furthering an agenda contrary to the pursuit of Truth.

Sophists, since ancient Greece, have always profited from making defenses of the indefensible, whether for the acquisition of wealth or the silencing of their own consciences. Whenever an ill-informed or malignant trend emerges in a culture, it is certain that some sophist or another will emerge from the woodwork to champion it. Unfortunately for true philosophers, most sophists find their roots in philosophy and academia. This is unfortunate because, to the unwashed, sophists and philosophers are indistinguishable from one another, save that the sophist defends the fulfillment of his base desires while the philosopher demands intellectual rigor and consistency. These sophists were the enemy of the ancients and are the enemy of philosophy today. As certain observers throughout history (like Cicero) have noted, there is a noticeable trend of cultures falling for sophistry not long before their demise. In our modern culture, we see popular philosophy dominated by sophistry and intellectual vacuity. In academic philosophy, it would appear that a certain apathy toward the common man and common culture has gripped the hearts of philosophers as they discuss the impractical and esoteric.

Worse, though, than the philosopher turned sophist is the celebrity or lawyer turned “philosopher”. Lawyers are paid to play by the rules and obfuscate the truth. Celebrities are paid because they make people feel good. Both of these careers are antithetical to the pursuit of truth. When one who makes a career of pursuing personal interest (whether their own or their clients’) turns their attention to announcing some ethical, social, scientific, or really any intellectual claim, they ought to be met with close scrutiny. An example which has plagued America (and the world) in recent years is the Hollywood zeitgeist of celebrities loudly and aggressively endorsing the political ideologies of the radical left. While these endorsements ought to be received skeptically, we instead have seen a widespread voice of agreement in the public forum. This is no different from the phenomenon observed by historians of bygone empires and cultures.

The same cult of irresponsibility and self-promotion in both popular culture and academia that existed in ancient Athens still plagues true philosophers today. At times, given the ascetic15 nature of the philosophical disciplines, it can be incredibly tempting to compromise one’s integrity for the sake of wealth or popularity which a philosopher would never see otherwise. Additionally, even if one is unaware of doing so, it is common to confuse one’s ideas with one’s self, which leads one to take justified criticism poorly and leaves no room for the improvement and correction of ideas. When one is more concerned with being well-liked or turning a profit than with a genuine, loving pursuit of wisdom and truth, it can only end badly.

As Socrates is credited with having said (which is more likely a paraphrase of his entire body of work), “The unexamined life is not worth living.” In order to successfully achieve eudaimonia16 or Truth, one must be vigilant and develop the ability to accurately assess one’s self. As will be expressed in the theses, one’s experience and examination of that experience are fundamental to one’s understanding of the universe and subsequent actions. Additionally, seeing as eudaimonia and truth are the goals of the philosopher, it is clear that any philosopher and, truly, every man must live an examined life.

Now, this is not to say that every man must so thoroughly analyze and examine every atomic facet of his life in perpetual stoic apatheia. In fact, the reality is quite the opposite. While the philosopher must develop a categorical and pervasive habit of self-assessment, this could be crippling in other endeavors. Some men are simply incapable of this degree of introspection and others live in an environment which disallows such behavior. Even these men, though, can and ought to engage in what could rightly be called a “partially examined life”17: a lifestyle in which one at least routinely examines one’s conscience and actions. Training in and awareness of philosophy are invaluable tools in such an endeavor.

After all, our definition of philosophy clearly illustrates that philosophy is universally applicable. In clearly defining how the universe operates and why, as well as exploring what our actions must be in any given circumstance, philosophy establishes itself as the prime candidate to be the very center of culture and individual lives.

Through careful examination of one’s self and of the universe at large, one can come to an understanding of what one needs in order to acquire self-fulfillment. The desire for self-fulfillment is already the driving force behind culture. In developing and advancing the understanding required to achieve self-fulfillment, one contributes to the formation of a culture of self-fulfillment. This culture, informed by philosophy, would be a haven for those seeking eudaimonia.

As the centerpiece of ancient Greek culture and subsequently of philosophy, eudaimonia deserves a more thorough examination and definition. While it is alluded to in the 95 Theses, it may not get the fullest treatment it deserves there. It falls, then, to this introduction to give at least a high-altitude explanation with which to work. Eudaimonia, as it is used here and in the theses, can most easily be described as “the freedom to excel”. This means not only the presence of the mental faculties required to conceptualize and pursue excellence, but also the material and metaphysical circumstances required. In truth, I believe that this has always been the pursuit of man: to live in a culture of eudaimonia.

Philosophy: A Brief Genealogy

Regardless of which narrative one adheres to concerning the origins of man, there are certain circumstances which must have occurred at some point. While the beginnings of just such a narrative exist in the theses, I will attempt to imagine the worst-case scenario for the point I am attempting to illustrate. That point is this: from the inception of the human race, philosophy has existed. With the emergence or creation of the first man, whether he was a mutated member of an ancestor race or created fully formed from the dirt by the very hand of God, his was the unique responsibility of siring the human race. While language and conceptualization may not be required in order to find a mate, they could certainly help. However, from the birth of the first progeny of man, communication and conceptualization become necessary for the continuation of the species. In order for her offspring to survive long enough to fulfill their duty to the species, our Eve must be able to express the concepts necessary for survival. Even if one assumes that genetics supplied her offspring with instincts concerning fight-or-flight responses or aversions to harmful creepy-crawlies, those instincts would be insufficient for learning, “This mushroom is bad,” or, “This is how you kill a boar,” when these are one-chance circumstances which drastically impact survival.

It is clearly in the best interest of humanity’s survival to build on and diversify the material each generation inherits. “This mushroom is bad,” can only take one so far; it certainly does not place one at the top of the food chain. However, inquiry, discovery, and purpose can drive a nomadic people, scratching a meager subsistence from the earth, to ever greater achievements. I may not be able to kill a bear in hand-to-hand combat (I have never had the chance to try), but I don’t have to. By virtue of the utility of philosophy (and its constituent physical sciences), I live in an environment which is naturally repulsive to bears (though, in this region, the case was quite the opposite until recently); as added protection, though, I have many tools at my disposal, not the least of which is my Mosin–Nagant.

Aside from mere survival, though, philosophy also provides mankind with an awareness of purpose and ethics which provides far more utility and impetus than survival alone, especially once the requirements for survival are met. In the pursuit of eudaimonia, we can imagine a genealogy of thought moving from, “This mushroom is bad,” to, “Why is this mushroom bad?” to, “Why is?”, with as many intermediary steps as needed. Alongside this line of reasoning, we also see a diversification of material, branching from mere survival and pagan “gods of the gaps” into physics (including biology, astronomy/astrology, chemistry/alchemy, etc.), metaphysics, epistemology, theology, and so on.

All these endeavors are oriented towards one end: the creation of an internally consistent, logically sound, empirically viable, and universal worldview which possesses ethical agency, utility, and (ultimately) Truth. Even so, they are sufficiently detailed and esoteric that one could spend one’s entire life in devotion to one small element of a particular area of philosophy. This should not, however, be used as a justification for skepticism18, as it would only serve as such if philosophy were a solitary venture. Philosophy, by its nature, is collaborative. Each area of philosophy, no matter how distinct from the others in focus and subject, bears at a minimum a holistic relationship to the rest. In the same way that each area of study collaborates with the others, so too must individual philosophers. This relationship among the areas of study is due, in part, to their common material and practical significance; each area of philosophy informs the others and serves as a check against fallacious reasoning.

Being a human endeavor, philosophy finds itself the victim of human error quite frequently. As optimistic and teleological as my views are concerning this endeavor, I am not ignorant of the inherent shortcomings and roadblocks such an endeavor faces. I fully expect that, even in the case of my own contributions, I will find myself (many years from now) arguing against the very assertions I make in this work. These shortcomings often lead to the development of dead-ends and half-truths. Some of these are quite speedily identified and handily defeated (like geocentrism), but many others are quite bothersome. Concepts which are rooted in truth or bear tangential resemblances to the truth often mislead the philosophical discourse. One need look only as far as Epicurus’ problem of evil and its subsequent resolution, or Puritanism, or the Copenhagen Interpretation, or Marxism to see what kind of damage can be done by philosophy run awry. These mistakes, as damaging as they may be, will ultimately become footnotes in philosophy as failed experiments, as the utility of accurate reasoning becomes apparent and the march of the true philosopher continues unabated.

As the definition I am using for philosophy states, philosophy is an ongoing pursuit of truth (or, the Truth). All legitimate philosophers have, at one point or another, made a categorical assertion regarding truth. Even most faux philosophers make categorical assertions regarding truth, even if that assertion is a naive and misguided utterance of, “There is no truth.” While I do not necessarily believe that the “end of philosophy” has some metaphysical role to play in directing philosophy, or that it may be attainable in this world, I do believe that the simple utility of truth allows and encourages “those who have eyes to see” to be diligent in selecting the philosophies to which they subscribe. This “natural selection” of memes will, naturally, lead towards the end of philosophy. I know this sounds quite similar to the Darwinist narrative which I rejected mere pages ago, and it should, as there are some good ideas buried amidst the bad science. The survival of the fittest, as Herbert Spencer is credited with having formulated it, is one such concept.

Such memes as survival of the fittest are a prime contemporary example of how philosophical concepts tend to simply be a part of the atmosphere in which society functions. Most everyone has heard that phrase in one memorable context or another, even if they have no idea or a misconceived notion of what it means. In the case of philosophical culture, or rather the culture of philosophers, far more obscure and odd concepts are part of the atmosphere. In this way, a well-read and intelligent philosopher may breathe in Descartes, Scholasticus, Nietzsche, and Groothuis in order to utter forth a synthesis of these elements unique unto himself, even if it is identical to another’s work.

What utterance do I have to make? What can one such as myself bring to the banquet table of philosophy? I desire to partake of the feast about which those before me have written, but what can I do to pay admission? As will be clear to those who will bother to read these Theses, I am not yet sure, but I hope to one day have applied myself thoroughly enough to this, my vocation, so as to be worthy to touch the garment of lady philosophy.

This work, itself, is an attempt to codify my existing ideas in a format suitable for public development and critique. Philosophy is, by its nature, discursive and social. I could not rightly call myself a philosopher if I were merely to wonder at the cosmos; only if I were to share my wonder with others and argue my way to the truth alongside my companions would I be worthy of such a name. This is the first of a thousand steps towards the banquet for which I was created. I hope to bring along as many as can come with me to sing the praises of the Grand Architect of such a marvel as creation.

All I can rightly ask of philosophy and of those philosophers who would aid me in this journey would be that I contribute one more voice to this chorus as old as man: to be heard and considered by others, to have what truth I can find be perpetuated while my own shortcomings be disregarded. A lesson I have learned from Ayn Rand: to be considered sophomoric and redundant is still, at least, to be considered. If I could rightly ask more, however, I would ask that I be granted a personal fulfillment of my unslakable thirst for answers.

Hopefully, I can play an integral role in this chorus and make an impact. I want to bring the practice of true philosophy back from the grave that the Enlightenment dug, existentialism filled, and postmodernism hid in the woods. The death of god19 was less a death of god and more the abortion of philosophy. I want to aid in the restoration of Lady Philosophy to her former glory, to clothe her once again in dignity and honor, and to bring her back to the common people, not as an object of rape, but of royalty. This novitiate book is the inauspicious beginning of such a daunting career choice.

95 Theses

1“Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences” Pt. 2

2Self-evident and deductively reasoned

3Chapter 1: Epistemic Assumptions

4Hegel, Encyclopaedia of the Philosophical Sciences p10

5Chapter 5: Teleology?

6Also Ch 5

7“Leisure: The Basis of Culture” p110

8German: “Spirit of the times”

9“Time is an illusion, lunchtime doubly so.” Douglas Adams

10Groothuis, On Pascal (Stamford: Thomson Learning, 2003), 202

11Chapter 5

12A theory resulting in an empirically verifiable prediction which, if inaccurate, shows the theory to be wrong

13Groothuis “Christian Apologetics” chapter 13

14An ethical school of thought which argues that the result of an action determines the ethical quality of said action

15Self-disciplinary and abstinent

16Flourishing and fulfillment

17A phrase that is certainly as old as the Socrates quote from before, but never better implemented than by the people on the Partially Examined Life podcast: http://www.partiallyexaminedlife.com/

18Disbelief that it is possible for one to obtain truth or knowledge of the truth

19Nietzsche used the phrase “god is dead” quite frequently, most notably in his parable of the madman from “The Gay Science”, book three.

Life and Death: A Meditation

A good number of important intellectuals, famous artists, and people I know personally have died or come pretty close in the last couple of years. This phenomenon is nothing new to me; even in the heart of Empire, humans are subject to the human condition no less than those in Empire’s killing fields. I’ve been faced with this reality a little more often than I have grown accustomed to of late, and I felt I could share my musings here in a longer form than the offhand remarks I’ve been getting in trouble over.

Before discussing death outright, it would likely be prudent to address that which immediately precedes it: life. As will be addressed in my 95 Theses, there exist two possible ontological realities concerning life. It can either be teleologically directed or it can be a mere gratuitous happenstance. In the absence of what amounts to some purpose and afterlife beyond this one, life is nothing more than a complex chemical reaction that eventually exhausts itself; one’s phenomenological experiences are nothing more than a freak occurrence of matter briefly knowing itself before once again becoming deaf and dumb.

Alternatively, if the Catholics, Buddhists, animists, or adherents of some other religion turn out to be correct, the purpose of this life is directed towards what occurs afterwards. I don’t know how deeply I ought to follow this line of thought for the sake of this post; I think the absurd caricatures most people have concerning heaven and hell or reincarnation are sufficient.

In the case of life being gratuitous, death is equally so. Not even the individual who may be dying has much cause for emotion. In a few moments, there will be nothing left, and there will be nothing left to observe that absence; the universe is (phenomenologically) extinguished in death. Other than waxing poetic or discussing the epistemic impossibility of comprehending such a reality, there isn’t anything more that needs to be said. I guess I could mention that, in a universe in which life and death are gratuitous, moral principles are meaningless, even a prohibition on murder, as the “victim” has nothing to lose by such an incident. In the words of Albert Camus: “There is a passion of the absurd. The absurd man will not commit suicide; he wants to live, without relinquishing any of his certainty, without a future, without hope, without illusion and without resignation either. The absurd man asserts himself by revolting. He stares at death with passionate attention and this fascination liberates him. He experiences the ‘divine irresponsibility’ of the condemned man. Since God does not exist and man dies, everything is permissible.”

In the case of life having a telos, specifically one that motivates human action, then death may yet achieve some meaning alongside life. Death then, depending on the nature of the afterlife, could be a blessing or a curse, contingent on the relation the dying has with said afterlife. Given that the existence or absence of any sort of afterlife is yet unknown by any reliable measure, it would likely be the most prudent course of action to err on the side of rational caution, whatever that may be.

Either way, one type of comment that has gotten me in trouble is speaking of suicide in what some consider to be unaffected or positive ways. I’m no stranger to suicide, having seriously encountered that spectre in my life by way of both experiencing the temptation myself and having friends and family succumb to it. Observing suicide from the clinically detached position of praxeology can provide some insight as to the nature of such a choice. In the language of praxeology, suicide is a result of one of two possible functions: extreme time preference or cost/benefit analysis.

Speaking from personal experience, it can be quite easy to make ill-informed decisions when one has a very high time preference. Ultimately, that which differentiates human action from animal movement is the deliberative and deferred function of rationality. Where a dog will eat whatever activates its appetite, a man can choose to abstain or to eat something different from that which activates his appetites. Each individual has a different capacity for such deliberation. For example, one could usually pass up one bitcoin today if it ensured receiving two bitcoins tomorrow… but if one were to win the Powerball, one would likely take half of the prize up-front rather than taking the full prize divided into several annuities.
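For readers who prefer numbers, time preference is often modeled (in economics, if not in praxeology proper) as a discount rate applied to future payoffs. The following sketch is purely illustrative; the prize amounts and rates are hypothetical, and a single exponential rate is a simplification of the deliberation-based account given above:

```python
# A minimal sketch of time preference as exponential discounting. A higher
# discount rate means a stronger preference for immediate payoffs (i.e., a
# "higher time preference"). All figures below are hypothetical.

def present_value(amount: float, years: float, annual_discount_rate: float) -> float:
    """Value today of receiving `amount` after `years`, at the given rate."""
    return amount / ((1 + annual_discount_rate) ** years)

# Lump sum now vs. the full prize paid as 20 yearly installments.
prize = 1_000_000.0
lump_sum_now = prize / 2   # half the prize, paid immediately
installment = prize / 20   # full prize spread over 20 years

for rate in (0.02, 0.10, 0.30):  # low, moderate, and high time preference
    annuity_pv = sum(present_value(installment, year, rate) for year in range(20))
    choice = "lump sum" if lump_sum_now > annuity_pv else "annuity"
    print(f"rate={rate:.0%}: annuity worth {annuity_pv:,.0f} today -> take the {choice}")
```

At a low discount rate the twenty installments are worth more today than the immediate half-prize; at a higher rate the lump sum dominates, which is exactly the pattern the Powerball example describes.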

How does such a time preference influence the choice to kill oneself? The easy example is that of adolescents killing themselves over the inhospitable nature of school as an environment or bullying from their peers and adults. School may be a 25,000 hour system of dehumanization, but one is typically expected to live for forty to eighty years after emerging from that abuse engine. Bullies and environments come and go, but death is permanent. The decision, then, to kill oneself when still so young is demonstrative of a time preference by which one would rather permanently obliterate oneself (or face eternal damnation, same idea) than suffer the ennui of being a slave for what amounts to a relatively brief time.

A different, but functionally equivalent, example is one I have faced more than once. I have always had a very contracted time preference, and certain bouts of what could appropriately be called ennui could have been fatal for me in the past. In the saving words of Camus (again): “There is but one truly serious philosophical problem and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy. All the rest – whether or not the world has three dimensions, whether the mind has nine or twelve categories – comes afterwards. These are games; one must first answer.” Technically, that question remains an open one for me. The only reason I still live is a Sisyphean dare: “There is the possibility, however slim, that tomorrow could be better than today… wouldn’t it be a sick stoic joke if I gave up just before it’s too late? I dare tomorrow to be worse, though…” By and large, the number of better tomorrows has outweighed the worse ones.

After spending so many words on time preference, cost/benefit analysis doesn’t warrant much expenditure. Where suicide as a function of extreme time preference is typically the result of a flawed cost/benefit analysis, one which weighs immediate discomfort far more heavily than expected future gains, suicide as a function of cost/benefit analysis is simply one that is better informed. If someone is over a century old and is diagnosed with an inoperable and advanced form of cancer, odds are there will quickly arrive a day beyond which each subsequent day will be worse. In an act of stoic virtue, one may make an analysis of one’s affairs and choose to die on one’s own timeline, rather than on that of one’s cancer. There are a great number of historical and literary examples which parallel this one.

This sort of deliberation has, historically, been rejected and discouraged by Christian thinkers and preachers, even though (despite argumentation to the contrary) Thomism will defend my position by way of the myth of “double effect”. The most prominent basis for such a rejection has been that suicide is an act of despair, and despair is the opposite of faith; to conclude that each day will be worse than any preceding day and that today is the lowest threshold of desirability is to despair of God’s ability or willingness to perform miracles. This is, of course, derived from a naive interpretation of Thomist theology. God has an equal capacity to miraculously improve one’s life tomorrow as He does to do so the moment before one pulls the trigger.

The other argument presented most often from the Christian camp is some variation of “Your body is not your own, it’s God’s; to kill your body would be to steal from God.” While such rhetoric could be eminently useful as a shorthand ethical device (“Would God rather I pursue physical and intellectual virtue with this body, or let it become a shiftless mass of wasted resources?”), the metaphysics of such a claim is either non-actionable or absurd, depending on the formulation. That is not to say that I am opposed to the idea that suicide may be a sin, but it certainly is not a crime.

Of course, when discussing faith and suicide, I would be remiss not to at least mention martyrdom. Allowing or intentionally causing oneself to be killed for the sake of furthering an agenda, especially in the case of “Christ’s Kingdom”, is typically what one means when one refers to a martyr in the literal sense. In other words, martyrdom is typically an instance of “suicide by cop/barbarian/jihadi/etc.” whereby one has allowed oneself to fall victim to an ideologue of an opposing faction. I intend to dedicate a full post to martyrdom some other time, but it suffices to say in this context that, if suicide is impermissible for any consistent reason, martyrdom must also be avoided at any cost (save, possibly, apostasy or suicide), and a great many “martyrs” may just be suicides by any reasonable definition. Having faith in God, the afterlife, or the righteousness of one’s cause is insufficient to differentiate between suicide and martyrdom. Suicide is an attempt to escape this life for whatever comes after (and is therefore more appropriately characterized as an act of faith in the afterlife, be it nothingness, reincarnation, or whatever); the only difference is whether one is killed by one’s own hand or by the inevitable reactions of others.

From an anthropological perspective, death is the driving motive behind human progress. Every human action is directed towards maximizing either the quantity or the quality of one’s life, even if that action may be misinformed. It follows, then, that the avoidance of death is what lies, fundamentally, behind the creation of the internet, smartphones, cotton underpants, indoor plumbing, drugs/medicine, and whatever other white-bread modern inventions you enjoy. In addition to being a motivating factor, death is also an inter-generational biological process. Human strains that have existed for tens of thousands of years in a particular environment have been naturally selected to exhibit different characteristics due to that environment, and the factors above have played a smaller, but still significant, role in this selective process. Yes, I’m speaking of human evolution.

Human ingenuity has largely mitigated these natural selective processes over the last couple thousand years. One of the few factors which still contributes to beneficial selective processes is the individually detrimental effect of extreme time preference, which can largely only be mitigated by the actions of the individual who has such a time preference. As a result, suicide is, in effect, one of the few natural processes which still contribute to beneficial breeding selection. This isn’t to say that suicide is a good thing, but it is one of the few factors in human environments that contributes to genetic hygiene.

One other circumstance in human environments which contributes to beneficial selective processes is the adverse consequences of crime and vice. Criminals place themselves in situations where lethal force may be used against them; if not immediate lethal force, then social forces tend to reduce one’s ability to reproduce after the fact. Despite the best efforts of progressivism and the state to mitigate the consequences of crimes (such as theft) and vices (using poorly-designed drugs like krokodil or Adderall), they have not totally succeeded. The violent death rate in progressive cities such as Chicago is one data point illustrating this.

In the absence of the state, these beneficial consequences will become more pronounced: rather than relying on welfare to purchase food so as to subsidize one’s drug addiction, a drug-user will be forced to choose between starvation and sobriety. Those with the capacity for virtue will eschew dependence on externalities and become valuable members of a community, and those without said capacity will not be passing on their genes. A similar paradigm emerges in the case of crime. In the absence of a politically-motivated and violent monopoly on security, jurisprudence, and welfare (such as prisons), criminals will be faced with more immediate and dire consequences. Without getting into specifics, as volumes have already been written about the plethora of options in LibPar, criminals will be faced with the prospect of a more vigilant and aware set of potential victims coupled with the likelihood of death or exile if caught. It is more likely, by orders of magnitude, that those capable of basic risk-assessment and cost/benefit analysis will refrain from making ill-advised decisions, while those that are incapable are not likely to reproduce.

This post, thus far, has been largely descriptive: simply observing the ontological state of affairs without making a value judgment as to whether such things are “good” or “bad”. If you, the reader, have found yourself disagreeing with the facts as I’ve laid them out, or if your aesthetic tastes have been put off by my sterile approach and you are still reading this, I first want to thank you and second would like your feedback. For the remainder of this already over-sized post, I want to delve into my personal aesthetics and, perhaps, some prescriptive writing.

Life, for me, exclusively finds its meaning in death. If there were no prospect that my existence as such would ever terminate, there would be no impetus for action outside of immediate carnal itches. Even the two deepest passions in my life (my family and philosophy) would likely lack the immediacy which makes me passionate. Rather than investing so much time and effort into relationships or reading, arguing, and writing, there would certainly be an attitude of, “I’ve got time… I’ll do that right after I eat this ten-pound steak and sleep it off.” Rather than frantically devouring philosophical texts or taking on the lifetime (and, in this hypothetical, therefore eternal) commitment of marriage and the siring of children, a more casual and haphazard perusal of earthly delights would be in order. I believe I can at least understand why J.R.R. Tolkien, in The Silmarillion, would have the supreme creator of the world grant Man the “gift” of being able to die, since Man was incapable of experiencing and appreciating the supreme beauty of the gods, as could the elves.

Given my awareness of mortality (having touched death a few times, unintentionally, and having lost friends, loved ones, and acquaintances), I have spent no small amount of time dwelling on the realities expressed above as well as much more that remains unaddressed in this post. Ultimately, as far as I can tell, death is no more or less significant than one’s birth, puberty, bowel movements, or meals. The circumstances of such an event, coupled with the aesthetic preferences of those involved, can imbue the event with a subjective emotional quality (happy, sad, etc.), but an objective observer could identify certain facts about the event which may be lost to others blinded by personal preferences.

Regardless of whether life and death are gratuitous or teleologically significant, the reality remains that one’s emotional and aesthetic response to a death is what it is, and bears no moral value whether it be indifference, joy, or anguish. Ethically speaking, how one chooses to express or act upon one’s reaction is purely a matter of goal acquisition. If one wants to maintain relationships with one’s extended family, it may be ill-advised to shout for joy at grandpa’s funeral, for example.

If life and death are gratuitous, the deaths of your friends are to be mourned while those of your enemies are to be celebrated (if you care at all). If life and death are teleological in nature, it all depends on the telos; to a Muslim, animist, Buddhist, shamanist, or Jew, the circumstance of the death of either friend or foe is the determining factor as to whether it is cause for happiness or dismay. Christianity, being a uniquely optimistic worldview, presents a compelling case (and resultant mystery/paradox) that every life and inevitable death is cause for celebration. The resultant mystery is such that human beings are created with the innate and ineradicable desire to add quality and quantity to their lives, while also celebrating the extreme absence thereof. This apparent paradox is resolved by a more diligent exploration of ontological matters, which I will engage in the 95 Theses.

TL;DR: As this post is as concise as I could make it and it is still 50% larger than expected, I don’t know if an abbreviated version is feasible. The general moral that can be inferred from this post, I would hope, is that one should first focus on the categorical and ontological realities of life and death in an honest and descriptive manner before entertaining emotions, preferences, and prescriptions concerning specific cases. I spent so much time addressing this moral, though, that I never got to address the three or so statements I have made recently on this topic, which raised the ire of people less philosophically involved and which motivated this post.


An Intro to Mereology: Parts and Wholes

 

This last month of posts seems to be “boring analytic philosophy month”: defining property and rights, dealing in definitions and ontology, and now mereology. Before getting hung up on what is undoubtedly a new vocabulary word, let me give you this week’s question: “What is the relationship between an individual and a community?”

Mereology is a study as old as recorded philosophy that, while involved in every philosophical discipline, is seldom addressed directly. Modern understandings of the field are heavily informed by medieval mathematics, but it’s a broader field than just parts of set theory. Many philosophy majors I know personally had never heard the name or the question of mereology in either their studies or their personal engagements until I brought it up. This is likely because the question of mereology is often either ignored or merely answered in specific cases by other disciplines within philosophy. If the question of ontology is “What exists, and in what manner?” then mereology asks, “What is the relationship of parts and wholes?”

I may be prone to subjecting my audience to raw, obscure philosophical questions, but even I am loath to write in-depth concerning mereology… at least for a blog post. I think we can make do with just the question of this post and the paradigm established in previous posts. I trust that you can keep up and, if not, that you will email me or comment below and let me know.

As I have argued in the last two posts, collectives do not exist. If collectives don’t exist, how can I begin speaking on the relationship between the individual and his community? I did leave the door open for communities to exist, within a narrow definition. As a matter of fact, I left the door open for three types of community to exist. Before getting into a taxonomy, though, I need to define what I mean by “community” and how it is distinct, ontologically, from “collective”. For now, I believe a sufficient definition of “community” is “a series of interpersonal relationships” or, rather, “a series of individuals who hold a series of interpersonal relationships centered on one or more commonalities.”

One will notice, if reading with an eye trained by my previous ontological discussions, that this would make a “community” an abstraction akin to a collective: something which exists only as an idea or a concept with no impetus of its own and serves only to inform one in a manner consistent with one’s epistemic limitations. In exploring the taxonomy of communities, I hope to explore the specifics of the role such an abstraction plays and why I would grant it a stronger ontology than a mere collective.
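Since the claim here is essentially structural, it may help to see it as a data-modeling choice. In the sketch below (a hypothetical illustration, not the author’s formalism), individuals and their relationships are the only things stored; a “community” is merely a query computed over them, with no record of its own:

```python
# A minimal sketch of the claim that a "community" is an abstraction over
# individuals and their relationships: it is computed from the relationships
# and has no entity of its own. All names and data are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Relationship:
    a: str            # one individual
    b: str            # the other individual
    commonality: str  # what the relationship is centered on

def community(relationships: list[Relationship], commonality: str) -> set[str]:
    """Return the set of individuals sharing the given commonality.

    Nothing new is created here: the "community" is just a query over
    relationships that already exist between individuals.
    """
    members: set[str] = set()
    for r in relationships:
        if r.commonality == commonality:
            members.update((r.a, r.b))
    return members

rels = [
    Relationship("Alice", "Bob", "apartment complex"),
    Relationship("Bob", "Carol", "apartment complex"),
    Relationship("Alice", "Dave", "gun show"),
]
print(community(rels, "apartment complex"))  # {'Alice', 'Bob', 'Carol'}
```

The design choice mirrors the ontological claim: delete the relationships and the “community” vanishes, because there was never a second thing there to begin with.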

The commonalities on which a community may be centered can range from something as banal as a common geographic location, common interest, or common heritage to something as intense and significant as a common life-altering event, vocational encounter, or a common goal, method, and discipline. These commonalities seem to be divisible into three types of character, and, by virtue of their definition, communities grounded in these commonalities can be said to have such character. The three types would be incidental, practical, and intentional. Based on the names I have chosen, I assume that many preconceptions and questions have already formed in your mind. I’ll try to address those now.

Let’s just start with descriptions. An incidental community is just that: a series of individuals who hold relationships of coincidence. The easiest example is one of locality, especially in the postmodern age. Even if they are incredibly transient and flimsy, I have a number of relationships with people who live in my apartment complex. The sole basis of these relationships is proximity (and the friction it entails): competing for decent parking, upholding lease policies, random polite (and not-so-polite) encounters, etc. This same sort of coincidence exists on the freeway/highway, at the grocery store or bank, and perhaps even among people who share attributes similar to mine, such as gender, skin color, geography of birth, height, or other inheritances.

I think that the most immediate observation one can make concerning incidental relationships, especially when looking with an ethical eye, is the total lack of homogeneity between individuals in the community. A brief survey of the bumper stickers seen at a common geographic location, the Denver Facebook network, a survey of white people, etc. will quickly indicate only a few minor statistical trends, all of which are better explained by external factors than by the nature of the community itself (again, it’s an abstracted tool). Due to this phenomenon, one cannot speak knowledgeably about specific individuals within an incidental community, even when armed with statistics, nor can one speak of them categorically. Not everyone at my apartment complex is poor, not all blacks are criminals, not all whites like Phish, not all Denverites smoke pot, and not everyone in Nagasaki is a militaristic imperialist who deserves to be irradiated or vaporized.

That description sounds like one that could be called “practical”, I must admit. If any readers have a suggestion for a better nomenclature to differentiate between incidental communities and those which I am about to describe, please let me know.

A community of practical character could be considered “a series of individuals who hold relationships entered into or maintained due to practical considerations”. This involves business relationships, to be sure; doctors and patients, contractors and property owners, students/families and school teachers/administrators are good examples, too. These considerations could also be centered on internet forums, conventions centered on a particular interest, or any club of one sort or another.

These commonalities are also quite transient. One anime convention is more-or-less interchangeable with another, one school is interchangeable with another (or any number of alternatives), and employees and employers, as well as clubs or stores (like Costco or Sam’s Club), are equally so. Because an incidental relationship or community is merely a matter of coincidence, relationships or communities which are matters of active choice (i.e. practical considerations) are marginally more tangible and representative of the individuals involved. One can speak semi-intelligently about metalheads, people who hang out at Hot Topic, or engineers. A lot of (frankly, true) stereotypes are a result of statistical trends in these self-selecting communities.

A sort of “practical community on steroids”, the intentional community now becomes our focus. Intentional communities are best described as “a series of individuals who hold relationships centered on common purposiveness, intention, and approach thereto.” If teachers and families are practical communities in schools, the PTA/PTOs, student councils, teachers’ unions, etc. are intentional communities. Hippie communes; anarchist “collectives”; charities; governments, mafias, and other gangs; even some religious sects are examples of other forms of intentional communities.

Where a practical community, say, a gun show, is centered on a common utility (such as being able to buy or sell guns, exchange information, or not be reviled as a criminal for merely voicing an interest in self-defense), it lacks a certain intention or purposiveness. For example, one wouldn’t expect everyone, or even most of the people, at a brony convention to agree that they must all work towards the creation of GMO purple ponies with unicorn horns, or the extermination of all non-bronies. The KKK or (neo-)Nazis, however, gather around a central intention of exterminating or enslaving an entire group of people (usually members of certain incidental communities), evangelical Christians wish to “Baptize all nations”, communes exist for whatever commie/naturalist lifestyle one pursues, police exist to enforce laws, the Bloods exist to kill the Crips (and vice-versa), and the government exists to govern.

I used slightly different verbs when describing these different communities. That is fine, though, because the important mereological point to remember is that a “community” is merely individuals maintaining relationships betwixt themselves, not an entity existing in its own right. However, where incidental communities likely only provide categorical claims that are tautological (“The black community is black”), and practical communities present only statistical correlations (“people who purchase Maseratis tend to be upper-middle class”), intentional communities provide more opportunity for both generalization and categorical claims. For instance, the claim, “KKK members are racist,” is effectively incontestable; someone may find an instance which appears to be a non-racist KKK member, but such a circumstance would require detailed examination.

The “non-racist” individual could either be considered a “bad KKK member” (in the Socratic vein) or not really a KKK member (due to definitions), but a more likely and more easily defensible claim would be that the very membership in the KKK is an endorsement of the KKK’s intention, and that it is therefore impossible to be in the KKK and not be racist. Even in the case of someone “going undercover” to break up the KKK, they are acting in bad faith, which presents its own series of issues which we don’t have time for today.

What I mean to express by exploring this taxonomy of communities is that the first two types lack any ontology beyond being mere abstractions, much like the collectives I addressed a few posts ago. An intentional community, while still lacking ontology in itself, does influence reality in a tangible way, unlike the other two. This influence takes the form of social, ethical, and moral qualifiers included in interpersonal interaction. Where a series of employers and employees is typically to be considered a practical community, if the employers have a stated intent, purpose, or method and are hiring employees for its sake, any employee who enters into that relationship is doing so in the same manner one would enter into the KKK or a commune.

In other words, one cannot be pro-life and work for Planned Parenthood or the US Military, one cannot join a hippie commune and not be a hippie, nor can one become a cop and not endorse coercion and theft, or any other example that may come to mind. Any seemingly contradictory instance is merely a case of an individual acting out of ignorance or bad faith. Ultimately, this is the reason there is no such thing as a “good cop” or an “egalitarian neo-nazi”; in choosing to join a community centered on the purpose of enforcing laws or eliminating Jews, one demonstrates a preference for such criminal actions, even if they are unaware of that reality.

TL;DR: Mereology is the study of the relationship between parts and wholes. This field of study applies when looking at the relationship between individuals and the abstract concepts called “communities”. In the case of coincidental and unintentional relationships, one could consider such a community an “incidental community”. In the case of a relationship entered into voluntarily, often out of practical considerations, one could consider it a “practical community”. Most interesting is the “intentional community”, which is entered into with the intent of fulfilling a particular goal or furthering a particular cause held by all members of that community. Joining an intentional community is an endorsement of the intent and methods implemented by other individuals within the community, insofar as they align with the community’s intent. Awareness of this taxonomy is important when one makes statistical or categorical observations concerning various communities.

 

Collectivizing Collectives

 

 Socialism, like the old policy from which it emanates, confounds Government and society. And so, every time we object to a thing being done by Government, it concludes that we object to its being done at all. We disapprove of education by the State—then we are against education altogether. We object to a State religion—then we would have no religion at all. We object to an equality which is brought about by the State—then we are against equality, etc., etc. They might as well accuse us of wishing men not to eat, because we object to the cultivation of corn by the State.

How is it that the strange idea of making the law produce what it does not contain—prosperity, in a positive sense, wealth, science, religion—should ever have gained ground in the political world? The modern politicians, particularly those of the Socialist school, found their different theories upon one common hypothesis; and surely a more strange, a more presumptuous notion, could never have entered a human brain. ~Bastiat

Last week, I denounced the existence of collectives in the name of anarchy. A few commenters requested clarification on this subject for a few reasons. I figured that I ought to shoulder the inevitable burden of addressing collectivism and the philosophical issues therein.

The first order of business is to clarify the specific claim I made last post. Some people demonstrated a desire to adapt a radical and likely unpopular claim to better jibe with their own worldview or better lend itself to discussions with non-anarchists. While I am certainly sympathetic to that desire (see my posts about the Pope), this issue is foundational and, therefore, requires a certain clarity and inflexibility. My claim was not an ethical one, asserting that one ought to do a particular thing concerning collectives. Nor was my claim a pragmatic one, saying that things would be easier if one ignored collectives in favor of individuals.

My claim is a categorical, unequivocal, ontological one. My claim is that collectives do not exist. Collectives possess the same ontology as Xenu, lizard Jews, and human-caused global warming. They are fairy-tales. As my selected examples of fairy-tales demonstrate, though, some people do insane and violent things in the name of such fairy-tales.

I’m about to get ahead of myself. Before exploring collectives and the results of believing in them, I ought to give a definition of what exactly I mean by the term. Clearly, I’m not claiming that hippie communes, political migrations, cults, or other random gatherings of people are not a thing; these phenomena are easily observed. I am raising the question of their ontological status, though. I hope to make that distinction more clear through this post. When I say “collectives do not exist”, what I am saying is “an entity which exists distinct from and beyond the functioning of its individual components is a metaphysical impossibility”, specifically in the case of agents.

At this point, I expect scientists and pseudo-scientists to reel and accuse me of ignorance. In physics, elementary particles which exhibit certain behaviors can coalesce into a larger particle which exhibits behaviors different from the elementary ones, without an account of how the elementary particles contribute to said behavior. Quarks and protons/neutrons are a widely-known example of this phenomenon. A significant portion of my personal philosophical pursuits has revolved around philosophy of science and epistemology (probably because disillusionment with astrophysics is what drove me to philosophy), but one will notice a lack of such material on this blog. This is for a variety of reasons, but if enough people express interest in my 95 Theses, that may change.

Anyway, one such reason is that scientists and science fans are trained to be openly hostile towards philosophy of science. Your reaction to this paragraph may demonstrate as much. Protons and quarks are mere instruments. They are concepts which serve a function; specifically, they express regularities in mathematically mediated observations. Because this is the case, it is unnecessary to explain how quarks contribute to the behavior of protons… it may even be impossible to do so within our current paradigm. Another way of saying this would be that quarks are not “real” in the platonic sense; they are a predictor for phenomena in a similar (but more accurate) manner to Aristotle’s teloi or the medieval nature spirits.

Similarly, a biologist will discuss species or evolution in an anthropomorphized or teleological manner, “racists” will discuss statistical trends across demographics in a collectivized way, and sociologists or politicians will speak of “humanity” and “society” as if they were tangible entities. There are nuanced distinctions between these examples and the physics example, as well as distinctions betwixt the examples themselves. The primary distinction is the specific relationship between the individual and the whole. Where quarks are a tool to describe regularities when looking smaller than the atom, species, races, societies, etc. are tools to describe regularities when looking at unmanageably large numbers of individual instances.

In both paradigms, one must be very aware of one’s ontology. A long-standing basic principle in establishing ontology is simplicity; something akin to Occam’s Razor. If one can effectively describe, explain, and predict the nature of, say, a falling object using a tool such as gravity, one need not and ought not look for a redundant, parallel explanation such as telos or “gravity spirits”. In the case of collective identifiers such as “species” or “society”, every significant behavior is explained by the behaviors of individual actors “within” the collective.

In other words, “society” or “species” are useful instruments for biologists or economists, but they are ontologically superfluous. If, someday, one can determine what “real” object correlates to quarks, quarks would also become ontologically superfluous. This claim yields two significant outcomes.

The first is one of historical and scientific significance: in the same manner that believers in river spirits or flat earth theory are (appropriately) ridiculed, if science is allowed to continue progressing, believers in “society” may be faced with similar reactions. Where virgin and child sacrifices used to be offered to spirits, modern-day sacrifices of comparable magnitude are offered to “society”. Such behaviors need to stop.

The second is one of philosophical and practical significance. Obviously, such a claim secures the case I made last week. That aside, one must critically assess one’s beliefs and rhetoric concerning “society”. For example, a materialist/scientism-ist/pragmatist is faced with a significant challenge. When faced with a choice between identifying the behaviors of material bodies behaving in deterministic ways (along with the emergent properties of those behaviors) and believing in a metaphysical (immaterial) entity which interacts with those material bodies, determining behaviors outside the laws of physics, these materialists most often opt for the metaphysical option. This is intellectually inconsistent and eminently damaging to the case for materialism.

Materialism aside, people at large seem to consistently believe that “society” possesses attributes contrary to the attributes of its constituent elements. I often argue against such a claim when it emerges in the context of voting and law enforcement. For example, if individuals lack the right to dictate the actions of others (forcing gays to act straight, forcing nuns to buy other people contraceptives, shooting people for driving the wrong car), how can they delegate that right (which doesn’t exist) to a representative, enforcer, or “society”?

The rhetoric concerning “society” oscillates between using “society” as a tool to accomplish personal goals (this is at the heart of electoral debates) and treating “society” as a force of nature to be mitigated and resisted (when one is on the receiving end of “society” used as a tool). One need look no further than the “anti-war” movements on the right and left, which are only “anti-war” when the opposing team is in charge of the war.

This accusation goes beyond “society” and applies categorically. “Race” is a useful instrument for identifying genetic similarities amongst individuals and statistically analyzing unmanageably large populations. However, “race” possesses the same ontology as “species” or “society”; it exists as an epistemic tool, nothing more. Even when dealing with teams, gangs, or communities (that is, associations of choice), one is merely dealing with individuals who may have common goals or proclivities. Such a community lacks ontology distinct from its constituent elements. If there are no individuals called “Crips”, there is no gang called “the Crips”; if there are no police, there is no gang called “the police”. Additionally, with the possible exception of the Borg (TNG only, Voyager kinda’ goofed it), one cannot interact with a collective, only with constituent elements of the collective. I will renounce my strong position on the non-existence of collectives if someone will allow me to speak to and shake hands with “society”.

This position, despite what you may think, does not disallow the existence of “communities”. With a very minor degree of re-definition, community can remain. If, by “community”, one means “a collection of strong and interconnected interpersonal relationships”, communities exist everywhere. One needs only be cautious not to assign metaphysical or moral properties to communities where doing so is not appropriate.

My more religious friends may appeal to panentheism or the Body of Christ/Communion of Saints as a counter-argument. This argument doesn’t actually reject either concept; instead, it opens the door for a discussion concerning the nature of such metaphysical concepts and their relationship to the material world. To begin this discussion, I will suggest that such concepts operate primarily as eschatological phenomena and secondarily as an ethical heuristic.

One final note, as I am out of time: this is why such issues as self-defense, the tragedy of enforcement, and the state of war are so morally involved on this blog. Even though the police are such by virtue of a voluntary association centered on the pursuit of criminal activity, I do not believe asymmetric warfare against the police as a whole is morally justified; defending oneself from instances of extortion, kidnapping, coercion, and murder with lethal force, however, is morally justified and ethically encouraged.

TL;DR: Last post, I was not claiming that one should merely behave as if collectives do not exist, but instead making the strong claim that they do not exist at all. Belief in collectives is ontologically and epistemically lazy, and such laziness prevents the epistemic rectitude required for ethical action. Increased intellectual rigor with regards to “society” is required if one wishes to improve one’s quality of life or the quality of life of others.


Anarchy: A Definition


I previously posted “Towards a definition of Anarchy” in an attempt to begin a conversation. Nearly a year later, I feel sufficiently equipped to push that conversation further.

In that previous post, I argued that anarchy is the rejection of institutions predicated on the crimes of coercion, theft, or murder. I explored the cultural and etymological roots of the term “anarchy” as well as the underlying philosophy, and presented a starting place for achieving a working definition of anarchy. That definition has served me fairly well in discussions on social media, in person, and on this blog. Over time, though, I have found it necessary to modify aspects of that definition and explore the necessary conclusions of that definition.

After a year of perpetual discussion about presumed first principles and their results, I believe I must explore the term from two angles: that of its linguistic uses and that of its philosophical importance. I, unfortunately, must explore its linguistic function first, as it will help clarify the philosophical definition.

Anarchy, as a word, can be used to describe a state of affairs. Typically, it is used as a pejorative when used in this manner, courtesy of your local propagandists. The state of affairs it references is one in which there is an absence of “archons”: individuals who claim the right to coerce or otherwise harm non-aggressors. The free -I’m sorry- “black” market is a prime example of one such circumstance, such as open-air markets in rural parts of the Empire and developing nations. In some rare cases, the pejorative use may be accurate; more often, though, it is a distinct lack of anarchy that is misidentified as such in the news and popular media.

Anarchy can also refer to the philosophy of anarchism or that of anarchists. This is nothing new, of course; I often refer to anarchy as “a philosophy of personal responsibility”. Many assert that anarchism is predicated on the non-aggression principle (the NAP) as its first and only principle. However, as I hope to explore soon, the NAP presents many challenges when taken on its own. In “New Logo” and “Is Property Theft”, I briefly explored the issue of voluntarism as a positive assertion from the NAP, primarily because the NAP is a negative moral claim and, even if the claim is true, the positive inverse statement of that claim is not necessarily true.
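To make that last logical point concrete, here is a sketch in first-order notation (the formalization and the symbols Agg and Perm are mine, offered only as an illustration, not anything from those posts):

```latex
% The NAP as a negative (proscriptive) claim:
\forall a\,\big(\mathrm{Agg}(a) \rightarrow \neg\mathrm{Perm}(a)\big)
% The voluntarist "positive inverse" of that claim:
\forall a\,\big(\neg\mathrm{Agg}(a) \rightarrow \mathrm{Perm}(a)\big)
```

The second formula is the inverse of the first, and, classically, the truth of a conditional leaves its inverse undetermined; this is why the NAP, taken alone, underdetermines voluntarism.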

Most of the issues arising from using the NAP as a solitary first principle stem from the fact that its conclusions are either voluntarism or some other conclusion informed by the anarchist’s other philosophical commitments, many of which result in impoverished or absurd worldviews. The fact that the NAP is a negative claim is what causes its dependence on other principles. This dependence is not an issue in itself; the issue is the theory-ladenness of the NAP’s terminology in every iteration. A prime example of this issue is when “libertarian” feminists start discussing male “micro-aggressions” and criminalizing the act of having a Y chromosome. As I’ve discussed before, if the NAP is to obtain, one’s response to aggression must allow for self-defense, up to and including the execution of lethal force. So, if “libertarian” feminists are to be consistent, they must embrace the perennial feminist slogan of “kill all men”. Somehow, this does not sound like a philosophy predicated on the non-instantiation of force (another way to say NAP).

If anarchy is to be predicated on negative claims, it must either be predicated on claims that are less susceptible to mixing with bad philosophies than the NAP, or be predicated on a mixture of negative and positive claims so as to form a complete worldview on its own. Let’s begin with the negative first principles which may be more reliable than the NAP, possibly even axiomatically grounding the NAP itself.

I previously argued that anarchy is the rejection of institutions predicated on crime. That particular claim would be an ethical one. While anarchism may be a moral philosophy, I have found that all moral philosophies must be predicated on some other basis for moral or ethical claims. In “An Economics of Ethics”, I implied that ethical claims are best rooted in ontology and physics while morality must be rooted in ontological claims.

Anarchism is best served, then, in basing itself not in the rejection of particular institutions but, instead, some ontological claim which results in such an ethical proscription. One such commitment would be disallowing collectives from one’s ontology. There is a series of fairly compelling arguments for the non-existence of collectives, but such cases will have to be made elsewhere, as this post is concerned with defining anarchy. For now, I will assert that anarchism is a philosophy which denies the existence of collectives, instead focusing on individuals and individual actions.

This focus on individual action can be informed by one of two suppositions: the existence of objective moral facts or nihilism. In the case of nihilism, I must inquire as to why one who finds no meaning or purpose in anything would be motivated to embrace anything more than nihilism; I do not expect a satisfying answer. Even so, the case of a nihilist anarchism does not preclude ethics (as defined in “Morality and Ethics”), as most nihilists that don’t just kill themselves tend more towards epicurean hedonism out of an interest in maximizing their own pleasure. In which case, anarchism’s minimum ethical framework may even seem a bit narrow to a nihilist. “Don’t shit where you eat”, the nihilist ethical maxim, requires a degree of virtue and future-mindedness, whereas the NAP is merely a prohibition against a narrow list of actions which could reasonably be considered crimes.

More reasonably, one could allow for the existence of objective moral facts. In another post or, perhaps, in a book I hope to self-publish at the end of this year, I will make an introductory argument for the existence of objective moral facts. Today, though, if we allow moral facts ontology, we can quickly come to see that objective moral facts can only be proscriptive: categorically disallowing certain behaviors for rationally self-interested individuals while not prescribing any particular actions. I’ve explored this discussion before in “The Dark Side”. As time goes on, I will expand that discussion into an argument in its own right.

I still refer to the NAP as shorthand for my own proscription against crime (coercion, theft, and murder), which could, technically, be considered an ethical proscription which obtains universally. This is due to an anarchist definition of “rights”: namely, a right rooted in the rejection of collectives’ existence and a focus on individual action. Such a definition could be “a delineation of behaviors against which one could justifiably defend oneself with any necessary degree of force.” It would, then, be reasonable to assert that provoking another’s right to self-defense is inadvisable under all circumstances.

What I am trying to express here is that the NAP (in whatever form) is a result of anarchist first principles, not a first principle in itself. It is certainly a useful rhetorical tool to appeal to the NAP straightaway, as “I think people shouldn’t murder each other” is usually common ground for people. However, if that is the extent of one’s education in anarchism, one will be prone to the mistakes explored earlier. Much like a man that becomes a Christian because “Jesus forgives you,” and leaves it at that, one will be prone to doing stupid things and giving the philosophy to which one claims to adhere a bad name.

Ultimately, anarchism is a moral philosophy, predicated on certain ontological claims and on an informed understanding of the way the world operates. Anarchism is the conclusion that individuals ought to behave in a manner consistent with personal responsibility and not attempt to place that responsibility on the shoulders of others without their permission. This is primarily a practical consideration, but it is fully complemented by some forms of deontological frameworks, so long as they do not violate the ontological or ethical claims of anarchism. This consideration, I think, secures anarchism from mixing with bad philosophies without requiring positive ontological claims.

I propose that a sufficient definition of anarchism (or anarchy, for simplicity) would be as follows: a philosophy predicated on the claim that collectives do not exist, only individuals; the claim that one is responsible for one’s actions, and will face the inevitable consequences of those actions, which results in the claim that one cannot justifiably commit crimes (coercion, theft, or murder) under any circumstances; and the claim that one can and should defend oneself from crimes as well. At first glance, this definition may not seem too similar to the popular conceptions of anarchy, but one can quickly conclude from these claims that governments do not exist, only people do, and that those who engage in government activities, such as taxation (theft), enforcement (coercion), and war (murder), are criminals and ought to be dealt with as such. In other words, anarchy dictates that one interact with ISIS, Ted Bundy, and one’s local government bureaucrats and enforcers in a consistent manner.

TL;DR: My original suggested definition of anarchy was a good start, but it certainly needs work. The 2015 model of “Mad Philosopher’s flavor of anarchism” is ultimately little more than an ontological commitment which, if consistently and logically applied, can (and frequently does) result in the rest of the assertions and arguments I have made on this blog over the course of the last year or so. Anarchy is a philosophy predicated on the claim that collectives do not exist, only individuals; the claim that one is responsible for one’s actions, and will face the inevitable consequences of those actions, which results in the claim that one cannot justifiably commit crimes (coercion, theft, or murder) under any circumstances; and the claim that one can and should defend oneself from crimes as well. In other words, one can do whatever one wants, but that doesn’t mean that it’s a good idea.


Paradigmatic Awareness

 Why can’t we all just get along? When it comes to discussion, why can’t we seem to understand what each other are saying?

As is outlined extensively in my yet-unfinished book, epistemology (how we know what we know) is a field of intense and voluminous study. I will do my utmost to remain concise and direct today, but we will see if I can manage to get my point across.

Among thinking people, there is a disturbing trend of missing each other’s points and progressively resorting to name-calling and physical altercation. Friendships end, wars erupt, libraries are burned… all over a misunderstanding as to whether Star Trek ToS is better or worse than J.J. Abrams’ reboot. This phenomenon is easy to see every four years in America, when just under half of the population suddenly erupts in closed-minded and aggressive rhetoric over which master we should be owned by and what behaviors we ought to compel with the violence of the state. For many people, this argument continues on a daily basis (Thanks, Obama).

Very, very rarely does one actually change one’s mind or realize that one was wrong. On the occasion that one does so, it is rarely a result of dialogue, but instead a result of a personal and concrete experience of one’s worldview and reality failing to comport. This sort of event is at the heart of every popular feel-good drama about a grouchy old person overcoming his racism. My purely subjective standard by which I choose to judge a philosopher’s ability to philosophize is their willingness and ability to change their mind and admit error by way of dialogue as opposed to concrete experience.

While very few people may be called to be a philosopher, everyone ought to be capable and willing to do philosophy, lest they be vulnerable to misanthropy, self-dehumanization, and falling for vicious and criminal ideologies. What is required in order to do philosophy? There is a multitude of tools required and yet another multitude of tools that are merely useful. The first two, the most fundamental and primary, of these tools are logic and paradigmatic awareness. Of course, one is a prerequisite for the other.

What is logic? Logic, contrary to popular belief, does not refer to “all of the not-emotional things that happen in my brain”. Logic is a science and an art as old as man’s pursuit of knowledge. As a science, the body of theories and research has been steadily growing through the generations. As an art, the technique and skill of those who wield it waxes and wanes with times and cultures. Logic is the place where language, reason, and objective observation meet. Logic, in its purest form, is the exploration of the principle of non-contradiction and its application to our experience of reality. The quest for knowledge requires a reliable and finely-tuned toolset. The study of logic, epistemology, and phenomenology has been directed towards the development of these tools since their inception.

Even though some high schools teach introductory classes on deductive symbolic logic and may touch on inductive reasoning, logic has been widely abandoned by our education system and, by extension, society at large. Without a working knowledge of and praxis concerning deduction, induction, abduction, and the interrelationship of the three, one cannot be expected to be consistent in one’s beliefs, claims, and behaviors. Unfortunately, a blogcast of this length and quality is insufficient to teach such a skill. Fortunately, there is a vast body of material available on the internet for those that wish to be rational.

A grossly oversimplified and brief introduction of the three is required, though, before I can address paradigmatic awareness. Deduction, then, is described as “arguing from the general to the specific”. A classic, if not entirely reliable, example is the famous “all men are mortal” syllogism.
“All men are mortal. Socrates is a man. ∴ Socrates is mortal.”
In this case, it assumes general premises such as “all men are mortal” and uses the principle of non-contradiction to reach the conclusion, “Socrates is mortal.” So long as the premises are factual and there is no error in the logic, the conclusion must be true.
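Rendered in standard first-order notation (the rendering and the predicate names are mine, offered only as an illustration), the syllogism is an instance of universal instantiation followed by modus ponens:

```latex
% The "all men are mortal" syllogism: premises above the line,
% conclusion below. Man and Mortal are my shorthand predicates.
\frac{\forall x\,\big(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\big)
      \qquad \mathrm{Man}(\mathrm{Socrates})}
     {\mathrm{Mortal}(\mathrm{Socrates})}
```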
Induction, in simple formulation, is arguing from specifics to the general. An example frequently addressed in modern philosophy is the claim, “the sun will rise tomorrow.” This claim is made based on the consistency of such an occurrence in the past as well as an absence of any predictors which indicate that such an occurrence would cease (for example, the sun vanishing would leave some pretty significant clues). Induction does not produce certainty in the same way that deduction may, but instead produces well-reasoned and reliable guesses which have a particular utility about them.

Abduction can be considered “making the strongest case”. If the circumstance arises such that a question presents itself which requires an answer and neither a deductive nor an inductive argument is possible, one can produce an answer which does not contradict accepted deductive and inductive claims and is, itself, self-consistent. Using tools such as observation, Occam’s razor, intuition, and a detailed understanding of one’s paradigm (we’ll address this in a minute), one can make a compelling case as to why one’s chosen belief is true.

This brings us to the interrelation of the three. Due to the certainty produced by valid deductive reasoning, one’s inductive claims cannot come into contradiction with deductive claims. If one is committed to a particular inductive claim which is found in contradiction with deductive claims, one must first demonstrate a flaw in the premises or logic of the existing deductive claim. The same priority is given to induction over abduction for the same reasons.

Of course, this description ignores the source of the general premises with which this whole process began. In all reality, premises are produced by abductive reasoning and ratified by the simple Popperian principle of trial and error. This means that, per Gödel, no complete philosophical worldview can prove itself to be factual from within. Only by comparing a worldview’s predictions and claims against one’s experience of reality, or by confirming the strength of the premises’ defense, can one ultimately justify any particular worldview.

This finally brings us to paradigmatic awareness. Those that have read this far, I salute you. Using a modified version of Thomas Kuhn’s definition of “paradigm”, a paradigm is the set of established or assumed claims which take priority before the claim in question, based on the rubric I briefly described when addressing logic. Why does something so simple-yet-esoteric matter? It may sound intuitive once described but, despite that, very few (if any) people truly possess paradigmatic awareness.

For instance, when faced with a claim one may find absurd, such as “We need to tax every transaction possible in order to pay for government guns,” it is possible that the (clearly incorrect) individual has a valid logical argument to reach that conclusion. More likely, they hold, either implicitly or explicitly, flawed premises from which they derived an absurd conclusion. There is really no point in discussing the conclusion itself so long as the premises are left unacknowledged and unaddressed. Communication simply isn’t possible without commonly accepted paradigms between communicants.

This is where the standard of being able to change one’s mind comes into play; in the process of exploring the premises held by someone else which resulted in an apparently absurd claim, three beneficial results may arise. In exploring the paradigm of someone else, you may bring to light counter-intuitive or implicit premises that your conversant may never have previously critically assessed. Additionally, it will give you the opportunity to cast doubt on another’s premises, allowing them the otherwise impossible moment of self-reflection. Lastly, of course, by holding a counter-factual presented by someone else, there is always a chance (however slim) that you may realize that you, yourself, are wrong.

Now, one cannot always explore others’ worldviews without expecting the same intellectual courtesy in return. By following the advice given above and explaining what you are doing along the way, you can effectively provide an education in communication skills and logic that far exceeds the meager offerings most people are exposed to. This will give them a greater chance to entertain your correct but unpopular claims like, “Taxation is theft.” Additionally, anyone unwilling to explore their own premises or yours is clearly not interested in intellectually honest dialogue directed at obtaining truth and, therefore, is not worth your time or energy; a handy resource-management tool, if you ask me.

So, why can’t we get along? Because no one is given the tools required to even consider getting along. Why can’t we understand what each other are saying? Because we don’t try hard enough. Remember, no unwilling student can learn, this includes yourself.

TL;DR: Listen to what people claim. Ask, “How did you reach that conclusion?” Make it a point to maintain an awareness of your opponent’s paradigm. Genuinely search for the truth in their words. Expect and demand that they reciprocate the effort, lest you waste both parties’ time and energy.
As I said on Facebook the other day (while re-realizing some flaws in the AnCap worldview):
I love being a philosopher. My worldview is constantly shifting and undulating… but always gradually comporting itself more closely to reality. Where fleeting moments of intuition can, decades later, be given meaning and purpose and carefully constructed arguments and justifications can crumble, there is where humility and virtue can grow. The fires of truth and the crucible of reason can lay bare natural and artificial landscapes of mind alike, and enrich the soil for new growth and the return of the most robust ideas to carry on their existence.