The Great Depression

 

1DEbbfqTkb5M6n1ZxDnLf2kqUw3HpdCQB7

Sorry for the raw-sounding audio; Audacity detected the wrong microphone and I didn’t catch it until it was too late.

  1. I recommend that you listen to the last podcast episode, about the Weimar Republic. While I’m not super happy with the finished product, it will serve as a good introduction to the thought processes we will be running through in this episode, and I probably won’t be able to resist taking some things from that episode for granted in this one.
  2. There are a number of “mainstream” explanations as to what caused the Great Depression and what prolonged it. Of course, those are two wildly distinct questions; I will be focusing on the causes of the G.D. and, if I have time, I can touch on reasons it lasted so long.
    1. History: The Great Depression, generally conceived, was basically the entire 1930s. It started with a stock market crash near the end of 1929 and ended with a general rise in employment and quality of life around 1941.
    2. Many historians will claim that the Great Depression lasted only from 1929 to 1933, but the economy was in the toilet for the entire ’30s, with a brief uptick due to speculation on increased gold supplies in the mid-’30s.
      1. Many Keynesians credit the wartime spending of WWII for ending the Depression era. That’s just as incorrect as blaming deflation for the Weimar Republic’s collapse or the Great Depression.  Hopefully we’ll have time to address that at the end.
    3. Like I pointed out last episode, people act with incomplete data, due to delays in market signals as well as simply not being omniscient.
      1. Mises’ master builder: when the builder starts running low on supplies, the earlier he knows, the better he can adjust the plan.
    4. The business cycle is due to credit expansion sending false market signals until there is a shortage and a correction.
      1. One such signal is the availability of capital assets and consumable goods.
        1. The primary mechanism by which loans are issued (specifically, the incentive to offer a loan and the signal that loans are available for investors) is the interest rate. Interest rates are, effectively, the price of money over time.
        2. If I want money now and you have money that you don’t need right now, you can give me said money so long as I promise to give that money back within a certain time frame, plus interest.
          1. This exchange is beneficial, as I can take the money that is otherwise stagnant and invest it in a way that it otherwise would not have been, thus increasing the odds of it generating further wealth. You, in turn, are able to put the money to work for yourself by increasing its quantity over time, based on your level of risk aversion and distance of your time horizons.
          2. The interest rate is usually determined by two factors.
            1. The primary factor is the supply of money available for loans as compared to the demand for loans: basic supply and demand curves. The more people want money, the higher the interest rate climbs until the market-clearing price is discovered: those that can’t afford higher interest rates drop out until the quantity of loans requested equals the money available to lend.  These funds are usually supplied by savings accounts, certificates of deposit, and other forms of “long term” storage of money.
            2. Secondarily, the interest rate can be skewed higher or lower based on individual risk factors. If you have a good or bad reputation for paying off debts, your specific interest rate could be lower or higher, respectively.  Collateral for the loan is also a factor; mortgages can have a lower interest rate, because the loan issuer can always just take the house back.  Car loans are a little less secure, as a car can drive into the sunset, never to be seen again, so they carry a higher interest rate.  Credit cards and student loans merely have abstractions such as credit scores as collateral and therefore ought to have some of the highest interest rates, second only to payday loan outfits.
  • Because the interest rate is tied to various factors such as supply and demand and risk, it serves as a market signal to potential savers/loan issuers/loan-takers with regards to the current economic climate.
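To make that concrete, here is a minimal toy sketch in Python of a loanable-funds market finding its clearing rate. Every number and curve in it is invented for illustration; it isn’t a model of any real market, just the supply-meets-demand logic described above.

def funds_saved(rate):
    # Savers offer more funds as the rate rises (upward-sloping supply); toy curve.
    return 1000 * rate

def funds_demanded(rate):
    # Borrowers want fewer loans as the rate rises (downward-sloping demand); toy curve.
    return max(0.0, 120 - 1000 * rate)

def clearing_rate(low=0.0, high=0.20, steps=10000):
    # Walk the rate upward until supply meets demand; the crossing point
    # is the market-clearing interest rate for these made-up curves.
    for i in range(steps + 1):
        rate = low + (high - low) * i / steps
        if funds_saved(rate) >= funds_demanded(rate):
            return rate
    return high

print(f"toy clearing rate: {clearing_rate():.2%}")  # these toy curves cross at 6%

Individual risk premiums (reputation, collateral, and so on) would then nudge a particular borrower’s rate above or below that baseline.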
  1. In the parable of the master builder, the apparent supply of bricks correlates to the signals sent by interest rates.
    1. In the framework I outlined just now, an investor can estimate, fairly accurately, the trajectory of various economic factors and thus be better able to steer his investments to success (which is represented by wealth creation).
  2. If, however, there is a central authority on the creation and management of money, propped up and secured by a violent imposition of legal tender status, interest rates begin to lose their utility.
    1. If money can be issued in the form of a loan without being backed by some actual creation of wealth, the interest rate will naturally fall.
      1. Last episode, I mentioned how coinage used to be made from materials that had reliably stable supply curves, such as gold or silver. The only way more coins could be made was by actually acquiring the requisite materials (barring counterfeiting).
        1. As a side note: this reality is essentially what led to the discovery of inflation; with the sudden influx of gold and silver into Europe from the Americas, the otherwise reliable supply curve was upset and the purchasing power of most currencies dropped. Kings were very confused at how getting more money made things more expensive, much like leftists today.
      2. This reliable supply curve helps establish interest rates because one can calculate how much money will be created during the lifespan of the loan and the degree of wealth that will be created over that same time span, which will determine the real price of the money (its purchasing power) once the principal and interest are paid (see the small worked sketch below).
      3. If there is no real competitor for the monopoly money of the central bank and there is no significant physical limit to the printing of more money, the effective money supply is theoretically infinite: you can always just print more. To encourage a greater quantity of loans, the central bank can simply set the interest rate lower by fiat or decree, rather than necessarily inflating the money supply directly.
        1. Either way, the end result remains the same: the quantity of money available increases significantly relative to the availability of wealth: the brickyard says it has more bricks than it really does. Eventually, there will be no more bricks, regardless of how many were promised.  The bubble bursts.  The decade-long party comes to a crashing halt.  There’s a run on the banks to pull as many bricks out of the brickyard as possible before everyone else who was promised bricks gets them all.
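Here is the worked sketch promised above: a few lines of Python putting rough numbers on why the saver’s real return depends on how much money is created over the life of the loan relative to the wealth behind it. All of the figures are invented for illustration, and the price-level proxy is deliberately crude (it ignores Cantillon lags, shifts in the demand for money, and so on).

principal = 1000.0       # amount lent today (hypothetical)
nominal_rate = 0.05      # 5% interest agreed for one year
money_growth = 0.10      # money supply grows 10% over the year (assumed)
wealth_growth = 0.02     # real goods and services grow only 2% (assumed)

repayment = principal * (1 + nominal_rate)

# Crude proxy: prices rise roughly in proportion to money growth relative
# to the growth of real wealth.
price_level_change = (1 + money_growth) / (1 + wealth_growth)

real_repayment = repayment / price_level_change
real_return = real_repayment / principal - 1

print(f"nominal repayment: {repayment:.2f}")    # 1050.00
print(f"real repayment:    {real_repayment:.2f}")
print(f"real return:       {real_return:.2%}")  # roughly -2.6%

The stated rate says +5%; the bricks actually delivered come out to roughly -2.6%. That gap between the signal and the reality is the false signal described above.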
      4. Many mainstream economists and public-school-educated plebs insist that the cause of the Great Depression was the contraction of the credit supply at the end of the Roaring ’20s.
        1. A second parable: A certain public figure gets addicted to methamphetamine.  The addiction, of course, begins as a result of the increased focus and energy the drug provides.  As intake of the drug increases, the body needs to begin expending stored resources to keep up with the metabolic strain of the drug, which, counter-intuitively, results in a chemical dependency on the drug to keep running: the hallmark of psychological and chemical addiction.
        2. Eventually, friends, family, and fans of the public figure get worried and stage an intervention: cutting off the supply of the drug. This makes the addicted public figure furious.  They feel great.  They have energy, they are focused, they are invincible.  But any outside observer would note that they are emaciated, covered in sores, irritable, and exclusively focused on meth, rather than anything productive.
        3. When the friends and fans cut off the meth supply and the addicted celebrity becomes increasingly uncomfortable and angry at his situation, what do you think he’ll blame his situation on? Will it be himself and the choices he made early on (probably because he was unhappy to begin with) which caused the addiction, or the people who cut off his supply of drugs (which made him feel awesome)?
      5. Public figures in the ’30s, and pretty much ever since, have responded to being cut off from their credit addiction the same way any other addict does; they blame the absence of the drug, rather than the drug itself, for their shitty situation.
        1. You will even hear friends of ours in the Chicago School parroting these sentiments, despite the prima facie fact that the initial and protracted artificial credit expansion is what led to the inevitable implosion of the market.
      6. Why does this matter, though? Wasn’t the Depression, like, a century ago?  Didn’t the federal reserve learn its lesson?  Otherwise, why haven’t there been any depressions since then?
        1. Well, if you’re aware that all the mainstream economists and politicians did was change the definition of “depression” and effectively replace it with the term “recession”, then you’d have to account for 1945, 1949, 1953, 1958, 1960, 1969, 1973, 1980, 1990, 2000, 2008…
        2. Admittedly, only one or two of them would be able to compete with the depression of the 30s, but we’re finally coming out of a decade-long depression as a result of credit expansion and collapse in real estate and ancillary markets, so it’s fairly relevant today.
          1. The most recent issue of “The Austrian”, a publication of the Mises Institute, gets deep into the parallels and differences between our current situation and that of the progressive era. I highly recommend getting a subscription to “The Austrian” as it always has relevant material derived from history and praxeology.
        3. “We” clearly haven’t learned our lesson from the Great Depression, as the federal reserve and the federal government (same thing) implemented essentially the same strategy they used back in the 30s, with the same outcome.
      7. The strategies implemented in the ’30s were centered on manipulating market signals further, in order to induce consumers and investors to behave as if the economy were running smoothly, as it had prior to the creation of the federal reserve (pay no attention to the man behind the curtain).
        1. For example, FDR tried to artificially elevate prices by literally destroying supplies of necessary goods like food, causing shortages, which would elevate prices. Now all the unemployed and under-employed could take their meager assets and buy far less food than before.  Great idea.
        2. He also artificially raised the cost of labor by using the violence of the state to enforce minimum wages and increasingly untenable workplace regulations, making it too expensive for beleaguered entrepreneurs to hire the mass of unemployed who would be happy to work for even a nickel an hour, thereby worsening unemployment rather than solving it.
        3. Stealing all the gold by executive order was another brilliant idea. It was basically an expansion of the existing legal tender structure, forcing people to exclusively use federal reserve notes as currency as opposed to more secure and reliable assets, thereby further reducing the available wealth of the American people, especially with artificially elevated prices in effect.  It’s almost as if FDR wanted everyone to starve.  Fortunately, today, no one can confiscate Bitcoin absent the rubber hose method.
        4. And, of course, hardcore propaganda echoed across the country so loudly that wrong-think didn’t need to be punished by legal means; social stigma and even private-sector violence were promptly unleashed on anyone who would dare to question the new socialist order. To this day, that propaganda can be heard echoing through the halls of academia and when random “financial experts” catch a whiff of the sweet perfume of industry on social media, you can be sure that someone will be compared to Hitler for questioning FDR.
        5. Unfortunately for everyone involved in the American ’30s, or any other socialist era, no amount of coercion and destruction of wealth will solve the economic calculation problem presented by violent centralization of market action.
      8. Which brings me to my final point: Herbert Hoover was far from the laissez-faire, Ayn Rand-style anarcho-capitalist FDR’s propagandists have made him out to be. A better way of characterizing Hoover would be to say that he’s the Vladimir Lenin to FDR’s Joseph Stalin.  That is to say, they tried the same thing, but FDR definitely got the high score.  The entirety of the New Deal was simply taking the knobs on all of Hoover’s attempts to salvage the economy and turning them up to 11.
      9. Also, deflation had even less to do with the Great Depression than it did the Weimar Republic. Sometimes people will point to certain market shifts during the credit contraction and go “See? Deflation.” Even if it were deflation, it was minor, isolated, and a symptom rather than a cause of the Depression; calling it deflation is disingenuous, though, as those isolated incidents were merely the result of increased assets becoming available on the market as people tried to unload luxuries in favor of necessities, flooding the luxury markets with goods and causing prices to fall.
      10. I also highly recommend getting a subscription to Tom Woods’ Liberty Classroom (using my link), reading Rothbard’s “America’s Great Depression”, and listening to the Mises Weekends podcast feed. I don’t think I’ve said anything original in these 20-or-so minutes; I’ve just amalgamated a bunch of material from these and other sources over the last decade or so.  I may have made some mistakes or left some important details out, so it’s always best to go to the primary sources if you’re interested.

Carpe Veritas

Weimar Germany and Deflation

 

  1. During WWI, all major players not beholden to a commodity standard (such as gold or silver) inflated their money supplies.
    1. What I mean by “inflating the money supply” may be a little bit different than what one would learn in a basic economics class. What I mean is that the total quantity of money put in circulation increased relative to the available wealth within that nation’s economy.
      1. This is usually only possible if the money in question is not physically made of a commodity that is limited in availability (such as fiat currency not tied to gold or silver reserves) or if there is an unexpected windfall of the limited commodity (such as the sudden importation of gold and silver from the Americas into Europe).
    2. Typically, this inflation is felt by consumers in short order, as the supply of money increases relative to its demand (as the available goods remain constant). This is represented by decreased purchasing power a.k.a. a general rise in prices.
      1. In the case of the Weimar Republic, this rise in consumer prices was delayed by various forces.
        1. In the mainstream narrative, this was due to people hoarding their money in mattresses and hidey-holes due to the economic uncertainty of war. By not allowing the money to enter circulation, the citizens of the Republic delayed the Cantillon effect.  After the war ended, people had more confidence in the economy and began spending the money they had hoarded, causing the Cantillon effect that should have been felt gradually over the course of the war to be felt all at once.
          1. The Cantillon effect is the process whereby an individual coming into possession of counterfeit or newly inflated (same thing) currency can more easily outbid others, raising the market-clearing price of a particular good and securing that good more easily than others in the market. This windfall on the part of the seller of said good then contributes to his ability to raise the market-clearing price of a good he desires.
            For example: if the Reichsbank just printed me a hundred thousand marks, I could either more easily outbid my competition for purchasing a vehicle I desire or I could buy the next nicest model without having contributed any actual wealth to the market.  This is free money for me and it is also effectively free money for the car dealer.  He, in turn, can purchase a house he wants for a higher price than his competition or buy a nicer house than he could have originally afforded, and this process continues throughout the economy.
          2. This process can be drowned out by noise in the market if it is just one counterfeiter, but when an entire nation takes advantage of that free money, there’s no way the market can absorb it without a general rise in prices.
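To illustrate the mechanics of that chain, here is a deliberately crude Python sketch. Every name, price, and percentage in it is made up; the point is only to show new money ratcheting prices upward link by link as it is spent.

# Toy Cantillon chain: hypothetical sellers and invented numbers.
chain = ["car dealer", "home seller", "builder", "grocer"]
prices = {"car dealer": 20_000.0, "home seller": 200_000.0,
          "builder": 50_000.0, "grocer": 500.0}
bid_premium = 0.05  # assume each early receiver can outbid rivals by ~5%

holder = "first spender (received the newly printed marks)"
for seller in chain:
    old_price = prices[seller]
    prices[seller] = old_price * (1 + bid_premium)  # price gets bid up
    print(f"{holder} outbids others: {seller}'s price "
          f"{old_price:,.0f} -> {prices[seller]:,.0f}")
    holder = seller  # the windfall passes to the next link in the chain

Those late in the chain face the higher prices before any of the new money ever reaches them; that lag between who gets the new money first and who only sees the higher prices is the delay this outline keeps returning to.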
        2. After the initial market correction to the elevated prices, uncertainty concerning the value of the Mark over time diminished and the economy was allowed to recover and adjust to the new normal. This new normal featured relatively stable purchasing power for the Mark, despite the Reichsbank continuing to print money.
          1. A situation not unlike the current situation in the United States emerged, though. Through government payments to foreign nations (in Weimar, it was due to the absurd demands set in the treaty of Versailles, in our situation it has a lot more to do with foreign aid and proxy wars) as well as a severe imbalance between importation and exportation, the Cantillon effect was, again, delayed by moving Marks out of the country at a rate comparable to the rate they were being printed.  This made the Mark weak as compared to other currencies in circulation, but one would not notice if their financial concerns were purely domestic.
            1. This loss of purchasing power (like that of the USD today) was masked by the coincidental inflation of foreign currencies as well.
          2. As more countries dropped the Mark as a preferred currency, more and more marks moved back into the Weimar Republic, causing that delayed Cantillon effect to be felt all at once, again.
            1. The mainstream account tends to overlook the reality that the mechanism by which this flood of returning currency occurred was by using the Marks, which were created out of thin air without commensurate increases in tangible wealth within the Republic, to purchase the goods which remained in the Republic. This drastically decreased the available capital assets of the Republic while also drastically increasing the available money supply relative to those assets.
            2. One thing that the mainstream account does acknowledge is that the rising prices over time provided consumers with incentives to spend quickly, which, in economic terms, means that the demand for said money was drastically reduced, further compounding the effects of inflation.
              1. To take it one step further: at this point, according to Mises’ regression theorem, the Mark was totally unsuitable for use as a currency, which should be no surprise given the nature of fiat currency in general, but the Western World has a hard time learning from its history.
            3. Ultimately, once the Mark was inflated to the point that it was more expensive to hold and transport the Marks one owned than the Marks were worth, the economy was dead on arrival. This is where a populist movement centered around a fiscally literate charismatic demagogue will come to power, declare the Mark to be of no consequence, default on international debt, install a more secure monetary policy, and unleash military force on anyone who would try to stop them from doing so.
            4. But I digress… As far as deflation is concerned, it had very little impact on the general trajectory of interbellum Germany.  While the mainstream blames deflation for the sudden burst of post-war inflation, the cause of post-war inflation was the presses running amok during WWI.  The deflationary forces of scared consumers, at most, simply delayed the effect of wartime inflation.  More likely causes of the delayed effect than scared consumers, though, are the mechanisms by which the inflated money supply was confined to military budgets and paychecks that wouldn’t be spent until the war wound down and the soldiers went home, and the manner in which the inflated marks were exported to foreign producers of arms and consumer goods, much like the temporary stability experienced in the Weimar Republic.
              1. This delayed economic impact is actually more common than a lot of people may think. Because economic signals are the results of human actions and human actions are the result of people responding to market signals, there is always a degree of delay between when an event takes place and when one reacts to the event… rinse and repeat.
              2. Also, if one is able to keep certain market signals secret, as in the housing collapse of 2008, the delay of a couple days or even months can cause people to behave in ways that they would not, if they had all the information they needed to make a sound investment. In the case of the housing bubble, the banks were able to feign loan viability and even solvency for the entire bank long enough to get political processes started for pending bailouts.  In the case of apparently stable prices during WWI and prior to the hyperinflation as the Mark collapsed, the sheer quantity of money being printed and exported was being largely ignored in the media at the time.  For comparison, look at the reported national debt of the United States today as compared to the “unfunded liabilities” of the same.
            5. A lot of these events and theories parallel the next podcast episode that has been commissioned concerning the Great Depression in America.
              1. If you like this episode, please send Bitcoin (BTC) to the address in the show notes: [insert address]
              2. If you’d like an episode to be produced on commission or would like consulting or tutoring services, feel free to contact me on social media or at madphilosopher@gmx.com
            6. Carpe Veritas

Costco and a milkshake recipe

The other day at Costco…

Such a cliche start to a blog post from a stay at home mom with multiple children who semi-home-schools and sends the oldest to a charter school who makes her own Kombucha and just nursed the family’s most recent female addition (yes, we only have girls) to sleep and can ramble on just as long as Mad Philosopher with smaller words…

That was actually the start to Mad Philosopher’s recent post, but it suited me as well.  In the same vein as “visit Costco and buying their giant boxes of baby diapers that never seem to last as long as you want and giant tasty muffins that a Hashimoto’s girl like me can no longer enjoy”, I figured I would throw my run-on-sentence stream-of-consciousness into the giant void that is the internet.

“I have gadgets and gizmos a plenty…I have ideas and recipes galore…want a gluten free dessert- I have 20! But who cares…no big deal…”

Okay, kinda’ hope you care enough to keep reading. And to not tattle on me to the Mad Philosopher that I just quoted the “Little Mermaid.”  I’m burying the lead, here, but hold out just a little longer…

Mad Philosopher counts how many words he writes each and every post. I don’t think I will. Radical freedom!  In all seriousness, I am trying to keep my post short and fun and maybe even helpful.  As a thank-you for any who made it this far in my first not-gif post, here is a super easy and delicious Bailey’s milkshake recipe to sip on later and a promise of a funny gif post soon.

 

Bailey’s Shake:

2 shots Bailey’s

Roughly 1 1/2 cups ice cream (chocolate is recommended)

Optional (but not really): whipped cream.  Also, sprinkles.

Put the Bailey’s in first, then add the remaining ingredients to the blender, blend, and enjoy! Add more or less ice cream depending on desired thickness.

 

Artificial Intelligence

The other day at Costco, a guy behind the counter read my Nietzsche t-shirt and pronounced his name properly.  Pretty much two groups of people can do so: people who really enjoyed that one college philosophy class and continental philosophy professors.  Given that he was working at Costco, I correctly deduced that he was not a college professor.  Our discussion of philosophy, itself, petered out fairly quickly, but we switched gears to discussing Artificial Intelligence, and I thought it would be fun to do a data-dump regarding the problems I see with discussing Artificial Intelligence.


Also, an update on the Trolling at the LFL project: https://www.facebook.com/UnApologeticPhilosophy/posts/1705777666122442

Wizardly Wisdom Guest Spot #3

Howdy.  I promise I’m still around and working on content.  I just also happen to be doing a lot of client work on the side and helping my family as my grandfather passes away.  Also, did you guys catch the shenanigans at the Unite the Right rally?  Exciting times, to be sure.

Anyway, here’s my recent guest spot on the Wizardly Wisdom Podcast:

Carpe Veritas

Wizardly Wisdom Guest Spot #2!

Hello all,

Here’s another bit of audio-only content.  I did another guest spot on Wizardly Wisdom Podcast.  The first one was a blast, but this one is about 20% more awesome.  We spoke about the philosophical underpinnings of the libertarian movement, some historical context for different positions people hold to be “the libertarian position”, and why discourse about this discourse is important.

You’ll have to forgive my rough audio; we had some technical difficulties, but I think the content more than makes up for a little echo and click.

 

Cryptocurrency for Catholics

Here’s another impromptu conversation post with a new friend of mine from Facebook.  We talk about the fundamentals of cryptocurrencies, currency in general, certain economic issues related to cryptocurrency and then the Catholic Church’s relationship to cryptocurrencies and possible options for it to navigate the current political and economic climate.  All the really meaty material starts at the 13:10 mark.

Regulations

This is another audio-only post.  Given how out-of-control my life has been with work, my side gig as an Anarchist Consultant, and the birth of my fourth beautiful daughter (and fourth total daughter), I’ve been less able to write things down than I’d like.  So, for now, I’m going out of my way to produce more short-form audio recordings on relevant subjects and release those more frequently.

This is the first such post.

Carpe Veritas,
Mad Philosopher

The Role of Philosophy in Daily Life

One might read the previous chapter and question whether philosophy is more than esoteric navel-gazing.  Admittedly, I didn’t do a very good job of presenting it in a manner that would appeal to “Plumber Joe”.  Why should one concern oneself with trying to figure out all the little details about how the universe operates and why?  Shouldn’t it be sufficient to figure out how these more concrete tools at my disposal can contribute to my quality of life?  I can make more money, get better employee benefits, and have more self-satisfaction if I simply tend my garden[1] and work on much more real things.  Besides: lifting weights, buying cars, and playing guitar are easier activities than questioning fundamental assumptions about reality and considerably increase my value in the sexual market by comparison.

I, myself, feed my growing family by way of more practical considerations than discussing the specific ontological status of contracts.  I’m a facilities manager by trade and a philosopher by vocation.  Given that practical considerations generally have more market value than philosophical ones, why would one choose to engage philosophy?  There are a number of answers that, cumulatively, make a compelling case for such activity.  For now, I will focus on the more practical aspects and save the more psychological and ephemeral ones for later in this book.

One of the key aspects of the philosophical exercise is epistemology.  What epistemology effectively boils down to is the study of knowledge: what it means to know something and by what mechanism one comes to know something.  At first, it may seem like a dumb line of inquiry.  One knows something if they believe something and it happens to be true; they know these things because experience leads them to believe such things with accuracy.

As anyone who has had experience with mind-altering substances, mental illness, or living with a pathological liar will attest, sometimes knowing things isn’t as easy as people initially think.  This has been the case throughout history, as well.  If I see an omen or an angel comes down and tells me something will happen at an appointed time, could that belief rightly be called knowledge?  What if an authority figure tells me something?  Hell, even my senses are suspect; how many times has someone looked at an object and misjudged its size or distance, witnessed a mirage, heard or felt something that didn’t correspond to anyone else’s experience, or fallen for any number of other illusions?

Descartes[2] wondered if he was the only mind in existence and whether there might be a spirit of some sort causing him to have a vision of all the other phenomena he experienced.  This line of reasoning is called solipsism[3].  This solipsistic reasoning has been extended to “Matrix”-like brain-in-a-vat thought experiments and universe-simulation theories.  One doesn’t need to get as involved as Descartes, though; a quick trip on drugs or a bout of mental instability will give one sufficient experience of “seeing things that aren’t really there” to begin doubting one’s senses.

Epistemic problems don’t even need to be that far-reaching, either.  For example, inexplicably, there are a growing number of people that believe the Earth is flat, that crystals have magic healing powers, that children should be encouraged to undergo irreversible unhealthy and life-altering plastic surgery, and so many more absurdities.  Just yesterday, I was led to believe that I had to be somewhere at a certain time… and both the time and location were incorrect.

Understanding the nature of knowledge in a deeper and more reflective manner has, however, been quite useful in preventing situations such as the one that occurred yesterday.  For example, exploring common occurrences of human fallibility in theory helps to identify instances in reality and navigate people through them.  When attempting to coordinate multiple contractors, administrators, and customers, heightened awareness of epistemic difficulties and solutions has been invaluable.

Something related to epistemology and equal in utility is the study of ontology.  Ontology is the study of existence, things that exist, and the manner in which they exist.  Again, this may seem to be as obviously superfluous as epistemology at first, but one could just as easily be surprised.  The earlier epistemic examples of “experiencing things that aren’t really there” apply to ontology as well, of course.  But what if I told you that a great many things we take for granted as existants[4] are of dubious ontological status?

There are the obvious things like God, space aliens, astrological energies, political authority, true love… and some less obvious things like consciousness, free will, fundamental particles, or that fortune that Nigerian prince still owes you[5].  One can’t be certain of the existence (or non-existence) of these things if one doesn’t have a firm grasp on one’s methods of knowing things but, even then, it can be difficult to prove or disprove the existence of such things.

This is where the bottom-up approach of philosophy I mentioned in the previous chapter becomes pertinent.  If one can secure knowledge of or, at least, confidence in the existence of some things, it becomes easier to bring other things into that sphere of knowledge by way of understanding the relationships between the two.  Since Descartes’s famous cogito[6], philosophers have largely attempted to prove their own existence or the existence of the phenomena experienced by themselves and used that as a starting place by which to prove the existence of the other furniture of the world that we all take for granted.

I’m sure that this doesn’t seem practical just yet.  “I know I’m hungry because I feel hungry and I know that this bacon cheeseburger I’m about to eat is real because I can see, smell, touch, and taste it.”  Fair enough.  But what if there is a God and he hates people who eat cheeseburgers?  Alternatively, what if that meat isn’t real meat but is some science experiment grown in a vat and happens to be riddled with prions[7]?  Knowing either of those circumstances may give one sufficient reason to modify one’s behavior.

The same goes for whether or not the cow and pig that were, ostensibly, butchered to produce one’s meal possess consciousness and are capable of experiencing meaningful mental events.  If one were convinced that were the case, one would likely become a vegetarian, posthaste.  Otherwise, why wouldn’t one eat baby-burgers with dolphin sauce?

That took a dark turn, but the question still stands.  There is a great deal of human suffering that one can witness, assuming one believes that other humans exist and are capable of mental faculties comparable to one’s own.  A good portion of this suffering is, directly or indirectly, a result of epistemic or ontological mistakes made by either those that are suffering or by others who have those unfortunate individuals within their sphere of influence.

This is why ethics is the oldest and most-engaged field of study throughout the history of philosophy.  The pre-Socratics[8] were primarily concerned with “how does one live the good life” and secondarily concerned with “how does the world work?”  Socrates, Plato, and Aristotle had similar priorities.  Medieval thinkers in Europe and the Middle East alike were also primarily concerned with “How does one be holy?” and secondarily concerned with “How does God work?”.  Enlightenment-era and modern thinkers have been primarily concerned with “what is justice?” and secondarily concerned with political institutions such as monarchy and various forms of socialism (such as democracy, republicanism, communism, etc.).  Only recently has postmodernism shifted the focus from “how does one live the good life?” to “how can we best undermine all of the institutions which were built by Europeans of bygone eras?” with living the good life becoming a secondary philosophical pursuit.

Of course, one can’t know how one ought to act without first knowing at least a little bit about the world one is trying to navigate, hence my initial focus on epistemology and ontology.  For example, one cannot determine that one ought to act to minimize the suffering of others if one does not first establish that there are others who can suffer and that suffering is undesirable.  The same dilemma applies when determining that one ought to live by the prescriptions of a book written thousands of years ago or refraining from eating a delicious and juicy steak.

A quick survey of ethical theories will present so many varieties of premises and conclusions that one is liable to despair at the outset of such an investigation.  Do not worry; I hope that, by the end of this book, you will have a firm enough grasp of philosophical methodology and (possibly) the reality of the matter which philosophy engages that you will be well on your way to making sense of ethics.

For now, I think it should suffice to say that ethics is the most practically applicable area of philosophy because its primary focus is influencing how one acts.  Ethics takes into account the various circumstances an actor finds himself in and applies a rubric by which he can or should act.  As the ancient Greeks phrased it, the problem is “how does one live the good life?”  Such an inquiry is obviously directed at happiness and, hey, who doesn’t like being genuinely happy?

Admittedly, this rubric must take into account objective facts about the world, such as what things exist and in what manner as well as subjective matters such as the objective of the individual actor, and that process is where things get hairy.  The methodology one uses to sort through the furniture of the world and the subjective goals of the individual actor is the source of the plethora of divergent ethical theories[9].

Ultimately, this introduction to the basics of philosophy is directed at establishing in your mind the plausibility of philosophy having practical utility in daily life.  I do not know you, the reader, personally but I am confident that it is a rare exception to find an individual completely lacking in ethical awareness.  How often does one encounter phrases like “that’s just wrong,” “people should just,” “such-and-such are as bad as Hitler,” “you really should go vegan/to church/vote/to college” or other variations of statements directed at modifying or justifying one’s behavior?  Whether those claims relate to a consistent and expansive network of ethical calculations and value judgements or not, those are ethical frameworks in action.

Even if one isn’t aware of the genealogy of those ethical compunctions, I can guarantee that they are derived from some philosophical work or another.  It is important to be aware of that genealogy, though; without the ability to critically examine the consistency of ethical claims one can fall victim to con artists and well-meaning do-gooders alike.  How many political campaigns have stemmed from undeserved patriotism or lies generating outrage?  How many people donate money to charities that simply show a sad image and ask for money, only to line the pockets of fraudsters?  Philosophy can help prevent such things.

[1] This is a barely-veiled allusion to “Candide” by Voltaire.  It’s an exceptional work of scathing philosophical satire.  It’s not as much fun if one hasn’t familiarized oneself with Leibniz’s optimism.

[2] Rene Descartes: French philosopher from the turn of the 17th century; began a series of inquiries in modern philosophy named “Cartesian” which center on mind-body dualism and problems of knowledge.

[3] Solipsism: The belief that one’s self is the only thing that can be known to exist as such.

[4] Existants (n): Things that exist.

[5] If you don’t get the reference, just look up “Nigerian Prince scam” on the internet.

[6] “Cogito ergo sum.” translated as “I think, therefore I am.”

[7] A prion is a unique disease vector wherein misfolded proteins induce other proteins in a host organism to misfold, propagating through the host much like an infection.

[8] Pre-Socratics (n): The philosophers who lived in the Mediterranean region before the time of Socrates (the end of the 5th century BC).

[9] This dilemma is made strikingly clear by the observation of David Hume in “A Treatise of Human Nature” wherein he indicates that moral obligation is a concept of a different category than facts about the world.  This is commonly called the is-ought divide.  I will address this particular issue in the chapter on human action.

The Nature of Philosophy

As is the case with most cultural pursuits which hearken back into the dark recesses of history, philosophy has no universally agreed-upon definition.  Even in academic circles, the definitions of the enterprise called “philosophy” are likely to be as numerous as the philosophy department chairs one asks.  This is a phenomenon[1] that vexes many analytic-minded[2] philosophers, given their obsession with necessary and sufficient conditions[3].

While I write and think very much like an analytic, I do not feel that it should be absolutely crucial to assign a definition to philosophy which outlines necessary and sufficient conditions.  At the same time, however, I am not inclined to do as postmodern[4] and continental[5] thinkers tend to do and simply hand-wave the issue, saying “it’s a family of activities that generally resemble each other”.  The only remaining option, then, is to make an attempt at crafting a heuristic[6] for identifying philosophical activities as opposed to any other activities within the scope of human intellectual experience.

Looking at the historical context of philosophy, one may get a feel for the “family resemblance” of philosophical activities.  This helps one create a genealogy of philosophy.  This genealogy begins with ancient thinkers who were predominantly concerned with “living the good life” as well as understanding how the world worked.  One of the tools that was of utmost importance to the ancient thinkers and has maintained its utility (at least, up until the point where the postmodernists have taken over) is logic.  In the middle ages of Europe and comparable periods of time in locales such as India and Japan, there was a burgeoning attempt to ascertain the fundamental qualities of existence; admittedly, this was universally in a religious or theistic context of some form or another, but that does not negate the contributions made.

In the more modern eras, from the enlightenment[7] to today, the philosophical enterprise has been predominantly directed at understanding the manner in which man interacts with reality, from the nature of sense experience to the nature of knowledge and its acquisition.  Additionally, there has been a lot of emphasis on the manner in which the individual interacts with mankind at large and how that interaction ought to be conducted.

Depending on one’s definitions and motivations for constructing a narrative, philosophy can be seen as the progenitor of, handmaid to, or companion of nearly every other activity in human intellectual life.  Modern scientific methods are the product of ancient natural studies and enlightenment-era epistemology[8].  Computer science is predicated on mathematical principles and linguistic theories which have been formed through philosophical discourse.  Theology is, by and large, the application of philosophical tools to puzzles related to spiritual revelations and religious doctrines.  Economics[9] is the result of a-priori[10] reasoning in conjunction with philosophical tools of introspection and observation.  These relationships cannot be ignored, but the exact nature of these relationships is at the heart of many lively debates.

I can go (and have gone) on a much more rigorous exploration of the necessary and sufficient conditions for something to be considered philosophy, but that sort of exercise is better suited for a longer, more exhaustive, procedural work.  For now, I think it would be most prudent to do a quick breakdown of the etymology[11] of the word “philosophy”.  The word, itself, hails from ancient Greek and effectively means “love of wisdom”.

Of course, nothing in Greek translates so directly into English.  For example, ancient Greek has at least four words for love (arguably, there are a few more).  This particular root, “-philia”, would be most appropriately used in the context of a dispassionate desire for (non-sexual) intimacy, such as that of close friends.  Additionally, “sophos” is a Greek word that denotes a wide array of practical and virtuous skills and habits regarding wisdom, rather than just the sterile modern English concept of knowing a lot or having advanced experience.

The best I can do to describe the Greek root of the term is to say that it is “an actionable desire to develop intellectual virtue and put it into practice in the world at large”.  This takes many different forms, as demonstrated by Socrates and Diogenes relentlessly badgering their neighbors concerning how wrong their ideas of how the world worked really were, while Aristotle, Pythagoras, Epicurus and Zeno started schools and lectured ad nauseam.  Later in history, the general attitude of a philosopher had largely homogenized into academic bookishness and the writing of essays and long-form treatises.  The exact nature of each essay and treatise may be radically divergent with regards to content, method, and end, though.

Ultimately, taking into account all these diverse enterprises and the influence of postmodern thought, I believe that any human enterprise directed at creating an internally consistent, logically sound, empirically viable, and universal worldview which possesses ethical actionability, utility, and (ultimately) Truth can be rightly considered to be “philosophy”.[12]

In order to attempt to construct a worldview that correlates to reality, there are a great many prerequisites that must first be met.  For example, there is the assumption that there is a reality to which a worldview can correlate.  Another example would be establishing the fundamentals of logic in such a way so as to be certain of their utility[13].  Yet another assumption would be that one is capable of constructing a worldview at all.

Rather than dragging my readers through the most meticulous and technical aspects of post-enlightenment thought, I’d like to discuss the general methodology of philosophy and, if my readers are so inclined so as to investigate these problems in their fullness, I can recommend some starting places.[14]  These problems of philosophy are quite significant, and I believe that these issues ought to be examined, but they are not issues for beginners or the faint of heart.

Instead, I recommend familiarizing oneself with the fundamentals of philosophical methodology and beginning to explore this new way of perceiving reality, first.  Even though it has taken many different forms throughout history and our contemporary academic landscape, the fundamental methodology of philosophy has found no better expression than that of the trivium and quadrivium of the middle ages in Europe.  Although these fields of study were crafted in a theistic environment and are, therefore, often ignored or denigrated by modern (leftist) scholars, the methodology they present is still quite valid, even if it may have been used to reach illicit conclusions.

The trivium consists of three stages of thought: the logic, the grammar, and the rhetoric.  Initially, these stages of thought were applied exclusively to language (hence their names).  The logic was the basis of linguistic thought; it contained the a priori principles such as the law of identity[15], the principle of non-contradiction[16], and the resultant laws of induction.  The grammar demonstrated the rules of language which reflected the logical principles outlined earlier; subject-object relations and other syntax relationships are important to maintaining fidelity to the logical principles underlying that communication[17].  The rhetoric refined the above skill sets so as to aid a thinker[18] in convincing others of the facts which he had uncovered through the application of logic and grammar.

Since its inception as a linguistic methodology, the trivium quickly expanded into a philosophical methodology.  This is partly due to the close relationship that language and philosophy have always held and partly due to the axiomatic nature of the trivium lending itself to the inquiries of philosophy.  In essence, a thinker must first establish the furniture of the world (the fundamental principles and objects of those principles), then explore the relationships between those objects, and then must find a means by which to express those relationships.  For example, the “Socrates is a man” syllogism I referenced in the footnote on this page contains material that isn’t merely linguistic.  The categories “Socrates”, “man”, and “being” are assumed to correlate to realities in the observable world.  Additionally, the grammar of the statement establishes a relationship between those categories, which are assumed to correlate to the observable world.  This trend is maintained through the rest of the syllogism:

Socrates is a man,

All men are mortal,

∴ Socrates is a mortal.

At each level of the syllogism, new categories and relationships are assumed or established.  On a linguistic level, the logic serves as the structural framework for the grammar to populate with the symbols for Socrates, man, etc., and the rhetoric is the manner in which one would express this syllogism to others and defend its validity.  On a philosophical level, the logic serves as the source for the objects Socrates, man, etc., the grammar denotes the relationships between those symbols, and the rhetoric serves as the means by which these ideas move from my mind to the page for your mind to reassemble[19].
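For readers who enjoy seeing the “logic level” made fully explicit, here is the same syllogism written out as a tiny sketch in the Lean proof assistant; the identifiers (Person, Man, Mortal, socrates) are placeholders of my own choosing, not anything canonical.

-- The Socrates syllogism as a minimal Lean 4 sketch; all names are placeholders.
theorem socrates_is_mortal
    {Person : Type} (Man Mortal : Person → Prop) (socrates : Person)
    (h1 : Man socrates)              -- Socrates is a man
    (h2 : ∀ p, Man p → Mortal p) :   -- all men are mortal
    Mortal socrates :=               -- therefore, Socrates is mortal
  h2 socrates h1

The grammar of the formal statement (which predicate attaches to which term) and the rhetoric (how legibly the proof communicates) map onto the same three stages described above.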

This quick introduction into the methodology of philosophy will be expounded upon in the next chapter, as we explore the role of philosophy in daily life or, as the ancient Greeks put it, “how does one live the good life?”

[1] Phenomenon (n): The object of a person’s perception or discussion; an event of which the senses or the mind are aware.

[2] Analytic Philosophy (n): A school or tradition of philosophical thought predominantly populated by English-speaking philosophers which emphasizes procedural methodology and strict definitions and application of logic.

[3] Necessary and Sufficient Conditions (n):  The requirements of any given subject to meet a definition; necessary qualities are qualities which, if absent, preclude subjects from being defined as such and sufficient qualities are qualities that, if present, allow a subject to be defined as such.

[4] Postmodern (adj): Relating to a school of thought which maintains certain attitudes such as indefinability, plurality of reality, and subjective narrative ontologically trumping objective reality.

[5] Continental (adj): Relating to a school or tradition of philosophical thought predominantly populated by thinkers from mainland Europe which emphasizes meta-philosophical influences on philosophy such as culture and economics.

[6] Heuristic (n): A method or system of interpreting ideas as they are presented.

[7] Enlightenment Era (n): A period in European philosophical history, commonly accepted to be from as early as the 16th century to the end of the 18th century; the era is marked by a sudden surge in scientific advance, political upheaval, and sheer number of philosophical schools of thought.

[8] Epistemology (n): The study of knowledge, the manner and mechanisms by which one knows.

[9] Austrian Economics.  This will be discussed in Chapter 4: Political Philosophy and its Discontents.

[10] A priori (adj): A logical justification for a claim based on syllogisms, moving from given premises to their necessary conclusions.  This is often set in opposition to a posteriori or “empirical” reasoning.

[11] Etymology (n): The study of the meaning of words and the changes of those meanings throughout history.

[12] There is a good amount of jargon in this proposed definition; as these terms appear later in this book, they will be defined in more detail.

[13] Utility (n): The capacity for a thing to provide or contribute to accomplishing one’s end, usually in the context of alleviating discomfort.

[14] “The problems of Philosophy” by Bertrand Russell, “Cartesian Meditations” by (((Edmund Husserl))), and (for the preeminent masochist) “Critique of Pure Reason” by Immanuel Kant

[15] Law of Identity (logic): A=A (A equals A), A≠¬A (A does not equal not-A)

[16] Principle of Non-Contradiction (logic): The logical principle that something cannot both be and not be in the same mode at the same time. (Abbreviated as PNC)

[17] For example, in the over-used case of the “Socrates is a man” syllogism, if you were to mistake the subject-object relationship, you can end up with things like “Man is a Socrates” which is not only incorrect, but it is nonsensical.

[18] i.e. The philosopher

[19] There are deeper epistemic realities hidden in this discussion of the trivium method, but those will be addressed in the coming chapters of this book.

A Meditation on Mondays

 

There’s a quote by Slavoj Zizek that I used to really like when I was an economically illiterate communist:

“If you hate Mondays, you don’t hate Mondays; you hate capitalism.” (or something to that effect)

The obvious hidden premise is that you hate Monday because you just had a weekend and don’t want to go back to work. That makes sense for a majority of people in the developed world; if I had sufficient wealth so as to live the weekend lifestyle all week, I probably would. Admittedly, I would still be working… but I would be working by building things and writing things and enjoying the leisure of exertions not tied directly to survival.

The part that Zizek (and most people) miss is that Capitalism is the only reason that not every day is a work day. What I mean is that only the set of emergent properties of voluntary exchange, in the aggregate, can generate sufficient wealth to allow some people to emerge from the hand-to-mouth existence of poverty. Only with the division of labor, the production of wealth through the voluntary exchange of goods and services, and the ability of individuals to act on their subjective values and preferences, coupled with the efficiency of markets in the aggregate, is it possible to generate enough wealth to afford the opportunity cost associated with taking two-or-more days a week off.

As I’ve addressed previously, the natural state of man is one of abject poverty. In order to emerge from that state, individuals must either get incredibly lucky (such as finding a region so laden with food and shelter and absent natural predators that one no longer has to work to survive) or, more likely, find a mechanism by which individuals are able to contribute maximum utility to their family/tribal unit so as to create surplus wealth. Rather than reiterating that point in greater detail, I’ll just suggest you read that previous post.

Instead, I want to meditate on Mondays just a little bit more. If I were to wake up in some other time in history and some other place, let’s see what my relationship would be with Monday…

>rolls d20<

Ok, so it’s the mid 19th century in Europe. Political tensions being what they are, various governments have shut down the international marketplace. It’s basically the white people version of most African warlord situations.

>rolls another d20<

Welcome to Ireland (I really did randomly select the time and location, I promise).

The only day of the week that is unique is Sunday. Every day of the week, we lay around and wait to die because the soil is useless and nobody will sell us food or let us emigrate to greener pastures. We try to find or make food, but one might as well try to get blood from a stone. The only reason Sunday is special is because we get to feel guilty about not making it to church because we had to eat our horse to live and our local priest already died.

Why am I writing about the potato famine? Partly because the dice led me here. Also, it is a clear example of what happens when different anti-capital policies reduce people to the basic state of man: poverty. A “Monday” for us is any day after we actually managed to buy some food from a smuggler or find an animal in the arid fields to kill and eat.

This serves as a specific example but, by and large, most of human history has consisted of this state of affairs. Those that manage to develop a skill or combine existing technologies/resources in a novel and useful way, have the ability to improve the quality of life of their neighbors in exchange for their clients’ goods or services. When enough people engage in this entrepreneurship, a division of labor emerges and everyone benefits. When too few people manage to do so, though, people get locked into poverty. There are a number of factors that can interfere with peoples’ ability to engage in entrepreneurship, but criminal gangs (such as government) tend to be the largest impediment.

What has changed throughout history? Why was Monday once “the day after Sunday and before Tuesday” while it is now “the first day of the average work week, the day after the weekend”? The above-mentioned division of labor allows one person to specialize as a doctor, treating people’s illnesses and injuries, in exchange for the services of a guy who specializes in car maintenance and repair or the product of a guy who specializes in farming or factory labor. That same doctor can take the “surplus” (the amount of wealth he generates that is more than sufficient for mere survival) he receives as a result of his profession being high in demand and low in supply and he can invest it in an entrepreneur who wishes to build a more efficient factory which has lower costs and can sell cars at simultaneously lower cost and higher profit.

Even though this one feature of being a doctor (or a lawyer, or an accountant, or a banker…) is insufficient to create a weekend for everyone, it’s a microcosmic example of that market function. For example, that doctor now makes enough money as part of his profession that he can make that money create more money through investment. He’s so far removed from sustenance living that he is now able to say “I don’t need to work on Saturday or Sunday. Even though I’d make more money doing that, I don’t need that money… I’d rather go skiing with my kids.” The guy who built the factory with his money can likely do the same thing as he corners the automotive market and begins selling his machines to factory owners in other markets.

As more and more people engage in this type of investment and wealth creation, the wealth that individual workers can produce is increased as well. If anyone has played Minecraft in survival mode before, they’ll know what I mean. When you have to run around and find food in the woods and make things by punching trees to get wood, your time is largely focused on not dying. The same is true in the real world: if all I’ve got is a fistful of seeds and a stick, you better believe that I’ll be focused on farming 99% of my time, because if I don’t I’ll starve. Once I can drive a tractor around, pipe water in from beneath the earth, purchase fertilizer from neighboring ranchers, and hire laborers to do the same, I no longer have to worry so much about farming as I do what I’m going to do with all the extra food I’ve made but can’t possibly eat.

This solution obviously hasn’t lifted everyone out of subsistence, hand-to-mouth living yet, but it’s done a pretty good job so far. As more and more people see their quality of life improve and “mere survival” become nothing more than a vague nagging in their reptile brain, they have more chances to make the same decision that the doctor did. This is a historical phenomenon that one can see happen over and over, between periods in which government conflicts reduce people back to square one, but it’s also a phenomenon that can be witnessed in individual people’s lives.

When I first got married, I was burdened with crippling debt and a useless degree (mistakes I made). I bounced from part-time job to part-time job, providing minimum value to employers for minimum wage. As time has gone on, I’ve built a resume and a skill set that has given me the bargaining power to secure a salaried position with, you guessed it, a weekend. Mondays are definitely the most strenuous day of work for me… but that’s because I’ve front-loaded all my work for the week so I can be more proactive and provide more value to my employer, thereby giving me more bargaining power when requesting increases in my salary.

People, such as my past self, will complain and point out that “if it weren’t for capitalism, you wouldn’t have to work 40+ hours a week, just for a paycheck… who needs money, anyway, man?” To a certain degree, they’re correct: if it weren’t for capitalism, you wouldn’t have a 40+ hour a week job. Instead, you’d have to work every waking moment to scrounge up enough nuts and berries to feed yourself and the couple of your kids that survived infancy. Even if we were socialists, the best we could hope for is to share those meager findings between us all… but that’s basically just a really lame “food insurance pool”.

I used to hate Mondays. If I had a preferable alternative, I would still not go to my job on Mondays. But to complain about Mondays is just spoiled and ignorant: you just got two days off to do things like walk into the giant food warehouse where delicacies from around the world sit on shiny, clean shelves waiting for you to take home and savor, or to engage in leisure activities such as exercise, reading, going to the movies, arguing with people on Facebook… and your biggest complaint is “now I have to go and provide value to others in order to afford all these privileges I just enjoyed.” I get it, I’m as misanthropic and antisocial as the next guy, but if you’re going to exchange money for my time and patience, I’m going to smile and tolerate your banality with the disposition of a Hindu cow, because I want to take a couple days off this week to drink rum and write blog posts no one will read, play video games, celebrate my grandfather’s 80-something birthday, and roughhouse with my kids.

TL;DR: “If you hate Mondays, you hate capitalism” is a clever one-liner, and I understand where that opinion would come from; I used to be there. At the end of the day, though, anyone with a weekend should celebrate Mondays: without them, you would have no weekend. Without the wonders of capitalism, mankind would still be primitive cave-dwellers praying to rocks and clouds in the hope that not all of their kids would die this year. Or mankind would be extinct, courtesy of any number of natural disasters which could threaten a small community of technologically illiterate creatures. Instead, capitalism has elevated your quality of living to the point that you are wealthy enough to say “I’ll take these two days off from surviving and do something fun, instead… because I can.” Someday, I hope to have created enough wealth so as to “go to work” fewer and fewer days of the week and, instead, provide value to people in other, more enjoyable, ways.

Not only is this a more accurate way of looking at things, but it has really changed my attitude towards work, family life, and my life in general. I am genuinely happier for having learned these things I’m meditating on today.

If you want to learn similar things, and have your life radically improved by a PhD-level understanding of history and economics, you should check out Tom Woods’ Liberty Classroom. If that’s too pricey, you can do the next-best thing and support this site on Patreon.

Mad Education

One of the many ongoing conversations I am having with my old college buddies is that of education. Of course, given our unique perspectives and attitudes, we aren’t having the usual mundane and empirically oriented discussions as to whether school choice, prayer in classrooms, standardized testing, or mandatory attendance are good ideas. Instead, we are discussing the definition of the word “education” (something for which I get picked on relentlessly) and whether or not it is true that “all children ought to be educated”.

As much as I wish I were prepared to write a full text centered on that question, I am not yet prepared and that conversation has not yet concluded. One participant in the conversation, someone you ought to be familiar with, likes to try to bring the high-altitude and categorical discussion down into our spheres of influence. This time he did so by directly asking me (the main antagonist and contrarian, as usual) what relationship I have with educating children, in concrete terms.

I proceeded to monologue about how Wife of Mad Philosopher and I have gone about preparing our daughters. I became very self-aware as I went on and on, given that I was (once again) talking about myself at length. At the end of the monologue, though, I was given positive feedback and some questions for the sake of clarification. As that portion of our conversation wrapped up, I felt so good about how it went that I figured I could share it with my readers. I assure you, it pertains to philosophy.

The Wife and I are currently home-schoolers, it would seem. This decision happened somewhat organically, as we cannot afford private school (at least, not any private education worth paying for) and public schools are undeniably indoctrination centers for the creation of left-statist suicide cultists. I was homeschooled for a good portion of my junior high school years and skipped high school altogether, so I am not unaware of homeschool culture. My wife attended Catholic private schools in New Hampshire, and we both went to a Catholic university in Florida. Given that background, we want our daughters to have an awareness of our Faith and a healthy regard for the GTB (the Good, True, and Beautiful), and those are things that seem to be lacking in availability in the current market.

If any of you readers are familiar with homeschool culture, you may have a hard time finding a box to put us in. We’re certainly not the curriculum-hunters, moving from Alpha to Seton to Ron Paul to Tom Woods. Some people may want to call us un-schoolers but that isn’t entirely accurate, either. Besides, I try to avoid the term as it’s often used as an invective.

Ultimately, the best way to explain our methodology is to simply describe what we do and the intentions behind the actions in question. The short answer is we’re using a combination of the Trivium, self-awareness (Brandon, (((Rosenberg))), etc.), and more mainstream tools in a lifestyle approach. It will be difficult to simply say “here is what a typical day looks like, extrapolate that to two-thirds of the year,” because every day is its own unique experience.

This variation is due to the relational nature of our approach. Rather than simply establishing a “teacher-student” dynamic and declaring “I am in teacher-mode, now, so you must learn these things I have set out for you, student,” we explore the world around us from the perspectives of a bunch of little, beautiful, white girls ages six and under. On days when Wife has the physical ability, they often set up little desks and do level-appropriate literacy/numeracy exercises as long as attention span and the desire to do grown-up things persist. Some days, this is five minutes; other days, it’s a whole morning. When she is having a bad day (Hashimoto’s + pregnancy + dietary mistakes = a bad day), there is a lot of web-based material available, as well as educational television; we don’t force them to play ABC Mouse or watch Veritasium videos, but they often enjoy the opportunity when it is offered.

Logic is *always* emphasized in communication, as are NVC and other self-awareness methodologies. I make a conscious effort to speak in syllogistic phrases and to reference rules of induction, fallacies, and cognitive biases in my daily conversations, and I redouble that effort when speaking with my children. When discussing desires and engaging in conflict resolution, NVC comes in handy as well. This emphasis on logic and self-awareness is less a matter of some concrete learning mechanism than of learning a skill set that helps one determine what one wants to do and how to do it. As a result of this approach, my children are able to engage adults and other children in dialogue directed at meeting their own needs.

On average, three times a week, there are group science/engineering, play, and field-trip get-togethers with other families. Some of these get-togethers are with my family (I am the oldest of eight kids and my youngest sister is seven years old), but many of them are with other homeschool groups.

My kids have their own money and property and are responsible for the investment and consumption of those resources, with some adult suggestion and guidance. They get this money and property by way of gifts for holidays/birthdays and exchanging goods and services with others. They get plenty of gift money, and they have already figured out subjective ordinal value by way of spending that money and selling things to each other and other kids (mostly my siblings).

I do not believe in allowances (paying your children for existing, in the hope they learn how to manage money), and establishing a scheme of “you do these chores and I pay you” seems contrived and puts a strain on our relationship. Of course, in the same way that I must care for my apartment and follow certain rules as pertains to my lease, my children must do the same with regard to their things and my apartment. If messes get out of control and are not cleaned in a timely manner, the messes are physically removed from the apartment.

Daily, Wife and I pray at our icon corner, read the Scripture passages from the Divine Liturgy, and recite rote prayers (grace before meals, bedtime prayers, etc.). We encourage participation, but we make it a point not to coerce it. We are at our Byzantine Rite parish or a Roman Rite parish at least two times a week, but often three or four times. Our kids get plenty of exposure to our Faith, and they ask a lot of questions. Fortunately, Wife and I are sufficiently catechized and skeptical so as to be able to provide honest and concrete answers to many of their questions, instead of hand-waving and appealing to authority. You won’t see us saying “God just made it that way,” or “it’s a mystery, just believe it or you’re going to hell.”

Bedtime stories are always exercises in literacy and often pertain to classic literature, economics, survival skills, natural sciences, etc. My kids love the Tuttle Twins series, Survivor Max, My Little Pony Comics, 1001 Nights (Harvard Classics 1909-1911 edition) and the usual “If You Give a Mouse a Cookie” type stuff. They pick out words, letters, sentences, etc. that they recognize and always, always, always, with the questions. My favorite ones are the questions we get from Tuttle Twins and Magic school bus, but even normal kids’ books generate fun and informative discussions.
And by now you should know me: every waking moment is a series of questions, arguments, answers… my kids are not spared that fate. Every assertion they make, I request evidence. Every demand they make, I ask for an NVC phrasing and a justification for their request. Every time they express that they think I’m wrong or unjust (“fair” is a banned word in our house), we negotiate. Of course, I have the brunt of the bargaining power, being the effective landlord, but that doesn’t mean they can’t improve my quality of life in exchange for whatever it is they want.

We do mild amounts of parkour, self-defense/martial arts, camping, and structured physical activity alongside a ton of simply running around in nature and roughhousing. Park trips, snowball fights, swimming in a nearby pool… it’s a blast. I share my vast knowledge of plants, creepy-crawlies, and other animals whenever I can (and they are interested); the Boy Scouts did at least that much for me.
And there’s the never-ending series of “Why”s that come from the children, and they always get an answer, whether it’s something I know off the top of my head or something we need to go to my bookshelf for.

When my bookshelf is insufficient, we just duck it.
By now, you can probably see why the “unschooling” label would be applied; our approach is more a lifestyle education process as opposed to a “sit in your desk and memorize this shit” curriculum. There is some of that, but it’s an added feature as opposed to the central approach.

Oh, and Wife and I don’t filter our conversations in front of the children, so they are exposed to conflict resolution, finances, political intrigue, rhetoric, etc. That’s where a lot of the “why”s come from.

Oh, and video games. We play video games.

Wife has a more disparaging view of our approach, but that’s because she doesn’t like to give herself credit when it is due. She even admits that it’s due to a lack of self-confidence, and I understand and empathize. At the same time, we’re getting results and it’s more fun this way. She is nervous about attempting something less Prussian. I understand why: she is a product of said system, and she turned out intelligent, informed, beautiful, and morally straight… but she is the minority output of that particular system.

(A quick aside about the Prussian comment, if you are not prepared for several hours of YouTube videos about the history of American education… The modern American education model finds its point of origin in the Prussian war machine, circa the late 18th century. It was explicitly designed to create factory workers and soldiers. Essentially, a handful of education consultants visited Prussia/Germany during summer break, took some tours given by diplomats, got sold on the idea, and came back to the US to create the public education system, which, in order to comply with the government monopoly, the private institutions then copied.)

Relating this subjective instance to the general principles we were discussing in my group of friends:
It would be arrogant and naive for me to assume to know the specific teloi which may or may not exist for each of my daughters, but I am exposing them to reality in a manner that is digestible and intelligible, with the intent of providing them with the tools necessary to determine subjective needs/responsibilities for themselves. I believe this is the “self-awareness” aspect of our discussion.
Any one of them may be the chief engineer on Musk’s Tesla-branded inter-generational spaceship, or they may do something more appropriate for women such as producing offspring or joining a monastery. The necessity of numeracy, literacy, and even logic is dependent upon those outcomes, but the self-awareness provided by NVC, property negotiation, and Nathaniel Brandon’s brand of “self-esteem” will certainly aid in making that determination. This is the case simply because my children have the capacity for intellection and delay of gratification. If they were… of lesser genetic stock… or somehow disabled, even this self-awareness could be optional.

This is why I am resistant to the claim that “All children ought to be educated.”

TL;DR: I don’t know how to make this more concise. It’s about how I’m contributing to the development of my children, a little bit about the reasoning behind this approach, and the results of said approach. Carpe Veritas isn’t just a tagline, it’s an imperative I live by and set the example for my children to do the same.

Wizardly Wisdom Guest Spot

This week, I had the pleasure of being invited on the Wizardly Wisdom podcast.

We discussed a decently broad array of subjects, mostly centered around philosophy and libertarianism.  I’m about 70% happy with my performance this time around, so I guess I should apologize for not bringing my “A” game.  Still, I think this is an episode worth listening to, and the show over at Kenny the Wizard’s feed is worth listening to, as well.

If you liked this discussion, you’d love the 2016 anthology book, especially the book-exclusive chapter on “late stage anarchism”.

Language Barrier

Pod-and-blog-fade seems to be running rampant in the post-election libertarian and philosophy circles. I can’t help but wonder if it’s a combination of political hangover and something like a sigh of relief as certain existential threats have been postponed. Everywhere else, lefty entertainment and philosophy podcasts and blogs have begun their four-to-eight year pity-party, wherein they cry about the president to the exclusion of any other form of content. Technically, that’s why I voted for Trump: to make these people cry… but I’ve got a bit of buyer’s remorse now.

Anyway, I’m back on the content-producing bandwagon. Today, I’m talking about words.

 

I expect most of my readers will be well aware of the rules of grammar and have a decently expansive vocabulary. I’m not going to make a “top ten” list of fun punctuation marks… I mean, who hasn’t heard of an interrobang‽ I’m not going to share my fun story about arguing about ancient Greek grammar with Jehovah’s Witnesses (subject-object relationships are more important when you haven’t discovered punctuation yet). Instead, I’m discussing the philosophy of language in broad strokes.

As far as I can tell, most people haven’t critically examined the relationship between language and the world around them (unless they’ve smoked a lot of weed or have suffered severe concussions). As such, most people have intuitively just assumed one of two paradigms concerning the operation of language. If this describes you, understand that I’m not talking down to you, as this is something esoteric enough in the realm of philosophy to be compared to particle physics or the study of neolithic attitudes towards one’s in-laws. It is, however, an important issue to address when engaging in philosophical discussions.

Now that the disclaimers are out of the way, what are these two paradigms of language people assume? The first is that of what could be called “linguistic realism”: it’s a belief that words and sentences directly correlate to reality (in some cases, one could even say that words and reality are commensurate). In the case of thinkers like Plato and Aristotle, the word “justice” is an actual expression of some form or concept. When a poor soul makes the mistake of using the word “justice” near Socrates, Socrates assumes that the man must know the platonic form of justice so thoroughly as to be able to utter the word itself. Aristotle is a little more grounded, but he still assumes a sort of direct correlation between the word “justice” and manifestations in meatspace of someone “giving that which is owed”. In the modern age, that attitude is usually expressed by people who really enjoy Rhonda Byrne, people who think that bad words are bad words due to some innate quality of the word itself, and people who deride the idea of words changing meaning over time as well as the creation of new words. I used to be a linguistic realist.

The second paradigm of language could be called “postmodern nominalism” or “naive nominalism”. This position holds that words have very little correlation to reality; as a matter of fact, the best way to describe the position would be “the belief that words exist as nothing more than a game between individuals wherein rules are made up concerning the meaning and use of words, with little to no relation to the world outside of said game.” In the case of thinkers like Peter Abelard and Ludwig Wittgenstein, the meaning of a word depends on something along the lines of social consensus and common usage. When I say “tree”, it only means “that thing growing out of the ground, made out of wood, and bearing leaves” if I am speaking to someone who comprehends English and understands the botanical context of the statement. In a different context, the term “tree” could refer to a shape, such as that of a propane tree, a family tree, or a decision tree. To a non-English-speaker, it may as well be any other set of phonemes: it’s pure gibberish. In the modern age, that attitude is usually expressed by people who really enjoy saying “a rose by any other name…”, people who think that bad words are bad because of some historical or class-related context, and people who live-tweet their netflix-and-chill experience with their cis-gendered binary life-partner.

One of the clearest ways to delineate between these two positions is to inquire as to the nature of dictionaries. For example, if I hear or read a word I do not recognize, I obviously go to the dictionary… well… to google’s dictionary, at least. When I read the definition of the word, I am reading one of two things: I’m either reading the common context for the use of the particular term at the time of publication, or I am reading the “actual meaning” of the word. For example, if I were given the word “obacerate”, I would obviously have to google it or look it up in a century-old edition of the OED. When I get the definition “to interrupt one’s speech”, is that what the word means in some innate sense, or is that simply a description of how the word has been used in the past? If I were to begin using the word in colloquial conversation, would it mean “to interrupt one’s speech”, or could it take on a new meaning based on the context in which I use it or the context in which others understand it? If I only ever used the word “obacerate” when referencing covering someone’s mouth or punching them in the jaw, could the word take on that connotation?

If one says “the word means what the word means, regardless of context,” one is likely a linguistic realist. If one says “the word hasn’t been used for almost a hundred years; it can mean whatever society begins to use it as,” one is likely a naive nominalist. A more apparent, but less cut-and-dried, example would be the use of words like “tweet”, which could either be onomatopoeia for bird sounds or an activity which takes place on the website Twitter. If the word were to fall out of common parlance concerning birds, would the meaning of the word have changed once Webster cut out the atavistic use of the word?

As is typically the case, I get the feeling that most people who bother to read this far are asking themselves “Why do I care about this hair-splitting over words?” If you are, you are right to do so. In day-to-day conversation, words just mean what they mean. If there is a misunderstanding, we need merely exchange one word for a synonym or offer a definition to contextualize the use of a particular word. In philosophy (and, therefore, any sufficiently advanced field of thought), though, these sorts of distinctions become important.

For example, if I assume that words have innate meanings and are either direct representations of something or a sort of manifestation of the thing itself, then when I start talking about something like colors, thoughts, phenomena, property norms… you know, abstractions… it can get hairy if I’m speaking to someone with a different set of preconceptions about language. I’m a sort of compatibilist nominalist: I greatly appreciate Peter Abelard’s contributions to the philosophy of language, and I’m a recovering linguistic realist. As I will eventually get to in the 95 Theses, and have already covered in the Patreon subscribers-only content, the human experience appears to be one which takes place entirely within one’s mind.

Whoah. Hit the brakes. That likely seems either patently obvious or totally insane, depending on who’s reading it. It’s either obvious that one has a consciousness which navigates a never-ending stream of sense-data and never grasps a “thing-in-itself” beyond those sense-inputs, or it’s insane to start talking like a Cartesian or Kantian solipsist: of course one sees, touches, tastes, smells, and hears the world around them and discusses these things with others…

…Which is a divide similar to the one between the linguistic realists and the postmodern nominalists. As far as I’m concerned, though, my mind is locked away from the world and only sees it as mediated through sense organs, nerve connections, chemical emulsions, brain wrinkles, and more. The only way I can make sense of all those inputs is to pick out regularities and assign concepts to those regularities. Through this systematic approach to those sense inputs, one can create a noetic and epistemic framework by which one can interact (albeit through mediation similar to that of the senses) with the world outside of one’s mind.

After all that fancy noesis and epistemology is underway, it becomes useful to apply language to this framework. If I consistently see a woody creature growing from the earth and bearing leaves and fruit, and I wish to express that set of concepts to someone else (who is obviously a similar set of sense perceptions, but I assume to be someone like myself), it helps to have a name, a sound, a mark, etc. to signify that set of concepts. And the basis for the word “tree” is created. The intuitive concepts such as causality, correlation, etc. also exist in that bundle of sense inputs and later receive names. If trees, causality, or even a world beyond the phenomena don’t actually exist, the sense inputs I have mistaken for these things still do. The reason I bring up abstractions of relationships, such as causality, is because they seem to relate to certain aspects of grammar. For example, subject-object relationships and prepositions seem to presuppose these causal and abstracted relationships.

Now, of course, there’s hundreds of years of philosophy of language at work and I couldn’t hope to go through even a thorough examination of my particular flavor of philosophy of language. The reason I tried to give this 2,000-word summary of the idea is twofold. First, I think that this is an issue that underlies a lot of misunderstandings and disagreements on the more superficial levels of human interaction. From the comical dilemmas over who’s allowed to say “faggot” or “nigger” to the more fundamental issues of whether or not “rights” or “norms” exist and in what manner, these conflicting theories of language are at play. The 95 Theses will go into the idea more in-depth and if the Patreon subscribers demand it, I’ll explore the idea further.

Second, I want to announce the upcoming glossary page on the website. I am often accused of mutilating language or using words in a way that only I can understand them. Less often, I’m accused of using too many technical words for people to keep up. I hope to remedy some of these issues by providing a cheat sheet of sorts to help people keep up with me and to understand what I am saying when I use words in a more precise way than they are commonly presented in dictionary definitions and colloquial use. Of course, I need feedback on which words should go in said glossary so, please, do comment on this post and send me emails about my abuses of language.

TL;DR: Philosophy of language is a very involved field of study, but nearly everyone is a philosopher of language, provided they speak a language. Even if one hasn’t critically analyzed their understanding of how language relates to the world, they are walking around with a bundle of assumptions as to what they mean when they speak certain words, and whether or not those words have some innate quality to them or whether they are just some sort of social game being played with other speakers of that same dialect. Most of those assumptions can be categorized as being that of “linguistic realism” (words are directly related to things and act as an avatar of the things they relate) or that of “postmodern nominalism” (words don’t mean anything in and of themselves and only vaguely gesture at socially agreed upon concepts). There are other, more nuanced positions that people can hold, but usually only as a result of actively engaging in the philosophy of language, an exercise I strongly recommend for those that are able.

New Year Resolutions

Belated new year’s resolutions are not uncommon. New year’s resolutions being made in mid-February are a little bit more uncommon, especially ones as uncertain as the ones I plan on presenting today.

 

I wanted to write this post at the start of January. Of course, I wanted the latest anthology book to be published at the start of January, as well. While I am involved in a great many projects, I seem to have let this domain run derelict. This is a difficult admission to make, as I have been so zealous to work on the Mad Philosopher project and I have made such progress so as to warrant actually getting Patreon subscribers and merchandise sales. Even though I technically do not owe anything to those of you that have supported the program so far by such methods, I still feel like I ought to try and do right by you.

When I started writing this post last month, I had three goals in mind. By the end of this year, I want this project to fully fund itself. As my family grows, so do my expenses; while I can justify monetary investment or investments of time and attention, I doubt I will have the ability to do both simultaneously. In order to accomplish this goal, I would need enough Patreon subscribers, book and merch sales, and donations to be able to pay for the web hosting, the soundcloud account, and the technology required for production. For the most part, I have gotten all the up-front costs out of the way and now only need to pay to maintain the subscriptions and equipment.

I also want to get more hands on deck for the Mad Philosopher project. We have already unveiled the Mad Theologian podcast, as an attempt to broaden the scope of the Mad Philosopher project and to cover more ground in our philosophical pursuit. While I’m excited at the prospect of bringing more content producers on-board and I’m excited by prospects for broadening our reach through other people’s channels, what I am most especially hoping to find is someone that can do the technical work for the site. While I am excited to do content production, most of my efforts have gone to audio editing, site management, account management, and site promotion; if I could either enlist someone as passionate as myself or hire someone technically savvy enough and cheap enough to do so, I could devote more time and effort to content production and the broader vision of the Mad Philosopher project. This, of course, ties back to the first goal, given that hiring someone will have to be paid for.

The third goal is a little more vague. I want to get more involvement from readers, listeners, and contributors to the project. Ultimately, that goal looks like engagement by way of comments and market incentives. I want to know what the readers like, dislike, agree with, disagree with… and to have a conversation surrounding those points of engagement. I really don’t mind where that may lead us: I am equally equipped and excited to discuss the interrelationships between different schools of philosophy, the history and genealogy of philosophies, praxeology, anarcho-capitalism, or even just a unique perspective on pop culture mainstays such as music, tabletop RPGs, film, books, video games…

I think that each of these three goals supports the others, so I don’t know if any one can be pursued without also pursuing the others. Of course, this set of goals puts a larger burden on myself and the few of you currently engaged with the Mad Philosopher project than it will on those we recruit towards that end. I’m sure that, in the future, there will be certain rewards available only to those of you that contribute at this early hour of the project.

Which brings me to the next thing I want to talk about. Why has the blog run derelict for over a month? Part of it can be chalked up to my “new” job and my slowness in adapting to new time constraints. Where, before, I had time at work to write outlines for posts, read books, and discuss these ideas with others, my new job is a 40-60-hour-a-week marathon of phonecalls, emails, meetings, and clogged toilets. It’s been a great job, don’t get me wrong, but it leaves me with less time and energy available for vanity projects than my previous job had. I’ve also begun producing subscriber-only content for Patreon subscribers.

Lame excuses aside, I have been investing quite a lot of time into a few collaborative works. I am writing a book with one of our Patreon subscribers which goes through all sorts of gritty details concerning the philosophical justifications for censorship and the failures thereof. It is a work that covers medieval religious arguments, classical teleological arguments, postmodern critical theory arguments, and my own aesthetic values to boot. We’re having a lot of fun while we slowly and methodically slog through such arguments.

I’m also working on a collaborative death metal album with a friend of mine. It’s a concept album that explores the issues addressed in the 2016 anthology book-exclusive chapter “late stage anarchism” with a healthy dose of revolutionary and helicopter references, just because. We are hoping to put together a full demo album in the coming months and possibly even launch a Kickstarter to get a studio band and some recording time. What we’ve got so far is like a purely-voluntary kick in the teeth. It’s as metal as you can get, and it’s been a lot of fun to work with DRFrozenfire.

I’ve also been contributing, publicly and behind the scenes, to other anarchist and philosophy productions out in the internet as well as IRL.

One final thing that I’ve been doing is hosting and participating in a lot of local discussions and events. Honestly, this blog originated as a substitute for in-person engagement which I was severely lacking. As I have had more opportunities to engage people IRL, the blog and Facebook have become less of a focus. This hiatus I’ve been on, though, was intended to be a chance for me to get my life in order and get a little more reading under my belt before getting back to blog content production.

With luck, I will be able to continue working on these side-projects, produce subscriber-only content, and make blog content. I am doing my best to avoid becoming a current events production, as tempting as it may be. I think that the likes of Cantwell, Molyneux, and Woods have it pretty well covered. Instead, I’m thinking I may begin to deconstruct different philosophers and discuss their ideas a little more in between releasing some of my more original content, such as the 95 Theses. Ultimately, though, those that contribute towards the aforementioned three goals may have a direct impact on the nature of the content I choose to produce.

So, starting next week, expect some real content up on the site again. Carpe Veritas.

Coming Soon

Coming soon, to the Mad Philosopher project:

The Mad Theologian podcast!

This awesome picture of Rasputin is just here as a placeholder until our Mad Theologian-in-Residence takes his rightful place on the site.

See you soon and Carpe Veritas, readers.

P.S. In the meantime, head on over to Patreon to support this project and get some cool bonuses.

2016 Book Announcement!

Good news, everybody!

We’ve got another anthology book coming out in the next few days on Amazon, a Death Metal concept album in the works, a new offshoot brand from the Mad Philosopher project, and I’m starting to get my life in order so I can start working on the blog again!

Also, as always, you should head over to Patreon and get all the goodies that come with becoming a patron.

Stream-of-consciousness

Today is an audio-only episode.  It’s mostly just a stream-of-consciousness concerning different promotions I have going on and a little bit about the nature of “spreading the message”.

 

Something I forgot to mention in the recording is that I am on a new syndication site called “Everything Liberty“.  It’s worth checking out.

A Frank Discussion of Rights

Previously, I have written on my blog and on social media concerning rights and all the things surrounding rights in common discourse. As far as I can tell, I have not written the word “right” in quite a while… and I’ve only mentioned it a few times out-loud in private conversations as I explored the ideas I am planning to write on, today.

Today, I want to begin a frank discussion of rights. Given my self-imposed word limit and general mental constraints, I want to ask and contextualize three questions and make one follow-up (potentially) controversial statement. One may be able to trace the evolution of my ideas alluded to in previous posts to where I am now by reading through my published posts and the book-exclusive material, and one certainly could do so if one knows me on social media or in person; regardless, this is where I am at in my exploration of the concept of rights. So now, some questions:

  1. What function does the concept of rights serve?
  2. What is the ontology or metaphysics concerning rights?
  3. Are there more philosophically resilient alternatives to the concept of rights?

I will save my statement for later.

Rights seem to be a shorthand for ethical and moral reasoning. In classical texts I’m familiar with, “rights” are less a concern than they tend to be in modern and postmodern texts. As a matter of fact, when the Greeks and Romans addressed concepts that look like “rights”, they tended to focus more on what the term “privileges” covers in the modern age: a liberty granted to an individual or group by the guy(s) in charge. In a lot of ways, moral and ethical argumentation either had everything to do with virtue and ignored rights entirely, or centered entirely on one’s responsibilities as derived from one’s privileges. In the Middle Ages, the concept had evolved slightly so as to include what amounts to “privileges granted by God”; a prime example would be the so-called “divine right of kings” or the liberties taken by the Church.

In the 1700s, there was a major shift in popular philosophy. With the sudden explosion of productive technologies (such as the printing press and general industry), the subsequent decentralization of cultural production and consumption, and the sub-subsequent weakening of governmental power, certain theories that were only whispered about in the Middle Ages became widely popular. One such set of theories would be those of classical liberalism; another would be social contract theory; and one more example would be the rise of secular humanism.

One theme that was central to all three of those sets of theories was this niggling question: “If our rights aren’t derived from the king’s (or God’s) permission, how can morality exist?” The answer that seems to have won out in the marketplace of ideas is the straightforward, “People have rights because they are people, just because. Rights are something intrinsic instead of some contingent set of permissions.” Given how liberalism, democracy, and humanism have played out over the last few centuries, I doubt anyone with a basic understanding of modern history could honestly deny that the answer provided above is fraught with pitfalls. Even the SJWs demanding that free college, getting paid just for existing, and having permission to murder one’s offspring are intrinsic rights, just because, will tell you that people are mis-applying the concept.

Ultimately, every application of rights I am familiar with revolves around the essential question(s): “What can I get away with and what am I entitled to?” This is the reason I say it seems to be the case that rights are used as shorthand for ethical and moral reasoning; the focus of the rights discussion seems to be largely the same focus of ethical argumentation in general. If I have a negative right (the moral claim to be exempt from some obligation or another), such as the right to be left alone, that would mean that I “can’t get away with” harassing others (because they have the same right). If I have a positive right (the moral claim to be served by others), such as medical care, that would mean that anyone who can provide me with medical care is obligated to do so.

Depending on the theory, rights derive their ontology from different underpinnings. Some theories posit that rights are God-given, others posit that rights are brute facts, yet other theories posit that rights are derived from the general acceptance of society, and on and on. I think this diversity of suggestions is a result of the above-discussed function of rights. Ethics and morality are, by their nature, abstract. Ethics and morality don’t make things happen in the world, at least not directly; they are descriptions of how one ought to act, but they don’t make someone act in a particular way. Rights, as a shorthand for parameters of acceptable human action, are at least equally abstract. Where one can observe an apple falling in the orchard and posit a theory as to the mechanisms by which such an event occurs and the regularity with which such an occurrence is likely, one does not have the opportunity to observe a right and speculate as to the mechanisms by which the right accomplished its end.

Instead, more often than not, a philosopher or political activist will ask themselves, “What do I want to achieve? By what mechanism can I empower people to give me what I want and disenfranchise those who would get in the way of my goals?” This may sound like a very cynical take on Locke, Montesquieu, Smith… but one must remember that “What I want to achieve” may in fact be “peace on Earth and goodwill towards (wo)men” or some other fruitcake ideal. Upon answering these questions, the strong zeitgeist of rights becomes a valuable tool in accomplishing those ends. One need only come up with a source of rights that is compatible with one’s pre-existing ontological commitments and promotes one’s agenda.

Of course, this cynical reading of the history of philosophy presents a series of arguments concerning rights that have more to do with sophistry and political theory than they do with a genuine pursuit of Truth. If one were to make a genuine attempt to ground rights in a reliable ontological or metaphysical framework, I imagine it would look a lot like the cases made by a number of Rothbardian philosophers. Unfortunately, the level of abstraction required to make a case for the existence and nature of rights rivals the cases for the existence and nature of God. I only have enough bandwidth for one God-level case at a time, and people should know by now which one I’ve taken on. Instead, I just want to point out that a theory of rights which anchors itself in some moral or ontological case needs something metaphysical which lacks direct interaction with the physical world (some sort of platonic realism), while a theory of rights which anchors itself in utilitarian or sociological cases results in a utilitarian ethical framework which is sufficient to replace a doctrine of rights altogether.

So, what if a grounded theory of rights is better just left as an ethical framework without the concept of rights? Well, for one, doing so effectively neuters the ongoing social justice commentary as well as the general statist narratives wherein people claim positive rights which must be produced by state slavery. Additionally, it expedites certain discussions within and without my particular school of thought when one focuses on the principles and facts available which concern themselves with issues most people refer to as “rights issues”. What I mean to say is that the rhetoric and traditions of rights may only muddy the waters if there is an equally or more philosophically resilient alternative.

Despite the likelihood of being accused of all manner of character flaws, such as being a materialist, being a nominalist, or being some sort of pagan or atheist, I think we can ground any discussion of “rights issues” in a far more easily defined and effective set of terms and principles. For example, I believe Hans-Hermann Hoppe’s premises for argumentation ethics obtain nicely. One such premise is that private property is an inescapable feature of the human condition; the very fact that one has access to and control over one’s body demonstrates the principle of self-ownership in a way that cannot be abrogated by any instance or degree of criminal trespass or chemical interference.

So, ever the quintessential AnCap, I think that exploration of the logical, physical, and metaphysical features of property will sort out all of the issues commonly presented as “rights issues” and will, more often than not, produce results that jibe with rational intuition. For example, a good portion of the classical liberal “negative rights” are the immediate logical consequence of the nature of property: the right to secure oneself against coercion, murder, and theft is less a “right” and more a natural result of the nature of self-ownership; if I own my body (and, by extension, that which my body produces), given the definitive quality of property that is “exclusivity”, I may exclude others from use of that property by whatever means do not involve trespass on my part. There: without “rights”, I’ve established the justifiability of self-defense and, due to the universal nature of property, have also denied the justifiability of trespasses such as murder, coercion, and theft.

If there were any rationally defensible claim to what is often called a positive right, an argument for such a claim could be made stronger by avoiding a discussion of rights itself and focusing on the reality of property instead. Perhaps the most defensible claim of positive rights is that of the Catholics: the “right to life”. For example, a “right to life” cannot be taken seriously, lest it result in absurdity, given the discussion alluded to above concerning the relationship between positive rights and state slavery. Death is inevitable, so to have a right to escape such an inevitable phenomenon would require that mankind collectively devote every resource available to the discovery of immortality, which would, itself, result in the deaths of everyone involved.

Instead, acknowledging the unborn human’s ownership of its body, the propertarian obligations of a landlord (or, in this case, a mother), the degree of action either is able to engage in, and other features of property and the human condition would result in positions which directly parallel the traditional positions of the Catholic Church concerning abortion, evictionism, self-defense, euthanasia, and care for the elderly. As an added bonus, such an exercise would demonstrate the absurdity of the “right to choose”, the “right to birth control”, etc.

The time has come for my controversial claim (as if this hasn’t been controversial so far). The Catholic Church made a grave error in adopting the Enlightenment era’s rhetoric concerning rights. I kinda already alluded to that claim in the last section of the post, but I think it is important enough to warrant explicit attention. In engaging a secular humanist agenda on its own flawed terms instead of continuing its pursuits in determining the truth of the matter, the Church made itself more popular in an adversarial world. In the process, though, it laid the groundwork for the current social and ethical battles it finds itself buried under. That is not to say that the Doctrinal positions of the Church, or even the moral and ethical teachings of the Church as a whole, are inaccurate, but it is to say that the use of flawed theories and terminology obfuscates the veracity of those teachings. Because of this obfuscation, it is not an unfair accusation to blame the SJWs on the Church and to point out that the Church has backed itself into a corner concerning the pursuit of knowledge of creation (the most notable example being economics). This mistake can be rectified if teachers and clergy make a concerted effort to pursue truth as opposed to political expedience… but how long it will take to do so is very much a live question.

TL;DR: Rights, in their most resilient formulation, can best be described as “temporary privileges granted by the guys in charge” or, alternatively, “an ethical or moral shorthand for determining justification of actions”. There are a number of frameworks in which people try to ground rights and accomplish the ends for which they have created those rights; some are more reasonable than others, but they all present issues I do not believe can be resolved. Additionally, there is far too much baggage and theory in the realm of discourse concerning rights to expect calm, rational debate. Property, and the logical and material consequences of property, provide a resilient alternative to the discussion of rights which also achieves intuitive outcomes. For these and other reasons, I think that it would be a better rhetorical move to simply deny the existence of rights altogether and demonstrate the efficacy and utility of property in dispute resolution and moral or ethical dilemmas.

Also, here’s some George Carlin, for your entertainment.


Liberty Classroom: an Invaluable Tool

If you are reading this near the end of November in 2016, you can get some major discounts and provide a great deal of support to the Mad Philosopher project by going to Tom Woods Liberty Classroom and subscribing.  If you are reading this at any other time, you can still provide a great amount of value to the project by doing so.

Tom Woods Liberty Classroom is easily one of the most undervalued resources available on the internet, as it provides a legitimate PhD-level resource on a number of crucial subjects such as history and economics.  The term “legitimate” is important, here, as what most universities provide is only half-true and full of leftist propaganda.  This resource is the closest to comprehensive and the closest to unbiased as can be found.

Click Here to get some coupon codes and subscribe.  This affiliate program is definitely one of the best ways to support the Mad Philosopher project, second only to just sending me Bitcoin directly.

 

Here’s some free samples (the best stuff is behind the paywall, obviously):

The best way to fulfill the maxim “Carpe Veritas” is to subscribe to Liberty Classroom and take advantage of everything such a subscription provides.


Chapter 3: Orders of Knowledge


We have thus far introduced ratio and intellectus. As a quick refresher, intellectus (or intellect) is the inborn faculty which experiences the self and is the predecessor to reason, and reason (or ratio) is the development of said faculty. However, in addressing the human epistemic experience and briefly examining the manner in which our mind operates, we have completely overlooked the primary concern of modern epistemology. Knowledge, in all of its complexity, still haunts our exploration of our epistemic assumptions.

While the exact definition and importance of knowledge are hotly contested in this postmodern environment, one definition tends to maintain its resilience. Knowledge, in my mind, is limited to what is called “propositional knowledge”. The experiential basis of propositional knowledge we have already discussed ought to simply be called “experience”. I define propositional knowledge as “justified true belief”. Now, as the contentious discussion that rages on will demonstrate, this definition is not flawless and self-sufficient, but that should not overshadow its usefulness or accuracy.

A brief examination of the Stanford Encyclopedia of Philosophy’s page on knowledge[1] illustrates the key issues with the above definition, drawing on the works of those such as Gettier. No matter how complex and detailed the discussion becomes, the utility of the above definition is undeniable. Much like Russell’s discussion of our knowledge of universals,[2] we already have an intuitive understanding of what knowledge is. As a matter of fact, we use that intuitive understanding to critique our proposed definitions, the chief example of this being the Gettier problems. A brief explanation of the Gettier problems is in order: they are a series of hypothetical instances contrived such that the definitive requirements for knowledge are met, but the conclusion flies in the face of our intuitive understanding of knowledge. A workable solution to such a dilemma is simple: we must accommodate such an intuitive element in our definition. For now, “a justified true belief in which the justification is factual and sufficiently related to the truth at hand” will suffice, as that is, more or less, our intuitive understanding of knowledge (ignoring the verbosity of the definition). “Justified true belief” is a good shorthand for this definition. More work clearly ought to be done to develop a rigorous and categorical definition for knowledge, but that is not the intent of this work. Besides, I am confident that whatever rigorous categorical definition is found will simply be a more detailed and explicit form of the one I have given.
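For those who like to see the structure laid bare, here is a minimal sketch of that tripartite definition in Lean. The names (`Claim`, `holds`, `believes`, `justified`) are placeholders of my own invention rather than anything drawn from the epistemological literature; the only point is that all three conjuncts must obtain before something counts as knowledge.

```lean
-- A minimal sketch of "justified true belief", using hypothetical, illustrative names.
variable (Claim : Type) (holds believes justified : Claim → Prop)

-- A claim counts as "known" only when it holds, is believed, and that belief is justified.
def knows (p : Claim) : Prop :=
  holds p ∧ believes p ∧ justified p
```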

Now why, at the beginning of chapter three, do I suddenly launch into definitions, qualifications, and disclaimers with nary a mention of the next thesis in the sequence of ninety-five? Simply put, the next several theses operate with this definition of knowledge in mind, and the mere definition of a word does not justify the use of a thesis when I am limited to a mere ninety-five. One more minor but crucial point must first be made, however: our intuitive use for knowledge is the formation of a reliable worldview, predicated on the reliability of the mind. As with my explanation of experiential knowledge, man is a habitual creature: our understanding, use of, and reliance on propositional knowledge are no exception. With this tedium out of the way, we may now proceed.

Thesis #7: One gains first-order knowledge by the exploration of logic as pertains to “self-apparent” principles and facts…

As I explicated in the first two chapters, “self-apparent” principles and facts are experiential in nature. Even the existence of a “self” is derived from the experience of reflecting on one’s experiences; this knowledge is not inherent to the mind, brain, man, whatever. Even the definitive and logical truths we find to be “self-apparent” are derived from a more primary experience. The easiest example would be that of a triangle. A triangle is a closed two-dimensional polygon with three angles and three sides, the angles of which total one hundred eighty degrees. We can identify triangles by these factors, but before we could discover these attributes of triangles, we must first have an experiential knowledge of spatial relationships and basic math/geometry in order to identify or express these characteristics.

In the last chapter, we established certain epistemic tools through our mental experiences. While it is quite productive and enlightening to turn these tools on themselves, in a manner similar to that which Hegel discusses in his Introduction to the Philosophical Encyclopedia,[3] it is not required in order to begin observing and acknowledging the world at large. We can establish undeniable matters of truth and fact using syllogistic reasoning coupled with experience (most especially self-apparent facts). Our definitions of knowledge and triangles are prime examples of such a practice. This method is simple enough: one first states a definitive fact derived from experience, then, through the use of the PNC, explores the implications of such a fact; so long as nothing is self-contradictory or contrary to experience, it can be assumed to be first-order knowledge (or knowledge proper). If the logical exploration results in a contradiction, one must first check one’s logic before throwing out the initial premise. This work is, itself, an example of such a practice; our first chapter begins with three assumptions made due to their self-apparent nature, and here we are, two chapters later, still exploring the logical ramifications of such assumptions.

My current experience, aside from self-apparent principles, is my only source of immediate knowledge. If our friend Mike, from the first chapter, is experiencing a particular event, say the fateful day he shot himself in the leg, he has a whole array of experiential facts at his disposal, as well as deductive reasoning, to assist him in knowing certain facts. He has the undeniable experience of a raw coldness in his thigh as well as a ringing in his ears. Mike calls such an experience “pain” or “injury”. He also experiences the recollection of having dropped the handgun and attempting to recover it on its descent.4 Deductive reasoning may not be able to establish with certainty who or what is at fault for his current circumstance, but it is sufficient for analyzing the circumstance itself, which, to be frank, is far more important when faced with a circumstance such as:

  • I am experiencing phenomena congruent with severe injury

  • If one wishes not to die, when faced with serious injury, one ought to pursue medical assistance

  • I do not want to die

  • Therefore, I should seek out medical assistance

rather than to pursue the line of inquiry consistent with “why?”
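
For readers who prefer to see the deductive skeleton laid bare, the following is a minimal sketch of the above syllogism in the Lean proof language, treating each statement as an atomic proposition; the names are my own labels, invented purely for illustration.

    -- A minimal sketch of the syllogism above, with each statement treated as an
    -- atomic proposition. The identifiers (Injured, WantsToLive, SeekHelp) are
    -- hypothetical labels of my own, not terms taken from the text.
    example (Injured WantsToLive SeekHelp : Prop)
        (h1 : Injured)                             -- I am experiencing phenomena congruent with severe injury
        (h2 : WantsToLive)                         -- I do not want to die
        (h3 : Injured → WantsToLive → SeekHelp) :  -- if one wishes not to die when seriously injured, one ought to seek help
        SeekHelp :=
      h3 h1 h2                                     -- the conclusion follows by two applications of modus ponens

Nothing in the sketch adds to the argument; it merely confirms that, granted the premises, the conclusion is forced.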

Syllogistic, or deductive, reasoning is ultimately a practice in exploring the ramifications of the PNC as it applies to a particular claim. In the above example, it pertains to one’s particular experiences of pains and desires. As an astute logician will note, the above syllogism cleverly cheated; it introduced a premise that is neither an immediate experience nor a deductive inference. The premise, “if one wishes not to die, when faced with serious injury, one ought to pursue medical assistance,” is not necessarily an experiential fact or a deductively ascertained claim. However, herein lie two details which require attention: intuition and second-order knowledge. The latter will be discussed soon; all we need note now is that one can make legitimate first-order claims which are informed by second-order knowledge, so long as one is cognizant of doing so and verifies their congruence with the paradigm5 established by one’s first-order knowledge. The case of intuition, though, is slightly more complex. As discussed earlier6, there is a distinctly observable reality that the human mind inherently possesses certain faculties, the ones addressed so far being intelligence and instinct. What the exact cause of these inherent faculties is lies beside our current line of investigation. We will simply play the pragmatist for now: we will treat intuition as a brute fact and discuss its causes and specifics later. In Mike’s case, he would likely have an intuitive response to his gunshot wound, attempting to staunch the blood flow and the like; a shorthand for this series of responses would be “to pursue medical assistance”.

…it is highly falsifiable, and applies to physical and metaphysical fact as well as matters of truth

The above is a particular instance of what is essentially the only true type of knowledge: the only circumstance of a “justified true belief”. Nothing beyond the definitive and falsifiable justification of immediate experience and deductive reasoning can provide a greater degree of certainty. This certainty is not, however, absolute. It qualifies to be called certain due to its immediacy and falsifiability. Falsifiability describes the circumstances under which, and the burden of proof by which, one could disprove a particular claim.7

Karl Popper, having posited falsifiability as crucial to epistemological study and having built an entire body of work on such a principle, is a valuable asset to one such as myself. Anchoring an entire philosophical worldview on a few epistemic assumptions, I must be diligent in exploring these assumptions and securing them as best I can. Unfortunately for me, Popper is simultaneously more pessimistic and more optimistic than I am; making use of his work will require diligence. We both agree that knowledge is always suspect: it is always subject to criticism and correction. Yet in his ardent desire to avoid supporting authoritarianism8, he seems to fall into a trap of epistemological absurdity in which “all knowledge is human… it is mixed with our errors, our prejudices, our dreams, and our hopes… all we can do is grope for truth even though it be beyond our reach.”9 As the previous chapters10 show, I agree that our knowledge is limited and influenced by the human condition, but to assert (unfalsifiably, I might add) that truth is unobtainable because of that reality undermines the very premise of such a claim. Besides, to strive for the admittedly impossible is to waste one’s time; one’s energy would be better spent, at a minimum, on more practical asymptotic activities (like curing disease or pursuing pleasure or enlightenment).

Given how jealously I withhold the title of “knowledge”, the degree of confidence one can have in one’s beliefs hinges on falsifiability. In order to claim something as knowledge11, one must be making a claim which is immediately apparent and clearly falsifiable. Falsification of this (and every other) form of knowledge is, in truth, a good thing: it provides an opportunity for refinement and correction of an otherwise flawed worldview.12 One should always open oneself to rational and rigorous criticisms, so as to avoid becoming a relic-bearer of Lady Philosophy’s garment.13

This isn’t to say that the first time something unpredictable or inconsistent emerges one ought to throw out one’s entire worldview and sequester oneself in a mire of Cartesian doubt. Quite the opposite: one ought to defend such a claim until such time as it is sufficiently disproven or falsified. We will explore this more later. For now, it will suffice to point out that single incidents of inaccuracy in one’s beliefs may in fact be flukes; only cumulative or consistent error is sufficient cause for radical reevaluation.

Now, many may mistake this epistemic framework for some Kantian a priori reasoning or some assertion of continental brute facts. Neither is the case at hand. These self-apparent facts are, in fact, theory-laden. Even the most fundamental facts one can select, such as the Cartesian cogito,14 still contain some degree of implicit theory. In the case of the cogito, there is at least the predicate assumption that there is a causal relationship between actions and existents (that the experience of thought must be attributed to a thinker) and that the PNC obtains. The issue is not one of selecting a brute fact or discovering an a priori truth, but rather one of finding a sufficient fact in which to vest one’s philosophy, because all self-apparent facts are, without exception, theory-laden.15

Given all we have allowed into our ontology thus far, this theory-ladenness must itself either be a form of brute fact, an inherent fact that there is no fundamental starting place for understanding the world,16 or be an inextricable attribute of man’s mind. I am in favor of both of the proposed options, actually. I believe that the universe is an elegant and logically constituted entity which has no one logical predicate on which all else hinges, but rather is an intricate and interdependent network of logically constituted laws, the absence of any one of which would equally cause a total collapse. Because of that holistic nature of reality, our minds are constituted in kind, in order to accurately form a conception of the universe. This inherent holism, then, is an aspect of one’s intellect.

As mentioned, this knowledge pertains to physical and metaphysical fact, as well as truth claims. So far in this work, the most prominent first-order claim pertaining to physical fact I have made is that one has embodied experiences. Falsifying such a claim may be somewhat difficult to do experientially, given our current technological limitations. However, it could be quite easy to locate a logical inconsistency with the claim. For example, one could at least cast doubt on it by finding an inconsistency between the epistemic claim that one is capable of abstract thought and the insistence on the primacy of the material senses. I clearly have not found one, else I would have asserted otherwise, but the purpose of publishing a work such as this is to allow others to double-check my claims.

In similar fashion, we have made first-order metaphysical claims. Chief among them would be that one’s understanding dictates one’s behavior; more specifically, that man operates with an intermediary function between stimulus and response. The easiest manner in which one could falsify such a claim, as far as I can tell, would be to demonstrate that it is superfluous to forming a sufficient paradigm for all second- and third-order reasoning. I have not yet addressed the framework in which one would do so, but we will get to it shortly.

This naturally brings us to truth claims. Technically, either everything or nothing we have discussed thus far qualifies as a truth claim, given the common usage of the term. As far as I am concerned, a “truth claim” is distinguished from a factual claim (such as the two we discussed above) with regard to its subject matter: a factual claim has to do with a state of affairs in specific or categorical situations, whereas a truth claim regards matters of transcendental reality. This will be addressed in more detail in the next chapter, but for now we can refer to the PNC as one such claim. While I believe it to be impossible, one could falsify the PNC simply by illustrating a logically cogent circumstance in which something both is and is not, in the same mode and at the same time.
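
To give a sense of why I suspect no such circumstance will be forthcoming, here is a small sketch of my own in Lean (not part of the theses themselves) showing that, within the ordinary propositional logic we have been relying on, the PNC is itself provable for any proposition whatsoever.

    -- For any proposition P, a proof of P together with a proof of ¬P yields absurdity;
    -- that is, the PNC holds as a theorem of the propositional calculus itself.
    example (P : Prop) : ¬ (P ∧ ¬ P) :=
      fun ⟨hp, hnp⟩ => hnp hp

Any attempted falsification would therefore have to be mounted from outside a logic of this kind, which is part of why I take the task to be impossible.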

Thesis #8: Through the marrying of multiple first-order concepts and further introduction of experience, one gains second-order knowledge…

As the thesis indicates, second-order knowledge17 is predicated on first-order knowledge. The sum total of one’s first-order knowledge creates a paradigm on which one’s second-order knowledge can be built. Having already shown themselves to be self-apparent, rationally cogent, and non-contradictory, first-order claims can be relied upon to fact-check one’s second-order claims. Whenever one encounters or forms a second-order claim, one must critically assess its validity against the paradigm in which one is operating.

Through the application of deductive reasoning, one takes self-apparent logical principles and analyzes their relationships. By analyzing the relationships between their conclusions, one removes oneself from the self-apparent by a minor degree. Without the added element of experience, this line of reasoning has few applications outside of mathematics. Practically speaking, the marrying of multiple first-order concepts and the addition of experiential data are fairly straightforward.

Mike, now medically stabilized, can effortlessly begin to assess what happened from the perspective of strong belief. He has already ascertained that he is injured and that he dropped a loaded gun. Drawing from experience, he knows it is incredibly likely that, in fumbling to catch the gun, he pulled the trigger. He also has a strong belief that the other two people who had possession of a handgun at the time were executing proper gun safety and were not positioned so as to fire a gun at an angle corresponding to his wound. All of this evidence, along with the deductive arsenal provided by his first-order paradigm, can (rightly) lead him to the conclusion that he did, in fact, shoot himself in the leg.

The belief he has that his companions were executing proper gun safety is primarily due to experience and collaboration. He has witnessed them demonstrate their skill, knowledge, and conscientiousness many times before while shooting. Additionally, they are responsible for his knowledge of the rules and basics of gun safety and use. Adding to his certainty that he did in fact shoot himself would be one of his companions serving as a witness to the event: “Dude, you just shot yourself!” In their own way, collaboration and communication are a form of experience which is usable in the development of second-order knowledge. Any stranger can present a claim to another; but without a well-developed discourse between the two, in addition to the critical thinking skills required to assess that discourse, such an interaction is meaningless. If some stranger (or even a friend) simply walks up to you and makes a claim, anything from “the sky is blue” to “Elvis lives”, and leaves promptly thereafter, there is no opportunity to expand one’s knowledge base. However, as will be explored later in this chapter and especially in the next, someone can make an argument for a second-order belief, and that argument allows for the opportunity to expand, or at least reassess, one’s existing knowledge base.

To one familiar with logic, this thesis essentially concerns itself with induction. While Russell explores induction quite thoroughly in chapter six of his “Problems of Philosophy”, he fails to provide a concise definition for quick reference. I will suggest a definition and then recommend that the more ambitious of my readers read Russell for more detail. I would define induction as “the rational function by which one forms a strong belief through repeated experience and logical inference.”

Clearly, the study of physics18 lands solidly in this category. The empirical and observational study of the world, which makes use of logic, mathematics, and repeated experimentation, has been developed with the intent and end19 of forming a cohesive and reliable framework of second-order knowledge. Physics has proven invaluable in expanding our knowledge and providing for vast improvements in our quality of life, and it shows no signs of slowing in pursuit of that end. However, some have fallen victim to the ideology of scientism, believing that this material study of the world must be predicated on a purely material ontology and is the alpha and omega of knowledge. As I have already illustrated, science is predicated on a first-order paradigm and is part of a larger framework of philosophy. I am reminded again of Russell:

“The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.”20

As an aside that my broader ideology and disposition will not allow me to leave unaddressed: who is crazier, the chicken who distrusts the farmer and awaits and prepares for the day the common belief in the farmer’s benevolence is falsified, or the chicken who is content with the utility of daily meals?

… this order of knowledge is less falsifiable than the first.

Like first-order claims, second-order claims cannot contradict each other. In the popular case of science, it is easy to claim otherwise. For example, Newtonian gravity is still used almost universally for day-to-day practical applications of physics, such as architecture or demolition, while Einstein’s theories of relativity have effectively falsified Newton’s. That claim, though, is naive; certain aspects of Newtonian mechanics have been shown to be inaccurate and ineffective, but that does not mean that there were not accurate observations, predictions, and knowledge claims contained therein.21 In less esoteric knowledge bases, this reality is more evident. One cannot simultaneously claim that the sun will rise tomorrow and claim that it will not. Mike cannot claim both that he shot himself in the leg and that he did not, nor can the chicken claim both that the farmer will wring its neck and that the farmer will refrain from doing so.

In reality, if any two second-order claims are found to be contradictory, they are likely inconsistent with the first-order paradigm one established prior to making such second-order claims. This is because no second-order claim can be made without first assuming the accuracy of one’s first-order paradigm and verifying that second-order claim against it. In the circumstance that there is a true contradiction between two second-order claims (as opposed to a merely apparent contradiction), both of which are supported or necessitated by one’s first-order paradigm, one must reassess that first-order paradigm to ensure that no mistake was made which would result in such a contradiction.

If there is no flaw in the first-order paradigm, one must move on to pitting the contradictory strong beliefs against each other and attempting to falsify them. In most cases, second-order claims are experientially falsifiable. Induction, in its primary use, makes predictions about the world and about certain logical results. In these cases, one need only seek out instances in which the predictions made are consistently or severely inaccurate.

Thesis #9: Through the extension of trends in the aforementioned orders of knowledge and the marrying of multiple second-order concepts, one can gain third-order knowledge: this order is rarely falsifiable by any means other than proving logical inconsistencies concerning the first- and second-order paradigms and between third-order knowledge claims

While it may not be clear, in what I have written thus far I have attempted to remain as politically correct and uncontroversial as possible while still saying what is necessary to convey my point. Unfortunately, this is the point at which I must descend into touchy material. Mike may have a weak belief that he shot himself because of karma or divine punishment. He may believe that he was predestined to shoot himself or that the CIA had implanted a microchip in his ass that made him do so. Any or all of these beliefs may be true. So long as they do not contradict the paradigms established by the first- and second-order knowledge sets, or each other, it is justifiable to believe such things22. Those examples are clearly a bit extreme, but it wouldn’t be out of line to say that Mike’s justifications for these claims may be better reasoned and more defensible than many claims that people at large take to be settled matters of fact. We will address that in the next section of this chapter.

Typically, third-order knowledge claims reside in the realm of such things as esoteric sciences, religious discussions, conspiracy theories, and (especially) politics. These realms are not always populated solely by third-order claims, but they do tend that way in the common man’s mind. Other than by showing a logical inconsistency with the pre-existing paradigms, it is difficult to establish a falsifying element in third-order claims, which is likely part of the reason the average man tends to vest so much of his mental narrative in the realm of weak beliefs: to the logically illiterate, they carry the illusion of being bulletproof.

This is not a dismissal of weak belief. While this type of knowledge is frequently abused, it does have its utility. Sufficient practical reliability and utility can secure third-order concepts against ridicule. Many times throughout history, some person or organization has made a third-order claim which, by way of abductive reasoning or by advances in the rational or technological tools at man’s disposal, has since established itself as second-order knowledge. Abductive reasoning can best be described as an appeal to a compelling explanation for an otherwise unintelligible or gratuitous circumstance. In the words of C.S. Peirce, “The surprising fact, C, is observed. But if A were true, C would be a matter of course. Hence, there is reason to suspect that A is true.”23 This abductive reasoning is easily third-order knowledge, and can even see itself promoted to the second order, given sufficient supporting evidence.
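
As a side note of my own (nothing Peirce or the preceding paragraph requires), it is worth seeing why this schema cannot simply be promoted to a deductive rule: the bare inference form “A implies C; C; therefore A” is not valid, which is precisely why its conclusions begin life as weak belief. A one-line countermodel, again sketched in Lean:

    -- The abductive shape "(A → C) and C, therefore A" is not a valid inference form.
    -- Taking A := False and C := True gives a countermodel: both premises hold while
    -- the conclusion fails. (An illustrative sketch of my own.)
    example : ¬ (∀ (A C : Prop), (A → C) → C → A) :=
      fun h => h False True (fun hFalse => hFalse.elim) True.intro

Abduction earns its keep not by deductive validity but by explanatory power, which is why sufficient supporting evidence is needed before its conclusions can be promoted to the second order.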

In the case of scientific and religious discussion, one ought to be diligent in first securing one’s claims well within the realm of second-order knowledge. Many times, a great deal of cultural upheaval and unnecessary suffering result from people aggressively supporting and advancing weak beliefs in such a way as to make them mandatory for all. Two easy, controversial, opposed, and equally ridiculous examples are those of six-day creationism and neo-Darwinism. Both stand on weak paradigms and contradict matters of scientific and metaphysical fact which are quite cemented as second-order knowledge. It is acceptable to hold religious or scientific beliefs which are third-order, but only so long as one remembers that they are beholden to the standards established by their preceding paradigms.

Thesis #10: Through the collaboration of certain philosophers (and philosophy’s constituents) throughout history, there has been established a series of compelling arguments and traditions as they apply to the truth and meaning of the universe; one must be willing to adopt certain elements from these traditions, but not without first assessing the validity of and categorizing such elements

All of this chapter thus far likely appears to be a matter of stating the obvious. It is possible that one or another of my readers will claim that this model in no way resembles the actual process of knowing and knowledge. I challenge such a reader to provide a more practical, reliable, and accurate model so that I may adopt it. For now, I will extoll the cash value24 of this model.

An interesting concept introduced by the sophists of the “new atheism” movement is meme theory.25 A grossly oversimplified view of meme theory runs as follows: individuals create and transmit memes betwixt one another much like viruses, only instead of deadly illnesses, memes are ideas held in the mind. The memes that survive are those which provide the most utility or are in some other way given the opportunity to spread. This theory was created with the express purpose of attempting to discredit religions as some sort of “meme engineering scheme” in which religious leaders, over the course of centuries and millennia, create and finely tune memes which grant those leaders control over the people infected by them. If true, this would make religions a sort of mental terrorist organization.

All sentient creatures, in communicating, are meme engineers. When I form a thought and pass it on to another, I am a meme engineer. When taking ideas in and deciding which to share, which to disregard, and which to modify, I am also participating in meme engineering. All of philosophy, including science and theology both, is party to meme engineering. This does not mean that philosophy is some evil organization creating zombies through the careful application of a trade millennia old, but rather the opposite. While there are bad actors who do attempt to abuse ideology and reason to bend the weak-minded to their devices26, meme engineering is the primary engine of progress.

It is important to note that memes are more like sound bites than full-fledged ideas. Certain images, affectations, or catchphrases are good representations of memes. Where one can easily remember, recite, or recognize a phrase like “form follows function,” one may have no concept of its point of origin or even what it means. Only through some form of learning or education does one come to know that it is a principle key to the architectural field, and one too often forgotten.

Many people, for any number of possible reasons, do not critically assess their belief structures. Our culture has engendered a distinctly emotional and anti-reason attitude. Many insist that “people need to learn to think,” when what they really mean is “they ought to learn to think like me.” The social understanding of the term “critical thought” has been swapped out for dogmatic neoliberal belief. Our political, religious, educational, and economic landscape clearly illustrates this attitude. Additionally, a popular activity has emerged of asking a random selection of people off the street elementary questions concerning these subjects and sharing their absolutely incoherent answers.

Ultimately, this unwillingness to critically assess one’s beliefs in the manner I have thus far outlined has been so widespread for so long that many cultures of intolerance to reason have developed. It is, quite literally, impossible to speak cogently, intelligently, and civilly with a large swath of the population. Neoliberalism, fundamentalism, scientism, fideism, and any number of other “-isms” have evolved from their origins as mere theories or rubrics for action into monstrous, insular, intolerant, and aggressive codes of dogma which cannot coexist in a world with rational actors capable of critical thought. This does not mean that all who subscribe to “-isms” are mindless warrior drones ever ready to jihad in the name of science, faith, or civil rights; some are quite intelligent, if mistaken. Likewise, some number of “-isms” have managed to maintain their proper mindset, application, and scope in an otherwise irrational environment.

If one is careful to examine both one’s own and others’ belief structures, one can inoculate oneself against bad memes and avoid being misdirected. Nearly every individual is rational to some degree. As a result, even the most unintelligent or mistaken individual tends to utter claims which bear some degree of truth. I hope that, through this work and those to follow, I may be successful in distilling said truths from the many, many ideologies and theories to which I have been exposed and arranging them in such a fashion as to be accurate enough to piss absolutely everyone off. I believe that, with proper education or training in logical thought, many will be able to make use of this model of knowing and believing in such a way that, even if they are unsuccessful in forming an accurate worldview, they may at least be able to behave and discuss in a civil and intelligent manner.

As can be inferred from the discussion of this framework, the order in which a particular piece of knowledge falls is contingent on the knower, not the meme (or claim). The argument for the concept establishes its order, not the idea itself. A clear example would be in the realm of ethics, in which one can make a particular claim (murder is wrong), and, depending on one’s method of arriving at that claim, it can land in any particular category. Kant can claim, “Murder is wrong because blah, blah, categorical imperative, blah, blah,” and it would at least qualify as a strong belief. “Murder is wrong,” says the local minister, “because I have a strong abductive argument for the existence of God and the Bible as a moral authority,” and his claim would be, at a minimum, third-order knowledge. When you ask the first person you see at the supermarket (as I have) and get the response, “Murder is wrong because… what are you, a psycho? It just is!” you have just encountered a claim with no knowledge content worth consideration.

One cannot possibly double-check every claim that they encounter, especially in this era of information overload. Categorization of ideas can help. Our current society sees an instinctive application of this solution: when presenting an idea (especially one concerning a political issue) to one’s acquaintances, one is frequently faced with a dismissive response coupled with a particular categorization (“Oh, this is just that liberal/republican crap”). This can be done in a conscious and responsible manner. After assessing a claim one encounters, one can categorize the claim based on its premises, its subject matter, and the stances its proponents tend to take on issues other than the claim at hand. In doing this, the next time one encounters the same or related claims, one can expediently determine whether said claims operate in an acceptable and cogent framework. Admittedly, this process can result in one overlooking valuable information due to the manner in which it is presented. For this reason, I find it would be ideal to maintain a stoic agnosticism when overwhelmed and explore one claim at a time, remembering always the larger picture.

The necessity and importance of collaboration cannot be overshadowed by the pitfalls of the human condition. In interacting with others in the philosophical space, one is able to expand one’s knowledge base, refine and correct mistakes, and increase the number of creative minds working on any given problem. This interaction also tends to leave a record. Once upon a time, letters, books, and diaries left a record for later philosophers to engage. In today’s era, those technologies certainly persist, but we have the additional technologies of the internet and all it has to offer, most notable of which are the permanence and accessibility of data, attributes that will likely increase in scope as cryptography and open-source technologies become cultural mainstays.

Many ideas which have survived the ravages of human history have been passed down generationally, being improved, corrected, and reassessed with each passing century. Not all, but likely some, of these ideas and worldviews contain a series of compelling arguments and methodological traditions, hence their survival. It would be a missed opportunity if one did not make an earnest attempt to analyze and selectively accept the accurate and useful elements of these traditions. As long as one’s first-order claims are factual and true, it ultimately doesn’t matter which first-order claims are made; a properly formed reason has the capacity to derive the type of worldview pursued by the philosophers: one that is internally consistent, logically sound, empirically viable and universal, possessing ethical agency, utility, and Truth.


1. http://plato.stanford.edu/entries/knowledge-analysis/

2. Russell, “Problems of Philosophy”, Chapter 9.

3. Hegel, “Encyclopaedia of the Philosophical Sciences”, p. 10.

4. Gun safety protip: don’t do that.

5. Which will be discussed later in this chapter as well.

6. Ch. 2: The Embodied Mind.

7. Falsifiability is a concept I have shamelessly stolen from Karl Popper and turned to my own uses. I will point the curious reader to his “Conjectures and Refutations”.

8. A desire I share as an anarchist.

9. Karl Popper, “Conjectures and Refutations”, p. 39.

10. As well as thesis 95.

11. First-order knowledge.

12. Popper, p. 35.

13. Boethius, “Consolation of Philosophy”, p. 2.

14. Descartes, “Meditations on First Philosophy”, Chapter 2.

15. An idea that, while appearing to be simple, contains implicit meanings and beliefs within it.

16. The holistic theory of knowledge.

17. Also called “strong belief”.

18. The branch of philosophy which concerns itself with what our modern culture calls science, namely, a study of the material world.

19. Greek: telos, “that for the sake of which”.

20. Russell, “Problems of Philosophy”, Chapter 6.

21. For a more thorough exploration of both this specific example and the principles which underlie it, I refer the reader to Thomas Kuhn’s “Structure of Scientific Revolutions”.

22. I seriously wonder what paradigms he would have to establish in order to simultaneously believe all four claims. If he has reliable second-order knowledge on which to base his accusations against the CIA, I want to hear it.

23. Groothuis, “Christian Apologetics”, p. 434.

24. The practical results of embracing a particular idea.

25. Richard Dawkins, “The Selfish Gene”.

26. We will call these people “sophists” or “government officials”.