Friday, February 4, 2011

Outline of a Philosophy

There's a book I've been meaning to read, called "What We Believe but Cannot Prove."  I've been thinking about what the heck I believe.  I wanted to find some foundation of certainty from which I could build, the way mathematical axioms give rise to whole systems of mathematics.  Descartes did a similar thing, hence "Je pense donc je suis" - I think, therefore I am.  But I have reason to doubt even that, and while I have retained it in an altered form as a kind of crutch or justification for (1), what I have found in the end as my deepest foundation of belief is the idea that nothing can be known for certain.  Including that very belief.  It occurs to me that this may not be satisfying to everyone.  Somehow, it seems to satisfy me.  It resonates with things I have heard from others, from romanticized versions of "Eastern philosophy," to Douglas Adams' "42," to perhaps even Kant's "Refutation of Idealism" and Camus's idea of "the absurd."  In the end, everything we believe we cannot prove.  I think I'll go read that book as soon as I finish this outline.  I can get it via the Kindle app on my iPhone.

Okay, I've finished the outline for now.  The later parts are much rougher drafts than the earlier ones.  Maybe I'll come back later and work them out a little more.  It covers a lot; maybe too much.


0. Fundamental unknowability
a. From limitations of language and the inescapability of the human cognitive/descriptive system.
b. From the insolubility of the prime mover or ultimate cause problem.
c. From the generalization of (0.a) and (0.b), or, "The child asking 'why?' will not be satisfied."

1. Something exists (∃A)
a. I think, therefore something is going on, but I can't really say more.  I feel like I'm here.
b. (0) holds by (0.a).
c. No claims on A, or, the inclusiveness of A.  (A toy formalization of this bare-existence idea follows this item.)
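
A toy formalization, entirely my own gloss on (1) and not part of the outline proper: in Lean, with `Thing` and `a` as hypothetical placeholders standing in for A, the claim is a bare existential, and per (1.c) nothing further is asserted about the witness.

    -- My own gloss on (1): "something exists" as a bare existential.
    -- `Thing` and `a` are hypothetical placeholders standing in for A;
    -- per (1.c), no further claims are made about them.
    axiom Thing : Type
    axiom a : Thing

    theorem something_exists : ∃ _x : Thing, True :=
      ⟨a, True.intro⟩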

2. Contingent belief; "If it is, it is." (A → A; sketched after this item)
a. A kind of pragmatics for functioning.
b. Imperfect alignment with science. (Maybe?)
c. (1.c) holds.
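
Read literally, (2) is just the identity tautology: it holds for free, with no commitments about A at all, which I suppose is why it can work as a mere pragmatics for functioning rather than a substantive claim.  A minimal Lean sketch (again my own gloss):

    -- "If it is, it is": A → A is provable for any proposition A,
    -- with no commitment about A itself.  The proof is the identity
    -- function on evidence for A.
    theorem if_it_is_it_is (A : Prop) : A → A :=
      fun h => h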

3. Belief does not affect A
a. Belief can affect perception.
b. Belief can affect behavior.

4. Free Will is a vacant concept, or, No Free Will but you won't miss it
a. Shown
i. From completeness of personal history and coin-flipping.
ii. From belief in scientific determinism. (bad argument)
iii. From belief in an omniscient being or beings. (bad argument)
b. Consequences
i. Everyone does exactly what they will.
ii. Does not imply the absence of rewards, punishment, or ethics.

5. Might as well believe in other minds
a. Shaky grounds, but seems like there's no harm.
b. Clones and robots too.
c. Whether by degrees or with a jump at language or both.

6. Culture is socially constructed
a. Cultural relativism is correct in some respects.
b. Cultural relativism is fundamentally incorrect; humanity should, and may well, move toward a "global modern" culture unified by a set of beliefs that happen to be these.

7. No deities
a. Harmful effects as in the slave religions (Judaism, Christianity, Islam).
b. Harmful effects even in religions that appear to have non-slave aspects (Buddhism, etc.).

8. Death is the end

9. Value, values, and ethics arise socially, as in (6)
a. Human life has no "intrinsic" value, by (9) etc.
b. Humans should be valued as members of their community, and the appropriate community for this consideration is the global community of all humans.
i. Abortion is good but can be problematic in practice.
ii. The death penalty could be good but is wildly problematic in practice.
iii. Euthanasia is good but can be problematic in practice.
iv. Suicide could be good or bad, depending on how selfish the surrounding people are.  As attitudes about it mature, this will improve.

10. Science as modeling
a. Appropriate understanding of scientific "truth" re: (2).
i. Scientific statements are not "true" in the same sense that true statements about A would be, assuming such statements were possible.
b. Not quite "model-dependent realism" as in Hawking.  (Does this need its own point?)

11. Language
a. The most appropriate locus for understanding language is not a whole language but an individual.
b. Language operates as a kind of operating software over the hardware of the brain; falseness of the strong Sapir-Whorf hypothesis.
c. Humans will tend toward a single language.

12. Education
Okay I think I'll stop here for now and work on the education bit separately for a while.


Notes and thoughts:

General ideas that come out of this thinking:
Generalizing leads to misunderstanding.
Haha, unintentional joke?

(9.b) is a big problem.  I think I believe it, but I can't come up with a strong justification for universal inclusion.  It seems TOO axiomatic.  Should it include human-equivalent machines?  Near-human animals?  Should it include all of A?  Hmm.  Where is equality in this?  Should it be included because of negative consequences of NOT including it?
