Sunday, August 29, 2010

Back from a short hiatus with Frank Herbert

Due to various happenings in my personal life, I have not posted for almost a week and a half. I hope to keep such long stretches without posting to a minimum, but sometimes the personal and professional life intrudes to an unhappy extent. What must be done must be done.

Some of you will have read Frank Herbert's exceptional Dune books. Now, I know you're wondering what the deuce Frank Herbert has to do with a blog on the philosophy of science. Well, I think Mr. Herbert had to have been a student of science himself, at least in a well-informed lay capacity. He makes occasional reference to certain scientific facts that many people are not aware of and which are not generally taught to students outside of science.

More importantly, though, Frank Herbert had a clear grasp of the vulnerabilities of knowledge to the filters through which people necessarily obtain it. A very short story from Heretics of Dune shows this amply and is directly relevant to the topic of this blog. Here's the story:

"There was a man who sat each day looking out through a narrow vertical opening where a single board had been removed from a tall wooden fence. Each day a wild ass of the desert passed outside the fence and across the narrow opening - first the nose, then the head, the forelegs, the long brown back, the hindlegs, and lastly the tail. One day, the man leped to his feet with the light of discovery in his eyes and he shouted for all who could here him: 'It is obvious! The nose causes the tail!'"

To anyone familiar with the concept of a donkey, this story is ridiculous. Others have expressed this idea with other parables and examples, but the point remains - what we can learn from science depends on the filters through which we view the results of an experiment. Imagine how the view of the donkey would change if the fence were built differently and it was, instead, a horizontal slit through which the man viewed the donkey. What conclusions might he come to? Would the same constraints on the interpretation of the data apply?

In the story, the man makes several errors of judgment in evaluating the experimental outcome. The question for us is how many of those same errors we might make because we do not understand how our filters affect our interpretations. This question is really the key question behind this blog. This blog is about exploring what we can already infer from the scientific conclusions made thus far and about seeing and removing the filters that have been and are still being applied to the results of scientific experiments.

What orthodoxies trap us in the past? What long-held views prevent new ideas from getting their due? What filters of which we are not conscious prevent us from seeing the bigger picture? That, dear readers, is what this blog is all about. It is about boldly seeking those filters and equally boldly destroying them. I hope you will stay along for the ride.

Friday, August 20, 2010

The Value of Experience

I was thinking earlier today about the value of experience. By this, I mean the value that an individual places on a particular experience. If this sounds a bit metaphysical, hang in there...there is a science angle.

You see, there is a certain inherent arrogance in rejecting the value of another's experience. In order to keep this reasonably non-controversial, take the example of bungee jumping. I have not bungee jumped and likely never will. It simply doesn't appeal to me. Now, should I look down on or think less of someone who loves bungee jumping? I know this may seem a bit silly, but it is not a far stretch from the same type of thinking that some folks actually engage in. Is it correct for me to dismiss that person's enjoyment simply because I don't understand it?

How does this relate to science? Well, remember that one of the key parts of the scientific method is publishing one's results so that others can attempt to replicate them, and so on and so forth. There are many implications. The most obvious is that the answer to this question could affect which experiments one chooses to replicate. If one is not disposed to like the implications of an experiment and does not value others' experiences, one may dismiss the published research as not valuable. This possibility is not necessarily a problem, but it does fall into that category of factors that influence scientific decisions yet may well not be getting taken into account.

Another aspect of the denial of experience is embodied in the consumer of science - the nonscientist. Let us say, as an example (an unlikely one, but that is not important), that science were somehow to establish as factual that a fetus is always viable after 39 days and never before. Would someone who is predisposed to think of life as beginning at conception reject this scientific conclusion? Is it not improper to deny the experience of the various people who would have conducted the experiments to establish this conclusion? Would it not be equally incorrect to dismiss this experience if the conclusion were different (say, that life does indeed begin at conception)? The point here is not the conclusion; it's that someone might choose to reject as invalid the experience of others simply because it is convenient to do so.

Denial of the validity of another's experience is a valid choice but a dangerous one, as it limits the possibilities that one can explore. If one is on a quest for truth, as many scientists are and many fancy themselves to be, denying another's experience as valid is arguably a very poor choice. However, if you agree that all experiences are valid, then the choice to deny another's experience is also valid.

Wednesday, August 18, 2010

Guest Blog!!

The folks over at Spirituality for Living were kind enough to invite me to be a guest blogger for them.  I happily agreed.  Check out my article on the Philosopher's Stone over there:

Here's the direct link:
http://dragonintuitive.com/the-philosophers-stone/

Thursday, August 12, 2010

My Credentials

It just occurred to me that I have not given my readers any idea of what authority (if any really is needed) I have to be writing this blog.  While it somewhat misses the point, it is nonetheless reasonable that you may be wondering who the heck I am or why I am going on about all of this. 

Fair enough.  I hold a Ph.D. in Chemistry obtained in 1998 from Florida State University.  While there, I worked at the National High Magnetic Field Laboratory working in the field of Fourier Transform Ion Cyclotron Resonance Mass Spectrometry.  So, I know a thing or two about science. 

On the non-scientific side, I hold a Bachelor of Arts (with a Chemistry major, though) from Huntingdon College.  Given that Huntingdon is associated with the United Methodist Church, my college education included courses on religion and philosophy.  One of my favorite classes while attending Huntingdon was one called the Philosophy of Religion.  Much like this blog does with science, that class examined religion from a philosophical viewpoint.  It was invaluable to study topics such as the proof of God's existence and the theistic principle.  I actually ended up agreeing that the ontological proof of God's existence proves the existence of a prime mover, if you want to be able to explain the universe.  If you don't care to explain the universe, it is not necessary to accept the existence of a prime mover.  Of course, the ontological proof does not establish the nature of God, simply its existence.  So far as the ontological proof is concerned, God could be defined as a purple grape popsicle named Herbert, and it would not matter.  If you believe that the force that created the universe is a giant, purple, grape popsicle, I can't argue with you (I might snicker just a bit, though). 

Anyway, that's my formal education.  Informally, I've been questioning the nature of reality since I was six, and I have read a great deal (and hope to read a good deal more yet) on the topic.  I also have been boning up on my quantum theory, as it is an aspect I did not need to delve into very deeply during my science education (being that I am a chemist). 

Well, I don't know if that will satisfy anyone that I have any business blathering on about all of this, but, if so, well, good, and, if not, oh well.  My views are still as valid as anyone else's (and theirs are as valid as mine - even if they are weird). 

The Nature of Belief

There are those whose attraction to science is its apparent (but not real) absence of belief. That is to say, some look at science and believe that it requires no beliefs and thus that it is superior to systems of thought that do require belief. These souls, unfortunately, have not examined the matter thoroughly enough. The so-called hidden assumptions that I have already outlined (and there are more that I have yet to mention) are just such beliefs.

The beliefs of an adherent of science who claims its superiority typically include the following:
1. That the ability to repeat an experiment and get the same result gives the result more validity than a result that cannot be reproduced in this manner.
2. That one can measure something independently of oneself.
3.  That rational thinking is superior to other types of thinking.

But, now, I ask...are these facts?  Can they be proven?  Can we do controlled, scientific experiments to demonstrate their validity?  The answer is quite obviously no.  If you do not see that the answer is no, meditate further upon these ideas.  Eventually it becomes clear that they cannot be tested scientifically.  They are, ultimately, opinions (i.e., beliefs).  That they are beliefs does not reduce their validity (or enhance it, either), but it does put the lie to the notion that science is a system of thought without belief. 

We all believe something, even if we fancy that we do not believe in anything.  In mathematics, a favorite problem is to prove a mathematical statement using as few postulates (i.e., assumptions a.k.a. beliefs) as possible.  Ideally, nothing would be assumed.  Try this exercise with anything in your personal life, and you will quickly find that it becomes very difficult to function as a human being (nay, impossible) without believing something.  However, this revelation requires critical examination of one's own thoughts.  If you say "I do not believe in anything", you are wrong, because you believe that you do not believe in anything.  Critical examination of your thoughts will eventually reveal to you that you do believe in something and, eventually, what those beliefs are. 

Science, dear readers, is not superior to any other system of thought.  It is a highly useful and beautiful system of thought with marvelous outcomes (some of which we could do without), but it is not superior.  While it may be more sober than religious thought, for example, it is no better.  It is different, and, of course, religious thought is no better than science, either. 

Ultimately, the best thoughts are those that make you feel good and are in harmony with your own being.  Some scientific thoughts will fall into this category and some will not.

Most importantly, the danger is that when you believe in the superiority of any system of thought, you run the risk of diminishing your own intuition, which is the most true thought there is (for you). 

More hidden assumptions coming up.

Wednesday, August 11, 2010

Media dangers

I was talking with my wife yesterday about how scientific information gets distributed. For those of you who have not been exposed to the process, it goes something like this. One or more scientists author an article and then submit it to a journal for review. The article is (typically) reviewed by three "peers" - although some journals only require a single sponsor, and there are non-journal scientific publications that have no review at all. One of the hallmarks of these types of scientific publications, at least in the peer-reviewed literature, is that the conclusions reached are not speculative. That is to say, the conclusions must be directly related to the data obtained and involve no speculation (although sometimes they do and reviewers let it go). This point is important because it is precisely here that the media can mess things up.

Here's a hypothetical (but typical) example. An article is published in a journal that reaches the conclusion that 60% of the people in the study lost weight when eating margarine instead of butter. Now, there's a whole host of factors that are not addressed in that conclusion. For example, how controlled were the rest of the people's diets? Surely they didn't all eat the same thing in the same portions throughout the entire study. What about exercise? How many of them exercised? What exercise? How long? Who snuck in an extra-large meat lover's pizza and horked the whole thing one night? How different were the various participants' metabolisms?
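To make the point concrete, here is a small hypothetical sketch in Python (the numbers and the scenario are entirely made up, not taken from any real study). In it, the margarine/butter choice has no effect at all on weight; an unmeasured factor (whether a participant exercises) drives the result, yet a headline-friendly percentage of margarine eaters still "lost weight".

```python
# Hypothetical illustration only: weight change is driven by an unmeasured
# confounder (exercise) plus noise; margarine vs. butter has zero effect.
import random

random.seed(1)

participants = []
for _ in range(200):
    exercises = random.random() < 0.5   # unmeasured confounder
    margarine = random.random() < 0.5   # the only variable the headline mentions
    # Weight change (kg) depends on exercise plus random noise, NOT on margarine.
    change = (-2.0 if exercises else 0.5) + random.gauss(0, 1.5)
    participants.append((margarine, change))

margarine_group = [change for ate_margarine, change in participants if ate_margarine]
lost_weight = sum(1 for change in margarine_group if change < 0)
print(f"{100 * lost_weight / len(margarine_group):.0f}% of margarine eaters lost weight")
```

Run something like that and you get a perfectly quotable number that says nothing about margarine at all, which is exactly the trap the hypothetical headline falls into.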

You see, a simply stated conclusion like this hypothetical one has lots of caveats attached to it even without the philosophical examination that we've been going through.  But, some reporter, looking for something to report, sees this conclusion and then on your nightly news you are told that eating margarine will help you lose weight.  It's harder to generate interest in watching a news program if the teaser is that there is a report that some people lost some weight eating margarine instead of butter but it's not clear if that means margarine is better for you or not.  Some media outlets are better at being accurate than others, but it's something to watch out for.

The reason, of course, that all of this matters is that people end up making all kinds of decisions based on these kinds of simplifications.  Perhaps more importantly, these sorts of reports give the appearance of black-and-white scientific conclusions being generated, when, in fact, it is often the case that the conclusions are not so straightforward.  These sorts of reports trivialize the process as well as the conclusions.  They are not, of course, going to go away, and they represent a dilemma for the media.  After all, the alternative for the most part is simply to not report, and that does not serve the public, either.  So, in the end, all I can say about this topic is beware.  Take all science reporting through the media with one or more grains of salt.  My experience has been that the primary value of the media is to let you know that a report is out there.  My advice, though, is that before you act on the media report, find the scientific document and read it for yourself.  Even if you skip all the technical parts and simply read the conclusions, you will, at least, be cutting out the media middle-man.  Thank the media for letting you know the article was out there, but don't rely on their report to know what the article really says.

Saturday, August 7, 2010

Self-evidence

Self-evidence is a standard that has been applied to various ideas, the notion being that a self-evident idea is somehow acceptable without proof.  Of course, one has to buy into the idea of needing proof in the first place for this to matter, but Western society (and increasingly the whole world) operates on a largely logical positivist basis.  What I mean is that people more and more want to see the "proof".  This post may sound like I'm about to rip apart this notion.  I'm not.  A great deal of ignorance has been eliminated by healthy skepticism that demands proof, and I certainly count that as a good thing.  However, as with our other hidden assumptions, I want to make clear that there are underlying assumptions made with this demand that need to be exposed so that our thinking can be clear.

What does "self-evident" mean?  It means, basically, true because it's true.  In other words, it means something that we have no experience of not being true.  Gravity, for example, could be called self-evident.  We have no experience of something on Earth or in the universe that does not react to the force of gravity.  Thus, it could be said that the existence of gravity is self-evident. I'm not saying that we need to labor over whether to believe in gravity or not.  The point is that it is possible that there is a situation in which gravity does not apply of which we are simply unaware.

So, what's the big deal, you ask.  Well, the answer to that goes back to the concept of objectivity.  The point is this: Is it possible that gravity always applies because we believe that it always applies?  Your first reaction might be to say "Of course not!  What a silly question!".  Your justification for this outburst might be to say that you're not going to start jumping off of skyscrapers just because you decided to stop believing in gravity (and that would be quite wise of you).  The actual answer to this question is less important than that it be raised.  On a practical level, you cannot simply say to yourself, "I no longer believe in gravity," jump off a skyscraper, and levitate.  That does not mean it is strictly impossible.  It does mean, though, that you'd better not try.  The key point here is that it is possible (although we have no experience of such a thing) that there is a scenario in which gravity does not apply.  After all, we need only find one exception to the rule to potentially invalidate it.

So, what we arrive at is that our belief (i.e., that gravity always applies) may "blind" us to other possibilities if we are not careful.  How, then, can we claim to be objective?  If the experiments we design are guided by our subjective beliefs (unconscious as they may be), how is objectivity preserved?  The classical scientific answer to this quandary (as some folks have scratched the tip of this particular iceberg) is that we repeat the experiments in different times and places.  The trouble with this is that what really has to be done is to repeat the experiment over and over again with people holding different beliefs, and it is not always possible to fulfill that requirement.  After all, you will be hard-pressed to find people who do not believe in gravity (even those who earnestly wish they did not).  So, we must admit to some degree of subjectivity in science (even without invoking quantum theory).

Okay, so the argument will be that someone might indeed believe that gravity does not apply but then they perform an experiment (hopefully not involving skyscrapers) and gravity remains proven.  So, the skeptic might say, the objectivity of science is borne out.  But, then, let us ask, is it not possible that this individual simply deluded him/herself into this belief?  In other words, is it not still possible that the person still unconsciously believed in gravity while consciously denying it?  I do not think it requires intense examination of human behavior to see that such a thing is indeed possible.

In the end, we must certainly say that good science strives for objectivity, but we must also acknowledge that, since science is performed by people who will always have some "blind spots" in what they believe, science is never really 100% objective.  After all, it was less than 120 years ago that the best-educated scientists in the world believed in an aether for the propagation of light that they could not detect.  Einstein eventually showed that this aether was not necessary (and it still has not been detected, might I add).

Today, many well-educated scientists believe in dark matter that they cannot detect.  A few do not, and they are usually derided.  In time, which group is correct will eventually be revealed, but the point here is that both groups are operating somewhat subjectively.  So, gentle reader, beware of those who are devout in the superiority of science because of its objectivity.  They know not what they do.

Wednesday, August 4, 2010

"Hidden" Assumptions

The scientific method as I have laid it out is often touted by many as an "objective" process that is superior to other processes because of its objectivity.  I would certainly agree that there is some value in its quest for objectivity, that the conclusions drawn are highly informative and potentially highly beneficial, and certainly that the search for self-understanding that ultimately drives it is of great value.  Still, it is a philosophical weakness not to understand its underlying assumptions.

The process begins with observation of something.  Be it an apple falling off a tree, a bug skittering across water, or a warm summer's breeze, observation is the first step in science.  Much of the value in science historically came (and still comes) from an assumption, still very widely made and largely unconscious, that what is observed is apart from the observer.  Quantum mechanics (about which I will have a great deal to say later) begins to challenge this notion, but it is a notion that has been widely held by scientists throughout the history of science.  I do not wish to challenge the value of that assumption, but I do want to point out that it is an assumption that is largely made with no conscious forethought.

The fact is, we can never really know whether our beliefs and expectations influence the outcome of any experiment or not.  No one can design an experiment to tell us that.  Quantum theory suggests that the observer does play a role in the outcome of the experiment, at the very least by forcing a collapse of the wavefunction (in the Copenhagen interpretation).  The question is, does the observer's role extend beyond that?  We cannot truly know through scientific means (it is worthwhile to think that one through).

Let us take a simple example.  If we weigh a cup of water three times, we would expect to get very much the same answer each of the three times.  Ignoring evaporation and other molecular and atomic processes for the moment, our conception is that the mass of the cup of water is unchanging throughout time.  Thus, naively, we would expect the weight to come out the same each time.  Were we to actually perform this experiment (and correct for evaporation), we would find that the result is not exactly the same each time but varies a little bit around a central value.  Analytical chemists tend to think of this as "random error", which is a very scientific-sounding way of saying "We don't know what's making it change".  Some of the "random error" is indeed due to fluctuations in air currents, electrical power, temperature, pressure, etc. (though usually we cannot say how much of the variation is due to any one of these factors).  Generally, though, it is assumed that there is no actual change in the mass of the cup of water (apart from the previously mentioned atomic and molecular processes).

No doubt, if one could eliminate all the aforementioned sources of variation, we would see the same value over and over again without variation.  The question is, is that because the mass never changes or because our measurement of the mass never changes?  In other words, if we begin the experiment believing that the mass cannot change, does the measured mass stay the same because it is truly static or because we believe it cannot change?  Is it possible that it does change but that our measuring device also changes proportionally to compensate, so that we still get the same answer?  From a functional point of view, the answer to this question may not be terrifically important, but it is important philosophically because it gets to the core of what science is and what it can and cannot tell us.
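For readers who like to see the idea in miniature, here is a minimal sketch in Python (the true mass and the size of the jitter are assumptions I picked for illustration, not measurements).  It simulates three weighings of the same cup whose readings scatter slightly around a fixed value, which is the behavior the "random error" label describes.  It cannot, of course, tell us whether it is the mass or the measurement that stays constant.

```python
# A minimal sketch, assuming a fixed "true" mass and a small, purely random
# jitter in the balance readings; both numbers are made up for illustration.
import random
import statistics

random.seed(42)

TRUE_MASS_G = 250.000    # assumed, unchanging mass of the cup of water (grams)
NOISE_SD_G = 0.003       # assumed jitter from air currents, electronics, etc.

readings = [random.gauss(TRUE_MASS_G, NOISE_SD_G) for _ in range(3)]

for i, reading in enumerate(readings, 1):
    print(f"weighing {i}: {reading:.4f} g")

print(f"mean: {statistics.mean(readings):.4f} g, "
      f"spread (std dev): {statistics.stdev(readings):.4f} g")
```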

I want to point out, too, that it is not my assessment that, if our beliefs about experimental outcomes do indeed affect those outcomes, science falls apart and becomes useless.  Hardly!  However, it is important when we start talking about scientific conclusions and how they may be applied to our lives.  It is also a very important point to bear in mind when considering the value of scientific conclusions versus conclusions produced by other thought systems.  Many advocates of science will belittle religious thought, for example, because of its lack of objectivity, while failing to notice their own assumptions (taken on faith because they appear to be self-evident).  The intrinsic "correctness" of "self-evident" axioms is another "hidden" assumption that many advocates of science rarely consider (at least, publicly), and it will be the next one that I address.

Tuesday, August 3, 2010

In the Beginning...

Science as a distinct branch of philosophy began a long time ago.  How long it is hard to say, but, certainly, its roots were in place with the Ancient Greeks, and it may well have begun long before that.

Today, science is usually thought of as its own "thing", with knowledge and tools that are not directly related to any other branch of human thought.  In fact, it is often seen as conflicting with other schools of human thought, especially religion.  The media are particularly fond of science, reporting on its latest conclusions even though the reporters are often not scientifically trained and cannot critically assess the information on which they are reporting.  That inability does not prevent them from telling you and me what foods we should eat, which chemicals will make us die earlier, and what new miracles may be around the bend.  Of course, if you are not yourself a trained scientist, then you have no real way of assessing the "scientific" information given to you, either.  Thus, you may run about being worried about margarine, only to later be worried about trans fats and anal leakage caused by olestra.  In another decade or so, you'll be worried about the next butter substitute and the horrific things it can do to you.  The point here is that there are a lot of scientific conclusions running amok with few people actually trying to make sense of them all.

At a deeper level, scientific conclusions have some very interesting things to tell us about ourselves and our reality, if we are willing to listen.  If we are wise scientific consumers, we must take these conclusions with a grain of salt, which is a theme that will be repeated over and over again in this blog.  However, with a healthy dose of skepticism, we can then look at what these things tell us and draw our own conclusions about what they mean.  It is for this purpose first and foremost that I have created this blog.  I hope to relay very complicated scientific concepts in an easily accessible manner and offer my interpretations as to their meaning, all the while leaving you, gentle reader, to decide for yourself what it all means, if anything.

For those who could use a refresher, science is not really a body of knowledge but a process unto itself.  The process is as follows: observe, hypothesize, test, evaluate, repeat.  If we test a given hypothesis enough times, it becomes a theory, and, after enough positive tests, possibly a law.  It takes only one test that gives results inconsistent with the theory to bring the whole thing down.  Indeed, Einstein dismantled the aether explanation of electrodynamics without a test at all.  He only had to publish a better explanation that took into account the already existing explanations.  Tests performed later by other people verified the validity of relativity theory.  The scientific process (or method), then, results in conclusions that we have to evaluate as seeming to align with our reality or not.  This last part is where the fun is, and it is where I am going to be concentrating.  Before I do, though, my first critical post (in the true sense of the word, rather than meaning that it will be "negative") will be on the assumptions underlying the scientific method - one key assumption in particular.
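As a toy illustration of that falsification point (this is a deliberately trivial sketch, not a real scientific workflow, and the "law" and data in it are invented), consider the following Python snippet: the hypothesis survives only as long as every observation agrees with it, and a single contrary result is enough to bring it down.

```python
# Toy falsification sketch: invented "law" and invented observations.
def hypothesis(x: float) -> float:
    """Hypothetical law: the output is always twice the input."""
    return 2 * x

# Pretend observations as (input, measured output); the last one disagrees.
observations = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 7.5)]

TOLERANCE = 0.1
for x, measured in observations:
    predicted = hypothesis(x)
    if abs(predicted - measured) > TOLERANCE:
        print(f"Falsified at x={x}: predicted {predicted}, measured {measured}")
        break
else:
    print("All tests passed - the hypothesis survives (for now).")
```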

We need to understand the potential holes in the scientific process before we can examine any of its conclusions, and that, really, is what the philosophy of science is all about.