Thursday 19 November 2020

Climate Change Revisited

In my earlier post on anthropogenic global warming (AGW), I posed four sequential questions to ask about global warming, to help think through the issue.  I am pleased to have seen these same four questions raised by others since then, although I cannot claim to be their source.  This post is an extended update on the subject, including what I have learned since then through reading IPCC documents, science publications, and several articles on the "other side" of the issue, as well as various web sites.  A good site to follow, one that covers the science and multiple perspectives, is Watts Up With That.

I begin with an update on the four questions, with my current take on some answers:

1. Is the planet Earth warming?

Yes, it is quite clear that there has been some warming, but just how much, and from which starting point, remain uncertain.  The planet appears to have warmed slowly over the past century or more, by way of small increases separated by level periods, or even occasional declines.  The amount of warming depends on who you ask and what year you choose for your baseline, but it seems to be about one degree Celsius (nearly two degrees Fahrenheit) at this point.  Few people deny this or claim there has been no warming at all.  Moreover, the Arctic appears to be warming more than the rest of the land, which in turn has warmed somewhat more than the oceans.

As a caution, however, a century ago the world was coming out of a cool spell known as the "Little Ice Age", when the climate was decidedly sub-optimal, and there have been previous warm and cold periods.  For example, only about 12,000 years ago the last ice age was ending.  Indeed, over geological time the Earth has been much warmer and much colder at various times in the past.  The climate has always been changing, and there is no such thing as a stable climate.  The bigger question is: how much warming will there be in the future?  And therein lies the controversy.

2. Are we causing the warming?

Here too, the answer appears to be "yes": human activities are very likely causing some of the temperature rise.  My initial guess was a cheat of 50 +/- 40%, and the Intergovernmental Panel on Climate Change (IPCC) agrees that we are the cause of at least half of it, thereby allowing that other factors surely play a role.  Other respectable sources suggest less than 50%.  Few people claim no human cause for the warming, while some claim we are causing almost all of it.  For now I'll stick with half and maybe tighten up the uncertainty a bit, say 50 +/- 30%.  Mind you, this alone says very little about future warming possibilities.  In addition to burning fossil fuels and the increasing carbon dioxide (CO2) concentration in the atmosphere, there are other factors influencing global temperatures and regional climates: deforestation, solar shifts, ocean currents, urban sprawl, cloud cover, etc.  None of the models in use today accurately captures all of these influences.  Indeed, we do not even understand some of the relationships.

3. Is the warming bad?

This, of course, is where we shift from "global warming" to "climate change", with all the added confusion and uncertainty that entails.  Here too we get into the difficulties and disagreements driving the controversy.  Any change to the past climate in some regions will be bad in some ways, for some people or ecosystems, but it is tricky to nail down any of the supposed effects for certain.  The IPCC focuses on the assumed negative consequences of rising CO2 levels: melting ice and permafrost, rising sea levels, warmer seasons, shifting biomes, etc.  Despite a lot of hype in the media, however, even the IPCC does not find much evidence for strong effects on rainfall, drought, storms and other weather extremes.

While most public reports point to the negative effects of increasing CO2 concentration and rising temperatures, fewer point to the clear benefits. Increased CO2 has led to a significant greening of the world: more leaves on trees, faster crop growth, longer growing seasons, etc.  Plants need and benefit from CO2 and the more of it in the air, the more they will soak up, thereby storing more "carbon".  I will doubtless get blasted by the "consensus" crowd for saying this, but shorter winters and warmer growing seasons should be good news for northern climes such as Canada.  In general, one should look for both pros and cons to assess any change, and not look only at the negative effects.

One of the bad effects most often mentioned is sea level rise, but that has been grossly overstated.  Sea level has been rising ever since the last ice age ended.  The average rate over the past century was something like 1.4 to 1.8 mm per year, depending on who you ask.  Even if that increased to 3 mm/yr (unlikely), the rise would be only about 24 cm (roughly nine inches) by 2100.  This would surely cause additional problems for low-lying shores, but it is hardly a global catastrophe, or the end of civilization as we know it, and there is lots of time to adapt.
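For anyone who wants to check that arithmetic, here is a minimal sketch in Python (it simply assumes a constant 3 mm/yr from 2020 to 2100, as in the sentence above):

# Back-of-the-envelope sea level arithmetic, assuming a constant (hypothetical) 3 mm/yr rate.
MM_PER_INCH = 25.4

rate_mm_per_year = 3.0      # assumed future rate (the high end discussed above)
years = 2100 - 2020         # projection horizon

rise_mm = rate_mm_per_year * years
print(f"{rise_mm:.0f} mm, or about {rise_mm / MM_PER_INCH:.1f} inches, by 2100")
# prints: 240 mm, or about 9.4 inches, by 2100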

This points to what is missing in the above question: the projections into the future. Almost all of the supposed negative consequences of climate change arise from uncertain models of the effects of global warming on the planet, ecosystems and human activity.  Those models in turn are based on projections of temperature rise 60 or 100 years or more into the future, which come out of other models for the Earth's climate. Given that climate models have not been able to account for changes over even the past twenty years, to rely on them to predict a century ahead seems risky at best.  And modelling supposed climate change consequences based on those projections just multiplies the uncertainties.

Given that the worst supposed effects of climate change are not projected to occur for many decades, an unbiased reader can perhaps see why so many people are leery of directing massive public policy changes on the basis of the worst-case published results of these same models.

4. Can we do anything about it?

Here I would like to split the question in two: physical possibility, and political likelihood. First, is it even physically possible for humans to change the climate significantly? The climate has changed - sometimes a lot (e.g. the ice ages) - without human causation for eons, suggesting that there are more important factors beyond our control.  However, if 50% of recent warming has indeed been caused by humans, then in principle we have some control over future warming and the resulting climatic changes.  According to global warming theory, greatly reducing or eliminating CO2 emissions - primarily from burning fossil fuels - would be the necessary step. 

This is a reductionist approach, assuming that humanity has a climate control knob called "CO2 emissions" that we can readily adjust; the Earth's thermostat as it were. Stating it this baldly and simplistically underscores the fact that the "climate change" issue goes well beyond CO2 emissions and the things we can control.  Nevertheless, it may, in principle, be possible to exercise some degree of "control" over future warming by drastically changing our use of fossil fuels.

The second part asks: can humanity work together to achieve this goal, even in part?  Is there the global political will and wherewithal to drastically reduce coal, oil and gas usage for our usual purposes of industry, transportation, home heating, etc.?  Based on the track record of the past 30 or more years, the answer appears to be, "no, we cannot".  How many Kyoto accords, Paris agreements and other major international plans have come and gone with little to show for them?  When one major player (the USA) has backed out of Paris, and the other (China) declares it will only begin reducing CO2 in 2030, even as it builds more coal-fired power plants each year, it seems unlikely that major decreases in CO2 will be achieved any time soon, notwithstanding all the talk, planning, and insistence from various quarters.

It is true that there have been small improvements in some jurisdictions, and there are ways to convert some CO2 producers to reduced-emission energy sources.  Wind and solar power are touted as the solution, but have their own problems; e.g. the need for large scale energy storage.  And changing from coal and oil to natural gas in many situations reduces the net CO2 emissions.  Despite the wishful thinking, however, gas, oil, and even coal will be with us for many decades to come, especially since we cannot ethically deny their use by developing countries to get their populations up to developed-world levels of health, education, infrastructure, etc. 

Perhaps the most effective short-term initiative would be efficiency: finding better ways to do more with less energy.  Along those lines, it should be possible to encourage people to cut down on fossil fuel use via smaller cars, well-insulated houses, working from home, less air travel, fewer gas-guzzling "toys", etc.  There are some trends in this direction, and some countries and cities have been able to make small reductions in their CO2 emissions.  But wholesale cuts beyond perhaps 20% over the next decade or so seem highly unlikely (barring new pandemics).  To reduce CO2 emissions by 50% or more would require severe changes in how we live, work, play and run most of our human activities.  Indeed, despite all the virtue signalling, when push comes to shove, "climate change" is not very high on the public's priority list.

Further Discussions:

Based on these four questions and my answers, a fifth question now seems necessary: Should we try to do something about climate change?  Some shrill voices insist that we must do everything we possibly can immediately to save the planet.  Others claim that the high cost of the demanded changes would be worse than the likely effects of climate change for the realistically foreseeable future.  The former voices then call the latter ones "deniers", and in return, the latter may refer to the former people as "alarmists".  As usual, prudence and reason suggest a reality somewhere between these extremes.

Part of the answer to this new question would be to seriously study the pros and cons of a warmer climate, realizing the uncertainties and biases inherent in all the models, and leaving aside any prior judgements on the matter.  Part of that, in turn, is to come up with realistic calculations of the likely rise in CO2 over the next few decades, based on reasonable assumptions.  Then a range of approaches to estimating the effect of increased CO2 on the global temperature can be taken.  Of course both these approaches have been pursued, but there are players other than the IPCC who come to quite different conclusions.  Rather than circling the wagons and calling each other names, scientists should humbly look at ALL arguments and assess all data fairly, and then predict a realistic range of warming into the future, along with a variety of cost-benefit analyses regarding its effects.  In this vein, it is telling that previous warm periods, for example Roman (1 to 300 AD) and medieval (800 to 1200 AD) times, were known as climate "optimums", in the sense that the world seemed better for civilization when it was warmer.

In the meantime, I expect that most people would agree to pursue research and development of ways to mitigate and adapt to likely changes in regional weather patterns and temperatures that realistically might happen over the next, say, 20 or 30 years.  These analyses could look at further efficiency improvements, reasonable tax and incentive approaches, practical energy storage processes, realistic carbon capture and sequestration options, safe nuclear energy, and probably many other ways to incrementally reduce fossil fuel usage.  After all, coal and oil will eventually run out, and mankind will need to have other large, reliable energy sources in hand.  Thus, I support a shift away from fossil fuels, but realistically expect that to take several decades at least.

The climate change bandwagon has pushed ahead by referring to sceptics as "deniers", harping on the IPCC reports, getting children and schools riled up, and getting government lip service in line.  In 2010-2019 it seemed to be the biggest topic for the United Nations, all Greens, various liberal governments, the news media, Hollywood, many science agencies, etc.  Such was the “consensus”.  Most of the public attention was thereby focused on the IPCC reports and the climate change warnings that regularly come out stating we have only 12, or 10, or even 5 years left to "save the planet".  These groups wag the finger at the "deniers" and at any governments too slow in acting seriously on "climate change", despite the supposed "settled scientific consensus" on the subject.  However, one has to look beyond government virtue signalling and media hype to hear the other side of the controversy in order to get a balanced view of things.  And there certainly is another side.

A View From the Other Side:

Much less has been published about the over-the-top climate change evangelists, sometimes referred to as “alarmists” for overstating the issues and hyping up the science.  Therefore, I would next like to turn the tables and take a closer look at them and how their "sky is falling" message has skewed reasonable discussion on the topic of CO2 emissions, global warming, climate change, and their likely effects.  The following are a few of the concerns I have seen raised about the alarmist position, which act to undermine their arguments and the whole climate change crisis story:

1. The IPCC is fundamentally biased in its mandate and approach: to find evidence for anthropogenic global warming and to explore its negative effects.  It is assumed that CO2 is the primary cause of recent warming and little effort is spent studying other causes.  Meanwhile, all the effects of climate change are assumed to be negative whereas there are clear benefits of increased CO2 and a warmer planet.  A balanced (scientific) approach would include evidence, analyses, models and opinions from all sides.

2. The climate change warnings arrive with vague fears for the (mostly distant) future, with unspecified "tipping points", apocalyptic worries, and shrill demands for immediate major changes to global civilization.  These come as hyped interpretations of selected modelling reports.  Other, more moderate voices are purposely squelched, ignored, or drowned out by alarmist shouting. 

3. I have seen credible, persistent and detailed reports of selected or even doctored data used as evidence for AGW and its related effects.  Biased assumptions, suspicious "corrections" to past data, and skewed analyses are common.  Even the choice of baseline or starting point for statements about CO2 concentration, average global temperature and the supposed effects of AGW is not accepted by everyone.  And few ever explain what temperature would be "ideal" for the world.  When did we ever have an optimum and stable climate?  Can we expect the Earth's climate to remain the same forever, according to human wishes?

4. The supposed "consensus" and "settled science" about climate change are simply not true.  The oft-mentioned 97% consensus comes from a highly selective and unrepresentative sample of views.  Claims of scientific “consensus” as an argument should raise suspicion for any complex subject.  For AGW and climate change, the claim is simply false, yet it continues to be made, despite repeated correction.

5. Some alarmists misrepresent or dismiss past climate changes - both upward and downward shifts in global temperature - as well as their effects (both positive and negative).  Just think of the infamous "hockey stick" graph that made recent warming appear unprecedented and "settled", but whose method was later shown to produce a built-in warming trend even when fed random data.

6. The various climate models over the past 20 or 30 years have consistently produced spectacularly failed projections, predicting temperature increases far above those actually measured.  Clearly the models assume an unrealistically strong connection between CO2 concentration and radiant heat balance at Earth's surface.  If they are wrong over 20 years, how can their 60 to 100 year projections be taken seriously?  Yet we are told we absolutely must do everything demanded to prevent their worst-case projected outcomes and thereby "save the planet".

7. The overreaching demands for political and economic upheaval, supposedly essential to prevent climate catastrophe, are presented with little supporting analysis to back them up.  Surely dire predictions and demands for a civilizational reset require solid, generally accepted evidence, and not just appeals to failed models and the precautionary principle.  It is telling that those demands look a lot like socialist utopian dreaming.

8. The demands to stop using fossil fuels arrive with wholly unrealistic hopes and expectations (dare I say dreams?) for alternative energy sources in the near term.  Any reasonable look at where mankind gets its energy clearly shows fossil fuels will be needed for many decades yet, especially in developing countries.  Without low-cost, reliable and massive energy storage technology, wind and solar energy cannot replace fossil fuels in most applications.

9. Alarmists are spreading irrational fear among people, and especially children, via schools, the mainstream and social media, Hollywood, etc.  See, for example, Al Gore’s movie "An Inconvenient Truth" with its unfounded extreme predictions, or the shrill claims that we have "only ten years to save the world" (repeated every decade or so).  Most of this is simply unsupported, "the sky is falling" nonsense.  Climate change as an existential threat?  I have not read anything meaningful pointing to that as a serious possibility.  Please stop the fearmongering; then we can talk.

10. Finally, there are the hypocritical agreements, statements and spokespersons who do not walk the talk they insist the rest of us accept.  Wealthy people flying personal jets lecture the middle class on turning down thermostats and using public transit.  Public figures hype sea level rise, while buying ocean-front properties.  If you cannot practise what you preach, stop demanding it of the rest of us.

Aspects like these make it hard to trust the supposed “settled science” that the standard narrative insists on.  If there is solid evidence that global warming is caused by us and is truly bad, then explain the science and logic and deal with counter-arguments fairly.  Don't hide behind a trumped-up “consensus”, with all the attendant hype and fear.  If there truly is a serious risk to the world and humanity 80 or 100 years from now, then we can plan realistic actions to minimize or mitigate the worst effects.  Don't just preach global climate doom and then make unreasonable demands.  I note in passing that the people most worried about potential AGW effects 100 years from now often do not seem at all concerned about the reality of economic doom from ever-increasing public debt that will assuredly weigh down our children and grandchildren.

Finally, there are entirely reasonable things we can do individually and together to reduce our “carbon footprint”: less travel, smaller houses and cars, walk or bike when possible, work from home, improve energy efficiency, reduce waste, live within our means, encourage modest increases in renewable energy (with energy storage tech), help developing nations form stable governments and economies, stop enabling and fighting wars, stop cutting down rainforests and polluting the oceans, grow more plants in and around cities, adopt safe nuclear energy, and so on and on.  There is a long list of things people can do to reduce the need for fossil fuels, improve CO2 capture in plants, and minimize our individual and societal impacts on the planet. 

In conclusion, the planet Earth is indeed warming somewhat at present, and the climate is changing for various partially understood reasons.  There will doubtless be some moderate negative impacts in certain regions, but this is not a human existential crisis.  Surely there is a sane middle ground between denier and alarmist that most people can agree on, and accept as a starting point?  As time goes on, we will learn more about the climate and be able to do more to adapt to or mitigate the impacts of any changes.  There is no need, however, to attempt the impossible immediately to address a problem that might only become serious a century or so from now.


Tuesday 18 August 2020

Some COVID-19 Analyses

I don't usually blog about current events or issues in the news, but COVID-19 is such a big thing this year that I cannot resist throwing my thoughts into the fray.  So here is what I concluded, based on my analysis of the Ontario, Canada data, lately taken from the official COVID-19 data web page: https://covid-19.ontario.ca/data

My interest in analysing the pandemic began in early April with the initial published data on cases and deaths.  It seemed to me at the time that the deaths-per-case rate (then around 6%) was higher than the officially published estimates (1 to 4%), and was slowly rising from week to week.  Of course, I realized that a rising ratio was to be expected: at any point, the accumulating deaths lag the accumulating cases by about a week, since it takes roughly seven days for someone to die from the disease after being diagnosed.

Instead, I then looked at the number of deaths as a fraction of all closed cases -- the "resolved" (recovered) cases plus the deaths -- which may lag in the opposite way, since it takes longer to confirm that someone has recovered than it does for them to die.  That fraction has remained around 9% every week I updated the numbers.  Factoring in the estimated 30% of "asymptomatic" infections -- undetected because the person never feels sick (and does not die) -- the overall fatality rate drops to something between 4 and 6%, still above the WHO and international estimates.
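Here is a minimal sketch of that adjustment in Python.  The 9% closed-case rate and the 30% asymptomatic share come from the paragraph above; the simple correction formula (undetected infections are assumed to all survive) is my own rough assumption:

closed_case_fatality = 0.09    # deaths / (recovered + deaths), from the Ontario numbers above
asymptomatic_share = 0.30      # assumed fraction of all infections that are never detected

# If undetected infections essentially all survive, detected cases make up only
# (1 - asymptomatic_share) of the true infections, so per infection the fatality rate is:
infection_fatality = closed_case_fatality * (1 - asymptomatic_share)
print(f"Adjusted fatality rate: about {infection_fatality:.1%}")
# -> about 6.3%, near the top of the 4 to 6% range quoted above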

I started looking into these results and discrepancies, and quickly hit on the age of the person as the principal factor determining the risk of death.  The graphs of Ontario deaths by decadal age range, normalized to a population of 100,000, showed a distinct exponential curve shape, so I took their data and plotted it on a logarithmic scale.  I was shocked to find how closely the data followed an exponential trend.  I redid this analysis every two weeks, and in June got the results shown in the following graph.


Here I have plotted three trends by age group, using an X-Y plot instead of the usual bar chart to allow further analysis.  Each age group is therefore one decade of people; e.g. the 30 to 39 year datum is shown as age 35.  I plot the number of cases up to June 8 per 100,000 population in that age range for Ontario, by age group, from 0-9 years through 90+ years old (blue squares). More description about that curve later.

The orange diamonds plot the number of deaths per 100,000 population up to the same date.  It was this straight line that first intrigued me -- an almost perfect exponential relationship!  However, since the number of deaths continues to accumulate over time, I decided to further normalize the data by dividing the orange diamond values by the average risk of death for the entire population, to get the yellow triangles, which represent the relative risk of death (percentage) by age, and which should not change over time.

The yellow triangles are, of course, also a straight line, and I have included the trend line as an exponential equation.  The R-squared value indicates how good the fit is over more than three orders of magnitude, from 20 to 95 years old.  (Below 20 years old, the numbers are estimates since, at that date, there had been no deaths in that age range.)  I imagine such a close fit is almost unheard of in medical and disease statistics -- a fascinating discovery!
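For anyone who wants to repeat this kind of fit on their own region's data, here is a minimal sketch in Python.  The age groups and death rates below are hypothetical placeholders, not the actual Ontario figures; fitting a straight line to the logarithm of the death rate is one simple way to recover the exponential trend, its R-squared, and the implied doubling time:

import numpy as np

# Hypothetical placeholders: decadal age midpoints and deaths per 100,000 in each group.
ages = np.array([25, 35, 45, 55, 65, 75, 85, 95])
deaths_per_100k = np.array([0.1, 0.3, 1.0, 3.5, 12.0, 40.0, 140.0, 480.0])

# Fit log(deaths) = a + b*age, i.e. deaths ~ exp(a) * exp(b*age).
log_y = np.log(deaths_per_100k)
b, a = np.polyfit(ages, log_y, 1)        # slope b, intercept a

# Goodness of fit (R-squared) on the logarithmic scale.
predicted = a + b * ages
r_squared = 1 - np.sum((log_y - predicted) ** 2) / np.sum((log_y - log_y.mean()) ** 2)

doubling_years = np.log(2) / b           # years of age needed for the risk to double
ratio_85_vs_35 = np.exp(b * (85 - 35))   # relative risk multiple between two ages

print(f"R^2 = {r_squared:.4f}")
print(f"Risk doubles every {doubling_years:.2f} years of age")
print(f"An 85-year-old is about {ratio_85_vs_35:.0f}x as likely to die as a 35-year-old")

Note that dividing the death rates by the population-average risk, as done for the yellow-triangle curve, only shifts the intercept of this fit; the slope, and hence the doubling time, stays the same.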

To better explain the yellow trend line, the average relative risk of death is 1.00 by definition, and that occurs around age 67.  Older people are at more risk, as everyone knows, while younger folk are at reduced risk.  However, nowhere else have I seen it shown how important this age factor is, and over such a wide range.  Indeed, between age 20 and age 95, the risk of death from this disease doubles for every 5.77 additional years of age!  An 85-year-old is therefore over 450X as likely to die from COVID-19 as a 35-year-old, all other factors being similar!

That is a huge difference, yet rarely talked about.  After finding these results, I sent them to the Ontario public health people, my elected member of the provincial parliament (MPP), and the local health authorities.  I figured they would want to know about them and perhaps apply them when choosing public policy going forward.  I heard nothing back, so I wrote a letter with the results to my local community paper.  The letter got published and I got a couple of private responses, but again, no public or broader media recognition.  I guess I am not an "expert", so no one pays me any attention.

There are some further things to note in this data.  For the oldest Ontarians, those over 90 years old, the disease has been devastating.  Fully 35% -- more than a third -- of those who got this disease died!  That is comparable to the fatality rate for the black plague in medieval Europe.  This was reflected in the huge number of deaths seen in nursing homes or "long-term care" facilities in the early months of the pandemic, a black mark on Ontario and how we care for our oldest citizens.

If applied, these results could have an important influence on public health policy.  Clearly, people over 70 need to be kept safe from contact with infected people, and monitored closely.  This is well known but never quantified as shown here.  On the other hand, people under 50 years old have a very low risk of death, especially those without underlying health conditions.  Perhaps they could get back to work with minimal constraints?  Even more, with few cases and almost no deaths for people under 20 years old (read "students"), there is almost negligible risk if they are otherwise healthy, so re-opening schools with few constraints looks feasible.

The true danger lies in mixing the young with the old: elementary teachers nearing retirement, retired folk driving school buses, kids living with their grandparents, young personal-care workers serving in nursing homes, etc.  There would have to be some careful policies put in place to avoid contagion in those relatively few situations.  People between 50 and 70 could decide for themselves what risks they are willing to take, based on their own home, work, life, and health situations.

Such an approach would go a long way to re-opening economies, and thereby minimizing the non-health impacts of the pandemic.  Governments in various places seem to be headed in this direction, although hesitantly and still with severe constraints on the young about distancing and masks.  Despite ongoing hype about increasing "cases", worries about the fall influenza, and comparisons among different countries' and states' experiences, there does not seem to be general recognition of the results I have graphed here.

I continued to track the Ontario COVID-19 numbers to see how the results would change over time.  A month after the above plot I redid the analysis and got the following graphs.  As you can see, they are very similar.  The number of cases and deaths per 100,000 people has gone up slightly, as you would expect, but the overall shapes have remained the same.  There was one death of a child under ten, which skews the bottom end of the curve in a way that is not statistically meaningful.  For the "relative risk of death" curve (green triangles in this graph), between ages 20 and 95 the results are almost identical, fitting the same exponential equation as a month earlier.  The doubling time for the death risk is also almost identical, at 5.85 years.  Since July there have been fewer new cases to add and even fewer deaths in Ontario, so the results are essentially unchanged as of this writing.


One further note about the top curve (blue squares): between 20 and 80 years old, this curve is almost flat, with about 250 cases per 100,000 population.  This suggests that COVID-19 is just as easy to catch at any age -- an equal opportunity disease -- and that it is the fatality that varies with age, not the contagiousness.  However, both ends of the curve are different.  Below age 20, there are fewer cases found per 100,000 people.  This probably reflects the fact that young people generally don't get sick from this coronavirus, so many infected people would never be tested and thus never become "cases".  One could, in principle, look at the difference between those data points and the 250 average to estimate what percentage of people under 20 had the disease without being tested, at least in comparison to the same percentage for older adults (unknown, but estimated around 30%), as sketched below.
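Here is a rough sketch of that estimate in Python, with every number below a hypothetical stand-in (the actual under-20 case counts are not reproduced in this post):

adult_cases_per_100k = 250.0     # roughly flat level observed for ages 20 to 80
adult_undetected_share = 0.30    # assumed fraction of adult infections never tested

# If the true infection rate is similar at all ages, the implied "true" rate would be:
true_infections_per_100k = adult_cases_per_100k / (1 - adult_undetected_share)   # about 357

under20_cases_per_100k = 100.0   # hypothetical observed value for the under-20 group
under20_undetected = 1 - under20_cases_per_100k / true_infections_per_100k
print(f"Estimated undetected share for under-20s: about {under20_undetected:.0%}")
# -> about 72% with these made-up numbers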

The top end of the curve is more troubling.  More people per 100,000 over 80 years old got this disease than those under 80, and it is even worse for those over 90.  This may have several causes, but surely reflects the rapid spread of the disease through unprepared nursing homes as we saw in April and May.  Another possibility is that some of those folk may have actually died from other causes -- they were in nursing homes after all -- and were then found to have COVID-19, and so were counted as having died from it rather than just with it.

I have stopped analysing COVID-19 cases and deaths in Ontario now that they have levelled off, but I wanted to capture all these thoughts and results, if only for posterity.  If you are reading this, you can compare the results where you are to see if they mirror Ontario's experiences so far.  These results may also help you decide how worried you should be about this virus, depending on your own age and that of your loved ones.  If it merely helps some parents feel better about sending their kids back to school, then it has been a useful exercise.

2021 Update

I dug out the Ontario cumulative case and death numbers up to January 25, 2021 and repeated the same analysis.  In the following graph, I only include the percent risk of death from COVID -- that is, for each age group, the percentage of cases over the past year who died.  The exponential trend is still clear, albeit not quite as precise.  The risk doubling time is still 5.8 years of age, and people over 90 years old have a 30% chance of dying if they get the disease, slightly lower than reported above.  Perhaps this reflects better care being taken in long-term care facilities.

The average risk of death has dropped considerably to about 2.5%. That is largely because it is mostly younger people who are getting COVID-19 now, but the risk to young people is still very low, less than the death risk from the flu for people under 40 years old!
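As a tiny illustration of why the overall average can fall even if the per-age risks barely change, here is a sketch with invented numbers (the age mixes and per-age risks below are purely hypothetical):

# Per-age fatality risks (hypothetical, roughly exponential in age).
risk_by_age = {35: 0.001, 55: 0.01, 75: 0.08, 90: 0.30}

# An early-2020-style case mix (heavy in long-term care) vs. a younger late-2020 mix.
mix_early = {35: 0.30, 55: 0.30, 75: 0.25, 90: 0.15}
mix_late  = {35: 0.60, 55: 0.25, 75: 0.12, 90: 0.03}

def average_risk(mix):
    # The overall fatality rate is the age-mix-weighted average of the per-age risks.
    return sum(share * risk_by_age[age] for age, share in mix.items())

print(f"early mix: {average_risk(mix_early):.1%}, later mix: {average_risk(mix_late):.1%}")
# -> early mix: 6.8%, later mix: 2.2%  (same per-age risks, younger case mix)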

With vaccines being widely administered over the next months, one hopes the overall number of cases and deaths will stop increasing everywhere. As the vaccines are first given to the oldest and most at risk populations, one might expect this curve to shift slightly, but I hope there will be many fewer deaths going forward than over the past ten months. If so, then this curve will not change significantly.



Thursday 30 July 2020

Echoes of The Trinity

Of all the doctrines of Christianity, belief in the Trinity - three persons, Father, Son and Holy Spirit, in one Godhead - is perhaps the most mysterious.  It defies logical understanding: how can three persons make up one God?  There is much confusion about this, even among Christians, who largely take it on faith without trying to fathom how it works.  After all, "God works in mysterious ways", we say, or "how can we hope to understand an infinite God?"  Indeed, as some like to point out, this doctrine is not explicitly spelled out in the Bible.  Yet the Bible makes it clear that the Father, the Son (Jesus), and the Holy Spirit are each persons in their own right, and yet are each God.  At the same time, the Bible is clear there is only one God.  The early church therefore concluded that, to reconcile these facts, God must consist of three identifiable persons - hence the Trinity.

At the very least, this doctrine is confusing and it is no wonder many Christians shy away from talking about it or trying to explain it.  Of course, most other people just ignore it, or deny it as impossible religious mumbo-jumbo, not to be taken seriously.  Other religions denounce this doctrine as a weak point of Christianity.  In Islam, for example, the Christian Trinity is interpreted as three separate gods, and thus Christianity is denounced as polytheism.  For a historical look at the concept and usage of the Trinity through time and across cultures, see: http://159.203.24.119/2018/12/29/meditation-triune-god-2/

Part of the Trinity mystery is that we tend to think of "persons" as human, and it makes no sense for three human persons to be one human being.  But God is not human, of course, so maybe our anthropomorphising him is getting in the way of our understanding?  Notwithstanding the apparent mystery and confusion, what if three-in-one was a common reality in the Universe?  Threesomes and dividing single entities into thirds are built into many aspects of the world we understand and manage daily.  The following are some examples of three-in-one concepts from various aspects of life as we know it.  Some of these are trivial, or even contrived perhaps, but others are more serious.

Most twisted ropes, from ancient times, contain three identical strands, making up one rope. There is even a Bible reference for this: Ecclesiastes 4:12.  Clearly each strand is separate from the others, yet there is but one single rope.  In ancient Greece and Medieval times, it was understood that creation had three parts: the heavens, the Earth, and the underworld.  These were distinct locations, yet all part of one creation.

In other mundane affairs, in many countries the government has three distinct parts - the executive branch (President), the legislature (Congress), and the judiciary (the courts) - but together they make up one government.  In the courts of law there are three parts as well: prosecution, defence, and judge/jury. And our governments are often further divided into three levels: National, Provincial or State, and local or municipal, each with their own duties, authority and responsibilities. 

On a more private level, most families have three parts - father, mother, children - making up a single family unit.  Most personal of all, each of us may think of ourselves as having three parts - body, mind and spirit - yet we each consider ourselves a unitary person.  Some people divide their lives temporally into youth, adulthood, and old age, making up a single lifetime, but that is somewhat arbitrary.  Similarly (and trivially), a song or a story may be considered to have three parts: beginning, middle and end.

On the physical side, when we think of the Earth, we think of land, oceans/lakes, and atmosphere, making up one biosphere. This of course, reflects the three states of matter - solid, liquid and gas - making up most common things. For example, steam, rain and snow are distinct states, but all are the same H2O water.  None of these examples is quite the same as the divine Trinity, but they do give some insight perhaps.
 
Deeper into physics, we find several three-in-one scenarios.  Most matter is made up of atoms, and each atom contains three distinct kinds of subatomic particles: protons, neutrons and electrons.  Deeper still, each proton or neutron is made up of three quarks, which are different but inseparable under normal conditions.  Furthermore, in the complete "standard model" of particle physics, the matter particles come in three generations.  All of ordinary matter is made up of particles from the first generation, but the heavier quarks and leptons of the other two generations have been identified experimentally.

One aspect of this is the neutrinos, those ghostly particles that can pass through the Earth without effect.  They exist in three flavours: electron neutrinos, muon neutrinos, and tau neutrinos.  In this example, however, it turns out that each kind can turn into the other kinds as they travel through the universe - or the Earth.  Another interesting connection to the number three is that the electric charge on a quark is exactly (to at least 12 decimal places) one-third or two-thirds of the charge on an electron, depending on the type of quark.  If that were not true, atoms would carry a net charge and ordinary matter could not be electrically neutral.

There is also the most fundamental physical reality itself: the three dimensions of normal space.  Height, depth and width are distinct as X, Y, and Z axes, set 90 degrees to each other, yet there is one 3D space.  We can tell up-down from left-right, and north-south from east-west. These designators may change as your reference shifts, but there are always three dimensions.  

Finally, delving into geometry and forms, there is the lowly triangle: three sides and three angles, but one figure or shape.  The triangle is the epitome of three-in-one-ness; the simplest polygon, and maybe the most recognizable shape.  In the design of structures, triangles are fundamental - the most stable shape that makes up towers, bridges, trusses, and other rigid constructions. 

As you can see, there are many examples of three related but distinct things making up a single entity.  Some of these are fundamental to how the Universe itself exists.  So perhaps the Trinity is not so strange a doctrine after all.  With so many similar concepts around us, a Trinity of divine persons making a single Godhead is perhaps not so mysterious?

Thursday 18 June 2020

Philosophy-103: Who Am I, Who Are You?


In a previous post, I took a somewhat tongue-in-cheek, dispassionate view of what you are, from a physics and chemistry perspective.  But who you are is a quite different question that takes us deep into metaphysics.  I would therefore like to take a philosophical look at the question: who are you?  Or more personally, who am I?

Note that "what are you" and "who are you" are different questions, because we are conscious beings.  There is no "who" for inanimate objects.  And no one asks "who are you?" about an amoeba or a plant or even a worm.  We might ask that of higher life forms, but usually in an anthropomorphizing way.  At the highest levels, it is not unreasonable to ask that question about a horse or a dog, since they seem to have individual personalities, but even then you will not get an answer from the animal.

Beyond my (or your) name, relationships, occupation, hobbies, and other attributes that identify and describe me (or you) to others, how do I identify myself to myself?  First, as previously noted, I am a conscious being, but while we all know that implicitly, it is harder to know precisely what that means. Beyond neurons, neurotransmitters and electrochemical impulses in my brain, who is this person who experiences the fully immersive, interactive movie going on inside my head?  This individual is myself, the aggregation of my awareness, memories, intentions, introspection, and first-person subjective experiences.  Figuring out what that means and how it relates to brain physiology and psychological theories is the "hard problem of consciousness", which science has had difficulty getting a handle on to study effectively, much less resolve.

The self - myself - is a collection of abilities, memories, aspirations, viewpoints, personality traits, foibles, habits, and so on that make up my existence, or my being as a human person.  But more than that, it is the spark of self awareness and interior subjective viewpoint, introspection and stream of consciousness, my intentions, will and decision making.  In short, it is the "I" inside me, the person that makes me me.  And of course, you are a different self made up in a similar way of your own unique set of these various aspects.

This self-hood is difficult to define and identify clearly.  Who precisely am I?  Who are you?  Are you the same person you were when you were born?  How about the same as last year, or even yesterday? You feel like the same coherent self, the continuity of "you" throughout your life, yet ever changing with new experiences and memories, shifting outlooks, viewpoints and opinions.  Science cannot fully elucidate this slippery "self", so we must turn to metaphysics and religion for further insight.

Your self, your whole being as a human person, includes your body but also your mind and spirit, that vague but real part of you that makes you truly yourself!  You are not a body with a brain, nor is your mind merely your brain.  Rather, the "you" you are has a brain and uses it for your own purposes.  So what is this core or heart of your person, and whence does it come?  Odd how our most intimate inner self is so mysterious!  You know for certain that you exist, but it is hard to point to or specify what this being at your core is.  Your self is what makes you truly human, and truly unique.  It is also what ties you to others and makes us all want to know who we are.  These are deep metaphysical questions for each one of us to explore and delve into.

My own perspective, as a Christian, is that God created my spirit and somehow connected, or infused it with or into my developing body, to be a living soul (see Genesis 2:7 and Psalm 139:13).  The true "me" has developed, learned, experienced, collected traits, and remembered during my unique journey through life, as did yours.  We are each still growing and trying to understand who we are as we experience reality around us.  And when my body ultimately fails and the community of cells that make up my physical being ceases to function, I firmly believe that my spirit, or inner self, will somehow continue to exist.  Without worrying too much how all that works, I nevertheless entrust myself - my inner being - to God, both in this life and in the one to come.

One often hears someone comment on how fortunate they are to be born here in a peaceful, wealthy country, and at this time in history.  This usually contrasts with a harder life in former times, or other places.  In a very real way, however, I - the person I am - could not have been born elsewhere or elsewhen.  If I had been, even with an identical genetic makeup, I would not now be the same person, the same "self" that I actually am; I would be some other person with different experiences, memories, values, and so on.

This is obviously true in various ways: every person is the result of numerous influences and effects.  Genetically, we are determined by a semi-random mix of DNA from our two parents.  If born at another time or place, "we" would have had different parents, hence unrelated DNA, which guides numerous physical and other attributes of who we are.  Hence, future cloning prospects notwithstanding, it is impossible for someone with the same DNA to be born at a different time or in another country.

Beyond genetics are the numerous environmental influences on who we become as adults: parental upbringing with all its norms, priorities, disciplines, modelling, teaching, etc.; our schooling later on, friends, local culture or society, and so on.  Even people born at the same time in the same neighbourhood will develop differently.  Indeed, even "identical" twins, for all their genetic and environmental sameness, develop differently as the exigencies of life and all the random experiential variations accumulate.  Who you are is the summation and outcome of all these influences, absorbed and summed together in your person: your behaviours, preferences, abilities, and especially your memories.  You are unique in this respect, and it is impossible to conceive of being the same "you" if you were born elsewhere or at a different time.

But what about who I am now?  Am I the same person as yesterday or last year, will I be the same person tomorrow or next year?  In one sense, of course, the answer to both of these is, "no, you are different from who you were yesterday and will be different again tomorrow."  After all, if who you are is determined by your experiences and memories, then take away or add to these, and the self changes.  Obviously you are not the same as you were as a child, and God willing, will be quite different again in many ways before you die.

Clearly the person you are changes throughout your life.  Yet in a very real sense, we each feel like the same person from day to day.  I am the same "me" that went to elementary school, who got married many years ago, who worked in an engineering career, who fathered children, and so on.  Those are MY experiences and memories.  I feel that I have somehow been the same "self" all my life.  How do we reconcile these two contrasting perspectives?

One obvious way is to say that the changes in my person or self from day to day are minuscule compared to the accumulated years gone by.  This does not apply to my life as a new-born, of course, but since I don't remember that, it doesn't much matter.  By the time I was old enough to experience myself as a "self" and to remember past events in a meaningful way, one day was already a tiny percentage of my past.  Thus, day-to-day changes are small, and represent tiny shifts or incremental adjustments to who I am.  I can therefore feel the continuity of my life over time, and my self seems to flow continually through the years, developing and piling up memories, yes, but somehow remaining the same "me", even though how I define myself today may be quite different than, say, ten years ago.

Perhaps that concept of a new-born becoming her own self over time is one way of looking at personhood; a certain minimum assemblage of personality traits, experiences, awareness, and thinking ability is needed for the "self" to come into being.  Maybe the fact we do not remember anything from our first months or years means "we" did not truly exist yet?  That seems bizarre, but might be worth considering.  Certainly it takes years for a mature sense of self to emerge.  Observing developing children is often a good way to explore philosophy questions!  The reverse effect occurs at the end of life for some people. More on that below...

Yes, continuity of self is the key to the feeling of being the same person throughout life.  All the accumulated experiences and memories are mine!  None come from elsewhere, ported into my being in some mysterious manner, various sci-fi stories notwithstanding.  Except in some rare cases, my life is a continuum and I experience it as one self passing through time on the journey of life.  Exceptions might include a long comatose period, for instance: waking as a quite different person (older, probably weaker and disoriented).  Another example would be amnesia, not knowing who I am or much of my past life.  But even in these cases, there is some partial continuity (personality, abilities, language, etc.) that allows me to remain me.  It might be good to study such people as they rebuild their sense of "self" to see: a) how much of that sense they actually lost, and b) whether and how it is different from before.

Does the self have components, or is it a singular entity?  Apart from rare dissociative disorders, most people feel like a coherent, unitary entity.  Apparently people who have undergone split-brain surgery to quell epileptic seizures remain single, integrated individuals, despite having two unconnected half-brains.  Can one somehow imagine dismantling one's self to separate out aspects of the "me" within, other than hypothetically?  Many of us would like to change aspects of who we are; we are aware of our better and darker parts, if we are honest.  But while we can work to change who we are, we cannot really peel off and discard the parts we don't like.

The idea of losing parts of oneself is worth exploring further.  When drunk or on drugs, there may be a temporary loss or confusion of one's faculties.  Yet the unitary self remains as long as the person is conscious.  Brain damage due to accident or disease may also cause loss of faculties or behavioural changes, but here too, unless the damage is extreme, the changes leave the person feeling like they are still a human person, and more or less, the same person as they were before, albeit reduced somehow.

The ultimate instance of this is people suffering from dementia, who indeed appear to be slowly losing parts of themselves.  Yet they too remain unitary, albeit diminished, selves until nearly the end.  As the disease progresses, the person's family will see changes in them, and experience a slow loss of their loved one.  People with dementia have written books about how it feels to lose their faculties and experience the world and their life differently.  Yet they still feel like the same person.  Nevertheless, in the end dementia is a true example of the self dissolving.

A final question about who I am relates to my spirit or soul.  Materialists will deny the existence of a non-material spirit, even if they accept the "soul" as the organizing life component in a living body, as opposed to a dead one.  Can the "self" exist apart from the body?  This of course, is the ultimate question of metaphysics, religion and the mind-brain problem of consciousness.  I am not going to resolve it here except to say I believe humans have spirits that outlive their physical bodies.  There is considerable evidence backing up that claim, if one is willing to consider it.

So where are we now?  I have not said anything new about consciousness or the self, and many authors, far more intelligent and thoughtful than I, have explored these questions in deeper ways for millennia.  The "self" remains a mystery, even as it is the most obvious aspect of our reality; "I think therefore I exist".  Even if we cannot fully understand it, we can all think about who we are, what we want to become, and how to live to pursue our goals.  That is, the "I" can take charge of my life, seek to understand who I am, and work toward being a better self.

Thursday 7 May 2020

The Wrath of God

The Bible records God's wrath in numerous passages.  Sometimes it is hard to reconcile what we commonly think of as "wrath" (anger, rage, or fury) with what we think of as a good God: kind, loving, and forgiving.  But wrath can also refer to gentler feelings like displeasure, exasperation, or irritation.  Indeed, the Bible teaches that God loves everyone, but he can also get very upset with us, both individually and together as the human race.  It is easy to see how God might have such responses to many of the things humans do.  See my theodicy 101 post for some general thoughts about the apparent discrepancy between his anger and love, but in this post I want to focus on one particular version of God's wrath, what it means and why it applies.

A longstanding and straightforward understanding of God's wrath is simply the natural consequences of going against God's will.  Good parents will protect and love their children, but will also allow them to experience the consequences of their actions, both good and bad.  If junior disobeys, then something "bad", from his perspective, happens to teach him that actions have consequences.  This could be viewed, from junior's perspective, as the wrath of his parents.  He may cry "unfair", or "you're mean" when a toy is taken away, or he is sent to a corner for a time out, or later when he is grounded for a week.  Yet we understand that actions indeed have consequences, and parents need to discipline their children appropriately for them to learn and mature properly.

In some ways it is the same for us and God.  If we are smokers, we should not blame God if we get lung cancer.  If we turn our backs on his directives regarding our sexuality, instituted for our good, we should not complain to him if we get an STD, or if our relationships fall apart.  If we are selfish, hard on others, deceitful, or otherwise behave poorly, we should expect consequences that work to our detriment: losing friends, being lied to in return, feeling cheated even when we are treated fairly, etc.  In this way, much of God's wrath can be seen as the natural, or predictable, results of our own actions or disobediences.  Clearly, there is more to God's wrath than this, but this can be a good starting place.  God gives us rules for our own good, and if we ignore them, we can expect unhappy results to follow.

Let's see how this plays out at a larger scale: for groups, nations and the entire world.  For ancient Israel, God's wrath consisted of their nation being conquered by foreign armies and the people being taken into exile.  This involved many deaths and cruelties; I won't pretend that these effects were aligned one-to-one with individual sins and personal disobediences, although there was doubtless some of that.  No, in that case, God had often instructed, then berated and warned the people and their leaders against idolatry and injustice, but they kept falling back into sin and turning their backs on God.  Although slow to anger (hundreds of years), God eventually had enough and, for the nation's own good, destroyed their nation and took them into exile in Babylon.  Apparently they did learn that lesson, and after their return to Israel did not go back to blatant idolatry.

Closer to here and now, let's look at modern Canada (or Europe, or the USA).  Sixty or so years ago, Canada could reasonably have been called a Christian nation.  Not everyone was Christian of course, but our laws, schooling, acceptable public behaviour, institutions, and mores were all based more or less on Judeo-Christian principles of truth, integrity, honesty, care for others, rule of law, fairness, etc.  There were obviously glitches, and not everything was perfect, but most people honoured God to some extent and obeyed rules based largely on his precepts and commands.

Fast forward to the 21st century: now Biblical truths are unknown, derided, ignored, or put down as "oppression" or "bigotry" by many.  Schools can teach anything except Christian values it seems, and can push ideologies directly counter to Christian doctrines.  How many millions of God's highest and most innocent creatures do we deliberately kill off each year by abortion?  And now we push for euthanasia, killing off those at the other end of life.  Entertainment, the mass media, various government edicts and court decisions continue to pare away Godly influences, or undermine Biblical teachings, often promoting their opposites.  Fewer people attend worship services, read the Bible, or even pray to God these days.  If God smiled on Canada 60 years ago, is he still smiling, or has he begun to withdraw his blessings?

The ancient Hebrew blessing reads, "The Lord bless you and keep you; the Lord make His face shine on you and be gracious to you; the LORD lift up His countenance upon you, and give you peace." (Numbers 6:24-26).  If a nation or people turns its back on God, ignoring him, or worse, denying his very existence and demeaning those who seek to follow his ways, can that nation expect God to lift His countenance upon it?  C.S. Lewis wrote, "There are only two kinds of people in the end: those who say to God, "Thy will be done," and those to whom God says, in the end, "Thy will be done."  When we don't listen to God, he in effect says, "OK, have it your way, along with the consequences that come with your choices".

What are those consequences?  We don't have to look far.  While our standard of living has risen since the mid-20th century, has our level of happiness also increased?  It seems more people are lonely and stressed than ever; we have higher personal and national debts, there are more demands on our time, rising mental health problems, and societal pressures abound.  The list goes on.  The sexual revolution of the 1960's was supposed to bring more freedoms, but instead brought sexually transmitted diseases, abuse, broken and complicated families, insecure and anxious children, rampant divorce, the flood of abortion, widespread loneliness, relationship headaches, and so on.  We have growing wealth, but also pollution, inequality, underemployment, the rat race, heavy competition for jobs and promotions, and rising anger and discontent.  Politically, there are increasing deficits, taxes, political conflicts, distrust, suspicion, and polarization.  Not many blessings in all that!

The picture is a bit less clear at the global level.  Christianity, however defined, is the largest religion in the world, with over two billion adherents of one affiliation or another.  In a global population of around eight billion, Christians are still a minority.  The faith is growing in some places, but retreating in others.  Moreover, there is a growing hatred of Christianity worldwide.  Communism still holds sway in some countries, with its atheist biases and its oppression of churches and free expression.  Hostility toward Christianity is also growing within Islam, and even Hinduism and Buddhism: think of the "blasphemy" laws in Pakistan, the near eradication of Christians in Iraq and Syria, and Islamic extremists targeting churches, for example.  Christians are the most persecuted faith group in the world.

To this we may add the growing negative view of Christianity in Western secular nations, as noted above, much of it irrational: suspicion of clergy, biased media reporting, the narrowing of tolerated religious practice, forced behaviour and speech laws, and so on.  For example, why do secular westerners appear to accept widespread Muslim practices counter to liberal principles, while also attacking Christianity, when Judeo-Christian principles undergird much of what liberals hold dear?  Or contrast Christian and secular views of the Israel-Palestine conflict, along with repeated UN votes against the modern state of Israel.  Tally it all up and it looks like half the world is anti-Christian (and anti-Semitic) to some extent or other.  Why is that?  Most of the reasons on offer do not hold up to serious scrutiny.

In this context, what do you suppose God thinks of the world and where it is headed these days?  Is he just going to sit back and dismiss widespread denial and hatred, or is he being just, meting out justice in the form of fair consequences for these worldwide anti-God trends?  I cannot say, but I do pray for mercy and God's forgiveness.  In another context, what might God think when his beautiful creation, in all its stunning variation and complexity, is passed off as merely the result of random chance and natural processes, with no purpose, direction or intelligence allowed?  If most scientists cannot bear even the possibility of a divine hand at work, should those researchers expect divine inspiration in their work?

Are some of the global problems we are encountering as a species at least partly the result of God's wrath?  The nuclear doomsday clock, global warming, pandemics, species extinctions, poverty, hunger, international tensions, inequality, violence and warfare; are these consequences of our anti-God words and behaviours, our flouting and denial of God's expressed directives?  Some people certainly think so.  Can we afford to go on ignoring God's plans and guidelines for humanity and the natural consequences of going against them?

Closely associated with God's wrath in the Bible is the broader concept of the "fear of the Lord".  While this is a good topic for another post, suffice it to say that this "fear" is usually taken to mean reverence or awe; God is so awesome that contemplating his omnipotence can bring on trepidation and a sense of our smallness, unworthiness and vulnerability.  Then again, it is right to truly fear God in the sense of dread or foreboding, if we do not humbly seek his grace and mercy.  God may condemn those who deny and work against him to the natural consequences of their lack of faith and their wish to have nothing to do with him.

Fortunately for us, it is not all wrath, fear and condemnation.  God still looks after his creation and provides for humanity, even if he lets his creatures, by their own choices, slip away from him.  God made us good and loves us still, providing everything we need in this world and this life.  Rather than despair, I am buoyed by hope that God's people, who still honour him and seek to follow his precepts, will be enough to blunt, or delay, further instances of his wrath on mankind.  In this I am encouraged by two Bible stories.  In Genesis 18:16-33, father Abraham pleads with God not to destroy Sodom and Gomorrah, and talks God into withholding his destructive wrath if only ten righteous people can be found there.

Later, in 1 Kings 19:14-18, the prophet Elijah complains that everyone is against him and he is the only one who honours God.  God tells him that there are still 7000 in Israel who have not worshipped idols, and that is enough for God to send Elijah back for Israel's benefit.  Perhaps having a billion Christians around the world holding to God's ways and truths will be enough to hold back any worse effects of God's wrath.  I pray that it will be so, and that our repentance and seeking him will instead bring forgiveness and continued blessing.

Saturday 18 April 2020

The End of Marriage

The institution of marriage has been in the news a lot in recent years, what with Supreme Court rulings and the culture-war issues surrounding them.  Even before then, however, and spread over several decades, the purpose of marriage seems to have been pared down to any two people in love wanting to live together, more or less committed to each other - at least for a while.  This is a long way from the traditional purposes of marriage in most human societies.  To trace the changes, I'd like to look at some of the original purposes for the almost universal institution of marriage.

The family, normally in the form of mother, father and children living together, often with other relatives, has been the principal building block of stable cultures for millennia.  Parents, acting together, are the first and most important educators of their children, teaching them language, life skills, and appropriate behaviour, and passing down knowledge and values.  Utopian dreams of perfect societies without families filling those roles are unrealistic on paper, and have generally proven disastrous when seriously attempted over any length of time.  Just as the family is the building block of society, the marriage of one man and one woman has always been the basis and cornerstone of stable families.

One purpose of marriage in many societies through time has been to protect the woman being wed.  Pax to any feminists who may bristle at that statement, but in the past women were considered little more than chattels and dependants, under the care of their parents until transferred by marriage to their husbands, who were then responsible for them.  An unattached, unmarried woman would be at significant economic, physical, and moral risk, having few rights, opportunities or legal protections.  Marriage before God, family and friends, therefore, provided a legal and moral support structure for women: at their weddings, their husbands promised to be responsible for their safety, well-being and support.  The extended family and surrounding community would hold the husbands to that commitment and monitor the well-being of the couple.

Now you may pooh-pooh this idea as old fashioned and long outdated, but consider that single mothers make up a large fraction of the poor in today's Western nations.  They are often left with minimal support while having to look after children, and, being thereby unable to work full time, they are frequently forced into low-paying, part-time work or placed on welfare, both of which largely guarantee that they remain poor and marginalized.  They are also at increased risk of physical and sexual abuse, mental illness, and a host of other problems.  Contrast their situation with that of women who remain married, who as a result generally tend to be better off, more economically secure, healthier and happier.

Another, even more impolitic, purpose of marriage is to domesticate men!  In the past, without tight societal controls, single young men could be rather rowdy: sowing their wild oats, spending their pay on booze, living an energetic, perhaps frivolous and risky life with their buddies roaming around the towns and countryside, and often getting into trouble.  Military service, religious life, moral norms, or supervised hard work had the partial purpose and benefit of keeping a lid on the worst effects of testosterone in immature males - say, those under 25.  By such means young males could mature under the direction and monitoring of their own families and other older, more stable men.

Marriage also served to clamp down on such young-male tendencies.  The community expectation that you would plight yourself to one woman only, and that you would promise to be faithful and supportive for the rest of your life, was a strong incentive and force to make the new husband settle down, become more responsible, a better citizen, and an upstanding contributor to society.  That is one reason it was called "wedlock"; the man voluntarily agreed to the marriage, and the community norms and strictures enforced the bond.  Men were watched and expected to fulfill the role of husband and father, and divorce was rare and looked down upon by everyone as a personal failure.  Adultery too might occur, but was generally condemned, and so kept under wraps.  This model of marriage was not perfect, of course, but it worked for most people in most contexts.

Of course, human nature does not really change over the centuries, so today, as the community, religious, legal and moral constraints have been stripped away, we also have gangs of youths getting into trouble; inner-city unattached males who prey on women and sire children out of wedlock; and deadbeat dads who leave their erstwhile sex partners with the children and without money.  To be sure, there have always been such problems, but it seems they have grown much worse in many ways, and the legal, economic and educational solutions now implemented to quell them do not seem to be particularly effective.  I won't bore you with studies and statistics pointing to the scale of such problems; they are widespread and well known if one goes looking.

The main purpose of marriage, of course, was (and should remain) the raising of children.  Stable couples are needed to effectively reproduce the human race.  Any loose man can "beget" a child, but it takes commitment and faithful work to raise children for 20 years or more, until they can successfully leave home and fend for themselves.  This too has not changed much.  While there are heroic single Moms and Dads who effectively raise their kids, it is clear that children generally do best when their biological mother and father are married and remain together until (at least) the children leave home.  Claims that other arrangements are just as good, or that divorce is in many cases better for the children, are not supportable when all the evidence is examined objectively.  Dozens of studies clearly show that having faithfully married parents is the best way for children to start life and grow up healthy and happy.

You may notice that I have not yet mentioned "love".  The idea that people need to be in love to get married, while certainly preferred, is not universal, as proven by arranged marriages in many cultures, and the "mail-order brides" phenomenon that still sometimes occurs (mostly online now).  While love is certainly beneficial to build a marriage on, it is not essential to a successful marriage undertaken for the purposes noted above.  If the partners are open and willing, love may develop before, and grow after the wedding, and there have been marriages of convenience that serve both partners' purposes - and their children's well being - even though they may never actually have been in love.

Recent relaxations of societal norms and related laws have undermined marriage, perhaps irretrievably.  Living together out of wedlock, birth control and abortion, serial common-law marriage, easy no-fault divorce, same-sex marriage, hook-up arrangements, and pornography have each whittled away at the original purposes.  All that seems left now is that any two people can be "in love" and choose to get married for whatever reason and for however long they wish.  Some people have serial marriages; others decide marriage is irrelevant and just live together.  There have even been cases of people "marrying" themselves.  How long will it be until bigamy laws are struck down and we have legal "marriages" of three or more?  One might well ask: is there anything left of marriage today?

The effects of these societal failures are evident all around us: broken families, litigious custody battles, single mothers in poverty, bitter single dads locked in animosity; rampant loneliness, alcoholism, loose sex, and short-lived, empty romances;  confused, insecure and hurting children; reduced self esteem, purposelessness, mental health issues, anxiety and stress; the list goes on.  For most people, a stable marriage is the best way to a good life.  Of course, marriage is not for everyone, and there have always been failed marriages due to adultery or abuse, but they were much rarer in the past than today.  With fewer true marriages and many broken ones, women today are having fewer children.  Indeed, with most wealthy countries now reproducing themselves well below the replacement level (2.1 children per woman), the culture of some countries is on track to fade away.

The benefits of traditional marriage norms also flow beyond the family to the entire culture.  Nearly a century ago, the anthropologist J.D. Unwin studied many societies throughout human history.  He found that societies adopting "absolute monogamy" (one man and one woman married for life) were the most prosperous and productive: economically, artistically and scientifically.  Those that did not either remained primitive or went downhill within three or four generations.  We can pretend that our world is somehow immune to this trend, but Unwin's results do not bode well for our countries or for Western culture in general.

It may now be too late to save marriage in its traditional form in Western societies.  However, those who are truly married can still adopt the above principles and purposes for their lives together: living faithfully, committed to each other, and thereby accepting the benefits of a true marriage.  Here too, numerous studies show that faithfully married couples tend to be healthier and happier in almost every respect.  Finally, given the downfall of traditional marriage, perhaps we should rename it "holy matrimony" for those who wish to live that way and thereby reclaim the stability, security and joys that true marriage can bring.

Wednesday 25 March 2020

A Model for Intelligent Evolution

Several cumulative pieces of evidence point to the role of an intelligent agent in the existence of life.  Some people who don't want to seriously consider Intelligent Design (ID) raise supposed objections to the theory: who is the designer, and how does the ID process work?  In one sense, these are unfair questions.  One can identify clear fingerprints of design without knowing who did the work; for example, SETI researchers would be overjoyed to find a designed message from outer space, clearly from an intelligent agent, without needing to know who sent it.  Similarly, knowing that something has been designed does not tell you how the design proceeded.  Nevertheless, I think the biological sciences are now revealing a few pieces of the puzzle that start to lift the curtain on the how and when of ID, at least as it applies to biology on Earth.  This approach is not the same as my prior speculations on this subject, although it may mesh with them.  Rather, this post looks at the science and then extrapolates to a possible ID process.

First, the science.  There are two principal lines of evidence to consider: the fossil record and modern molecular biochemistry.  Charles Darwin expected the "tree of life" in the fossil record to look something like Fig. 1, with "species" constantly but gradually changing and occasionally splitting in two as branches appear and diverge.  However, even he noticed a paucity of "transitional forms" in the fossil record.  Instead, fossilized species appear suddenly, without obvious precursors or intermediate steps from the supposed previous branch of the tree, change very little over millions of years, and then fade out and (mostly) go extinct, with no obvious new species arising from them, as shown roughly in Fig. 2.  Science articles often add horizontal dashed lines to such "trees" to show the supposed connections and transitions, but those do not represent the actual fossils.

The fossil record aspect most perplexing to Darwin was the so-called Cambrian explosion.  Some 540 million years ago (MYA), at the start of the Cambrian period of geological history, numerous new complex lifeforms, with sophisticated features like eyes, body plans, legs, swimming, mouths and guts, nervous systems, etc., burst into existence over just a few million years, with no apparent precursors from the previous Ediacaran age, which recorded only simple worms, algae mats, sponges, and the like as fossils.  Darwin hoped that subsequent fossil finds would fill in the blanks, but that has largely not happened, and the same is true all through the fossil record.


This conundrum for gradualist evolution remains as true today as it was 150 years ago, with no credible naturalistic explanation.  Indeed, the problem was so clear that in 1972 Niles Eldredge and Stephen Jay Gould suggested the Punctuated Equilibrium (PE) model of evolution, claiming that the transitions happened so quickly that none of them were fossilized, and that the new species then became populous as equilibrium forms, with little or no further phenotype (physical) change until the next punctuation hit.  Not only does this go against Darwin's clear requirement that there be no sudden major changes ("saltations" he called them), but it also goes against population genetics, which shows it is much harder to get major mutational changes - the kind needed to create a new species - over a short period and in a small population, as the PE theory requires.

On the other hand, ID says: of course species remain largely fixed in phenotype as long as they exist, because the Darwinian mechanism, working in known ways on genomes, cannot create new features or the major genetic changes needed for totally new species.  Many - perhaps most - species have one or more genes unique to them, with nothing similar in any other species.  These so-called orphan genes cannot be explained by a Darwinian mechanism (random mutation plus natural selection) working on precursor genomes in any reasonable geological time.  Any new feature in the fossil record would have required many new proteins (and hence genes) to construct the feature, and others to integrate it into the lifeform and make it functional, rendering the transition even less likely for any purely natural process.

This finding of molecular biochemistry leads to the second piece of evidence for the process I will outline below.  Studies of extant species demonstrate that, for the most part, any traceable genetic effects leading to new species have arisen by devolution, that is, the damaging or removal of one or more genes in a pre-existing genome.  Hence polar bears, for example, evolved from brown bear cousins by damaging the genes that limited their fat intake, allowing them to eat fatty seals without health problems, and by damaging the gene for melanin that produces dark pigments, leaving their fur white.  Presumably such mutations first occurred in brown bears but were not beneficial to them.  However, when the polar bear's forebears migrated into the arctic, those mutations were beneficial, helping the polar bear thrive in the new environment, and so were selected and then fixed in the growing population.

Yes, this is a clear case of Darwinian evolution producing a new species, just as Darwin claimed.  Sometimes, Darwinism actually works!  Note, however, that although beneficial, those mutations damaged the genome, knocking out or damaging multiple genes.  No new genetic information arose.  Michael Behe, in his book Darwin Devolves, describes this effect in depth.  The same finding applies to almost every known case of beneficial mutation.  Bacteria become resistant to antibiotics by damaging the genes that generate the cell-wall portals through which the antibiotics attack them.  This is like an army burning its bridges behind it during a retreat: yes, it saves the army and slows down the enemy, but only by destroying part of the infrastructure needed in normal life.

Behe describes numerous examples; his first rule of adaptive evolution is to "break or blunt any functional coded element whose loss would yield a net fitness gain".  In hindsight, this effect should have been obvious.  For any given gene, damaging mutations are far more likely than ones that might somehow improve the gene.  Any new feature or function in an organism requires new proteins or enzymes, which in turn require new genetic information, not just one or two minor changes.  Most genes are hundreds of nucleotides long, and the probability of a new functional gene arising de novo, by random mutations, is all but negligible.  On the other hand, a single point mutation in a gene is often enough to damage or render inoperative the resulting protein molecule.  Thus, if a benefit to the species can be had by knocking out or damaging a particular gene, it is easy to do - just wait for almost any mutation to that gene - and very effective.  It is easier and quicker to burn your bridges than to develop new weapons and rebuild your army to counterattack.  Other research supports Behe's arguments and conclusions.
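
To make that asymmetry concrete, here is a toy back-of-envelope calculation in Python.  The numbers are illustrative assumptions only - a 900-nucleotide gene, a typical per-site mutation rate, and a rough guess that a third of point mutations disable the protein - and treating a de novo gene as one specific random 900-base string is a deliberately crude stand-in, not a real model of how proteins arise:

    import math

    gene_length = 900        # nucleotides in a modest protein-coding gene (assumed)
    mu = 1e-8                # per-site, per-generation mutation rate (typical estimate)
    broken_fraction = 0.3    # assumed share of point mutations that disable the protein

    # Chance, per individual per generation, that some point mutation knocks out this one gene
    p_break = gene_length * mu * broken_fraction
    print(f"P(disable the gene in one birth): about {p_break:.1e}")        # ~2.7e-06

    # Chance of drawing one specific functional 900-base sequence purely at random,
    # a deliberately crude stand-in for a gene arising de novo with no precursor
    log10_p_build = gene_length * math.log10(0.25)
    print(f"P(one specific {gene_length}-base sequence at random): about 10^{log10_p_build:.0f}")   # ~10^-542

Crude as it is, the gap between those two numbers is the whole point: breaking a gene is cheap and fast, while building one from scratch is not.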

So Darwinism works, at least in this very limited sense.  However, evolution by damaging the genome does not get us new biological features and functions.  The polar bear is still a bear, and indeed could in principle still mate with brown bears.  Devolution can only take you so far.  Another example is dogs, which apparently evolved from wolves via selective breeding - not quite natural selection, but the same rule applies.  Most of the variations and new breeds of dogs have arisen by knocking out or damaging genes, or gene variants (alleles), found in wolves.  Sure, you get new breeds, but all dogs are still dogs - no new features - and one does not have to think too long about how most dog breeds would fare in the wild competing with wolves (natural selection).  By damaging or removing parts of the genome, you can indeed get somewhat different life forms, but you cannot get anything completely new.  For that, you need to add new genetic information: new genes that work together for new functions or phenotypic features.  The Darwinian process of random mutations and natural selection simply cannot accomplish that.

In addition to the Cambrian explosion, there were other brief periods (almost moments) in geological time when whole batches of new life forms came into existence suddenly.  Some examples include the Avalon explosion, and the bursts of new life forms following mass extinction events.  Naturalist Darwinians claim these extinctions opened up numerous ecological niches which were then filled with newly evolved life forms.  Yet they have no mechanism to credibly account for where the new genetic information came from for these new species that suddenly pop into existence.

The evolution model I would like to propose to account for the tree of life as it appears in the fossil record builds on these findings of the last century or so.  Figure 3 sketches what I have in mind: essentially an expansion of the PE model, with ID added in to account for the punctuations.  Most of the time life goes on without much change.  Species remain largely unchanged, some go extinct, and only the occasional "new" species arises via devolution.  These new species are very similar to their precursors.  Think of all the dinosaurs, for example: numerous ones look very similar to each other, changed only in size or by minor tweaks to their bones, yet each one gets a new name when unearthed.  This is Darwinism at work, tweaking the world a little bit over millions of years, while the fossil record remains in equilibrium mode or "stasis".

Then, at detectable moments in geological time, blasts of new species, genera, or even families arise almost suddenly, as "design" events, when new genetic information is injected into the biome.  A brief period of "consolidation" would occur as the new information is integrated and accommodated by the new life forms.  Some of those die out immediately as non-viable, or too few in number.  Others adjust their phenotype or morphology as the new genes take effect, and settle out as new species.

Numerous plant and animal species have no known transitions from prior species; bladderworts and squids, for example.  Even some of the iconic transitional fossil sequences, like those for horses and whales, are not what they are presented to be in evolution textbooks.  The steps (saltations) between the supposed intermediaries are huge from a genetic perspective, requiring many genetic additions and long fixation times.

In the case of whales, for example, only a few million years separate the supposed land-based starting point, Pakicetus, from the earliest known fully formed aquatic whales.  Such a transition would require an enormous amount of new genetic information in the form of many new genes.  Meanwhile, population genetics models suggest that the time required for even a single mutation to be "fixed" in a population of, say, 10,000 whale precursors with a ten-year reproduction cycle could exceed that time frame.  It is beyond belief that many thousands of just-right mutations, arising randomly, could occur in that window and result in a totally new family of whale species.
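
As a rough illustration of the scale of the waiting-time problem, here is a short Python sketch using standard textbook approximations for a single, specific neutral substitution.  The population size and generation time are the round numbers from the paragraph above, the mutation rate is a typical per-site estimate, and the whole thing is a back-of-envelope check under those assumptions, not a population-genetics model:

    N = 10_000          # effective population size of the hypothetical whale precursors
    mu = 1e-8           # per-site, per-generation mutation rate (typical mammalian estimate)
    gen_years = 10      # years per generation, as in the scenario above

    # A mutation at one chosen site appears in roughly 2*N*mu copies per generation, but only
    # about 1 in 2*N neutral copies ever fixes, so the expected wait for the "successful"
    # mutation is roughly 1/mu generations.
    wait_generations = 1 / mu

    # Once destined to fix, a neutral allele takes roughly 4*N generations to spread.
    fixation_generations = 4 * N

    total_years = (wait_generations + fixation_generations) * gen_years
    print(f"Rough waiting-plus-fixation time: {total_years:,.0f} years")   # ~1,000,400,000

Under these assumptions, even one specified neutral change takes on the order of a billion years to appear and fix, while the whale scenario allows only a few million years for thousands of coordinated changes.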

It is much more reasonable to suggest that, at such moments, an intelligent agent interceded somehow to develop an array of new species and introduce them into Earth's biosphere.  How that could be done, whether by purposely adding to or modifying DNA in various extant species, or by "creating" new species using similar genetic building blocks with a few additions, cannot be known at present, although I have previously speculated on how it might be done.  The injections of new information would likely occur in small populations, which are then released into the wild.  With low numbers and short times during these consolidations, few if any fossils would be saved for us to dig up, and the fossil record would appear the way it does in Figure 2.

With a batch of new genetic information added into various genomes, the initial "explosion" of different lifeforms would be huge.  Many of those might not be viable, depending on the distribution of the new genes, and many new "lines" would doubtless go extinct even before they got started, either due to detrimental effects or to the exigencies of life - most individual lives are lost to random chance rather than to being slightly less fit.  Nevertheless, enough new life forms would survive the first few generations and would quickly settle down, via Darwinian selection, into separate, stable, viable species.  By the time these became numerous enough for one or more individuals to be preserved in the fossil record, they would be thriving species, in equilibrium with their environment.  Hence the punctuations in the fossil record, leading to steady state or "stasis".

This model suggests a range of potential research areas.  The fossil record is an obvious starting place.  Look for and catalogue all the sudden eruptions of new life forms without apparent precursors.  How often do they occur?  Are they all at once for all of life on Earth, as in the recovery after the Permian extinction, or are they smaller in scale, limited to one area of the globe, or to one set of lifeforms at the genus or family level?  The fossil record is well established, even if it is still filling in slowly, so the data to do this must exist, even if no one has looked at it in this way.

On the biomolecular side, continue to research and collect the genetic changes that have caused known speciation events.  In some cases, the "molecular clock" (such as it is) for random mutations can be used to estimate when certain genes, similar among different species, were once the same gene, suggesting a branch point in life's tree, or at least in the subsequent existence of that gene.  Similar approaches can be used to estimate when a given gene, perhaps one unique to a particular species, first came into being.  For example, by looking at the variation in a single, uniquely human gene among living human beings, one can estimate the "age" of that gene - when it first arose - from the accumulated variations.
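
Here is a minimal Python sketch of the kind of calculation involved.  The sequences, mutation rate and generation time are all invented for illustration, and real molecular-clock work uses calibrated rates and proper coalescent models; the core idea, though, is simply that the average pairwise difference divided by twice the mutation rate gives an age in generations:

    from itertools import combinations

    def pairwise_diversity(sequences):
        # Average fraction of differing sites across all pairs of equal-length sequences
        pairs = list(combinations(sequences, 2))
        diffs = [sum(a != b for a, b in zip(s1, s2)) / len(s1) for s1, s2 in pairs]
        return sum(diffs) / len(diffs)

    def rough_age_years(sequences, mu=1e-8, gen_years=25):
        # Differences accumulate at roughly 2*mu per site per generation along the two
        # lineages separating any pair of copies, so age ~ diversity / (2*mu)
        return pairwise_diversity(sequences) / (2 * mu) * gen_years

    # Toy example: three made-up 40-base copies of a hypothetical gene
    # (the tiny length means the printed "age" is illustrative only)
    seqs = [
        "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT",
        "ACGTACGTACGTACGTACGTACGAACGTACGTACGTACGT",
        "ACGTACGTACCTACGTACGTACGTACGTACGTACGTACGT",
    ]
    print(f"Rough age estimate: {rough_age_years(seqs):,.0f} years")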

It is conceivable that, as the fossil record is looked at more closely, and as the picture of genetic histories becomes clearer, the groups, dates and possibly locations may start to overlap, pointing to when and where the injections of new genetic info occurred.  This is obviously a long term project.  There will be lots of "noise" and gaps in the data, but if such a consilience or agreement between fossils and molecules can be found, it would be a major discovery.

The third piece of research would be to use comparative genetics to estimate how much genetic info was injected at these supposed design interventions by the intelligent agent.  For example, what features and functions existed in possible partial precursor species, and what additional genetic information would be needed to account for the added or different features seen in the new species that arose?  This could be done by carefully examining what genes are needed for similar features or functions seen in extant species.  None of this would be precise or definitive, of course, but it could be useful and could hint at how the information was added.
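
As a toy illustration of the bookkeeping this would involve, the following Python fragment compares the gene set of a hypothetical newly arisen lineage against the gene sets of two hypothetical precursor groups, and counts the genes with no counterpart anywhere among the precursors.  All of the gene names are invented placeholders; real comparisons would rest on sequence homology, not simple labels:

    # Hypothetical gene inventories (placeholder names, purely illustrative)
    new_species_genes = {"bodyplanA", "eye_lens1", "eye_lens2", "gut_enzymeX", "collagen9", "hoxQ"}
    precursor_gene_sets = {
        "precursor_1": {"collagen9", "hoxQ", "gut_enzymeX"},
        "precursor_2": {"collagen9", "hoxQ"},
    }

    # Genes in the new lineage with no counterpart in any plausible precursor
    present_somewhere = set().union(*precursor_gene_sets.values())
    novel = new_species_genes - present_somewhere

    print(f"Genes with no precursor counterpart: {sorted(novel)}")
    print(f"Novel fraction: {len(novel)} of {len(new_species_genes)}")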

For example, if some newly arisen species had (or its extant descendants have) a set of genes required for some new feature, and it can be shown that those genes could have come from two different families that cannot breed together, then clearly, one good explanation is that the agent took existing genes from two disparate species, and combined them to produce a novel feature in a new species.  This would rule out the small genetic tweaks approach to developing a new line of lifeforms.

One last area of research for the proposed model would be to look closely at what blind Darwinian mechanisms can actually achieve over geological time scales.  That is, what level of variation and adaptation among species can random mutation plus natural selection alone - without intelligent direction or added genomic information - actually produce in genomes and morphology, possibly including new species?  Several natural mechanisms of mutation and genomic change are well known, and mathematical models exist to explore their realistic effects over time in a population of a given size.  The influence of environment and competition can be added to these models to see how much blind evolution might be possible.  Actually, a lot of this work has already been done, including some empirical lab work, much of it by ID proponents seeking to determine where the line between natural evolution and ID lies in the hierarchy of life forms.  See Michael Behe's The Edge of Evolution, for example.  Indeed, initial considerations of this sort were key to the inception of ID theory, but additional research could provide a lot more understanding and insight.
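
For readers who want to see what the simplest such model looks like, here is a minimal Wright-Fisher sketch in Python, tracking one beneficial allele under drift and selection.  The population size, selection coefficient and number of runs are arbitrary assumptions chosen to keep it fast; it is a sketch of the modelling approach, not the published models or lab results referred to above:

    import numpy as np

    rng = np.random.default_rng(1)

    def wright_fisher(pop_size=1_000, s=0.02, start_copies=1, max_gens=100_000):
        # Track one new allele with selective advantage s in a diploid population;
        # return (did it fix?, generations elapsed)
        n_copies = 2 * pop_size
        freq = start_copies / n_copies
        for gen in range(1, max_gens + 1):
            # Selection shifts the expected frequency; binomial sampling adds drift
            p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
            freq = rng.binomial(n_copies, p) / n_copies
            if freq == 0.0:
                return False, gen
            if freq == 1.0:
                return True, gen
        return False, max_gens

    runs = [wright_fisher() for _ in range(500)]
    fix_times = [gens for fixed, gens in runs if fixed]
    print(f"Fixed in {len(fix_times)} of 500 runs (textbook approximation: about 2*s = 4%)")
    if fix_times:
        print(f"Mean generations to fixation when it happened: {sum(fix_times)/len(fix_times):.0f}")

Even this toy version reproduces the familiar result that only a small fraction of new beneficial mutations ever fix, which is one reason waiting times add up so quickly in the scenarios discussed above.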

Instead of trying to debunk Darwinism on the one hand, or completely dismissing Intelligent Design on the other, we should combine them into a better scientific paradigm for evolution.  This reflects Hegel's philosophy of thesis (Darwinism), antithesis (ID) and synthesis (the combined model), as applied to evolution.  As I understand the current situation, some form of ID theory is gaining credibility among scientists of various stripes, and in various places.  Once ID is allowed to exist peaceably alongside Darwinism and its neo-Darwinian synthesis, after their 150-year sole reign, then the above research can begin in earnest, exploring the model offered in Figure 3.  It may be that the above suggestions will lead nowhere, or to a mishmash of uncertainty, but even that would tell us something.  Surely, however, even the possibility of finding out something about when and how the intelligent agent was at work would be a huge addition to our scientific knowledge of the origin and history of life on Earth, and of our relationship with the creator and the cosmos.