Sunday 14 April 2019

Futile Technological Dreams

Some scientific ventures and technological projects should stop pretending that there will be a big breakthrough that will change everything. After decades of unfulfilled promises and lack of significant progress, some research areas should be mothballed, or at least scaled back so that the money and effort can be applied elsewhere. Here are some currently expensive science and technology projects that, in my humble engineering opinion, will not pan out as long hoped, notwithstanding all the public hype surrounding them:

Practical Fusion Energy:
The joke for the past 50 years or more has been that fusion power generation is always just 20 years away. It would be great if fusion power became a reality, and some of the research in that area is fascinating, but I think it is safe to say we will not have a practical (much less financially viable) fusion generating plant in this century. Even if the scientific hurdles could be overcome - and they are still huge - the technology is so esoteric, marginal and complex that building a fusion reactor able to operate reliably for years is far beyond anything conceivable today.

Fusion research focuses on two areas: magnetic and inertial confinement. Magnetic confinement is the original hope: strong magnetic fields in some three-dimensional configuration hold a very high-temperature plasma at a high enough density, for long enough, that tritium and deuterium nuclei fuse together into helium nuclei. This reaction releases some neutrons and lots of energy, which, when thermalized, can drive a normal turbine/generator for electrical power.
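The energy released per reaction is easy to check from the mass defect. A minimal sketch in Python (the atomic masses are standard published values; the script is just illustrative arithmetic, not part of any reactor design):

```python
# Energy released by D + T -> He-4 + n, computed from the mass defect.
# Masses in unified atomic mass units (u); standard published values.
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV (E = mc^2)

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV
print(f"D-T fusion releases about {energy_mev:.1f} MeV per reaction")  # ~17.6 MeV
```

That ~17.6 MeV per reaction is why fusion is attractive in the first place; the whole engineering problem is creating conditions where enough of those reactions actually happen.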

The basic problem is maintaining the temperature and density high enough to produce more output energy than went into getting the reaction started. The plasma leaks out of the magnetic fields, the strong fields become unstable, and the peak conditions quickly fade. The newest fusion reactors have to use all sorts of special effects and added technology to achieve even momentary energy break-even. And then the system has to be shut down and refurbished, or at least cleaned up and re-initialized for the next experimental run. Unwanted reaction products pollute the carefully assembled gas cloud. High-energy particles irradiate and damage the interior walls of the chamber, and all the external high-power equipment is pushing the envelope of what is technically possible, so it is difficult to keep in operation and expensive to maintain.

It seems the best promise is to make the reactor much larger, but that brings its own technical problems, not to mention costs. Slight improvements in materials and better understanding of the physics are unlikely to achieve the major advances needed for a commercially successful fusion station, much less the "energy so cheap, it won't be worth billing for" as originally dreamed.

Inertial confinement schemes suffer even worse problems. In this approach, a small target sphere, with metal layers over the reacting deuterium and tritium inside, is carefully injected into the reaction chamber and then blasted at a precise moment with focused, high-power laser beams on all sides. If everything is perfect, the metal shell is vaporized and blows off, pressing strongly against the reaction gases inside, compressing them for a few nanoseconds, just long enough for some of their nuclei to fuse together, releasing a lot of energy, which then blasts apart the gas, ending the reaction. Each target is small and expensive to make, and yields only a small amount of energy in a single burst. The hope is that the targets can be made cheaply, and injected and pulsed regularly, to provide continuing pulses of thermal energy, and hence generate more power than is needed to operate the lasers. Experimental systems have apparently been able to approach energy balance in single-shot tests.

The lasers involved are huge, very expensive, and difficult to keep focused and operational. Their lenses tend to heat up, distorting their shape and focus. The power supplies needed to drive the lasers are huge, and coordinating multiple laser pulses is tricky, even for a single pulse. The target pellets have multiple ablative layers which need to be precise in order to achieve uniform compression. Keeping the system, especially all the optics, clean with the exploding pellets and reaction products is a major hurdle for continuing operation. And then there is getting the released energy out of the reactor efficiently enough to make the power generation worthwhile, especially since much of the system needs to be cooled for proper functioning. As with magnetic confinement, it will be a steep uphill battle to solve all the engineering problems needed for continuous commercial operation.

Don't get me wrong, I think fusion research should continue. We will learn more about the physics and the techniques for controlling plasmas, the management of microscopic fusion explosions, and perhaps more important, the technologies developed to push beyond the current boundaries of materials, manufacturing, control mechanisms, and human understanding. But do not pursue this work hoping to build a practical fusion generating station in the foreseeable future.

As a reality check, compare how easy it was to get a stable nuclear fission "atomic pile" working in the 1940s with the troubles nuclear reactors have over the long term today: leaks, cracks, fuel handling and processing, redundant safety devices, human error, waste materials, decommissioning, etc. Given the experimental difficulties of sustaining fusion at all, the construction and operating headaches of building and using such a reactor would be an order of magnitude more expensive and difficult. What power company is going to want to invest tens of billions in such a doubtful venture? Perhaps in a hundred years, with new technology, materials and know-how, such a power plant may become feasible, but even then, will it be practical?

AI Consciousness:
There has been a lot of hype about artificial intelligence in the past decade or so: deep learning, Jeopardy-winning Watson, game-playing champions, the dream of uploading our consciousness, the search for an artificial general intelligence, attempts to model animals' brains, and of course the singularity, in which AI becomes smarter than humans, evolves itself, and eventually takes over the world, for better or worse for mere humans. It is true there have been impressive advances: self-driving cars, natural-language processing, expert systems, big-data mining, and so on. But none of this comes anywhere near a self-aware, conscious AI, notwithstanding all the sci-fi movies and the various futuristic warnings and promises in the news and media lately.

Any AI system is basically a complex algorithm: using inputs and information from its memory, it does calculations, makes decisions, draws conclusions, and produces outputs, all based on its programming.  Yes, some programs can adjust themselves to improve their game playing, or incorporate new data or goals set for them, but they do not "choose" their own goals unless they have higher-level goals preprogrammed.  They do not think for themselves, unless you want to count what AI does as "thinking".  Unlike humans, they have no subjective sense of self, purpose, or meaning.  A chess-playing program does not "know" that it is playing chess.  Sure, it may be able to answer questions about chess and tell you it is playing that game, but it does so only because of its programming, done by clever human minds.
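The point can be made concrete with a toy sketch (entirely hypothetical code, not any particular AI system): even a program that "learns" is just improving against an objective its programmer wrote down; it has no machinery for wanting anything else.

```python
import random

random.seed(0)  # for reproducibility

# A hard-coded objective, supplied entirely by the programmer.
def objective(x):
    return -(x - 3.0) ** 2  # the "goal" is simply: get x near 3

# A trivial hill-climbing "agent": it gets better at its task,
# but the task itself is fixed from outside.
x = 0.0
for _ in range(1000):
    candidate = x + random.uniform(-0.1, 0.1)
    if objective(candidate) > objective(x):
        x = candidate  # "learns" to do better at the preset goal

print(round(x, 1))  # converges near 3.0, because that is what it was told to want
```

Swap in a different `objective` and the same "agent" will just as happily pursue it; at no point does the program choose, or even represent, a goal of its own.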

No computer can "understand" the "meaning" of a poem or painting.  It may be able to parse the text and tell you what the poem is about, but it cannot think about the poem and what it means to itself, you or me.  Any computerized "emotions" are merely simulated in response to specified inputs.  No AI can have "intentions" aside from optimizing some parameter specified by its programmers.  Thus, all the apparent intelligence and seeming autonomy are narrow and programmed into it by humans, who do know what they are trying to achieve.

Don't get me wrong, AI has an amazing and huge future: medical diagnosis, human assistance, technology management, scientific research, etc. are all being helped by AI today, and that will only increase in the future. We will eventually get truly self-driving cars (although not as soon as some people project). We may get AI systems that can diagnose diseases better than human doctors. And so on. This work will continue and I hope it continues to benefit mankind in realistic ways.

It is quite possible that at some point in the near future, someone's clever AI system will be able to pass a fair Turing test, but that will not make it conscious, the equivalent of a human, much less of superior intelligence. Such a machine will have been programmed by humans to emulate a human well enough to fool other humans, but that won't make it human-like in any true sense. Humans are more general and flexible than any computer, and humans have self-awareness, subjective intentionality, and a whole broad set of capabilities besides. They understand what they are doing!

So do not worry about AIs taking over the world or usurping humans. There is more danger from (human) hackers taking over an AI to crash a self-driving car, or damage a nuclear reactor's controls. Or some terrorist organisation getting hold of a smart weapon and sending it into the White House. It is true that some people will lose their jobs to AI systems, but that will happen slowly, and people are usually flexible enough to find other employment niches. So AI will be disruptive, but not destructive, nor the end of the world as we know it. The singularity, uploading your consciousness, sentient androids, equal rights for AIs: these are all science fiction staples, but they will not happen in your lifetime, and perhaps never.

Quantum Computers:
There is a lot of research going on trying to develop a useful quantum computer. This is a device or system where the "bits" in a standard binary computer (all those 1s and 0s) are traded for "qubits". A qubit has the quantum property of being in an undefined state, a "superposition" of being both a "1" and a "0" simultaneously. That sounds weird, as does pretty much everything about quantum physics, but it is perfectly true. So far, no problem.
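The math behind a qubit is surprisingly plain, and a short sketch makes the idea less mysterious (this simulates the arithmetic on an ordinary computer; it is not quantum hardware): a qubit is just a pair of amplitudes, and measurement probabilities are their squared magnitudes.

```python
import math

# A qubit state is two amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# Equal superposition: alpha = beta = 1/sqrt(2).
alpha = 1 / math.sqrt(2)
beta = 1 / math.sqrt(2)

p_zero = abs(alpha) ** 2  # probability of measuring "0"
p_one = abs(beta) ** 2    # probability of measuring "1"

print(p_zero, p_one)  # 0.5 each: the qubit is "both" until measured
```

The catch, of course, is that simulating the amplitudes is trivial; building physical hardware that holds them is the hard part.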

In principle, with enough qubits all working together ("entangled" in the quantum jargon) such a computer could solve certain types of problems and simulate physical systems much faster than today's computers. Algorithms for quantum computers have been developed to take advantage of this capability, and have been shown to work on systems of a few qubits. Thus, the principle of quantum computing is sound.

The major hurdle to the development of powerful quantum computers is the number of qubits that can be entangled and maintained for the duration of the algorithm processing. It is relatively easy to get a large number of atoms entangled, but those are not qubits. To be useful, the qubits have to be maintained individually, programmed individually and properly, allowed to process the desired algorithm, and then read-out at the end of the run. The trick is to do that while keeping them all entangled long enough so they can all work on the algorithm simultaneously. In all such systems so far, "decoherence" sets in after a brief time, as the qubits begin to lose their entanglement. At that point - usually a few microseconds - errors creep into the processing so that the results quickly decay into nonsense.
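A crude way to see why decoherence is so punishing (a toy model with made-up numbers, not a real error analysis): if each qubit independently survives each time step with probability close to 1, the chance that all of them stay coherent for a whole computation shrinks exponentially in both qubit count and run length.

```python
# Toy model: each qubit survives one time step with probability (1 - eps).
eps = 0.001        # assumed per-qubit, per-step error rate (illustrative only)
n_qubits = 100     # a modest machine by "useful computer" standards
n_steps = 10_000   # steps for a nontrivial algorithm (also illustrative)

# Probability that every qubit survives every step with no error at all.
p_all_coherent = (1 - eps) ** (n_qubits * n_steps)
print(p_all_coherent)  # vanishingly small
```

This is exactly why practical proposals lean on quantum error correction, which in turn multiplies the number of physical qubits required.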

In the initial years of development, various ways to generate and maintain qubits were researched. Then various ways to get several qubits working together were developed. It became possible to keep up to perhaps ten or a dozen qubits entangled and accessible long enough for testing simple quantum algorithms successfully. However, any useful quantum computer - "useful" meaning sufficiently faster than a normal computer to make the cost and effort worthwhile - would have to use hundreds of qubits, and preferably thousands, and that has proven to be a huge challenge.
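One way to feel the scale of "hundreds of qubits" (simple arithmetic, not a hardware claim): just writing down the state of n fully entangled qubits on a classical machine takes 2^n complex amplitudes, which is also why a machine with a few hundred genuinely entangled qubits would be beyond classical simulation.

```python
BYTES_PER_AMPLITUDE = 16  # one complex number at double precision

def classical_memory_bytes(n_qubits):
    # A state of n entangled qubits has 2**n complex amplitudes.
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (10, 30, 50):
    print(n, classical_memory_bytes(n))
# 10 qubits: ~16 KB; 30 qubits: ~16 GB; 50 qubits: ~16 PB
```

The exponential growth cuts both ways: it is the source of the hoped-for speedup, and the reason every added qubit makes the machine so much harder to build and verify.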

Quantum computers boasting ten or so qubits are available for research purposes today. There are claims of systems having 40 or more qubits, but those are questioned by many people. Any system with 40 qubits that cannot get them all entangled together, carefully programmed, and kept in that state long enough to perform useful processing is not very helpful. I am not an expert on this, of course, but from what I have read, every added qubit makes the system much more difficult to set up and more susceptible to noise and errors, and shortens the duration of the entanglement. As a result, there has not been much advance in the past few years.

Based on this pessimistic outlook, I do not expect useful quantum computers to be ready for sale any time soon, and probably not for a long time, if ever. Here too, the research and technology are fascinating and should continue, but we shouldn't base our support for it on hyped-up promises. I am not alone in my doubt; a recent IEEE Spectrum article described some additional concerns about the feasibility of quantum computing as the number of qubits increases.

Origin-of-Life Research:
This one is somewhat different in that the research does not aim to produce any new products or commercially useful processes, although if successful, some would doubtless flow from the results. Rather, the purpose here is to explore how life might have got started on Earth some 3.8 billion years ago or so, and as an added benefit, to see what conditions and processes could possibly cause life to arise on other worlds. This is called abiogenesis - life arising from non-life.

These are valid research projects, but there is another motive: to show that undirected natural processes could bring forth simple life forms, thereby undermining Intelligent Design theory and the need to posit supernatural creation or intelligent guidance. As Richard Lewontin, an atheist scientist, remarked, "materialism is absolute, for we cannot allow a divine foot in the door." Thus, some scientists are consumed with seeking only materialist causes, however hypothetical or unlikely.

And unlikely they are. Charles Darwin speculated about life beginning in a "warm little pond" somewhere on Earth, but the biomolecular science needed to define what that meant came only about a hundred years later, with research into proteins, DNA, the genetic code, and other aspects of living cells and organisms. Numerous hypotheses have been put forward about how life might have got started: thermal cycling in warm ponds, non-equilibrium chemistry around sea-floor vents, surface catalysis on clays, organic molecules deposited by comets, lightning strikes in the atmosphere, even panspermia (seeding from outer space). It seems any superficially plausible speculation is good enough to get published, hyped up for the public, and even funded for further research.

The detailed chemistry of all these ideas, however, involves immense hurdles, even with the probabilistic resources available over the surface of the Earth and hundreds of millions of years. Trying to find credible chemical pathways from simple organic molecules to complex biology capable of self-reproduction is extremely difficult, and so far has been impossible, even in carefully controlled laboratory experiments, with expert help in selecting pure chemicals, setting exacting conditions, purifying intermediate steps, preventing unwanted reactions, and then changing everything for the next step. To seriously suggest all this happening in a random dilute mix of molecules, with no plan or guidance, beggars the imagination.

Among the many difficulties: the building blocks of life are difficult to construct in mixed-chemistry situations, and tend to degrade or agglomerate in the wrong way once they are created. Getting a nice mix of nucleotides or amino acids together under any realistic early-Earth conditions is essentially impossible. Getting them to then polymerize properly is another massive hurdle. Life uses molecules of a single handedness (left-handed amino acids, for example), but non-biological processes tend to produce left- and right-handed molecules at random, which are then difficult to separate.

The simplest possible cell needs more than the building blocks. Any realistic cell needs proteins to operate, specific sugars to consume for energy, various lipids for the cell wall, other carbohydrates, and complex enzymes to make the biochemical processes possible. Any living system uses DNA and other codes to specify these complex items, as well as pre-existing molecular machines (proteins) to make them and do the actual work. How any of this could have come about naturally anywhere is all but incredible. How all of it could have come about in the same place at the same time, by unguided, natural processes is essentially impossible.

Ah, but you will say, there was an intermediate step using RNA molecules as both the early genetic code and the functional apparatus to do the work. Thus was born the RNA world hypothesis, wherein RNA somehow came about and was active enough not only to operate as a simple life form, but also to store information and replicate itself, allowing Darwinian evolution to get going. This overlooks the need for lipids and sugars, but sounds interesting. It is presumed that the DNA code storage and the protein synthesis mechanisms evolved later via natural selection.

Aside from the problem of supplying the lipids and sugars, and the undefined Darwinian magic, not to mention the chirality (left-right chemistry) issue, RNA molecules do not spontaneously come about by non-biological means, much less come together in long polymer chains. Even under controlled lab conditions (intelligent design here!), the best active RNA molecule that scientists could design and create, so far as I am aware, is one that cuts itself in two pieces; not exactly a promising start. More recently, a laboratory-created RNA strand that can catalyse its own "reproduction" from other carefully produced smaller strands, under precise lab conditions, has apparently been demonstrated. The gap between these experiments and the creation of an "RNA world" under naturally occurring conditions is immense.

How any RNA protocell would develop the transcription machinery needed to use DNA before DNA existed, or how it would make DNA before being able to use it, has never been elucidated. A lot of intracellular machinery is needed for the DNA-RNA-protein synthesis process to work, so it would all be needed at the same time, since unguided processes, even Darwinian ones, cannot foresee a future need for specific complex chemistries.

Over and above all of this is the information problem.  Even the simplest life form has megabits of functionally precise DNA code, needed to specify all of its hundreds of proteins and to control its cellular processes.  In our uniform experience, huge amounts of meaningful information arise only from an intelligence, or from a machine designed and built by an intelligence.  There is no credible natural mechanism that can generate such volumes of genetic code, much less usefully insert it into a complex chemical system.  In theory (though questioned in practice), the Darwinian mechanism can add a bit or two at a time, but that cannot account for the megabits of code needed before natural selection has something to work on - a self-sustaining, self-replicating protocell.
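To put a rough number on "megabits" (the genome size below is an approximate published figure, and the two-bits-per-base count ignores any compressibility; the arithmetic is the point): one of the smallest known self-replicating bacterial genomes, Mycoplasma genitalium, is about 580,000 base pairs long.

```python
# Rough information content of a minimal genome (illustrative arithmetic).
base_pairs = 580_000   # approx. size of the Mycoplasma genitalium genome
bits_per_base = 2      # four possible bases -> log2(4) = 2 bits each

total_bits = base_pairs * bits_per_base
print(total_bits / 1e6)  # ~1.16 megabits of sequence information
```

Even that stripped-down organism carries over a megabit of sequence, which is the scale any origin-of-life scenario ultimately has to account for.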

Every effort to produce life from scratch, even under lab conditions, with scientific guidance and control, has fallen far short of the goal. With each new finding in molecular biology, the complexity increases, and new hurdles to abiogenesis are revealed.  Thus, I do not expect that humans will ever find a credible natural route to making life.  I am not an expert here, but other, more knowledgeable chemists agree.  Such research efforts should probably continue, as new pathways and processes may be revealed, but the prospects are not promising, notwithstanding the hype in popular magazines. Perhaps research should expand to explore how much functional information is needed, and allow for the possibility that intelligence provided that information, along with the initial biochemical tools needed to interpret and use the information.
