The prospect of ending disease and dramatically slowing aging,
coupled with mass extinction of traditional jobs, at least in the developed
world, poses hefty ethical quandaries for humankind. We may find ourselves living longer than ever
before while having less meaningful, necessary work to do. I am an optimist and progressive by nature,
but I’m concerned about—perhaps even doubt—our ability to shepherd technology
in a way that benefits all mankind. Are
we mature enough as a species to be responsible stewards of our own brilliant creations?
We’ve already proven, time and again, that our technology
outpaces our ability to reconcile it with our humanity. “If we can, we will” trumps “even if we can,
should we?” every time. By the time we’ve
cogitated on the ramifications, there ain’t no gettin’ that genie back in the
bottle. We cannot form public policy
fast enough to keep up. To wit: nuclear
weapons, weaponized germs, drones, electronic cigarettes, driverless cars,
computer-controlled high-speed securities trading, etc.
Many of us have heard the almost cliché ethical conundrum
associated with driverless cars: will the car be programmed to protect the occupants
or the people outside the vehicle if it comes to that? Who gets to decide the answer to that
question? The insurance industry?
Over the past few years, there have been discussions
regarding the theoretical possibilities of creating some doomsday form or state
of matter in the Large Hadron Collider (the largest and most powerful particle accelerator
in the world, straddling the Franco-Swiss border near Geneva and run by CERN, the
European Organization for Nuclear Research). Could a cadre
of Swiss PhD quantum supergeeks accidentally create a black hole that would
destroy our planet? My reading
indicates a strong consensus among scientists that such a
possibility is extremely remote. Close
to zero. Still, who asked the question,
and who got to answer it? How close to
zero is okay for the rest of us?
This lag between technology
deployment and ethical due diligence is widening at an accelerating rate.
By the end of this century, we will very likely have
conquered, or at least significantly subdued, many, if not most, of the diseases
that vex us today—cancer, heart disease, and degenerative neurological and
muscular disorders. We will very likely
have the technological capability to prevent birth defects and even select genetic
traits in our offspring. It’s very
possible we will have a much better understanding of the mechanisms of aging,
and will have developed ways to retard the aging process. It’s all very exciting.
It’s also very scary.
Certainly, biotechnology breakthroughs are going to be available on a
very asymmetric basis, socio-economically.
Access, for the first decades, will be an expensive privilege, not a
right. We will vanquish disease, eliminate
birth defects, choose the color of our children’s eyes, and defy aging in the
affluent parts of our world first. The
fundamental distinction between “haves” and “have-nots” is relative financial
wealth—the chasm that incites and inflames global conflict. What will happen when the “haves” also gain
access to (or, more ominously, control of) more of the most precious resource of
all: time—years, maybe decades, of high-quality living? Look out, world.
So, people in the richer parts of the world will be living
longer, healthier lives. Doing what for
a living is not yet clear.
Humans began as hunter-gatherers. We lived in small social
groups, working to meet all of our needs on our own. We evolved, moved into cities, began
specializing, and trading on our specialties to meet all our needs. You made my bread, I shod your horses. Today, over half of human beings live in cities.
A tiny percentage of us provides the
food for all of us. Our progressing
specialization through the ages, accelerated by global competition, has been
the driving force behind automation.
Automation (including artificial intelligence) is merely the next logical,
inescapable, step in industrial development.
We are either past, or fast approaching, the point at which
most manufacturing jobs (as well as jobs in many other sectors—agriculture,
medicine, transportation, energy extraction, to name a few) can be technologically
performed better by machines than by humans.
Safer. Faster. Much less variation. No human error. The “jobless recovery” after the 2008 global
financial meltdown gave us our first stark glimpse of this new reality. The only true decision being made right now: “Is
it more cost-effective to automate in the developed world, or to export the jobs
to low-wage countries, where labor is still cheap enough to offset poorer
efficiency and productivity?” This trend
is irreversible and accelerating.
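The automate-or-offshore decision described above boils down to a break-even comparison of annual costs. Here is a minimal sketch of that arithmetic; every figure in it (capital cost, amortization period, wages, the productivity discount, logistics overhead) is a made-up illustrative assumption, not real data.

```python
# Hypothetical break-even comparison: automate a production line domestically
# vs. offshore the work to a low-wage country. All numbers are invented.

def annual_cost_automate(capex, years, maintenance):
    """Amortized yearly cost of an automated line: spread the capital
    expenditure over its service life, then add yearly maintenance."""
    return capex / years + maintenance

def annual_cost_offshore(wage, workers, productivity_factor, logistics):
    """Yearly cost of offshore labor: lower productivity effectively
    inflates the wage bill, and shipping/coordination adds overhead."""
    return wage * workers / productivity_factor + logistics

automate = annual_cost_automate(capex=5_000_000, years=10, maintenance=200_000)
offshore = annual_cost_offshore(wage=8_000, workers=100,
                                productivity_factor=0.7, logistics=150_000)

print(f"automate: ${automate:,.0f}/yr, offshore: ${offshore:,.0f}/yr")
# → automate: $700,000/yr, offshore: $1,292,857/yr
```

With these particular made-up inputs, automation already wins; and the comparison only tilts further that way as offshore wages rise and machine costs fall, which is why the trend runs one direction.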
So, we better figure out how and where we humans fit into
the world that is fast coming at us.
What functions are inherently incompatible with automation (at least for
the foreseeable future)? What fields
will we try to steer our children into to give them the best chance for
opportunity and success? Will there be
enough work for all of us to do? Will
that work be valuable enough—will we get paid enough—to allow us to support
ourselves and our families? How will,
how should, the economy work? We’ll be
living longer, capable of working longer.
Will our global population swell, and will its growth accelerate, if
we’re all living longer?
These themes are not new.
They’ve been within the realm of science fiction for decades. What is new is that these themes are no
longer fiction. It’s happening. We’re witnessing the emergence. There are far more questions, with precious few
answers. I certainly have none to offer,
other than to urge us all to start thinking about this right now. Now is the time to be articulating and
debating the important questions, because as soon as we can do it, we probably will
do it, whether or not we should do it. Cash
in on those investments. Exceed the
earnings projections. Reward the
shareholders. Long-term risks and consequences
be damned.
Again, I’m an optimist.
This technological progress in life sciences and manufacturing can, and
should, turn out great for all of us, but I’m not convinced the forces of
nature—human and otherwise—will lead to good outcomes naturally. It won’t happen by accident. We have to choose: play
active offense, or sit in passive defense. Are
we going to steer this car, or what?
God bless us, everyone.
