JACK IN DAVAO
Other Places, Other Perspectives
On Our Technological Future, Futurology, And Why A Middle-Aged Lawyer Would Subject Himself To The Ignominy Of Graduate Student-hood Instead Of Making Big Bucks Practicing Law

One of the biggest mistakes that one can make in life, in my opinion, is the one that economists refer to as "preparing for the wrong world". Investing heavily in cassette tape manufacturing just as CDs were entering the scene, for example. Assuming that the future will be like the present (or, worse, the past) is an almost guaranteed way of preparing for the wrong world. Parents often make this mistake in advising their children. Smart kids understand perfectly that career strategies that may have worked for their parents in 1972 are probably not going to be optimal in 2006.

Ray Kurzweil says that technology progresses exponentially. (If you aren't familiar with Kurzweil's work, I highly recommend his latest book The Singularity Is Near, his earlier book The Age Of Spiritual Machines, and his web site.) For the non-mathematically inclined, "exponential" means that technological progress happens faster and faster. This only makes sense, since today's progress yields better tools and a wider range of materials and components and ideas that can be used to make even faster progress tomorrow.
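
To make the compounding concrete, here is a toy model in Python. The 40 percent annual improvement rate is an arbitrary number I chose purely for illustration, not a figure from Kurzweil:

    # Toy model: each year's progress is proportional to the tools
    # already in hand, so capability compounds like interest.
    capability = 1.0
    annual_rate = 0.4  # assumed: 40% improvement per year (illustrative)
    for year in range(21):
        if year % 5 == 0:
            print(f"year {year:2d}: capability {capability:10.2f}")
        capability *= 1 + annual_rate

After ten years, capability is about 29 times the starting value; after twenty, about 840 times. The yearly increments themselves keep growing, which is why exponential curves feel slow at first and then sudden.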

I have had the good fortune to have lived and worked through enough of the computer revolution to have some perspective on how much technology can change things, and how fast. Barely a decade has passed since the internet burst into public consciousness. Less than five years ago, mobile phones were expensive, unreliable, and not at all commonplace. Twelve years ago, when I was working on my master's thesis in bioengineering, I sometimes found it necessary to drive 120 miles to the UA medical library in Tucson to obtain articles in mainstream medical journals to which ASU did not subscribe. As I write this (in the spring of 2006), I am sitting in a house on an island in the southern Philippines, from which I can access the internet with my laptop over a broadband connection, and I can instantly search for and obtain nearly any article in any issue of any scientific journal ever published. I can talk to anyone anywhere in the world from my cell phone, or over the internet using my laptop and a headset, practically for free.

Now, at this point, you're probably thinking, "here we go again, another gee-whiz paean to computer technology." No. That isn't the point. That was merely an example of how fast things can move in a single technology. In, say, 1993, no one, except perhaps one or two visionaries whom no one believed, had a clue what was about to happen in the last half of the decade. Not even geeks -- I know, I was in graduate school in 1993, my thesis research was intensely computational, and the internet phenomenon took me completely by surprise. Even Bill Gates was blindsided by the internet.

First, Information Technology . . .

Superficially, the net may not seem like a big deal, basically television but with more channels (Dave Barry called it "CB radio for people who can type"). Wrong. The net reduces the cost of publishing to zero, so all kinds of new ideas and information are suddenly accessible. The net facilitates efficient management of enterprises, resulting in astonishing gains in productivity that benefit everyone. The net is probably responsible, more than any other factor, for lifting billions out of poverty in China and India by giving individuals and small enterprises a cost-effective way to participate in the world economy. The net gives scientists and engineers almost instant access to the work of other scientists and engineers, information that formerly could be obtained only by spending hours or days in libraries searching by hand, and then only incompletely. The net enables increasing numbers of people to work without having to commute every day to an office downtown, potentially reducing consumption of fossil fuels, reducing pollution, improving the lives of children whose parents can now work from home, and allowing people a wider range of choices about where to live. None of this was anywhere on the radar screen as recently as a decade ago.

Then Biotech, Nanotech, Robotics . . .

Added 8/3/06: Here is what the highly regarded futurist Alvin Toffler has to say about it in his new book, Revolutionary Wealth (p. 11):

Serious scientists today are no longer afraid of damaging their reputations by talking about time travel, cyborgs, near immortality, antigravity devices that could transform medicine and provide an endless source of non-fossil fuel energy, and many other possibilities once found only on the wilder shores of unbelievability.

Discussions of topics like these are not dismissed, as they were when we wrote about them in Future Shock in 1970. Nor is it only shaggy-haired scientists who are investing effort in these fields. Some of the biggest corporations in the world -- and some armies -- are spending huge sums to investigate them.

Biotech today is where the internet was in 1993. That is just my opinion, but a lot of smart people, including Kurzweil, share it. Nanotech and robotics are at most a decade behind. These technologies have the potential to alter human existence in ways that will make the late '90s seem like the stone age. Engineered perfect health. Dramatically extended life span. The end of the human misery that accompanies many chronic diseases. Cheap, custom manufacturing of anything anyone can conceive of. An end to the drudgery of boring repetitive tasks.

That's just biotech, nanotech, and robotics, disciplines that are sufficiently far along to allow reasonably confident prediction. They are just the beginning. A small but growing number of very bright, successful, mainstream scientists -- not fringe crackpots by any stretch -- are working on problems like antigravity, free energy, aging reversal, faster-than-light travel. I expect to see breakthroughs in some of these areas in my lifetime. (I am a technology optimist.)

The Singularity

Kurzweil takes these kinds of projections much further, as do a growing number of "Singularitarians" and "Transhumanists" who have embraced and extended Kurzweil's ideas. Kurzweil argues that the exponential increase in the capabilities of information technology, biotechnology, nanotechnology, and robotics, and the convergence of these technologies, will, within no more than a few decades, culminate in a "singularity" where artificial computational and robotic intelligence will be so far in excess of human intelligence that it is impossible to predict what the world may be like beyond that point. In twenty years or so, according to Kurzweil's projections, the thousand-dollar PC on your desktop will have the computational power of one human brain. Ten years or so after that, a thousand-dollar PC will have the computational power of the entire human race. What might such an intelligence accomplish? Who could predict?
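
The arithmetic behind those projections is easy to sketch. The figures of 10^16 calculations per second for one human brain and 10^26 for all human brains are Kurzweil's rough estimates; the 2006 starting point and the fixed one-year doubling time below are my own simplifying assumptions:

    import math

    cps_one_brain = 1e16     # Kurzweil's rough estimate, calcs/sec
    cps_all_brains = 1e26    # his rough estimate for all of humanity
    cps_per_1000usd = 1e9    # assumed: a 2006-era $1000 PC
    doubling_years = 1.0     # assumed: fixed yearly doubling

    def years_to_reach(target_cps):
        return math.log2(target_cps / cps_per_1000usd) * doubling_years

    print(years_to_reach(cps_one_brain))    # ~23 years
    print(years_to_reach(cps_all_brains))   # ~56 years

With a fixed doubling time the second milestone arrives about 33 years after the first; Kurzweil gets a much shorter interval because on his curves the doubling time itself keeps shrinking. Either way, the one-brain crossover lands within a couple of decades.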

Some observers, like Bill Joy ("Why The Future Doesn't Need Us"), have wondered what kind of role there would be for humans in a world of artificially created superintelligence (pets?). Others (e.g. Berkeley philosopher John Searle) argue that "strong AI" is not possible in the first place, that Kurzweil's PC with the computing power of the whole human race will still be basically nothing but a really fast calculator, not conscious, not 'intentional'. Kurzweil, for his part, predicts that the evolving superintelligence will not be a threat to human existence because we will merge with it and be part of it, gradually replacing our biological parts with artificial ones and becoming more and more directly networked with machine intelligence.

The Convergence . . .

Here, I part company (somewhat) with the "Singularitarians". I have no idea what the world will be like thirty years from now, and I don't think they do either. I would prefer to focus on a shorter time frame, where there is more certainty and less motivation for existential debate. Kurzweil's exponential trends in IT, biotech, nanotech, and robotics seem to me unquestionably sustainable over the short term -- say, the next ten years. Even if the exponential increase eventually fizzles out, and progress after the (say) ten-year point becomes merely linear, the rate of progress and change will still be more than fast enough to keep things plenty interesting. So I would rather speak of what we might call the "Convergence" -- the accelerating progress in IT/biotech/nanotech/robotics, coupled with the rapidly increasing degree of interdependence and cross-fertilization among those fields, over the next (say) decade or two. The "Singularity" is, by definition, speculative; the "Convergence" is, in my view, very close to a sure thing, barring some intervening catastrophe.

I have no idea what the role of humans might be in a "Singularity", but the opportunities for humans in the Convergence are obvious. Forget about Kurzweil's predicted Nirvana-like merger of humans with an all-pervading artificial cyber-intelligence. No need to go that far. Try this: suppose you define "intelligence" in terms of the performance of a single human on some standardized test. An IQ test would suffice, but let us choose something more challenging: say, the Graduate Record Examination in Physics. Let us then specify that the test-taker is a person with a recent university degree in physics, and will be subject to the usual time limit, but may use any resources he or she cares to bring to the exam. An optimal test-taker in 1976 would probably have brought a box of books and an electronic calculator. An optimal test-taker in 2006 would bring a broadband web-connected laptop with appropriate software loaded (Mathematica would be No. 1 on my list).

Of course, we don't measure intelligence that way, but if what we want is a measure of an individual's true potential ability to function intellectually, why don't we measure that ability in the context of the resources to which a trained intellect would actually have access in real life? And if we did that, wouldn't we find that human intelligence has already increased enormously over the last few decades, as a direct result of technological improvements in our access to knowledge? Now, think about how much further we could increase our practical intelligence just with improvements that are already on the horizon -- ubiquitous high-bandwidth net access, improved search utilities to help weed out the irrelevant, direct human-machine interfaces (am I the only one who finds it incredible that we are still mostly communicating by pressing buttons with our fingers?).

Why Does It Matter?

From one perspective, accelerating technological progress has been a rising tide that lifts all boats. To a greater or lesser degree, nearly everyone has benefited from the burst of IT over the last decade. Progress can also be disruptive, however. Those who are able to avail themselves of the latest technological developments have a significant, and widening, advantage over those who cannot.

I take a slightly radical view, which diverges from that of futurists like Kurzweil. Kurzweil and other tech cheerleaders argue that technological breakthroughs are inherently democratizing; each new advance will very quickly become available to everyone. New technologies, they say, pass through predictable phases. PCs, laptops, cell phones -- at first expensive and so unreliable as to be barely useful, later moderately expensive and starting to become fairly useful, and ultimately completely reliable, ubiquitous, and so cheap that anyone who wants one can have one.

That argument may be reassuring to the Luddite masses, as I am sure it is intended to be. "Don't worry, no one is going to try to take advantage of technological change to actually get ahead -- that would be undemocratic -- just stay glued to American Idol, don't worry, be happy, we'll make sure you get your share."

Maybe it's just lawyerly cynicism, but I don't buy it. Humans evolved by competing, and I don't see any sign that the contest has miraculously been called off. Compare two competitors, one having access to the state of the art and one merely a year behind the curve. The pace of progress is, remember, increasing exponentially. That means that the absolute magnitude of the difference between state-of-the-art and a year behind gets bigger, and bigger, and bigger. So, whoever is ahead is going to choose not to avail himself/herself of that competitive advantage? Is that a credible hypothesis?
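
A toy example makes the point. Assume, purely for illustration, that capability doubles yearly and that the laggard runs the very same curve one year late. The ratio between them stays fixed at two, but the absolute gap doubles every year:

    # Leader doubles yearly; laggard is one doubling (one year) behind.
    leader = 1.0
    for year in range(1, 6):
        leader *= 2
        laggard = leader / 2
        print(f"year {year}: gap = {leader - laggard:.0f}")
    # Gaps printed: 1, 2, 4, 8, 16 -- a constant 2x ratio, but an
    # absolute lead that compounds without bound.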

In fact, the difference between state-of-the-art and technology-available-to-the-masses is already huge; if you don't believe that, try comparing the standard of medical care at your HMO with, say, the Mayo Clinic. Where is cheap-reliable-ubiquitous as applied to, say, magnetic resonance imaging machines? Pharmaceuticals? Personal jet aircraft (for that matter, single-engine Cessnas)? And wouldn't someone who truly understands computational technology obtain a much bigger benefit even from cheap and ubiquitous items like PCs and laptops than would the average technophobic consumer?

I think that there is a very great and rapidly growing competitive advantage to being as close to the cutting edge as it is possible to get, and I am not naive enough to assume that those who are motivated by greed for money and power have somehow overlooked this. Human society is, I suspect, even now in the process of making decisions that will determine whether the rapidly advancing technology enables a better world for everyone, or whether it merely enables a still higher level of consumption and power for a grabby few. Western society may already be in a process of bifurcating, to eventually gel into two distinct groups: one relatively small, comprising those who embrace and control the technology, the other comprising everyone else. If so, the two groups will diverge rather rapidly in terms of the breadth of their respective options. This kind of bifurcated society has existed many times before (and exists even today in many countries) and may even be the natural default state of human society.

I don't mean to suggest that 'everyone else' would necessarily be made worse off in material terms (although that is certainly possible as technology provides would-be oligarchs with improved tools of domination). The point is that those who 'get' the technology will be in a position to be proactive. They will design the future, while those who wait for the benefits to trickle down will be mere consumers of it. But, in a sense, the rapid advancement of technology is a quintessentially egalitarian force -- anyone who wants to can join the club, all it takes is the will to do so and the willingness to work at learning the necessary skills. Thanks to the net, the information is increasingly accessible to everyone.

I don't want the future to be controlled by a small, power-hungry elite, for its own exclusive benefit. It is, I think, terribly important for the preservation of a free and open society that the rest of us not cede control to a dominant few out of ignorance or laziness. It is terribly important that the inevitable empire building urges of the alpha gorillas among us be counterbalanced by moderate voices who understand and take part in what is happening at the frontiers of technological progress.

The bleeding edge is where the action is. At the moment, I've got a front row seat. I wouldn't trade that for mere money.





