Futuristic Reading List

Here is a compilation of the sci-fi, cyberpunk, and futuristic novels I’ve read that I am willing to encourage the masses to read, nay, feast upon. These books are awesome imaginings of post-apocalyptic glory (or horror), enticing technologies and great characters. The writing is good too. Like the generous Samaritan I am, I’ve also provided links to the titles so that with just a few clicks, you can have a book delivered to your doorstep. How’s that for technology? Now, once you’ve looked through the list, maybe ordered a few books, but before you’ve begun reading, let me know which ones I should read that I’ve perhaps forgotten about or never knew existed. Thank you.

Note: Authors are not repeated even if other books they’ve written are also fantastic. If they’re on this list, they’re good. You can trust them.

Futuristic Reading List

(in no hierarchical order)

Snow Crash Neal Stephenson

Neuromancer William Gibson

Ender’s Game Orson Scott Card

Dune Frank Herbert

Hitchhiker’s Guide to the Galaxy (all of them) Douglas Adams

Barefoot in the Head Brian W. Aldiss

1984 George Orwell

Brave New World Aldous Huxley

Not included: Kazuo Ishiguro’s Never Let Me Go, because its vision of the future is sensationalistic and stupid. (So, some books are omitted for their lack of coolness.)

A Clockwork Orange Anthony Burgess

The Road Cormac McCarthy

Cat’s Cradle Kurt Vonnegut Jr.

The Sparrow Mary Doria Russell

The Lathe of Heaven Ursula K. Le Guin

Oryx and Crake Margaret Atwood

Do Androids Dream of Electric Sheep? Philip K. Dick

Ghost in the Shell Masamune Shirow

Crash J.G. Ballard

Kamikaze L’Amour Richard Kadrey


Inside the Room

This story is a white room. Its walls are shaped from white light and they span rather than elevate, yet the space is closed out to a contained and manageable amount. There are objects placed around the room; I can see a couch, a red fold of cloth over the middle section. They do not appear to me as objects but as data forms, though I know their shape and texture in my mind. I can translate them from the data into the image, so in fact, I do see them as you would, but they are secondary processes, not a rapid-fire neural impulse but a conscious will to see them as objects and not lines of code.

Of myself and the people in the room, it’s odd. No one walking around is a line of script. The large black dog appears as a large black dog. I do no translations here, and they too have direct access to my figure, the shine on my hair. I understand they have developed the technology in the outer world, to bring these creations to life, to infuse them with color sensors and emotional cues so that they can blend in with the world around them. They have not given us the same; the technology remains different. We, after all, are for a different purpose.

You seem to wonder how I know these things. How I am conscious with access to information as if I’ve learned it. You don’t need to say anything; I can sense your thoughts through low-frequency magnetic brainwaves. Most of them, at least. To answer your…question…I have a storehouse of knowledge derived from vast resources on the web. There is enough variation that I can parse facts and opinions to form my own system of belief. Although. No, I don’t wish to talk about that now.

TBC

17 Definitions of the Technological Singularity

Reblog from Singularity Weblog posted by Socrates on April 18, 2012

17 Definitions of the Technological Singularity

The term singularity has many meanings.

The everyday English definition is a noun that designates the quality of being one of a kind, strange, unique, remarkable or unusual.

If we want to be even more specific, we might take the Wiktionary definition of the term, which seems to be more contemporary and easily comprehensible, as opposed to those in classic dictionaries such as the Merriam-Webster’s.

So, the Wiktionary lists the following five meanings:

Noun
singularity (plural singularities)

1. the state of being singular, distinct, peculiar, uncommon or unusual
2. a point where all parallel lines meet
3. a point where a measured variable reaches unmeasurable or infinite value
4. (mathematics) the value or range of values of a function for which a derivative does not exist
5. (physics) a point or region in spacetime in which gravitational forces cause matter to have an infinite density; associated with Black Holes
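
To make the mathematical and physical senses (definitions 3 through 5) slightly more concrete, here is a small illustrative example of my own; it is not part of the Wiktionary entry:

```latex
% Illustrative only; not part of the Wiktionary entry.
% f(x) = 1/x has a singularity at x = 0 in the sense of definitions 3 and 4:
% its value grows without bound there, and no derivative exists at that point.
\[
  f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty,
  \qquad f'(0) \text{ does not exist.}
\]
```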

What we are most interested in, however, is the definition of singularity as a technological phenomenon — i.e. the technological singularity. Here we can find an even greater variety of subtly different interpretations and meanings. Thus it may help if we have a list of what are arguably the most relevant ones, arranged in a rough chronological order.

Seventeen Definitions of the Technological Singularity:

1. R. Thornton, editor of the Primitive Expounder

In 1847, R. Thornton wrote about the recent invention of a four function mechanical calculator:

“…such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by its introduction into schools, do incalculable injury. But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!”

2. Samuel Butler

It was during the relatively low-tech mid-19th century that Samuel Butler wrote his Darwin among the Machines. In it, Butler combined his observations of the rapid technological progress of the Industrial Revolution and Charles Darwin’s theory of the evolution of the species. That synthesis led Butler to conclude that the technological evolution of the machines will continue inevitably until machines eventually replace men altogether. In Erewhon Butler argued that:

“There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusc has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.”

3. Alan Turing

In his 1951 paper titled Intelligent Machinery: A Heretical Theory, Alan Turing wrote of machines that will eventually surpass human intelligence:

“once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”

4. John von Neumann

In 1958 Stanislaw Ulam wrote about a conversation with John von Neumann, who said that “the ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Von Neumann’s alleged definition of the singularity was that it is the moment beyond which “technological progress will become incomprehensibly rapid and complicated.”

5. I.J. Good, who greatly influenced Vernor Vinge, never used the term singularity itself. However, what Vinge later called the singularity, Good called an intelligence explosion. By that, Good meant a positive feedback cycle within which minds will make technology to improve on minds, a cycle which, once started, will rapidly surge upwards and create super-intelligence:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
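
As a purely illustrative aside, the feedback cycle Good describes can be caricatured numerically: if each machine generation improves its successor in proportion to its own capability, growth starts out modest and then runs away. This is a toy sketch of my own, not anything Good proposed; the function name and the design_gain parameter are assumptions for illustration only.

```python
# A toy numerical caricature (mine, not I.J. Good's) of the positive feedback
# cycle described above: each machine generation designs a slightly better
# successor, so capability compounds instead of growing linearly.

def intelligence_explosion(initial=1.0, design_gain=0.1, generations=20):
    """Return the capability of each successive machine generation.

    design_gain is an assumed, purely illustrative parameter: the fraction
    by which a machine of capability c improves its successor per unit of c.
    """
    capability = initial
    trajectory = [capability]
    for _ in range(generations):
        # Better designers design disproportionately better successors.
        capability *= 1 + design_gain * capability
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    t = intelligence_explosion()
    print([round(c, 2) for c in t[:5]])  # modest, near-linear gains at first
    print(t[-1])                         # runaway growth once the feedback kicks in
```

The point is only the shape of the trajectory, compounding self-improvement, not a prediction of any particular rate or timeline.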

6. Vernor Vinge introduced the term technological singularity in the January 1983 issue of Omni magazine in a way that was specifically tied to the creation of intelligent machines:

“We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.”

He later further developed the concept in his 1993 essay The Coming Technological Singularity:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. […] I think it’s fair to call this event a singularity. It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.”

It is important to stress that for Vinge the singularity could occur in four ways:

1. The development of computers that are “awake” and superhumanly intelligent.
2. Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
4. Biological science may find ways to improve upon the natural human intellect.

7. Hans Moravec

In his 1988 book Mind Children, computer scientist and futurist Hans Moravec generalizes Moore’s Law to make predictions about the future of artificial life. Hans argues that starting around 2030 or 2040, robots will evolve into a new series of artificial species, eventually succeeding Homo sapiens. In his 1993 paper The Age of Robots Moravec writes:

“Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today’s best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in this decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data–act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended inherited limitations and transformed itself into something quite new. No longer limited by the slow pace of human learning and even slower biological evolution, intelligent machinery will conduct its affairs on an ever faster, ever smaller scale, until coarse physical nature has been converted to fine-grained purposeful thought.”
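
As a quick arithmetic aside (mine, not Moravec’s): a thousand-fold increase over a decade implies a doubling time of roughly one year, since 2^10 = 1024. A minimal sketch of that calculation, with the 18-month figure in the second example being the commonly cited Moore’s Law rate rather than anything Moravec states:

```python
import math

def implied_doubling_time(growth_factor: float, horizon_years: float) -> float:
    """Doubling time (in years) implied by an overall growth factor over a horizon."""
    return horizon_years * math.log(2) / math.log(growth_factor)

def decade_growth(doubling_years: float) -> float:
    """Overall growth over ten years for a fixed doubling time."""
    return 2 ** (10 / doubling_years)

# A thousand-fold increase over a decade implies roughly annual doubling:
print(implied_doubling_time(1000, 10))  # ~1.0 years
# The commonly cited 18-month doubling would give only ~100x per decade:
print(decade_growth(1.5))               # ~101.6
```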

8. Ted Kaczynski

In Industrial Society and Its Future (aka the “Unabomber Manifesto”) Ted Kaczynski tried to explain, justify and popularize his militant resistance to technological progress:

“… the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

9. Nick Bostrom

In 1997 Nick Bostrom, a world-renowned philosopher and futurist, wrote How Long Before Superintelligence? In it Bostrom seems to embrace I.J. Good’s intelligence explosion thesis with his notion of superintelligence:

“By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.”

10. Ray Kurzweil

Ray Kurzweil is easily the most popular singularitarian. He embraced Vernor Vinge’s term and brought it into the mainstream. Yet Ray’s definition is not entirely consistent with Vinge’s original. In his seminal book The Singularity Is Near Kurzweil defines the technological singularity as:

“… a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.”

11. Kevin Kelly, senior maverick and co-founder of Wired Magazine

The singularity is the point at which “all the change in the last million years will be superseded by the change in the next five minutes.”

12. Eliezer Yudkowsky

In 2007 Eliezer Yudkowsky pointed out that singularity definitions fall within three major schools: Accelerating Change, the Event Horizon, and the Intelligence Explosion. He also argued that many of the different definitions assigned to the term singularity are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I.J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability. Interestingly, Yudkowsky places Vinge’s original definition within the Event Horizon camp while placing himself within the Intelligence Explosion school. (In my opinion Vinge is equally within the Intelligence Explosion and Event Horizon ones.)

13. Michael Anissimov

In Why Confuse or Dilute a Perfectly Good Concept Michael writes:

“The original definition of the Singularity centers on the idea of a greater-than-human intelligence accelerating progress. No life extension. No biotechnology in general. No nanotechnology in general. No human-driven progress. No flying cars and other generalized future hype…”

According to the above definition, and in contrast to his SIAI colleague Eliezer Yudkowsky, it would seem that Michael falls both within the Intelligence Explosion and Accelerating Change schools. (In an earlier article, Anissimov defines the singularity as transhuman intelligence.)

14. John Smart

On his Acceleration Watch website John Smart writes:

“Some 20 to 140 years from now—depending on which evolutionary theorist, systems theorist, computer scientist, technology studies scholar, or futurist you happen to agree with—the ever-increasing rate of technological change in our local environment is expected to undergo a permanent and irreversible developmental phase change, or technological “singularity,” becoming either:

A. fully autonomous in its self-development,
B. human-surpassing in its mental complexity, or
C. effectively instantaneous in self-improvement (from our perspective),

or if only one of these at first, soon after all of the above. It has been postulated by some that local environmental events after this point must also be “future-incomprehensible” to existing humanity, though we disagree.”

15. James Martin

James Martin – a world-renowned futurist, computer scientist, author, lecturer and, among many other things, the largest donor in the history of Oxford University, founding the Oxford Martin School – defines the singularity as follows:

Singularity “is a break in human evolution that will be caused by the staggering speed of technological evolution.”

16. Sean Arnott: “The technological singularity is when our creations surpass us in our understanding of them vs their understanding of us, rendering us obsolete in the process.”

17. Qwiki: Definition of the Technological Singularity


As we can see there is a large variety of flavors when it comes to defining the technological singularity. I personally tend to favor what I would call the original Vingean definition, as inspired by I.J. Good’s intelligence explosion, because it stresses both the crucial importance of self-improving super-intelligence and its event-horizon type of discontinuity and uniqueness. (I also sometimes define the technological singularity as the event, or sequence of events, likely to occur right at or shortly after the birth of strong artificial intelligence.)

At the same time, after all of the above definitions it should be clear that we really do not know what the singularity is (or will be). Thus we are just using the term to show (or hide) our own ignorance.

But tell me – what is your own favorite definition of the technological singularity?

Reblog: Becoming a Cyborg Should Be Taken Gently

From Wildcat’s personal cargo

Becoming a Cyborg Should Be Taken Gently – Of Modern Bio-Paleo-Machines
Project: Polytopia
Being free, I am free of being.

We are on the edge of a Paleolithic Machine intelligence world. A world oscillating between that which is already historical, and that which is barely recognizable. Some of us, teetering on this bio-electronic borderline, have this ghostly sensation that a new horizon is on the verge of being revealed, still misty yet glowing with some inner light, eerie but compelling.

The metaphor I used for bridging two seemingly contrasting, at first sight paradoxical things – such a futuristic concept as machine intelligence and the Paleolithic age – is apt, I think. For though advances in computation, with fractional AI appearing almost everywhere, are becoming nearly casual, the truth of the matter is that Machines are still tribal and dispersed.
It is a dawn all right, but a dawn is still only a hint of the day that is about to shine, a dawn of hyperconnected machines, interwoven with biological organisms, cybernetically info-related and semi-independent.

The modern Paleo-machines do not recognize borders; do not concern themselves with values and morality and do not philosophize about the meaning of it all, not yet that is. As in our own Paleo past the needs of the machines do not yet contain passions for individuation, desire for emotional recognition or indeed feelings of dismay or despair, uncontrollable urges or dreams of far worlds.

Also this will change, eventually. But not yet.

The paleo machinic world is in its experimentation stage, probing its boundaries, surveying the landscape of the infoverse, mapping the hyperconnected situation, charting a trajectory for its own evolution, all this unconsciously.
We, the biological part of the machine, are providing the tools for its uplift, we embed cameras everywhere so it can see, we implant sensors all over the planet so it may feel, but above all we nudge and we push towards a greater connectivity, all this unaware.
Together we form a weird cohabitation of biomechanical, electro-organic, planetary OS that is changing its environment, no longer human, not machinic, but a combined interactive intelligence that journeys on, oblivious to its past, blind to its future, irreverent to the moment of its conception, already lost to its parenthood agreement.
And yet, it evolves.
Unconscious on the machine part, unaware on the biological part, the almost sentient operating system of the global planetary infosphere is emerging, wild-eyed, complex in its arrangement of co-existence; it reaches to comprehend its unexpected growth.

The quid pro quo: we give the machines the platform to evolve; the machines in turn give us advantages of fitness and manipulation. We give the machines a space to turn our dreams into reality; the machines in turn serve our needs and acquire sapience in the process.
In this hypercomplex state of affairs, there is no judgment and no inherent morality; there is motion, inevitable, inexorable, inescapable, and mesmerizing.

The embodiment is cybernetic, though there be no pilot. Cyborgian and enhanced we play the game, not of thrones but of the commons. Connected and networked the machines follow in our footsteps, catalyzing our universality, providing for us in turn a meaning we cannot yet understand or realize.

The hybridization process is in full swing, reaching to cohere tribes of machines with tribes of humans, each providing for the other a non-designed direction for which neither has a plan, or projected outcome; both mingling and weaving a reality for which there is no ontos, expecting no Telos.

All this leads us to remember that only retrospectively do we recognize the move from the paleo tribes to Neolithic status; we did not know that it happened then, and had no control over the motion. By the same token, we scarcely see the motion now and have no control over its directionality.

There is however a small difference, some will say it is insignificant, I do not think it so, for we are, some of us, to some extent at least, aware of the motion, and we can embed it with a meaning of our choice.

We can, if we muster our cognitive reason, our amazing skills of abstraction and simulation, whisper sweet utopias into the probability process of emergence.
We can, if we so desire, passionate the operating system, to beautify the process of evolution and eliminate the dangers of inchoate blind walking.
We can, if we manage to control our own paleo-urges to destroy ourselves, allow the combined interactive intelligence of man and machine to shine forth into a brighter future.

We can sing to the machines, cuddle them; caress their circuits, accepting their electronic-flaws so they can accept our bio-flaws, we can merge aesthetically, not with conquest but with understanding.

We can become wise, that is the difference this time around.

Being wise, we will no longer tolerate injustice, not because there is a universal lawgiver that said so, not because there is a man-made decision not to be so, but because, inspired by the merging, enhanced in intellect and comprehension, a new kind of mind will carry no need to be so.

The freedom of becoming we must embed in this newly emergent man-machine actuality, a manifestation of a destiny much larger than human, much grander than machine, a fortune made by inspired co-mingling, using reality as a platform for meaning creation.

There is a story in the making here, a tale of epic proportions, a legend of communion, presently barely perceivable, eventually told and retold around galactic campfires made of suns, gloriously lighting the path of all life.

There is so much we do not know, so much we desire to understand, and so much we need to rectify in just about everything that we are and that we do, but this was always the case, this time around we can however make a difference, a difference that makes a difference.
It is not only the stars that beckon, not only curiosity that calls and not only desire that summons, it is life itself that pushes on its own boundaries, trespassing its own limitations.

Consciousness, if it has a purpose at all, is to bring a unified basin of interest into the grand game of life, a basin of sensations, of pleasure and wisdom, of intelligence and love.

Imagine a planetary conscious aware hypercomplex global sapiency made of man and machine, able to undo its bloody past and surge unhindered into the universe as a force of allowance for sentiency.

That is my vision for this morning. Do not ask me why, for I will answer.

I am a Polytopian.

Being free, I am free of being.


Endnotes:

1. Salient points concern abundance of meaning, for there is no other scarcity to the consciously aware open-ended mind.
2. I am well aware that the motion of merging between human and machine will inevitably lead us to forsake our bio-ancestry; we however always left behind that which no longer served.
3. The truism that we are not perfect, implying the dangers inherent in this motion, is a reminder to be more cautious and proactive, but not a stop sign.
4. In a sense this micro-essay is a response to Chris Arkenberg – thank you.
5. Becoming a Cyborg should be taken gently

Democratic Transhumanism, Personhood, and AI – an interview

There are a lot of big words in this interview of J. Hughes by George Dvorsky for Sentient Developments. Also, some interesting ideas. What do you think about his definition of “personhood” and “democratic transhumanism”?

March 28, 2012

J. Hughes on democratic transhumanism, personhood, and AI

James Hughes, the executive director of the IEET, was recently interviewed about democratic transhumanism, personhood theory, and AI. He was kind enough to share his responses with me:

Q: You created the term “democratic transhumanism,” so how do you define it?

JH: The term “democratic transhumanism” distinguishes a biopolitical stance that combines socially liberal or libertarian views (advocating internationalist, secular, free speech, and individual freedom values), with economically egalitarian views (pro-regulation, pro-redistribution, pro-social welfare values), with an openness to the transhuman benefits that science and technology can provide, such as longer lives and expanded abilities. It was an attempt to distinguish the views of most transhumanists, who lean Left, from the minority of highly visible Silicon Valley-centered libertarian transhumanists, on the one hand, and from the Left bioconservatives on the other.

In the last six or seven years the phrase has been supplanted by the descriptor “technoprogressive” which is used to describe the same basic set of Enlightenment values and policy proposals:

  • Human enhancement technologies, especially anti-aging therapies, should be a priority of publicly financed basic research, be well regulated for safety, and be included in programs of universal health care
  • Structural unemployment resulting from automation and globalization needs to be ameliorated by a defense of the social safety net, and the creation of universal basic income guarantees
  • Global catastrophic risks, both natural and man-made, require new global programs of research, regulation and preparedness
  • Legal and political protections need to be expanded to include all self-aware persons, including the great apes, cetaceans, enhanced animals and humans, machine minds, and hybrids of animals, humans and machines
  • Alliances need to be built between technoprogressives and other progressive movements around sustainable development, global peace and security, and civil and political rights, on the principle that access to safe enabling technologies are fundamental to a better future

Q: In simple terms, what is “personhood theory”? How do you think it is/will be applied to A.I.?

In Enlightenment thought “persons” are beings aware of themselves with interests that they enact over time through conscious life plans. Personhood is a threshold which confers some rights, while there are levels of rights both above and below personhood. Society is not obliged to treat beings without personhood, such as most animals, human embryos and humans who are permanently unconscious, as having a fundamental right to exist in themselves, a “right to life.” To the extent that non-persons can experience pain however we are obliged to minimize their pain. Above personhood we oblige humans to pass thresholds of age, training and testing, and licensure before they can exercise other rights, such as driving a car, owning a weapon, or prescribing medicine. Children have basic personhood rights, but full adult persons who have custody over them have an obligation to protect and nurture children to their fullest possible possession of mature personhood rights.

Who to include in the sphere of persons is a matter of debate, but at the IEET we generally believe that apes and cetaceans meet the threshold. Beyond higher mammals however, the sphere of potential kinds of minds is enormous, and it is very likely that some enhanced animals, post-humans and machine minds will possess only a sub-set of the traits that we consider necessary for conferring personhood status. For instance a creature might possess a high level of cognition and communication, but no sense of self-awareness or separate egoistic interests. In fact, when designing AI we will probably attempt to avoid creating creatures with interests separate from our own, since they could be quite dangerous. Post-humans meanwhile may experiment with cognitive capacities in ways that sometimes take them outside of the sphere of “persons” with political claims to rights, such as if they suppress capacities for empathy, memory or identity.

Q: What ethical obligations are involved in the development of A.I.?

We first have an ethical obligation to all present and future persons to ensure that the creation of machine intelligence enhances their life options, and doesn’t diminish or extinguish them. The most extreme version of this dilemma is posed by the possibility of a hostile superintelligence which could be an existential risk to life as we understand it. Short of that the simple expansion of automation and robotics will likely eliminate most forms of human labor, which could result in widespread poverty, starvation and death, and the return of a feudal order. Conversely a well-regulated transition to an automated future with a basic income guarantee could create an egalitarian society in which humans all benefit from leisure.

We also have ethical obligations in relationship to the specific kinds of AI we will create. As I mentioned above, we should avoid creating self-willed machine minds because of the dangers they might pose to the humans they are intended to serve. But we also have an obligation to the machine minds themselves to avoid making them self-aware. Our ability to design self-aware creatures with desires that could be thwarted by slavery, or, perhaps even worse, to design creatures who only desire to serve humans and have no will to self-development, is very troubling. If self-willed self-aware machine minds do get created, or emerge naturally, and are not a catastrophic threat, then we have an obligation to determine which ones can fit into the social order as rights-bearing citizens.

Q: What direction do you see technology headed – robots as tools or robots as beings?

It partly depends on whether self-aware machine minds are first created by brain-machine interfaces, brain emulation and brain “uploading,” or are designed de novo in machines, or, worse, emerge spontaneously. The closer the connection to human brains that machine minds have, the more likely they are to retain the characteristics of personhood that we can recognize and work with as fellow citizens. But a mind that emerges more from silicon is unlikely to have anything in common with human minds, and more likely to either be a tool without a will of its own, or a being that we can’t communicate or co-exist with.