The Wonders of Technology

The people over at the Lifeboat Foundation are totally awesome, not only for their beautiful web design but because their content is super interesting. They’ve put together a list of 10 materials to watch for in the not-too-distant future. Can you imagine diamonds used as building materials? How about floating cities made from a foamed-up titanium/aluminum mixture? I’d actually love to hear your imaginings of a world with diamond buildings and floating cities – leave a comment if your imagination runs wild.

Special Report

10 Futuristic Materials

by Lifeboat Foundation Scientific Advisory Board member Michael Anissimov.

1. Aerogel

Aerogel protecting crayons from a blowtorch.

This tiny block of transparent aerogel is supporting a brick weighing 2.5 kg. The aerogel’s density is 0.1 g/cm3.
Aerogel holds 15 entries in the Guinness Book of Records, more than any other material. Sometimes called “frozen smoke”, aerogel is made by the supercritical drying of liquid gels of alumina, chromia, tin oxide, or carbon. It’s 99.8% empty space, which makes it look semi-transparent. Aerogel is a fantastic insulator — if you had a shield of aerogel, you could easily defend yourself from a flamethrower. It stops cold, it stops heat. You could build a warm dome on the Moon. Aerogels have unbelievable surface area in their internal fractal structures — cubes of aerogel just an inch on a side may have an internal surface area equivalent to a football field. Despite its low density, aerogel has been looked into as a component of military armor because of its insulating properties.
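The "inch cube equals a football field" claim is easy to sanity-check. This is a rough sketch, not from the article: the 0.1 g/cm³ density is quoted above, but the specific surface area of ~3,000 m²/g is an assumption at the high end of values reported for aerogels.

```python
# Sanity check of the "inch cube ~ football field" surface-area claim.
# Assumption (not from the article): specific surface area of ~3000 m^2/g,
# at the high end of the range reported for aerogels.
INCH_CM = 2.54
density_g_per_cm3 = 0.1            # quoted in the article
specific_area_m2_per_g = 3000.0    # assumed

volume_cm3 = INCH_CM ** 3          # a one-inch cube is about 16.4 cm^3
mass_g = volume_cm3 * density_g_per_cm3
internal_area_m2 = mass_g * specific_area_m2_per_g

# American football field including end zones is roughly 5350 m^2
football_field_m2 = 109.7 * 48.8
print(f"internal area: {internal_area_m2:.0f} m^2 "
      f"({internal_area_m2 / football_field_m2:.1f} football fields)")
```

Under that assumption the cube comes out at roughly 4,900 m², so the claim is in the right ballpark for the highest-surface-area aerogels.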

2. Carbon nanotubes

Carbon nanotubes are long chains of carbon held together by the strongest bond in all chemistry, the sacred sp2 bond, even stronger than the sp3 bonds that hold diamond together. Carbon nanotubes have numerous remarkable physical properties, including ballistic electron transport (making them ideal for electronics) and so much tensile strength that they are the only substance that could be used to build a space elevator. The specific strength of carbon nanotubes is 48,000 kN·m/kg, the best of known materials, compared to high-carbon steel’s 154 kN·m/kg. That’s roughly 300 times stronger than steel. You could build towers hundreds of kilometers high with it.
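The "300 times stronger" figure follows directly from the two specific strengths quoted above:

```python
# Checking the "300 times stronger than steel" figure against the
# specific strengths quoted in the article (units: kN·m/kg).
nanotube = 48_000  # carbon nanotube specific strength, per the article
steel = 154        # high-carbon steel specific strength, per the article

ratio = nanotube / steel
print(f"carbon nanotube / high-carbon steel: {ratio:.0f}x")
```

The exact quotient is about 312, which the article rounds down to 300.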

3. Metamaterials

“Metamaterial” refers to any material that gains its properties from structure rather than composition. Metamaterials have been used to create microwave invisibility cloaks, 2D invisibility cloaks, and materials with other unusual optical properties. Mother-of-pearl gets its rainbow color from metamaterials of biological origin. Some metamaterials have a negative refractive index, an optical property that may be used to create “superlenses” which resolve features smaller than the wavelength of light used to image them! This technology is called subwavelength imaging. Metamaterials could be used in phased array optics, a technology that could render perfect holograms on a 2D display. These holograms would be so perfect that you could be standing 6 inches from the screen, looking into the “distance” with binoculars, and not even notice it’s a hologram.

4. Bulk diamond

We’re starting to lay down thick layers of diamond in CVD machines, hinting towards a future of bulk diamond machinery. Diamond is an ideal construction material — it’s immensely strong and light, made of the widely available element carbon, has excellent thermal conductivity, and has among the highest melting and boiling points of all materials. By introducing trace impurities, you can make a diamond practically any color you want. Imagine a jet with hundreds of thousands of moving parts made of fine-tuned diamond machinery. Such a craft would be more powerful than today’s best fighter planes in the way an F-22 is better than the Red Baron’s Fokker Dr.1.

5. Bulk fullerenes

Diamonds may be strong, but aggregated diamond nanorods (what I call amorphous fullerene) are stronger. Amorphous fullerene has an isothermal bulk modulus of 491 gigapascals (GPa), compared to diamond’s 442 GPa. As we see in the image, the nanoscale structure of the fullerene gives it a beautiful iridescent appearance. Fullerenes can be made substantially stronger than diamond, but at greater energy cost. After a “Diamond Age” we may eventually transition to a “Fullerene Age” as our technology gets even more sophisticated.

6. Amorphous metal

Amorphous metals, also called metallic glasses, consist of metal with a disordered atomic structure. They can be twice as strong as steel. Because of their disordered structure, they can disperse impact energy more effectively than a metal crystal, which has points of weakness. Amorphous metals are made by quickly cooling molten metal before it has a chance to align itself in a crystal pattern. Amorphous metals may be the military’s next generation of armor, before they adopt diamondoid armor in mid-century. On the green side of things, amorphous metals have electronic properties that improve the efficiency of power grids by as much as 40%, saving us thousands of tons of fossil fuel emissions.

7. Superalloys

A superalloy is a generic term for a metal that can operate at very high temperatures, up to about 2000 °F (1100 °C). They are popular for use in the superhot turbine sections of jet engines, and they are used in more advanced air-breathing designs such as the ramjet and scramjet. When we’re flying through the sky in hypersonic craft, we’ll have superalloys to thank for it.

8. Metal foam

Metal foam is what you get when you add a foaming agent, powdered titanium hydride, to molten aluminum, then let it cool. The result is a very strong substance that is relatively light, with 75–95% empty space. Because of its favorable strength-to-weight ratio, metal foams have been proposed as a construction material for space colonies. Some metal foams are so light that they float on water, which would make them excellent for building floating cities, like those analyzed by Marshall T. Savage in one of my favorite books, The Millennial Project.
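The floating claim follows from simple arithmetic: a foam's bulk density is roughly the solid metal's density scaled by the fraction that isn't empty space. A quick sketch, using aluminum's standard density and the 75–95% porosity range quoted above:

```python
# Why some metal foams float: bulk density = solid density x (1 - porosity).
# Aluminum's density is a standard value; the porosity range is from the
# paragraph above.
ALUMINUM_DENSITY = 2.70  # g/cm^3, solid aluminum
WATER_DENSITY = 1.00     # g/cm^3

foam_density = {}
for porosity in (0.75, 0.95):  # 75-95% empty space
    foam_density[porosity] = ALUMINUM_DENSITY * (1.0 - porosity)

for porosity, density in foam_density.items():
    print(f"{porosity:.0%} empty space -> {density:.3f} g/cm^3, "
          f"floats: {density < WATER_DENSITY}")
```

Even at the low end of the porosity range, an aluminum foam comes out well under water's density, so the whole quoted range floats.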

9. Transparent alumina

Transparent alumina is three times stronger than steel — and transparent. The potential applications are huge. Imagine an entire skyscraper or arcology made largely of transparent alumina. The skylines of the future could look more like a series of floating black dots (opaque private rooms) than the monoliths of today. A huge space station made of transparent alumina could cruise in low Earth orbit without being a creepy black dot when it passes overhead. And hey… transparent swords!

10. E-textiles

If you meet up and talk to me in 2020, I’ll likely be covered in electronic textiles. Why carry some electronic gadget you can easily lose when we can just wear our computers? We’ll develop clothing that can constantly project the video of our choosing (unless it turns out to be so annoying that we ban it). Imagine wearing a robe covered in a display that actually projects the night sky in real time. Imagine talking to people over the “phone” just by making a hand gesture and activating electronics in your lapel, then merely thinking about what you want to say (thought-to-speech interfaces). The possibilities of e-textiles are limitless.

Futurist Profile: Nick Kaloterakis

Nick Kaloterakis is an amazing designer with a mind for futuristic development. Very focused on metallic, streamlined designs, Nick’s work is indicative of a trend towards envisioning melded technologies: electronics with engineering and elegance. For example, Nick has drawn a stunning image of hypersonic jets streamlined to make the trip from New York to Tokyo in 2 hours. Check out his work for insight into how top designers are viewing future technologies:

You can find Nick’s work gracing the covers and insides of Popular Science, National Geographic and Discovery Channel.

Data is Power by Nick Kaloterakis


Mars Rover by Nick Kaloterakis

Deus Ex Machina by Nick Kaloterakis


17 Definitions of the Technological Singularity

Reblog from Singularity Weblog posted by Socrates on April 18, 2012


The term singularity has many meanings.

In everyday English, the noun designates the quality of being one of a kind: strange, unique, remarkable or unusual.

If we want to be even more specific, we might take the Wiktionary definition of the term, which seems more contemporary and easily comprehensible than those in classic dictionaries such as Merriam-Webster’s.

So, the Wiktionary lists the following five meanings:

singularity (plural singularities)

1. the state of being singular, distinct, peculiar, uncommon or unusual
2. a point where all parallel lines meet
3. a point where a measured variable reaches unmeasurable or infinite value
4. (mathematics) the value or range of values of a function for which a derivative does not exist
5. (physics) a point or region in spacetime in which gravitational forces cause matter to have an infinite density; associated with Black Holes
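Definitions 3 and 4 can be illustrated with the simplest textbook example, a sketch not from the original post: the function f(x) = 1/x blows up at x = 0, where neither the function nor its derivative is defined.

```latex
f(x) = \frac{1}{x}, \qquad
\lim_{x \to 0^{+}} f(x) = +\infty, \qquad
\lim_{x \to 0^{-}} f(x) = -\infty
```

The point x = 0 is the singularity: the measured variable (here f itself) reaches unbounded values as you approach it, and no derivative exists there.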

What we are most interested in, however, is the definition of singularity as a technological phenomenon — i.e. the technological singularity. Here we can find an even greater variety of subtly different interpretations and meanings. Thus it may help if we have a list of what are arguably the most relevant ones, arranged in a rough chronological order.

Seventeen Definitions of the Technological Singularity:

1. R. Thornton, editor of the Primitive Expounder

In 1847, R. Thornton wrote about the recent invention of a four function mechanical calculator:

“…such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by its introduction into schools, do incalculable injury. But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!”

2. Samuel Butler

It was during the relatively low-tech mid 19th century that Samuel Butler wrote his Darwin among the Machines. In it, Butler combined his observations of the rapid technological progress of the Industrial Revolution with Charles Darwin’s theory of the evolution of the species. That synthesis led Butler to conclude that the technological evolution of machines would continue inevitably until they eventually replaced men altogether. In Erewhon Butler argued that:

“There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusc has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.”

3. Alan Turing

In his 1951 paper titled Intelligent Machinery: A Heretical Theory, Alan Turing wrote of machines that will eventually surpass human intelligence:

“once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”

4. John von Neumann

In 1958 Stanislaw Ulam wrote about a conversation with John von Neumann, who said that “the ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Von Neumann’s alleged definition of the singularity was that it is the moment beyond which “technological progress will become incomprehensibly rapid and complicated.”

5. I.J. Good, who greatly influenced Vernor Vinge, never used the term singularity itself. However, what Vinge later called the singularity, Good called an intelligence explosion. By that Good meant a positive feedback cycle in which minds make technology to improve on minds — a cycle which, once started, will rapidly surge upwards and create superintelligence:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

6. Vernor Vinge introduced the term technological singularity in the January 1983 issue of Omni magazine in a way that was specifically tied to the creation of intelligent machines:

“We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.”

He later developed the concept further in his 1993 essay The Coming Technological Singularity:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. […] I think it’s fair to call this event a singularity. It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.”

It is important to stress that for Vinge the singularity could occur in four ways:

1. The development of computers that are “awake” and superhumanly intelligent.
2. Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
4. Biological science may find ways to improve upon the natural human intellect.

7. Hans Moravec

In his 1988 book Mind Children, computer scientist and futurist Hans Moravec generalizes Moore’s Law to make predictions about the future of artificial life. Hans argues that starting around 2030 or 2040, robots will evolve into a new series of artificial species, eventually succeeding Homo sapiens. In his 1993 paper The Age of Robots Moravec writes:

“Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today’s best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in this decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data — act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended inherited limitations and transformed itself into something quite new. No longer limited by the slow pace of human learning and even slower biological evolution, intelligent machinery will conduct its affairs on an ever faster, ever smaller scale, until coarse physical nature has been converted to fine-grained purposeful thought.”

8. Ted Kaczynski

In Industrial Society and Its Future (aka the “Unabomber Manifesto”) Ted Kaczynski tried to explain, justify and popularize his militant resistance to technological progress:

“… the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

9. Nick Bostrom

In 1997 Nick Bostrom, a world-renowned philosopher and futurist, wrote How Long Before Superintelligence? In it Bostrom seems to embrace I.J. Good’s intelligence explosion thesis with his notion of superintelligence:

“By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.”

10. Ray Kurzweil

Ray Kurzweil is easily the most popular singularitarian. He embraced Vernor Vinge’s term and brought it into the mainstream. Yet Ray’s definition is not entirely consistent with Vinge’s original. In his seminal book The Singularity Is Near Kurzweil defines the technological singularity as:

“… a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.”

11. Kevin Kelly, senior maverick and co-founder of Wired Magazine

Singularity is the point at which “all the change in the last million years will be superseded by the change in the next five minutes.”

12. Eliezer Yudkowsky

In 2007 Eliezer Yudkowsky pointed out that singularity definitions fall within three major schools: Accelerating Change, the Event Horizon, and the Intelligence Explosion. He also argued that many of the different definitions assigned to the term singularity are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I.J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability. Interestingly, Yudkowsky places Vinge’s original definition within the Event Horizon camp while placing himself within the Intelligence Explosion school. (In my opinion Vinge belongs equally to the Intelligence Explosion and Event Horizon camps.)

13. Michael Anissimov

In Why Confuse or Dilute a Perfectly Good Concept Michael writes:

“The original definition of the Singularity centers on the idea of a greater-than-human intelligence accelerating progress. No life extension. No biotechnology in general. No nanotechnology in general. No human-driven progress. No flying cars and other generalized future hype…”

According to the above definition, and in contrast to his SIAI colleague Eliezer Yudkowsky, it would seem that Michael falls both within the Intelligence Explosion and Accelerating Change schools. (In an earlier article, Anissimov defines the singularity as transhuman intelligence.)

14. John Smart

On his Acceleration Watch website John Smart writes:

“Some 20 to 140 years from now—depending on which evolutionary theorist, systems theorist, computer scientist, technology studies scholar, or futurist you happen to agree with—the ever-increasing rate of technological change in our local environment is expected to undergo a permanent and irreversible developmental phase change, or technological “singularity,” becoming either:

A. fully autonomous in its self-development,
B. human-surpassing in its mental complexity, or
C. effectively instantaneous in self-improvement (from our perspective),

or if only one of these at first, soon after all of the above. It has been postulated by some that local environmental events after this point must also be “future-incomprehensible” to existing humanity, though we disagree.”

15. James Martin

James Martin – a world-renowned futurist, computer scientist, author, lecturer and, among many other things, the largest donor in the history of Oxford University (founding the Oxford Martin School) – defines the singularity as follows:

Singularity “is a break in human evolution that will be caused by the staggering speed of technological evolution.”

16. Sean Arnott: “The technological singularity is when our creations surpass us in our understanding of them vs their understanding of us, rendering us obsolete in the process.”

17. Qwiki: Definition of the Technological Singularity


As we can see, there is a large variety of flavors when it comes to defining the technological singularity. I personally tend to favor what I would call the original Vingean definition, as inspired by I.J. Good’s intelligence explosion, because it stresses both the crucial importance of self-improving super-intelligence and its event-horizon type of discontinuity and uniqueness. (I also sometimes define the technological singularity as the event, or sequence of events, likely to occur right at or shortly after the birth of strong artificial intelligence.)

At the same time, after all of the above definitions it has to be clear that we really do not know what the singularity is (or will be). Thus we are just using the term to show (or hide) our own ignorance.

But tell me – what is your own favorite definition of the technological singularity?

Ten emerging technology trends to watch over the next decade

by Andrew Maynard for 2020 Science on December 25, 2009

Ten years ago at the close of the 20th century, people the world over were obsessing about the millennium bug – an unanticipated glitch arising from an earlier technology.  I wonder how clear it was then that, despite this storm in what turned out to be a rather small teacup, the following decade would see unprecedented advances in technology – the mapping of the human genome, social media, nanotechnology, space tourism, face transplants, hybrid cars, global communications, digital storage, and more.  Looking back, it’s clear that despite a few hiccups, emerging technologies are on a roll – one that’s showing no sign of slowing down.

So what can we expect as we enter the second decade of the twenty first century?  What are the emerging technology trends that are going to be hitting the headlines over the next ten years?

Here’s my list of the top ten technologies I think are worth watching. I’m afraid that, as with all crystal ball gazing, it’s bound to be flawed. Yet as I work on the opportunities and challenges of emerging technologies, these do seem to be areas that are ripe for prime time.


Geoengineering

2009 was the year that geoengineering moved from the fringe to the mainstream.  The idea of engineering the climate on a global scale has been around for a while. But as the penny has dropped that we may be unable – or unwilling – to curb carbon dioxide emissions sufficiently to manage global warming, geoengineering has risen up the political agenda.  My guess is that the next decade will see the debate over geoengineering intensify.  Research will lead to increasingly plausible and economically feasible ways to tinker with the environment.  At the same time, political and social pressure will grow – both to put plans into action (whether multi- or unilaterally), and to limit the use of geoengineering.  The big question is whether globally-coordinated efforts to develop and use the technology in a socially and politically responsible way emerge, or whether we end up with an ugly – and potentially disastrous – free-for-all.

Smart grids

It may not be that apparent to the average consumer, but the way that electricity is generated, stored and transmitted is under immense strain.  As demand for electrical power grows, a radical rethink of the power grid is needed if we are to get electricity to where it is needed, when it is needed.  And the solution most likely to emerge as the way forward over the next ten years is the Smart Grid.  Smart grids connect producers of electricity to users through an interconnected “intelligent” network.  They allow centralized power stations to be augmented with – and even replaced by – distributed sources such as small-scale wind farms and domestic solar panels.  They route power from where there is excess being generated to where there is excess demand.  And they allow individuals to become providers as well as consumers – feeding power into the grid from home-installed generators, while drawing from the grid when they can’t meet their own demands.  The result is a vastly more efficient, responsive and resilient way of generating and supplying electricity.  As energy demands and limits on greenhouse gas emissions hit conventional electricity grids over the next decade, expect to see smart grids get increasing attention.
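The core routing idea described above can be sketched in a few lines: each node reports its net power (generation minus demand), and the grid matches surplus against deficit. The node names and numbers below are made up purely for illustration.

```python
# Toy sketch of smart-grid balancing: nodes report net power in kW
# (positive = feeding the grid, negative = drawing from it).
# All names and figures here are hypothetical.
nodes = {
    "wind_farm":     +120.0,
    "solar_homes":    +35.0,   # households exporting rooftop solar
    "factory":       -110.0,
    "neighborhood":   -40.0,
}

surplus = sum(p for p in nodes.values() if p > 0)
deficit = -sum(p for p in nodes.values() if p < 0)
balance = surplus - deficit

print(f"surplus {surplus:.0f} kW, deficit {deficit:.0f} kW, net {balance:+.1f} kW")
# A positive net means excess to store or export; a negative net means the
# grid must draw on centralized stations or storage.
```

Real smart grids add pricing, forecasting and network constraints on top of this, but the producer-as-consumer bookkeeping is the heart of it.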

Radical materials

Good as they are, most of the materials we use these days are flawed – they don’t work as well as they could.  And usually, the fault lies in how the materials are structured at the atomic and molecular scale.  The past decade has seen some amazing advances in our ability to engineer materials with increasing precision at this scale.  The result is radical materials – materials that far outperform conventional materials in their strength, lightness, conductivity, ability to transmit heat, and a whole host of other characteristics.  Many of these are still at the research stage.  But as demands for high performance materials continue to increase everywhere from medical devices to advanced microprocessors and safe, efficient cars to space flight, radical materials will become increasingly common.  In particular, watch out for products based on carbon nanotubes.  Commercial use of this unique material has had its fair share of challenges over the past decade.  But I’m anticipating many of these will be overcome over the next ten years, allowing the material to achieve at least some of its long-anticipated promise.

Synthetic biology

Ten years ago, few people had heard of the term “synthetic biology.”  Now, scientists are able to synthesize the genome of a new organism from scratch, and are on the brink of using it to create a living bacterium.  Synthetic biology is about taking control of DNA – the genetic code of life – and engineering it, much in the same way a computer programmer engineers digital code.  It’s arisen in part as the cost of reading and synthesizing DNA sequences has plummeted.  But it is also being driven by scientists and engineers who believe that living systems can be engineered in the same way as other systems.  In many ways, synthetic biology represents the digitization of biology.  We can now “upload” genetic sequences into a computer, where they can be manipulated like any other digital data.  But we can also “download” them back into reality when we have finished playing with them – creating new genetic code to be inserted into existing – or entirely new – organisms.  This is still expensive, and not as simple as many people would like to believe – we’re really just scratching the surface of the rules that govern how genetic code works.  But as the cost of DNA sequencing and synthesis continues to fall, expect to see the field advance in huge leaps and bounds over the next decade.  I’m not that optimistic about us cracking how the genetic code works in great detail by 2020 – the more we learn at the moment, the more we realize we don’t know.  However, I have no doubt that what we do learn will be enough to ensure synthetic biology is a hot topic over the next decade.  In particular, look out for synthesis of the first artificial organism, the development and use of “BioBricks” – the biological equivalent of electronic components – and the rise of DIY-biotechnology.
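The "digitization of biology" idea can be shown in miniature: once a genetic sequence is uploaded as text, it can be manipulated like any other string. A small sketch — the sequence below is made up, and real genome tooling is far more involved:

```python
# DNA as digital data: two classic string manipulations.
# The sequence here is a made-up example, not a real gene.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(dna: str) -> str:
    """Complement each base (A<->T, C<->G) and reverse, giving the opposite strand."""
    return dna.translate(COMPLEMENT)[::-1]

def transcribe(dna: str) -> str:
    """DNA coding strand -> mRNA: thymine (T) becomes uracil (U)."""
    return dna.replace("T", "U")

gene = "ATGGCGTTAACC"
print(reverse_complement(gene))  # GGTTAACGCCAT
print(transcribe(gene))          # AUGGCGUUAACC
```

The "download" step — synthesizing the edited sequence back into a physical molecule — is exactly what has become dramatically cheaper over the decade.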

Personal genomics

Closely related to the developments underpinning synthetic biology, personal genomics relies on rapid sequencing and interpretation of an individual’s genetic sequence.  The Human Genome Project – completed in 2003 – cost taxpayers around $2.7 billion, and took 13 years to complete.  In 2007, James Watson’s genome was sequenced in 2 months, at a cost of $2 million.  In 2009, Complete Genomics was sequencing personal genomes at less than $5000 a shot.  $1000 personal genomes are now on the cards for the near future – with the possibility of substantially faster/cheaper services by the end of the decade.  What exactly people are going to do with all these data is anyone’s guess at this point – especially as we still have a long way to go before we can make sense of huge sections of the human genome.  Add to this the complication of epigenetics, where external factors lead to changes in how genetic information is decoded which can pass from generation to generation, and it’s uncertain how far personal genomics will progress over the next decade.  What is not in doubt, though, are the personal, social and economic driving forces behind generating and using this information. These are likely to underpin a growing market for personal genetic information over the next decade – and a growing number of businesses looking to capitalize on the data.
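The cost figures above imply sequencing costs were halving several times per year — far faster than Moore's law's one halving every 18–24 months. A rough fit over the quoted data points (taking the Human Genome Project's formal completion in 2003 as the first point; the years are approximate):

```python
# Rough halving rate of genome sequencing costs, from the figures quoted
# in the paragraph above. Years are approximate anchors, not exact dates.
import math

points = [(2003, 2.7e9), (2007, 2e6), (2009, 5e3)]  # (year, cost in US dollars)

halvings = []
for (y0, c0), (y1, c1) in zip(points, points[1:]):
    rate = math.log2(c0 / c1) / (y1 - y0)  # cost halvings per year
    halvings.append(rate)
    print(f"{y0}-{y1}: cost halved about {rate:.1f} times per year")
```

That works out to roughly 2.6 halvings per year in the mid-2000s, accelerating to over 4 per year by 2009 — which is why $1000 genomes looked plausible for the near future.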


Bio-interfaces

Blurring the boundaries between individuals and machines has long held our fascination. Whether it’s building human-machine hybrids, engineering high performance body parts or interfacing directly with computers, bio-interfaces are the stuff of our wildest dreams and worst nightmares.  Fortunately, we’re still a world away from some of the more extreme imaginings of science fiction – we won’t be constructing the prototype of Star Trek Voyager’s Seven of Nine anytime soon.  But the sophistication with which we can interface with the human body is fast reaching the point where rapid developments should be anticipated.  As a hint of things to come, check out the Luke Arm from Deka (founded by Dean Kamen).  Or Honda’s work on Brain Machine Interfaces.  Over the next decade, the convergence of technologies like information technology, nanoscale engineering, biotechnology and neurotechnology is likely to lead to highly sophisticated bio-interfaces.  Expect to see advances in sensors that plug into the brain, prosthetic limbs that are controlled from the brain, and even implants that directly interface with the brain.  My guess is that some of the more radical developments in bio-interfaces will probably occur after 2020.  But a lot of the groundwork will be laid over the next ten years.

Data interfaces

The amount of information available through the internet has exploded over the past decade.  Advances in data storage, transmission and processing have transformed the internet from a geek’s paradise to a supporting pillar of 21st century society.  But while the last ten years have been about access to information, I suspect that the next ten will be dominated by how to make sense of it all.  Without the means to find what we want in this vast sea of information, we are quite literally drowning in data.  And useful as search engines like Google are, they still struggle to separate the meaningful from the meaningless.  As a result, my sense is that over the next decade we will see some significant changes in how we interact with the internet.  We’re already seeing the beginnings of this in websites like Wolfram Alpha, which “computes” answers to queries rather than simply returning search hits, or Microsoft’s Bing, which helps take some of the guesswork out of searches.  Then we have ideas like The Sixth Sense project at the MIT Media Lab, which uses an interactive interface to tap into context-relevant web information.  As devices like phones, cameras, projectors, TVs, computers, cars, shopping trolleys, you name it, become increasingly integrated and connected, be prepared to see rapid and radical changes in how we interface with and make sense of the web.

Solar power

Is the next decade going to be the one where solar power fulfills its promise?  Quite possibly.  Apart from increased political and social pressure to move towards sustainable energy sources, there are a couple of solar technologies that could well deliver over the next few years.  The first is printable solar cells.  They won’t be significantly more efficient than conventional solar cells, but if the technology can be scaled up and some teething difficulties resolved, they could lead to the cost of solar power plummeting.  The technology is simple in concept: using relatively conventional printing processes and special inks, solar cells are printed onto cheap, flexible substrates – roll-to-roll solar panels at a fraction of the cost of conventional silicon-based units.  And this opens the door to widespread use.  The second technology to watch is solar-assisted reactors.  By combining mirror-concentrated solar radiation with some nifty catalysts, it is becoming increasingly feasible to convert sunlight into other forms of energy at high efficiencies.  Imagine, for instance, being able to split water into hydrogen and oxygen using sunlight and an appropriate catalyst, then recombine them to reclaim the energy on demand – all at minimal energy loss.  Both of these solar technologies are poised to make a big impact over the next decade.
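To get a rough feel for the numbers behind the split-then-recombine idea, here is a back-of-the-envelope sketch.  The efficiency figures are illustrative assumptions of mine, not measured values from any real reactor:

```python
# Back-of-the-envelope: electrical power recoverable from solar water
# splitting followed by recombination (e.g. in a fuel cell).
# All efficiency figures below are illustrative assumptions.

SOLAR_IRRADIANCE_W_M2 = 1000.0    # peak sunlight on a clear day, ~1 kW/m^2
SPLITTING_EFFICIENCY = 0.15       # assumed sunlight-to-hydrogen conversion
RECOMBINATION_EFFICIENCY = 0.60   # assumed hydrogen-to-electricity recovery

def recovered_power_w(area_m2: float) -> float:
    """Electrical power recovered for a given collector area after the
    full split-then-recombine round trip."""
    captured = SOLAR_IRRADIANCE_W_M2 * area_m2
    stored_in_hydrogen = captured * SPLITTING_EFFICIENCY
    return stored_in_hydrogen * RECOMBINATION_EFFICIENCY

# A 10 m^2 collector under these assumptions:
print(round(recovered_power_w(10.0)), "W")
```

The point of the sketch is simply that the round-trip losses multiply, which is why the catalyst and recombination efficiencies matter so much to whether the approach pays off.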


Nootropics

Drugs that enhance mental ability – increasingly referred to as nootropics – are not new.  But their use patterns are.  Drugs like Ritalin, donepezil and modafinil are increasingly being used by students, academics and others to give them a mental edge.  What is startling, though, is a general sense that this is acceptable practice.  Back in June I ran a straw poll on 2020 Science to gauge attitudes to using nootropics.  Out of 207 respondents, 153 people (74%) either used nootropics, or would consider using them on a regular or occasional basis.  In April 2009, an article in the New Yorker reported on the growing use of “neuroenhancing drugs” to enhance performance.  And in an informal poll run by Nature in April 2008, 1 in 5 respondents claimed “they had used drugs for non-medical reasons to stimulate their focus, concentration or memory.”  Unlike physical performance-enhancing drugs, it seems that the social rules for nootropics are different.  There are even some who suggest that it is perhaps unethical not to take them – that operating to the best of our mental ability is a social obligation.  Of course, this leads to a potentially explosive social/technological mix that won’t be defused easily.  Over the next ten years, I expect the issue of nootropics will become huge.  There will be questions on whether people should be free to take these drugs, whether the social costs outweigh the personal benefits, and whether they confer an unfair advantage on users by leading to higher grades, better jobs and more money.  But there’s also the issue of drug development.  If a strong market for nootropics emerges, there is every chance that new, more effective drugs will follow.  Then the question arises – who gets the “good” stuff, and who suffers as a result?  Whichever way you look at it, the 2010s are set to be an interesting decade for mind-enhancing substances.
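For what it’s worth, the poll arithmetic quoted above checks out; a few lines confirm the 2020 Science figure:

```python
# Sanity-check the straw-poll figures quoted above.
used_or_would_consider = 153   # respondents who use or would consider nootropics
respondents = 207              # total respondents to the 2020 Science poll

percentage = 100 * used_or_would_consider / respondents
print(f"{percentage:.0f}%")    # prints "74%"
```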


Cosmeceuticals

Cosmetics and pharmaceuticals inhabit very different worlds at the moment.  Pharmaceuticals typically treat or prevent disease, while cosmetics simply make you look better.  But why keep the two separate?  Why not develop products that make you look good by working with your body, rather than simply covering it?  The answer is largely down to regulation – drugs have to be put through a far more stringent set of checks and balances than cosmetics before entering the market, and rightly so.  But beyond this, there is enormous commercial potential in combining the two, especially as new science is paving the way for externally applied substances to do more than just beautify.  Products that blur the line are already available – in the US, for instance, sunscreens and anti-dandruff shampoos are regulated as drugs.  And the cosmetics industry regularly uses the term “cosmeceutical” to describe products with medicinal or drug-like properties.  Yet with advances in synthetic chemistry and nanoscale engineering, it’s becoming increasingly possible to develop products that do more than just lead to “cosmetic” changes.  Imagine products that make you look younger, fresher, more beautiful, by changing your body rather than just covering up flaws and imperfections.  It’s a cosmetics company’s dream – and one shared by many of their customers, I suspect.  The dam that’s holding back many such products at the moment is regulation.  But if the pressure becomes too great – and there’s a fair chance it will over the next ten years – this dam is likely to burst.  And when it does, cosmeceuticals will hit the scene big-time.

So those are my ten emerging technology trends to watch over the next decade.  But what happened to nanotechnology, and what other technologies were on my shortlist?

Nanotech has been a dominant emerging technology over the past ten years.  But in many ways, it’s a fake.  Advances in the science of understanding and manipulating matter at the nanoscale are indisputable, as are the early technology outcomes of this science.  But nanotechnology is really just a convenient shorthand for a whole raft of emerging technologies that span everything from semiconductors to sunscreens, and often share nothing more than an engineered structure somewhere between 1 and 100 nanometers in scale.  So rather than focus on nanotech, I decided to look at specific technologies that I think will make a significant impact over the next decade.  Perhaps not surprisingly though, many of them depend in some way on working with matter at nanometer scales.

In terms of the emerging technologies shortlist, it was tough to whittle this down to ten trends. My initial list included batteries, decentralized computing, biofuels, stem cells, cloning, artificial intelligence, robotics, low earth orbit flights, clean tech, neuroscience and memristors – there are many others that no doubt could and should have been on it.  Some of these I felt were likely to reach their prime sometime after the next decade.  Others I felt didn’t have as much potential to shake things up and make headlines as the ones I chose.  But this was a highly subjective and personal process.  I’m sure if someone else were writing this, the top ten list would be different.

And one final word.  Many of the technologies I’ve highlighted reflect an overarching trend: convergence.  Although not a technology in itself, synergistic convergence between different areas of knowledge and expertise will likely dominate emerging technology trends over the next decade.  Which means that confident as I am in my predictions, the chances of something completely different, unusual and amazing happening are…  pretty high!

Update, 12/27/09.  Something’s been bugging me, and I’ve just realized what it is – in my original list of ten I had smart drugs, but in the editing process they somehow got left by the wayside!  As I don’t want to go back and change the ten emerging technology trends I ended up posting, they will have to be a bonus.  As it is, drug development timelines are so long that I’m not sure how many smart drugs will hit the market before 2020.  But when they do, they will surely mark a turning point in therapeutics.  These are drugs that are designed to behave in specific, programmable ways.  The simplest are designed to accumulate around disease sites, then destroy the disease on command – gold shell nanoparticles fit the bill here, preferentially accumulating around tumors then destroying them by heating up when irradiated with infrared light.  More sophisticated smart drugs are in the pipeline, though, designed to seek out diseased cells, provide local diagnostics, then release therapeutic agents on demand.  The result is targeted disease treatment with significantly greater efficacy at substantially lower doses.  Whether or not they make a significant impact over the next decade, smart drugs are definitely a technology to watch.