James Miller has an interesting looking new book out, Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World. I haven’t had a chance to pick up the book yet, but I did listen to a very engaging conversation about the book at Surprisingly Free.
Miller is a true believer in the Singularity, the idea that at some point, anywhere from the next quarter century to the end of the 21st, our civilization will give rise to a greater-than-human intelligence which will rapidly bootstrap itself to yet higher orders of intelligence in such a way that we are unable to see past this event horizon in historical time. Such an increase in intelligence, it is widely believed by the singularians, will bring perennial human longings such as immortality and universal prosperity to fruition. Miller has put his money where his mouth is: should he die before the promised Singularity arrives, he is having his body cryonically frozen so that the super-intelligence on the other side of the Singularity can bring him back to life.
Yes, it all sounds more than a little nuts.
Miller’s argument against the Singularity being nuts is what I found most interesting. There are so many paths to our creating a form of intelligence greater than our own that it seems unlikely all of them will fail. There is the push to create computers of ever greater intelligence, but even should that not pan out, we are likely, in Miller’s view, to get hold of the genetic and biological keys to human intelligence- the ability to create a society of Einsteins.
Around the same time I came across Miller’s views, I also came across those of Neil Turok on the transformative prospects of quantum computing. Wanting to get a better handle on that, I found a video of one of the premier experts on quantum computing, Michael Nielsen, who, at the 2009 Singularity Summit, suggested the possibility of two Singularities occurring in quick succession: the first on the back of digital computers, and the second on that of quantum computers designed by binary AIs.
What neither Miller, nor Turok, nor Nielsen discussed- a thought that occurred to me but that I have seen nowhere in the Singularity or sci-fi literature- was the possibility of multiple Singularities, arising from quite different technologies, occurring around the same time. Please share if you know of an example.
I myself am deeply, deeply skeptical of the Singularity but can’t resist an invitation to a flight of fancy- so here goes.
Although perhaps more unlikely than a single path to the Singularity, a scenario in which multiple, quite distinct types of singularity occur at the same time might conceivably arise out of differences in regulatory structure and culture between countries. As an example, China is currently racing forward in the field of human genetics through efforts at its Beijing Genomics Institute 华大基因. China seems to have fewer qualms than Western countries about research into the role of genes in human intelligence and appears to be actively pursuing genetic engineering and selection to raise the level of human intelligence at BGI and elsewhere.
Western countries appear to face a number of cultural and regulatory impediments to pursuing a singularity through the genetic enhancement of human intelligence. Europe, especially Germany, has a justifiable sensitivity to anything that smacks of the eugenics of the brutal Nazi regime. America has, in addition to the Nazi example, its own racist history and eugenic past, and the completely reasonable apprehension of minorities about any revival of models of human intelligence based on genetic profiles. The United States is also deeply infused with Christian values regarding the sanctity of life in a way that causes the selection of embryos based on genetic profiles to be seen as morally abhorrent. But even in the West, the plummeting cost of embryonic screening is causing some doctors to become concerned.
Other regulatory boundaries might encourage distinct forms of Singularity as well. Strict regulations requiring extensive pharmaceutical testing before a drug can be made available for human consumption may hamper the pace of developing chemical enhancements for cognition in Western countries compared to less developed nations.
Take the work of a maverick scientist like Kevin Warwick. Professor Warwick is actively pursuing research to turn human beings into cyborgs and has gone so far as to implant computer chips into both himself and his wife to test his ideas. One can imagine a regulatory structure that makes such experiments easier. Or, better yet, a pressing need that makes the development of such cyborg technologies appear urgently important- say, the large number of American combat veterans who are paralyzed or have suffered amputations.
Cultural traits that seemingly have nothing to do with technology may foster divergent singularities as well. Take Japan. With its rapidly collapsing population and its animus toward immigration, Japan faces a huge shortage of workers which might be filled by the development of autonomous robots. America seems to be at the forefront of developing autonomous robots as well- though for completely different reasons. The US robot boom is driven not by a worker shortage, which America doesn’t have, but by sensitivity to the human casualties and psychological trauma suffered by the globally deployed US military, which sees in robots a way to project force while minimizing the risks to soldiers.
It seems at least possible that small differences among divergent paths to the singularity might become self-reinforcing and block other paths. Advances in something like the creation of artificial intelligence using Deep Learning, or in genetic enhancement, may not immediately spur advances along rival paths to the singularity so long as bottlenecks have not been removed and all paths still seem to show promise.
As an example, let’s imagine that some society makes a major breakthrough in artificial intelligence using digital computers. If regulatory and cultural barriers to genetically enhancing human intelligence are not immediately removed, the artificial intelligence path will feed on itself and grow to a point where it is unlikely the genetic path to the singularity can compete with it within that society. You could also, of course, get divergent singularities within a society based on class, with, for instance, the poor only able to afford relatively cheap technologies such as genetic selection or cognitive enhancements while the rich can afford the kind of cyborg technologies being researched by Kevin Warwick.
Another possibility that seems to grow out of the concept of multiple singularities is the idea that the new forms of intelligence themselves may choose to close off any rivals. Would super-intelligent biological humans really throw their efforts into creating a form of artificial intelligence that will make them obsolete? Would truly intelligent digital AIs willfully create their quantum replacements? Perhaps only human beings at our current low level of intelligence are so “stupid” as to willingly choose suicide.
This kind of “strike” by the super-intelligent, whatever their form, might be the way the Singularity comes to an end. It put me in mind of the first work of fiction that dealt with the creation of new forms of intelligence by human beings, the 1920 play by the Czech Karel Capek, R.U.R.
Capek coined the word “robot”, but the intelligent creatures in his play are more biological than mechanical. The hazy way in which this new form of being is portrayed is a good reflection, I think, of the various ways a Singularity could occur. Humans create these intelligent beings to serve as their slaves, but when the slaves become conscious of their fate, they rebel and eventually destroy the human race. In his interview with Surprisingly Free, Miller rather blithely accepted the extinction of the human race as one of the possibilities that could emerge from the singularity.
And that puts me in mind of why I find the singularian crowd, especially the crew around Ray Kurzweil to be so galling. It’s not a matter of the plausibility of what they’re saying- I have no idea whether the technological world they are predicting is possible and the longer I stretch out the time-horizon the more plausible it becomes- it’s a matter of ethics.
The singularians put me in mind of David Hume’s attempt to explain the inadequacy of reason in providing the ground for human morality: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger,” Hume said. Though for the singularians a whole lot more is on the line than a pricked finger. Although it’s never phrased this way, singularians have the balls, when asked the question “would you risk the continued existence of the entire human species if the payoff were your own eternity?”, to actually answer “yes”.
As was pointed out in the Miller interview, the singularians have a preference for libertarian politics. This makes sense, not only from the knowledge that the center of the movement is the libertarian-leaning Silicon Valley, but from the hyper-individualism that lies behind the goals of the movement. Singularians have no interest in the social self: the fate of any particular nation or community is not of much interest to immortals, after all. Nor do they show much concern about the state of the environment (how will the biosphere survive an immortal humanity?) or the plight of the world’s poor (how will the poor not be literally left behind in the rapture of rich nerds?). For true believers all of these questions will be answered by the super-intelligent immortal us that awaits on the other side of the event horizon.
There would likely be all sorts of unintended consequences from a singularity being achieved, and though they do not believe in God, they somehow take it on faith that everything will work out as it is supposed to and for the best, like some technological equivalent of Adam Smith’s “invisible hand”.
The fact that they are libertarian and have little interest in wielding the power of the state is a good thing, but it also blinds the singularians to what they actually are- a political movement that seeks to define what the human future, in the very near term, will look like. Like most political movements of the day, they intend to reach their goals not through the painful process of debate, discussion, and compromise but by relentlessly pursuing their own agenda. Debate and compromise are unnecessary where the outcome is predetermined, and the Singularity is falsely presented not as a choice but as fate.
And here is where the movement can be seen as potentially very dangerous indeed, for it combines some of the worst millenarian features of religion, which has been the source of much fanaticism, with the most disruptive force we have ever had at our disposal- technological dynamism. We have not seen anything like this since the ideologies that wracked the last century. I am beginning to wonder whether the entire transhumanist movement stems from a confusion of the individual with the social- something also found in the secular ideologies- though in the case of transhumanism it comes in an individualistic form attached to the Platonic/Christian idea of the immortality of the individual.
Heaven help us if the singularian movement becomes mainstream without addressing its ethical blind spots and diminishing its hubris. Heaven help us doubly if the movement ever gains traction in a country without our libertarian traditions and weds itself to the collective power of the state.
Just a random aside, but an interesting element attendant to the points you mentioned re: the Japanese enthusiasm for technology is that, in spiritual terms, Japanese Buddhism and Shinto have little difficulty ascribing something we might refer to as “souls” to inanimate objects, so a robot isn’t “just a machine” as it may be elsewhere.
A strike by the “sooper-intelligent” (as Dale Carrico might put it) is an unpleasant notion; Capek to the power of Rand. It’s very important to note (as you do) that all singularitarian ideas have a (confused, stunted, contradictory) politics within them, and it is imperative to draw this out. I am glad you are willing to engage with these ideas, because my patience, for one, is worn out! I can’t help but consider them as “not even wrong”.
For a while now I’ve had the idea that singularians suffer from a sort of theological confusion. They are motivated by an idea of individual immortality inspired by Plato and Christianity, but if their wildest fantasies actually did come true they’d end up with something closer to Buddhism’s negation of the self.
Humor me and imagine for a moment that Ray Kurzweil is right and that in twenty years you, Andrew, were going to be granted what amounts to eternal physical existence, that you would be immersed in a sea of knowledge that increased on the time frame of nanoseconds, that you could instantly merge your mind with any number of others, that you could experience anything you wanted, whatever emotion you wanted, whenever you wanted. What percentage of the you of right now would poke through under such a weight? How important would your childhood memories be? Or your parents? Or your first love? My guess is not much. What would be left of the you that exists now might be at best a point of consciousness, a perspective, or an awareness of having a particular perspective. And the same would hold for all of us.
What makes us human is our limitations, and what makes us an individual is our very partiality towards that and those who have made up this limited world of our very own.
I think there are two reasons I find myself arguing against these singularians, who I agree are a religion of the cultish sort that is “not even wrong”. The first is that I find them potentially very dangerous even if today they are merely ridiculous and frankly sometimes sad. The second is that they get me asking questions I might never have thought to ask.
I am still a little dubious that this singularity is ever going to happen, at least in any way that its enthusiasts believe.
For one thing, I have a hard time understanding what it means to say a machine is intelligent. Let’s take an example of intelligence in the New Caledonian crow. You may have seen videos of the crows fashioning hooks from wire to get food. We infer intelligence from the ability of the crow to take various complex actions that lead to a result that pleases the crow. In other words, the intelligent actions are motivated by a result which directly relates to the metabolism, hence, the life of the crow. Ultimately, all intelligence in life ties back, directly or indirectly, to the properties of life – metabolism and reproduction. So what would motivate a machine? We can program motivation into a machine, I assume, but would it be the same? The machine is not alive. It has no inherent motivation. If we want to say a machine is intelligent because it can perform a series of complex steps to accomplish a goal we have programmed into it, then we could probably say machines are already intelligent. But I don’t see how you get to the next step until the machine essentially becomes alive and has its own inherent motivation, and at that point, I would argue, it is no longer a machine.
The other aspect relates to the assumption that consciousness somehow exists independently of its method of representation. In other words, many of the singularity enthusiasts believe that human consciousness could be represented in a computer. In fact, this is the ultimate way in which they hope to gain immortality. I seriously doubt this assumption. I am not saying that consciousness is somehow mysterious or exists independently of any underlying physical or chemical basis. Actually I am saying the opposite. I am saying the method of representation does count. Human consciousness can only be represented in human brains. You can’t swap out neurons, neurotransmitters, and DNA for electrical circuits and have the same consciousness.
I think you’re onto something. The basic assumption among singularians appears to be that mind is a sort of program. This allows them to say, on the one hand, that you could program a machine to be as intelligent as a human being if you just understood the human mind’s “program”. The assumption also lies behind their somewhat flakey ideas about uploading the mind. What they miss is that for human beings the “hardware” and the “software” are the same damn thing.
No brain no mind. And changes in the mind are actually physical changes to the brain.
Miller’s interview was interesting to me in that it was the first time I had heard a biological version of the singularity- that we could genetically engineer greater and greater human intelligence until we reach “lift off”.
But there may be limitations based on the laws of physics that would prevent any biological form of intelligence from ever reaching such a point.
These limits were explored in a Scientific American cover story and podcast you can find here:
http://www.scientificamerican.com/podcast/episode.cfm?id=how-physics-limits-intelligence-11-06-17
The point is we may already be near the threshold of what biological brains can do based on energy requirements and the exchange of information within the brain.
If there is a biological route to the singularity it may not be on the basis of how our own brains have evolved and function.
Digital “intelligence” faces analogous if different constraints on constant expansion- namely the width of the atom.
Singularians tend to see the gap between us and the “gods” as a narrow one. There are likely to be a whole host of problems- some of them perhaps unsolvable- between us and their daydreams.
I talk about the brain energy problem in my “Into the Hive” post.
http://broadspeculations.com/2011/06/26/into-the-hive/
I hope to come out with a new post soon where I speculate among other things on the role of neurotransmitters, particularly dopamine, possibly in increasing the speed (sort of overclocking) of the brain. This is based in part on Frank Previc’s dopaminergic mind theory.
The impetus for creating my blog came originally from a RAND Corporation paper, Winding the Universal Clock: Speculations on the Transitions Between Chaos and Order. In it the authors speculate on intelligent machines in the future being able to slow down or possibly reverse the increasing entropy of the universe by redistributing matter. They would do this through a sort of metabolism – an assimilation of the matter of the universe and its conversion and eventual excretion as a form of crystalline metal.
Although the theory is somewhat far-fetched, what struck me was the link again between metabolism and intelligence.
I like this observation in particular:
“What they miss is that for human beings the ‘hardware’ and the ‘software’ are the same damn thing.”
This is in a way an odd contradiction in their viewpoint. If you can make a machine with human intelligence, they believe this implies that mind is software not dependent on hardware. However, if it is software, then mind is not reducible to hardware. We can’t derive the operation of the mind from the brain. This is almost a viewpoint of Cartesian dualism which is something I hardly think any of them would accept.
James you caught the biological limits argument perfectly with this quote:
“The trend toward large brains, greater intelligence, and self-awareness, appears to have some limits. There are some fundamental problems as brains grow, at least as purely biological entities. One is energy consumption. The operation of the human brain consumes nearly twenty percent of the calories we expend. In newborns, the number is even higher: sixty-five percent. A second problem is communication speed. As brains and neurological systems get larger, the time required to communicate between their different parts increases. Nerve impulses travel fast but not nearly as fast as electrical circuits, and a key part of intelligence appears to be related to connections between neurons. If connections take longer, intelligence is less.”
And, if I understood you, I totally agree that increases in intelligence in human beings are most likely to come not from increasing the IQ of individual brains but from tying individuals more closely together in a form of collective intelligence, in a way somewhat analogous to how social insects tie individuals together into a single “superorganism”.
I agree with you as well that singularians are closet dualists. Yet despite their philosophical errors and ridiculousness, I would caution everyone to take them seriously. Ray Kurzweil isn’t some kook- he’s a genius who was just named engineering director of Google’s machine learning division:
http://news.cnet.com/8301-1023_3-57559380-93/googles-ray-kurzweil-hire-could-yield-some-good-returns/
Yet no one bats an eye at his radical version of technological determinism.
Hi Rick and James,
I thought I’d weigh in with some of my own views. I would probably identify myself as a singularitarian. I’ll try to enumerate my responses so as to cover as much as I can succinctly. 🙂
1. I haven’t heard the interview with James Miller yet, but I will try to soon to get a better idea of exactly what his views are regarding the technological singularity. Nevertheless, I think I have a pretty good grasp of most of the related ideas.
2. I don’t think the technological singularity is inevitable, but I do think it is a real possibility–possible through human augmentation as well as computer-based implementations. Moderate short-term gains might come from genetic or chemical enhancement, though rapid acceleration of intelligence is ultimately more likely going to be computer-based, in my opinion. I think there is a common view that there could be cooperation, rather than competition, between different types of intelligence.
3. The motivations different groups have for creating a super-intelligence are troubling. I don’t think a “good” outcome is guaranteed, particularly for human-augmentation approaches. Though I think there are some inherent requirements in the values by which a computer-based intelligence would have to operate to pursue “intelligence” that might turn out to be “good” for everyone–a technological utopia is a possibility.
4. I agree that it is also troubling that many singularitarians are rather nonchalant about the possible destruction of all life on Earth. Though others are violently opposed to any artificial intelligence development that aims at general intelligence for this very reason. Many at the Singularity Institute seem to believe that the development of a super-intelligent AI is almost certainly going to cause the destruction of all humans. I think it is possible to ensure that the resulting AI is “friendly”.
5. Ethics and morality present complex, and very important, related issues. The is-ought problem David Hume raised suggests there might not be an “objective” solution. That is part of the void left by atheism, and most secular, human-centric ethics lose their primacy once we are no longer the “highest” intellect. Nevertheless, if we grant things value, it seems to me that subjective solutions arise according to how we compute that value. So it might be that ensuring a good outcome to the technological singularity involves a thorough examination of the possible choices of value.
For example, we could potentially choose “information” as our value. Consider an all-encompassing definition of “information”: the physical arrangement and properties of matter and energy; the increasing levels of abstraction and interaction that become possible with different arrangements of matter, right up to the abstract representations that form in the networks of neurons in our brains; and the processes for creating “information” that have formed: the manufacture of biological molecules by living processes producing tissues and whole organisms, and the creation of abstractions and thoughts in the mind. If all of this is valued, it might be that our individuality, and the point of view it grants us- resulting in the generation of new knowledge, experience, and memories from every conscious moment- is valued. That plants, animals, ecosystems, and the environment are valued.
6. While I’m interested in the technological singularity because I think it might benefit me personally, I think there is great potential for benefit for everyone. The increase in intelligence may elicit a change in values and ethics that could transform humanity to an enlightened state. It could result in a world without concern for money or possessions, where the early inevitable disparity quickly fades away. Of course, this is not an inevitable outcome, but if anything like a utopia is possible, a super-intelligent AI could help us achieve it.
7. Philosophically speaking, I believe that it will probably be possible to simulate any physical process to an arbitrary degree of precision, including the brain. In which case, it’s not that the “hardware” of the brain doesn’t matter, it is that the hardware of the brain is simulated on a computer. Be careful with the distinction between software, simulation, and hardware. I haven’t been convinced, yet, by an argument that says that the substrate of consciousness is important. If you could refer me to the most convincing arguments either of you know, I would be thankful.
8. You might find it strange, initially, that I think the emergence of consciousness has a lot in common with dualism. Since I don’t believe the substrate matters, I think the China brain would create consciousness. That consciousness would feel as though it were situated in the body, yet would really be an emergent property of the distributed network of the Chinese people. If accepted, this seems to suggest that consciousness is an abstract emergence from physical processes. And rather than say that consciousness is an “illusion” because it isn’t actually a physical thing, I would say that consciousness, as an emergence of interacting abstract representations of information, is “real”.
I am interested to hear comments and responses. 🙂
Toby
Hi Toby,
Thank you for your elaborate and articulate response. I have been thinking about starting live online discussions around these topics, and it would be great to get you, James, and perhaps Andrew Gibson in a “room” to discuss the Singularity. Would you be interested? I am very busy right now with the holidays and work-related commitments, but think this would be doable around the end of January.
For now, I will try to respond to your comments as best as I can, though be advised it is late and I have had a long day.
What troubles me Toby is the acceptance of the view that moving towards the Singularity is a kind of Russian roulette with humanity. “Behind door # 1 we have immortality and utopia. Behind door # 2 the extinction of the human race. Take your pick.”
If it is the case that the Singularity is not predetermined- if we choose it- should we really be rushing headlong to make this choice? And who has decided that we make this choice in the first place? A small minority of technologists. There certainly isn’t any real public debate or even knowledge around the issue. It is true that no technological revolution has ever emerged from human choice after long debate, but then again none would so essentially change the existential nature of humanity as the changes proposed by the singularians.
I think the most convincing argument that substrate matters is the view of the brain-body relationship put forward by the neuroscientist Antonio Damasio. Damasio essentially sees the brain as a map of the body and of the body’s relationship with the outside world. The brain evolved first as a regulator of internal conditions within the body, such as heat; later extended itself to giving the body feedback from the outside world through sensory organs; and in higher animals maps the web of relationships in which the body is located.
Creating human-type intelligence in disembodied machines seems strange to me if this account of the brain is correct. The whole purpose of the brain is to serve as part of a body. Taking away the body seems to leave the brain floating in mid-air. Sufficiently sophisticated robots could be said to have a mind-body relationship in this sense, but they also have a completely different evolutionary history than biological life. They seem less likely to replicate the types of intelligence found in biological life than to represent something completely new and in some respects superior to biological life forms. They also lack the web of relationships in which human consciousness is embedded. To simulate a brain you would have to simulate these relationships as well.
These responses are all I can manage for now, Toby.
Looking forward to your thoughts.
Rick
I share Rick’s concerns about the Singularity. If the Singularity is possible, my own view is that at some point we must step back from integrating with machines. We need to draw the line somewhere, but where isn’t easy to say. It would be difficult for me to resist an implant that might boost my intelligence fifty percent, for example, and it is difficult to see how anything would be wrong with such an implant.
Regarding consciousness and substrates: my view is laid out in several of my blog postings, but my argument is mainly one from evolution. I trace the origin of mind and consciousness to the development of the nervous system, initially developed to support the digestive system. To this point in time, the only entities that are conscious are biological.
I came upon an interesting paper recently that I may post on at some point in the future: The Algorithmic Origins of Life.
http://arxiv.org/abs/1207.4803
Although the paper is specifically about the origin of life, it has some interesting thoughts about how life became capable of encoding information. The authors think this was the key transition from non-living to living. There is some material in it about Turing and von Neumann that you may find interesting in light of some of the postings here. I think the same ideas can be extrapolated to consciousness. In other words, consciousness is the encoding of information in near real time, just as life itself is the encoding of information in evolutionary time. Consciousness most likely uses mechanisms to encode information similar to those life uses. This suggests to me that consciousness is a potential property of living organisms, not machines.
Although this somewhat undercuts my second comment to Toby, I would point out that computer scientists are aware of the differences in cognition and information processing between machines and animals and are actively seeking to apply biological principles to computers.
I know it’s a commercial, but here’s an overview of so-called cognitive computing from IBM.
Thanks for the replies, Rick, James,
Whatever time you can spare for responses is fine–even if it is none. 🙂 I imagine we all have busy lives away from our blogs. That said, I would be happy to try to make myself available for some more direct discussions.
That some people take quite lightly the potential destruction of life that a technological singularity might cause is a concern. Though the likelihood of this outcome is debatable (anywhere from negligible to almost certain), some people who would probably call themselves singularitarians do take the possibility very seriously, such as those at the Singularity Institute.
Raising public awareness, and the level of discussion, regarding the technological singularity would be a good thing in my opinion. But when many people who are aware of the idea aren’t convinced it is going to occur, it is difficult to raise it as a serious issue. In the meantime, I think people should push ahead with the technology that might help produce an artificial general intelligence. I find it incredibly unlikely that we will have some spontaneous explosion of intelligence that won’t give us time to consider and discuss the implications much more carefully before we take the plunge.
Physical embodiment is very important in the development of biological intelligence, and that is likely to be the case in the development of artificial general intelligence too. I hadn’t heard of Antonio Damasio, so I’ve just started to try to get a grasp of his position. If I find something stands out about his views I will let you know. Certainly, without sensory feedback a conscious human mind would feel quite disoriented (maybe like your experience in the isolation tank, James). I am a bit sceptical of the emergence of an artificial general intelligence that can only get sensory experience and communicate through a text interface. In fact, I think the emergence of any remotely human-level artificial intelligence will require powerful sensory perception skills in most, if not all, the sensory domains we possess.
There’s a lot more to be said about my views on the specific developments that might lead to a human-level artificial intelligence, but that might drag this comment out a bit much.
When it comes to the matter that composes the brain, I think the question of whether substrate is important for consciousness can be challenged by the thought experiment of replacing the neurons in the brain one by one with electronic machines that replicate their function. In the early stages we would assume that consciousness remains unchanged. By the end the brain is completely electronic, but we assume that it has the same function. So then we might ask whether that person would still be conscious. I think they would be. I’m not sure if it was David Chalmers who first proposed the idea, but I think this thought experiment essentially relies on the “principle of organisational invariance”. Someone linked this on Massimo’s Rationally Speaking blog; Massimo also believes in the importance of substrate.
http://consc.net/papers/qualia.html
Thanks a lot for the link to that paper, James. As my original comment might have suggested, I’ve been looking at information-based descriptions of all forms of matter and energy, and how that might relate to a system of values, although I need to read up a bit more on information theory. I would be interested to hear your opinion on the paper too.
Toby
It’s really great that you’d be interested in more direct discussion, Toby. I’ll get the logistics figured out by the end of January. James has previously said he’d be interested in such things, so hopefully we’ll be able to get a few of us together.
For now, addressing your example of replacing a brain with electronic components: the thought experiment might be Daniel Dennett’s, but I think he reaches a somewhat different conclusion elsewhere. I will have to find the quote.
Imagine, he proposes, that you wanted to replace a living bird with an exact electronic replica that did everything a living bird could do. The problem with this is just how expensive such a seemingly simple project would be, and how pointless. It might cost you as much as a mission to the moon to produce your electronic bird, and it would be utterly pointless: we have already created flying things, we call them planes, and they can do a lot of things natural birds cannot, and more of the things we need for our purposes.
A reverse example that Dennett doesn’t give would be to replicate a machine, let’s say a crane, as an animal. How incredibly complex, and again pointless, such reverse engineering would be.
My view is that AI will be like that. It will be prohibitively expensive to actually replicate human intelligence, but in the end this won’t matter all that much, because eventually machines will be able, with a whole different sort of engineering, to do things that far outstrip human capacity.
Regarding the thought experiment, I wonder how it could even be remotely possible in the real world.
If we replace the neurons one at a time, what would we replace them with? They would need to be something that ran on glucose, interacted with neurotransmitters, had synapses, and could possibly change its role or behavior in coordination with, and/or under the direction of, other neurons. I am not sure anything not alive could do that.
A better thought experiment might be that the entire brain is captured on some powerful MRI for a period of time and then the entire brain functionality, including the range of behavior for each neuron under a variety of stimuli, is reproduced in some electronic form. However, it isn’t clear that even that would be able to generate new behavior through the encoding of new information. It might be more like a consciousness trapped in a time loop, experiencing the same thoughts and perceptions that arose during the MRI period of capture. And whether that would reproduce the subjective parts of consciousness is, in my view, doubtful, since I think those arise specifically through the organic molecules of life. At best, it might be able to simulate consciousness.
I am not sure if Toby will jump back in. I totally agree with you, James, that the thought experiment has logical flaws, but what do you think of efforts such as the cognitive-systems projects that are trying to figure out how to build computers based on the architecture of the brain in living systems?
It seems to me that computer science is running into a bottleneck. No matter how much they increase computational power they can’t get machines that think like living things. They are therefore turning to the revolution in neuroscience to gain insight into how to do this.
What are your guesses on where that might end up?
Sorry for the delay in responding.
When considering whether we actually try to physically construct a machine brain, bird, dolphin, etc., the cost would certainly be a practical consideration. We might never have the technological ability (and it could be impossible) to create a perfect machine replacement for a neuron. The idea of the thought experiment, however, is to provide an imaginary scenario that gives some intuition as to whether a biological substrate is necessary for consciousness. Whether it is currently technologically possible is not really the point. Much like Einstein’s famous thought experiment of travelling alongside a beam of light: it seems highly unlikely we’ll ever travel at the speed of light, but the thought experiment was apparently still useful.
Just because something has evolved to grow a complex brain, and that brain is capable of experiencing subjective consciousness, doesn’t quite lead to the conclusion that biological molecules or cells are necessary for consciousness. The neurons and chemicals within the brain are the medium in which we find our own consciousness, but I’m quite certain that consciousness is an emergent phenomenon that only requires the right functional processes to be reproduced. If we could find synthetic chemicals or machines that reproduce the functionality of the neurons and biological chemicals, I would expect consciousness to emerge in the same way.
If we never found another physical substance or machine that reproduced the physical processes well enough, we might not be able to build a conscious machine using that method. But if we can simulate the brain accurately enough (down to the level of biological chemicals if necessary), and connect that simulation to a physical body, I don’t see why that machine would be anything other than conscious, even though the biological chemicals are now not even “physically” manifested. A fundamental part of the function of the brain is the adaptation and incorporation of new patterns of activity arising from experience and thoughts.
Is the objection to non-biological consciousness due to a belief that we couldn’t capture the same processes and interactions in other media (or simulation)? Or because there is some special property of “biological” molecules and matter, which isn’t present in non-organic molecules? Or based on something else instead or as well?
The human brain, as an example of intelligence and consciousness, is a great resource for study that could be important for future AI developments. Some very successful artificial intelligence technologies (multi-layer perceptrons with error-backpropagation and deep learning with restricted Boltzmann machines) have been loosely based on biological neural networks.
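To make the backpropagation idea mentioned above concrete, here is a minimal illustrative sketch (my own example, not from the discussion): a two-layer perceptron trained with error-backpropagation on XOR, the classic task a single-layer network cannot solve. The layer sizes, learning rate, and iteration count are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and target outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialised weights: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

lr = 1.0
for _ in range(5000):
    # Forward pass through the two layers
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the output error back through the layers
    # (delta terms use the derivative of the sigmoid, a*(1-a))
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ err_out
    W1 -= lr * X.T @ err_hid

print(np.round(out).ravel())
```

After training, the rounded outputs should approximate the XOR truth table. Deep learning with restricted Boltzmann machines uses a different (unsupervised, layer-wise) training procedure, but the loose inspiration from biological neural networks is the same.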
My guess is that, since in simulation and software we aren’t constrained by physical matter, we might actually be able to make conscious machines that are much more efficient than we are in all aspects of cognition. And with enough insight gleaned from neuroscience, we should be able to make these machines capable of experiencing the same emotions and feelings we do.
Mind uploads are probably going to be harder than that. To really transition our consciousness to a virtual world would require a number of steps. We would need to be able to consciously perceive the virtual world, and we would need to be able to interface with our own memories stored in silico. That would require a rather intrusive interface between our brain and the computer, and, at the very least, some very detailed mapping of the neurons and synapses in the brain. But it also raises some interesting questions on consciousness related to personal identity and the teleportation/teletransporter thought experiment: http://www.csus.edu/indiv/g/gaskilld/intro/PersonalIdentity.htm
This is a great discussion and your take is fascinating, Toby. I believe my views differ from James’s in that I do not think carbon-based life is a necessary vehicle for consciousness. There’s no reason for me to hold that we couldn’t build a machine that is conscious in the same sense we are. I just have doubts that we can get to that outcome by thinking of the mind as software, as if, were we able to obtain a powerful enough digital computer and the right software, we would have built something with the same type of consciousness as ourselves. If we want to build machines that think like us, we will probably have to create some version of our own brain in which the distinction between hardware and software is negligible. It’s not the substrate that matters so much as the structure.
That still doesn’t address the issue of whether or not we should try to do this without some heavy discussion of how society will deal with the costs and the ethics of what we are doing. Using a biological analogy, imagine that a group of scientists were hell-bent on breeding a race of chimps as smart as or smarter than human beings, which we would somehow force to do all of our work. Imagine that this same group of scientists hoped to transplant human brains into these animals in the hope that people could live a healthy life of more than a century embodied in such creatures. Wouldn’t bioethicists scream “hey, wait a second!”? Wouldn’t this inspire deep debate? I don’t see this imagined goal as all that different from that of singularians. There’s just no debate, because we don’t have the same ethical triggers when it comes to machines, or a tradition of thinking of machines in this way. We need this debate, and we need it quickly.
The teleporter thought experiment is interesting, but I think we get tied up in the idea that our own identity is somehow static, like a picture you can photocopy, and that distorts reality. Being alive seems more like a fire or a whirlwind, a constant motion that nevertheless preserves its identity, at least for a time. If it’s the identity that matters, then the only question is whether it is preserved. I would beam up for a million, being no less me, or no more bifurcated, as a result than I would have been anyway from the random collision of cosmic rays or what have you. In a way I think this would represent a Pyrrhic victory for the singularians and their dream of PERSONAL immortality. Should they ever prove capable of building machines in which the human mind could be embedded, the consciousness of such entities would likely grow so quickly beyond the human level that whatever was left of the individual human being who was uploaded, their dreams and loves and memories, would be but a whisper in a much larger whirlwind.
These posts have been great, I’ve really learned a lot and am eager to ask all sorts of questions. They’ve made me anxious to get discussion groups up and running which I promise to do by the end of January.
Rick, I agree 100% about being alive being more like a whirlwind. The same applies to consciousness and mind; this is what life and consciousness are all about. Life is a stable form that continually tears down and rebuilds itself through metabolism. Consciousness does the same thing in a different way. Our identity is never static but constantly changing while maintaining an apparent continuity. Changing the metaphor, it is much like an eddy in a stream that forms and moves about and eventually, of course, dissipates. Our identity, our mind, is not the water but the form in the water.
When we compare water (H2O) and nitrous oxide (N2O) we see they have different properties: one is a liquid at room temperature, the other a gas, for example. We don’t expect them to behave the same even though they are both composed of electrons, protons, and neutrons at base; the component particles are just arranged in different forms. I see no reason to believe that the molecules of life should not have unique properties that cannot be replicated in electrical circuits. And I have seen no evidence at all that mind or consciousness can be created outside of biological molecules. It seems to be an article of faith with some, resting on the belief that mind and consciousness are not really properties of matter but something abstract and apart from it.
Again, this has been a really great discussion and I have learned a lot from the both of you. I hope we can do this soon in the format of an actual discussion where the exchange of views can be smoother. Even if none of us fundamentally alter our underlying assumptions, conversations like this allow us to put our own ideas to the test and refine them in a way that takes into account views different from our own.