Iamus Returns

A reader, Dan Fair, kindly posted a link to the release of the full album composed by the artificial intelligence program Iamus in the comments section of my piece Turing and the Chinese Room Part 2 from several months back.

I took the time to listen to the whole album today (you can too by clicking on the picture above). Not being trained as a classical musician, and having little familiarity with the abstract style in which the album was composed, I find it impossible to judge the quality of the work.

Over and above the question of quality, I am not sure how I feel about Iamus and “his” composition. As I mentioned to Dan, the optimistic side of me sees in this the potential to democratize human musical composition.

Yet, as I mentioned in the Turing post, the very knowledge that there is no emotional meaning being conveyed behind the work leaves it feeling emotionally dead and empty for me. Compare it to another composition written, like those of Iamus, in honor of Alan Turing, but created by a human being: Amanda Feery’s Turing’s Epitaph, graciously shared by fellow blogger Andrew Gibson.

One way or another, it seems, humans and their ability to create and understand meaning will be necessary for the creations of machines to have anything real behind them.

But that’s what I think. What about you?

8 comments on “Iamus Returns”

  1. Salvatore Fiore says:

    Say I write (according to the well-known English grammar rule: noun, verb, adverb): “The killer was behind”. Did I frighten you? Sorry, I didn’t mean to. In fact, I didn’t mean anything; I just chose four words more or less at random. Meaning and emotion are somehow packed into words and combinations of words, as is the case with musical building blocks. It is then no surprise that feelings can arise when a musician plays a computer-composed work. Programmers only have to supply the semantic information, “killers frighten”, “daisies don’t”, along with some well-known grammar rules.
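    The recipe sketched above (random word choice under a fixed grammar rule, plus hand-tagged semantic information) can be written in a few lines. The word lists and feeling tags below are purely illustrative assumptions for this sketch, not anything drawn from how Iamus actually works:

```python
import random

# Toy vocabulary for the noun-verb-adverb rule (illustrative only).
nouns = ["killer", "daisy", "teacher"]
verbs = ["was", "stood", "waited"]
adverbs = ["behind", "nearby", "quietly"]

# Hand-written "semantic information": which nouns carry which feeling.
feelings = {"killer": "frightening", "daisy": "harmless", "teacher": "neutral"}

def compose_sentence(rng):
    """Apply the noun-verb-adverb rule with randomly chosen words,
    and report the feeling attached to the chosen noun."""
    noun = rng.choice(nouns)
    sentence = f"The {noun} {rng.choice(verbs)} {rng.choice(adverbs)}."
    return sentence, feelings[noun]

sentence, feeling = compose_sentence(random.Random())
print(sentence, "->", feeling)
```

    The program "means" nothing by its sentence; the emotional charge lives entirely in the tags a human wrote down beforehand.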

    • Rick Searle says:

      Great point!

      I know a few simple rules are all that is needed to produce music (or a sentence) with emotive impact. The things that move human beings are not infinitely diverse and show regular patterns for most of us.

      What strikes me, though, is not surprise that Iamus could be moving; it stems more from the fact that the program has no desire to communicate emotion to me, as a musician or a writer of sentences does. Perhaps I should look for this intention in the programmer; that is, it is the emotional life of the programmer that Iamus reveals. It just seems so distant.

      • Salvatore Fiore says:

        Who created the sunsets? They always move something inside of me.
        We respond to perceived stimuli, sometimes with physical actions, sometimes with emotions. The physical world (computers included) does generate stimuli that move us. It is not a privilege of human beings (although a few are very good at it).
        That’s my point.

      • Rick Searle says:

        Oh, this does go round and round. There is a difference, for me, between something causing me to feel something and another person communicating a feeling TO me. It is not so much how I feel as how they, the communicating being, felt when they “spoke” to me. A completely manufactured recording of a baby crying is something different from a recording of a real baby crying, let alone an actual baby crying within earshot of me.

  2. jjhiii24 says:

    Even a completely random, nonsensical sequence of musical notes may, at some point, result in a section of sounds that evokes a response in a CONSUMER of the composition, but there is much more to composing music than learning HOW to do it. You could become technically proficient at any number of complex tasks and not possess a single ounce of creative fervor or musical passion. A Mozart or an Andrew Lloyd Webber will not arise inexorably from mimicry.

    Ultimately, the dedicated efforts of humans to produce an artificial intelligence may result in a device that compares favorably to the capacity of the human brain and it may confer significant benefits to human endeavors, but there is a huge difference between form and content; between structure and substance; and even a really excellent simulation will very likely never produce a truly excellent substitution.

    What we often fail to consider is that human biological evolution took millions of years to produce a cognitive and creative sentient being, which included hundreds of thousands of years of experience and expansion into realms that did not require any additional neural circuits, but did require creativity, innovation, adaptation, and numerous human qualities that will not arise inexorably from any artificial process in my view.

    I recommend two links on this subject:

    http://m.guardiannews.com/science/2012/oct/03/philosophy-artificial-intelligence?cat=science&type=article

    http://www.kurzweilai.net/ibm-simulates-530-billon-neurons-100-trillion-synapses-on-worlds-fastest-supercomputer

    • Rick Searle says:

      Hi John,

      Thanks for the links. In part I agree with your critical assessment of the prospects of AI, and with the position laid out in the Guardian article that we will need to actually understand how the mind works, to have a fully developed philosophy of mind, before we can build an artificial general intelligence (AGI) to rival our own.

      I do wonder if, at the end of the day, our inability to replicate our own form of intelligence will really matter all that much. There was an article in this Saturday’s New York Times about an approach to AI called deep learning that seems to be rapidly clearing all sorts of bottlenecks AI has faced since its inception:

      http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html?smid=tw-share

      On your point about how we underestimate how long it took evolution to create human intelligence, on one level I completely agree with you. Yet in my view something at first glance counterintuitive has emerged out of AI over the past few generations: the capacities that are, for humans, evolutionarily oldest have proven hardest to replicate in machines, things like walking without bumping into objects, recognizing colors, and distinguishing one object from another, while the newer capacities, such as mathematics, playing chess, recognizing speech and words, “speaking”, and composing music, have proved easiest.

      Just as a jet airplane is not a replication of a living bird, being far less complex than a living thing and sharing none of its generalist intelligence for things like caring for its young, we might not need an AGI that shares our own complexity to “compose” music as good as Mozart’s or Andrew Lloyd Webber’s.
      And that is what somewhat saddens me: this potential disconnection of the things we have always thought were the peak of our intelligence from the animal nature that grounded them.

  3. Dan Fair says:

    There are other intelligences. With all respect, I don’t think that we humans are unique in that. Animals show intelligent behavior; they simply lack the skills (and conformation) to manipulate objects and communicate, and thus to evolve a cultural background.

    Here is a long interview on other ways of understanding intelligence:
    http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/

    Computers can perform extremely complex computations in reasonable time. Intelligence (sorry, this is how I see it) is also about computing information. The higher functions of the human brain developed in the very last period of our evolution. Computer science has made its way from the Turing machine to cloud computing in some 75 years… I would not be surprised if more and more sophisticated tools help us in creative tasks, or if they even start to create by themselves. There is no competition here, as I see it, just emergence from human activity.

    • Rick Searle says:

      Hi Dan,

      Glad you joined the conversation. It’s odd you bring up the issue of emergence; I have a post planned for this week about the physicist Neil Turok’s view of quantum computers as the next stage in the evolution of intelligence. He sees this form of intelligence not only as emerging from the human and the digital, but as being dependent on human intelligence for purpose, in a way analogous to how the analog human brain serves the purposes of a human being’s “digital” genes.

      There is a part of me that looks forward to the development of new forms of intelligence alongside our own and sees the potential for new forms of artistic expression and thought never seen before. Yet I do think competition is an issue we need to start having serious discussions about. I don’t think it’s necessarily paranoid to question whether there are dangers in the bridges we are hell-bent on crossing.

      What are your views, for instance, on Martin Ford, who thinks the AI advances we already have are posing challenges to employment?

      https://utopiaordystopia.com/2011/11/28/i-robot-meet-i-need-a-job/

      For me, the issue is making these advances in ways that empower human beings rather than aiming to create our more intelligent replacement. Is your view the same, or do you think that we should see ourselves as a bridge to a new form of intelligence, a next stage in evolution? Or something different?
