Tuesday, July 24, 2012

When robots start telling each other stories...

About 6 years ago the late, amazing Richard Gregory said to me, with a twinkle in his eye, "when your robots start telling each other stories, then you'll really be onto something". It was a remark with much deeper significance than I realised at the time.

Richard planted a seed that's been growing ever since. What I didn't fully appreciate then, but do now, is the profound importance of narrative - more, perhaps, than we imagine. Narrative is, I suspect, a fundamental property of both human societies and individual human beings. It may even be a universal property of all advanced societies of sentient social beings. Let me try and justify this outlandish claim. First, take human societies. We humans love to tell each other stories: epic poems and love songs; stories told with sound (music), movement (dance), or stuff (sculpture and art); stories about what we did today, or on our holidays; stories made with images (photos or movies); true stories and fantasies; stories about the Universe that strive to be true (science); and very formal abstract stories told with mathematics. Stories are everywhere. Arguably, human culture is mostly stories.

Since humans started remembering stories and passing them on orally, and more recently in writing, we have had history: the more-or-less-true grand stories of human civilisation. Even the many artefacts of our civilisation are kinds of stories. They are embodied stories, which narrate the process by which they were designed and made; the plans and drawings we use to formally record those designs are literally stories which tell how to arrange and join materials in space to fashion the artefact. Project plans are narratives of a different kind: they tell the story of the future steps that must be taken to achieve a goal. Computer programs are stories too, except that each contains multiple narratives (bifurcated with branches and reiterated with loops), whose paths are determined by input data, and which are related over and over at blinding speed within the computer.
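To make the analogy concrete, here is a toy sketch (my own, with made-up names, not any real program) of how input data picks out one narrative path among the many a program contains:

    # A toy 'program as story': its execution path - its narrative - depends on its input.
    def tell_story(sensor_readings):
        story = []
        for reading in sensor_readings:        # reiterated with a loop
            if reading > 10:                   # bifurcated with a branch
                story.append("met an obstacle")
            else:
                story.append("moved freely")
        return story

    # Two different inputs draw two different narratives out of the same program.
    print(tell_story([3, 12, 5]))    # ['moved freely', 'met an obstacle', 'moved freely']
    print(tell_story([11, 11]))      # ['met an obstacle', 'met an obstacle']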

Now consider individual humans. There is a persuasive view in psychology that each of us owes our identity, our sense of self, to our personal life story. The physical stuff that makes us, the cells of our body, is regenerated and replaced continuously, so that there's very little of you that existed 5 years ago. (I just realised the fillings in my teeth are probably the oldest part of me!) Yet you are still you. You feel like the same you of 10, 20 or, in my case, 50 years ago - since I first became self-aware. I think it is the lived and remembered personal narrative of our lives that provides us with the feeling, the illusion if you like, of a persistent self. This, I think, is why degenerative brain diseases are so terrifying: they appear to eat away that personal narrative so devastatingly that the person is ultimately lost, even while their physical body continues living.

So I was tremendously excited to be invited to a cross-disciplinary workshop on Narrative and Complex Systems at the York Centre for Complex Systems Analysis a couple of weeks ago, co-organised by two York professors: Richard Walsh (English) and Susan Stepney (Computer Science). For the first time I found myself in a forum in which I could share and debate ideas on narrative.

In preparing for the workshop I realised that perhaps the idea of robots telling each other stories isn't as far-fetched as it first appears. Think about a simple robot, like the e-puck. What does the story of its life consist of? Well, it is the complete history of all of its movements, turns and so on, punctuated by interactions with its environment. Because the robot and its set of behaviours are simple, those interactions are pretty simple too. It occurred to me that it is perfectly possible for a robot to remember everything that has ever happened to it. Now place a number of these robots together, in a simple 'society' of robots, and provide them with a mechanism to exchange 'life stories' (or, more likely, fragments of life stories). This mechanism is something we have already developed in the Artificial Culture project: it is social learning by imitation. These robots would be telling each other stories.
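As a very rough sketch of what I have in mind (the class and method names here are hypothetical - this is not code from the Artificial Culture project), a robot's life story could simply be an event log, fragments of which are passed from robot to robot:

    import random

    class StorytellingRobot:
        """An abstract robot that remembers everything that has ever happened to it."""
        def __init__(self, name):
            self.name = name
            self.life_story = []    # complete history of movements and interactions

        def act(self, step):
            # In a real e-puck this would be a sensed movement or collision.
            event = random.choice(["turn left", "turn right", "move forward", "bump wall"])
            self.life_story.append((step, event))

        def tell_fragment(self, length=3):
            # Share a recent fragment of the life story with a listener.
            return self.life_story[-length:]

        def listen(self, teller, fragment):
            # The heard story becomes part of the listener's own history.
            self.life_story.append(("heard from " + teller.name, fragment))

    # A tiny 'society' of two robots exchanging life-story fragments.
    alice, bob = StorytellingRobot("alice"), StorytellingRobot("bob")
    for step in range(10):
        alice.act(step)
        bob.act(step)
    bob.listen(alice, alice.tell_fragment())
    print(bob.life_story[-1])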

But, I hear you ask, would these stories have any meaning? Well, to start with I think we must abandon the notion that they would necessarily mean anything to us humans. After all, these are robots telling each other stories. OK, so would the stories mean anything to the robots themselves, especially robots with limited 'cognition'? Now we are in the interesting territory of semiotics, or - to be more accurate - robosemiotics. What, for instance, would one robot's story signify to another? That signification would, I think, be the meaning. But to go any further we would need to do the robot experiment I have outlined here.

And what would be the point of my proposed robot experiment? It is, I suggest, this:
to explore, with an abstract but embodied model, the relationship between the narrative self and shared narrative, i.e. culture.
By doing this experiment, would we be, as Richard Gregory suggested, really onto something?

16 comments:

  1. This is a fascinating blog post. The implication, I take it, is that if robots can start telling stories about themselves -- on the basis of being able to recount information from the full history of their experiences -- they will take a step towards a characteristic human behaviour.

    The relationship between the self-narrativizing human subject and its society is strongly hierarchical, though, and narratives come from (are imposed by) society, rather than building up from individuals. Society offers narratives about gender, sexuality, politics, economics, religion, and so on, and the individual subject then aims for some degree of accommodation with those available narratives that it thinks will please both itself and its chosen others. This is why humans narrate their actions and behaviours differently in different contexts, depending on whether they want to please a lover, a friend, a class of students, or whatever.

    If your experiment allows robots to share their own stories with each other, and so construct a set of shared stories, it might enable the robots to develop a sense of how their experience tallies with that of others. But that would operate in the reverse direction from the normal relationship between social and personal human narratives, i.e. with stories going up from the individual to the group, rather than (as for humans) down from the group to the individual. Aside from complicated questions about desire and so on (humans have particular, personal aims in mind when they tell stories about their subjective existence, feel hurt or delighted, and so on, when their stories have their effect; it's not clear how robots could), I wonder whether this reversal of narrative direction could somehow be theorized within the project, or whether it needs to be.

    Replies
    1. Hugely interesting comments - thanks!

      Re your 1st para - in fact the objective would not be to make robots behave more like humans. Instead, I'm interested in modelling the relationship between individual and collective narrative - in the abstract. I know it's probably an outrageous idea, but I'm suggesting that narrative, in social species, might be universal - so even though the detail and meaning would vary hugely between robot-robot narrative and human-human narrative (or for that matter animal-animal or alien-alien narratives), at some abstract level there should be common properties. It is those properties I'm interested in.

      I agree entirely with the point in your 2nd para - in human culture there is a vast cultural superstructure that moderates our narratives. I think we have to look to very young children to see narrative interactions uncoloured by culture. In the experiment I suggest there would be no cultural superstructure - at least to start with - and everything would be very minimal. The culture, I suggest, emerges out of the complex dynamics of the narratives. But once it has emerged (a robot culture of course, and probably a pretty inscrutable one), there will be feedback, and the process you describe might well emerge. It would be amazing if we observed emergent cultural norms.

      Your point about the reversal of narrative direction is very interesting - and one I'll have to think about before I can offer a response. As you say, it might be very interesting to build it into the research hypothesis.

    2. Just to confirm I have understood, does the following describe a physical example of robots exchanging narrative?

      Three lawn mowing robots are programmed to move around the lawn in a largely random manner.

      The robots are fitted with touch sensors and wireless communications equipment.

      The robots are programmed to slow down as they approach an object to limit damage to the robot and lawn furniture.

      When a robot encounters an unexpected object, it transmits its speed, direction and expected location at the time of the collision.
      If it slows down for an expected collision but the collision does not occur, it transmits the time and location of the failed expected collision, and either deletes the object location from its database or flags it as currently obsolete.
      (This is the passing on of a portion of the robot's personal narrative to its peers.)

      When a robot receives a report of a collision with the above information, it adds it to its personal database of object locations.
      When a robot receives a report of a failed collision, it flags that object location as possibly obsolete. After receiving a set number of similar reports, it either deletes the object location from its database or flags it as currently obsolete.
      (This is the robots' society taking the experience of one of its members into account.)

      If the robots did not interact directly, but instead reported back to a central computer that then transmitted only proven reports back to the independent robots, would this not be replicating the normalising action of a culture?

      For example, one of the robots encounters an object, transmits the time, speed, direction and location of the collision to the central computer, and stores that in its personal database.
      The central computer then stores the same information as possibly correct, but instructs the reporting robot to check itself and its position to make sure it has not made a mistake. It waits for confirmation - either from the original robot, confirming that its sensors and location-detection equipment are operating correctly, or from another robot - before transmitting the confirmed location of the object to all robots.

      If a robot slows down in the expectation of a collision that never occurs, it would flag the object record in its personal database as obsolete and transmit this to the central computer.
      The central computer would then flag the record as possibly obsolete and instruct the reporting robot to check itself and its position to make sure it has not made a mistake. It would wait for confirmation - either from the original robot or from another robot - before transmitting an instruction to all robots to discard the object's obsolete location record.

      Would this be a fair representation of personal narrative vs. societal/cultural expectation?
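      In code, a rough sketch of the peer-to-peer version might look something like this (all the names are made up for illustration):

        class MowerRobot:
            # How many failed-collision reports before an object record is discarded.
            OBSOLETE_THRESHOLD = 3

            def __init__(self):
                self.object_map = {}      # location -> 'present' or 'possibly obsolete'
                self.doubt_counts = {}    # location -> failed-collision reports received
                self.peers = []

            def on_collision(self, location):
                # Pass a fragment of personal narrative on to the peers.
                self.record_object(location)
                for peer in self.peers:
                    peer.record_object(location)

            def on_failed_collision(self, location):
                # An expected collision never happened: report the doubt.
                self.record_doubt(location)
                for peer in self.peers:
                    peer.record_doubt(location)

            def record_object(self, location):
                self.object_map[location] = "present"
                self.doubt_counts.pop(location, None)

            def record_doubt(self, location):
                # The society takes a member's experience into account.
                self.doubt_counts[location] = self.doubt_counts.get(location, 0) + 1
                if self.doubt_counts[location] >= self.OBSOLETE_THRESHOLD:
                    self.object_map.pop(location, None)
                else:
                    self.object_map[location] = "possibly obsolete"

        # Three mowers sharing one lawn.
        a, b, c = MowerRobot(), MowerRobot(), MowerRobot()
        a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
        a.on_collision((2, 5))             # a hits a chair; b and c learn of it
        for _ in range(3):
            b.on_failed_collision((2, 5))  # the chair has gone; after 3 reports all forget it
        print(a.object_map, b.object_map, c.object_map)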

  2. I can see the relationship between learning by imitation and storytelling. One thing I am concerned with in that experiment is that all the results depend on evolving new movements by imitating each other. As an analogy, it is like the game where one person at the head of a line tells a story to the next, and so on, and the story is often somewhat changed by the time the last person relates it.
    That kind of unreliable communication, while great for entertainment, would not be much help to human advancement. So I wonder if studying the resultant movements arising from noisy or inaccurate sensors is any different from studying the line game I mentioned.
    While storytelling can be purely the passing of knowledge, I think the line "When robots start telling each other stories..." implies some sort of consciousness and an ability to consciously create the story. Anything less would be selling your friend's idea short.

  3. Quoting Dawkins, those stories are, in fact, memes (very much à la the TED motto: "Ideas worth spreading").

    When an artificial intelligence starts recognising part of its experience (its personal history, or some other AI's experience that it has captured) as something worth transmitting to other entities, then (IMHO) you have some sort of culture - that is, the sum of the experiences of the individuals of a given group.

  4. I am a software developer who was raised on 3 continents and experienced first-hand what happens when one's individual narrative does not match/please the narrative of a given society. I find this very intriguing...

    Two thoughts that come to mind:
    1. Individuals tend to seek out other individuals whose narratives are similar to theirs. New cultures can emerge from this process.
    2. In my experience, individuals also tend to minimize the amount of interaction they have with others whose narratives are very dissimilar from theirs. As a result a collection of mixed narratives quickly polarizes to smaller collections of similar narratives... think of a public school cafeteria...

    It would be interesting to measure the length and frequency of narrative exchange between robots (or software programs for that matter) when the two behavio(u)rs described above are programmed into them, and when they are not.

    @jpehs - Sometimes individuals with a strong enough narrative can influence/change/birth the narrative of a society.

  5. I'm sorry, but while I find the article interesting on an abstract level, I don't think it is possible to achieve any relevant results from this kind of experiment on robots. You speak about robots as if they were intelligent, rational beings, while they are just machines. Humans make them intelligent at specific tasks. For robots to be able to understand "narrative" they would have to be equipped with an intelligent brain that is able to analyze that data and instruct itself, thus improving its future performance in executing the tasks it has been programmed to do. This is called Artificial Intelligence and it is a big research field, but I fail to understand the point of

    exploring "with an abstract but embodied model, the relationship between the narrative self and shared narrative, i.e. culture."

    You can already see this in action by analyzing the way modern software works today. Take an antivirus program as an example (there are many others, like search engines, firewalls, etc). While it's running on a computer, it monitors and analyzes changes in files, network connections, programs, etc, and it detects abnormal behaviour. Through experience it's able to instruct itself to stop threats in a more efficient way, and it also receives, in an automated way (from a central server), new knowledge about potential viruses or threats. The antivirus program is composed of many different "robots" which share data with each other - or narrative, as you call it - and a "brain" which puts all this data together and makes some sense of it, determining more optimized ways of operation for the whole system.

    So what is this all about?

  6. I think your idea is fantastically interesting; I'll eagerly await the results. As for Richard Gregory's remark, perhaps 'really being onto something' would be when the robots start telling stories that actually mean something to us: "A funny thing happened to me on the way to the charging station today..."

  7. It is as well that you offer such thought-provoking ideas now. We are just at the dawn of semi-sentient robotics with the work of Honda et al. When we cease to use monkeys to assist disabled humans and start teaching robots the Isaac Asimov laws of humanity, then we will need to assure them that their robot species is valued and not a serfdom.
    Thank you for your writings.

  8. "When I was your age, I had to shift 128-bits BOTH ways" said the grandfather robot.

  9. If the robots tell each other their complete histories, eventually they will spend all of their time talking and not getting anything done.

  10. Intelligence, self-awareness and ego are not the same. Just because a robot has intelligence and self-awareness does not mean that it has an ego. Ego requires self-purpose.

  11. A more telling task would be for the robot to write a story and then do a critique of its own work.

  12. Awesome article!

    I really dig the idea. It's funny to think about how something so simple, like a story, can hold so much power. A story about how one robot learned from a mistake can be taught to other robots, so they can avoid it. And unlike humans, they will remember this story and not repeat the mistakes of others. It's very possible that we could watch a robot society cognitively evolve at an incredible rate, as they learn from each other.

    And to Anonymous who stated:
    "If the robots tell each other their complete histories, eventually they will spend all of their time talking and not getting anything done."

    What if some robots did spend all their time talking? Would they really get nothing done? Let's look at the Roman era and earlier. Some people from those eras would gather in a 'conversation circle' or salon. Here they would discuss ideas about anything. Eventually these salons evolved into 'intellectual salons' where the focus was on more intellectual gains. Some great ideas came about from just talking, although I can't think of any off the top of my head.

    So if the robots are sitting around talking, they would be learning from each other, and from any others that interact with them momentarily. And what is stopping a talking robot from taking on the role of a teacher? Other robots can talk with this bot and learn from it, and because it spends so much time talking, there is a good chance this robot will have a great deal of knowledge to pass on.

    It may not be physically moving boxes, or shaving my beard, but it would be accomplishing a great deal.

  13. I used to tell the "joke":
    Imagine letting several Furbies (toys that "learn") listen to C-SPAN broadcasts of the US Congress for several months, and then let them "discuss" among themselves. I wonder what form of government they will "develop".
