Although Siri can be quite helpful, and at times pleasant to communicate with, she still lacks the flexibility of a human dialogue partner, giving conversations with her an unnatural flow. In fact, Siri – like other robots – is lacking in her turn-taking ability. To communicate even somewhat smoothly with robots, one needs to understand how they operate and communicate accordingly with them. One needs to:

1. Say the wake-up command.
2. Wait until it claims to be ready.
3. Now speak a single sentence without any pauses or hesitation.
4. If you change your mind halfway through, you MUST finish anyway and start again with step 1.
5. If it misunderstands anything, you MUST finish anyway and start again with step 1.
6. If it needs clarification, you MUST finish anyway and start again with step 1.
7. Whatever you want, you must ALWAYS finish anyway and start again with step 1.

However, most people do not understand how robots operate and process information. When ‘the Average Joe’ tries to use our Amazon Echo, he wants to be able to: Pause to think about the rest of the […]
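The rigid protocol above can be sketched as a tiny loop – a toy illustration only, not any real assistant's implementation; the function name and the use of "..." as a stand-in for a mid-sentence pause are invented for this example:

```python
# Toy sketch of the rigid "finish anyway, restart from step 1" protocol.
# Everything here is a hypothetical placeholder, not a real assistant API.

def rigid_assistant_session(utterances):
    """Classify each utterance under the rigid voice-assistant protocol."""
    results = []
    for utterance in utterances:
        # Steps 1-2: wake word spoken, assistant has claimed to be ready.
        # Step 3: the whole request must arrive as one pause-free sentence;
        # any hesitation, misrecognition, or need for clarification forces
        # the user to finish anyway and restart from step 1.
        if "..." in utterance:            # stand-in for a pause or hesitation
            results.append("restart")     # back to step 1
        else:
            results.append("handled")
    return results

print(rigid_assistant_session(["turn on the lights", "play ... um"]))
# → ['handled', 'restart']
```

The point of the sketch is the asymmetry: the machine never adapts mid-utterance, so every deviation costs the user a full restart.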
In typical dialogue, humans use turntaking naturally – one person waits for a natural break in the other's speech before starting to speak or answer. Computers and robots, however, are not great at this – generally they are very bad at it, as anyone who has ended up screaming ‘No, that wasn’t what I meant’ at a phone-based automatic speech recognition system can attest. Natural turntaking takes a lot of practice to get right and has to adapt to the person you are speaking with. The CoCoMaps project takes existing turntaking technology created by IIIM and CMLabs and merges it with the Cognitive Map architecture, allowing the robot (or computer) not only to do natural turntaking with one person, but broadening this out to work with two or more humans and two or more robots. We call this multi-participant turntaking. The technology will allow communication between humans and robots to flow much more naturally and will provide enhanced information to the dialogue-planning part of the system.
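To make "waiting for a natural break" concrete, here is a minimal pause-based turn-taking rule – purely illustrative, with an invented silence threshold; it is not taken from the IIIM/CMLabs technology described above:

```python
# Toy pause-based turn-taking decision: may the listener start speaking?
# The threshold value is a hypothetical choice for illustration only.

TURN_YIELD_SILENCE_S = 0.7  # silence (seconds) treated as yielding the turn

def may_take_turn(silence_duration_s, speaker_holds_floor=False):
    """Return True if the listener may start speaking now."""
    if speaker_holds_floor:
        # A filled pause ("umm...") signals the speaker is not done yet.
        return False
    return silence_duration_s >= TURN_YIELD_SILENCE_S

print(may_take_turn(0.2))                           # short mid-utterance gap
print(may_take_turn(1.0))                           # long silence, turn is free
print(may_take_turn(1.0, speaker_holds_floor=True)) # "umm" keeps the floor
```

A fixed threshold like this is exactly what adaptive, multi-participant turntaking improves on: real systems must adjust to each speaker's pace and to who else is competing for the floor.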
Simple robots operate through a perception-action cycle, or loop, which allows them to make decisions and form a response based on the information they receive on each cycle. Adding layers to a robot’s perception-action cycle enables it to “think outside the loop”, supporting parallel information processing on different levels simultaneously, making it more responsive and – other things being equal – smarter. The design of such layers is done in what we call a robot’s cognitive architecture. A cognitive architecture is typically a mixture of software and hardware that facilitates the perception of input from the environment (cameras, microphones, other sensors) and its processing to produce responses that can be executed by the robot’s on-board motors, speakers, and screens. It must be able to process this input in real time, in light of the current set of contexts (goals, tasks, expectations), and plan a suitable set of soft-real-time outputs (visuals, speech, motor commands). The bottom layer of a cognitive architecture deals with physical inputs and outputs, and the top layer deals with decision making. By adding layers in between, processing and reacting can progress incrementally, with increased foresight – a broadening of the time and data that […]
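As an illustration of the layered perception-action cycle – a sketch with invented layer names and rules, not the CoCoMaps architecture itself – a fast reactive bottom layer can handle reflexes while a slower deliberative top layer decides in light of current goals:

```python
# Illustrative layered perception-action cycle. Layer names, percepts, and
# rules are hypothetical examples, not taken from any real architecture.

def reactive_layer(percept):
    """Bottom layer: fast, direct mapping from input to output."""
    if percept == "obstacle":
        return "stop"
    return None  # no reflex applies; defer upward

def deliberative_layer(percept, goals):
    """Top layer: slower decision making in light of current goals."""
    if percept in goals:
        return f"approach {percept}"
    return "explore"

def perception_action_cycle(percepts, goals):
    """Run one action selection per percept, bottom layer first."""
    actions = []
    for percept in percepts:
        # The reactive layer gets first say; the deliberative layer fills in
        # when no reflex fires - processing progresses incrementally upward.
        action = reactive_layer(percept) or deliberative_layer(percept, goals)
        actions.append(action)
    return actions

print(perception_action_cycle(["obstacle", "charger", "wall"], goals={"charger"}))
# → ['stop', 'approach charger', 'explore']
```

In a real architecture the layers run in parallel and the middle layers add foresight over longer time windows; the sketch only shows the priority ordering between a reflexive and a deliberative level.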
Ever tried conversing with a robot on the other end of a telephone customer-service line? Now add a slight delay due to indeterminism, and prepare for a frustrating interruption followed by a back-and-forth between incorrect options, cheerfully pursued by the robot. Our recently launched project CoCoMaps aims to make robots better at processing the information needed for better social skills, and ultimately to make them smarter and more pleasant to interact with. The perception of intelligence in a machine is linked to how ‘human’ its behavior seems – and by that measure, computers are frankly still quite dumb. Giving a robot or a computer the ability to respond (in soft real time) to interruptions, or to partially understand the topic of a dialogue and ask for more targeted clarification, will make the system more useful – and certainly much more pleasant to interact with. CoCoMaps proposes to build a strong foundation for robots to acquire sufficient dialogue skills to be competent participants in human-robot dialogue, without annoying protocols or unexpected breakdowns – breakdowns that might find an interesting place in a sci-fi novel but are anything but welcome when you’re trying to get a job done.
Today the CoCoMaps project officially launched! We are very excited to set off on this journey after months of preparation. The aim of the project is to endow robots with the skills to communicate with humans in real time using natural dialogue. If successful, these improved communication skills will enable much more sophisticated human-robot collaboration and open doors for robots to work alongside humans in human-centric environments such as factories, hospitals, and the entertainment business. The possibilities are endless. Stay tuned for updates on the project’s progress!