Project CoCoMaps Blog

  • Final Report

    The CoCoMaps project has now been completed and all deliverables and milestones have been met. We have produced the final project report, which is available here. You can also still catch the project videos and all the reports on this page. We thank the ECHORD++ project and the EU for their support, without which none of this would have been possible.

  • Demo 3 Video and Report

    The video and the report for Demo 3 are now available to view.

  • Demo 3

    The fourth and final demonstration of the project was completed successfully today. The two robots collaborated with two humans via dialogue to complete a set of tasks specified by the humans. Information needed to complete the tasks was detected dynamically and obtained by asking the humans to provide it. The video and the report will be posted here when ready.

  • CoCoMaps Platform v1.0.90 Released

    We have today released all the open source parts of the CoCoMaps project. Please visit the download page to get them. Do remember that the project runs inside the Psyclone Platform, so you will need that as well; it can be downloaded here.

  • Psyclone Platform v2.0.1 Released

    We have today released the final version of the Psyclone platform, now officially out of Beta. Please visit the download page and grab either the source code or the binary release. Work will continue on Psyclone beyond the end of the project and we will keep this page updated with new versions.

  • Demo 2 Video and Report

    The video and report for Demo 2 are now available to view.

  • Demo 2

    We have completed Demo 2 of the project and the report and video are almost ready for publication. The robots behaved very well and worked together nicely to complete the tasks. They start by searching for humans to speak to (as in Demo 1), and once a human has been found, one of the robots strikes up a conversation to collaborate on a task, which the other robot helps with while the conversation is in progress. We have found a number of issues which didn’t directly affect Demo 2, but which we definitely need to fix for Demo 3, such as tasks getting out of sync and role assignments playing up. Luckily the cameras and the vision code work much better now.

  • Last Push

    We are heading into the last two months of the project, scheduled to finish on 31 March 2018. We have completed two demonstrations (Demo 0 and Demo 1) and are working hard on the final elements for the remaining two (Demo 2 and Demo 3). Our speech interaction is improving day by day and the robots’ 2D and 3D vision is now ready.

  • A new look

    Our robots now boast a new, longer neck for a raised camera position. It makes them look like giraffes, but it’s part of a set of improvements being implemented for our next demo. These also include testing a new camera with better handling of light, implementing better on-board face detection and lowering the volume of network traffic.

  • Demo 1 – Video

    Demo 1 went well with the robots successfully identifying their human partners – and the video editing is finally complete. In this video you can watch the full demo.

  • Demo 1 – Early data analysis results

    Demo 1 produced a lot of data and we are working our way through it now. Preliminary figures suggest that the CCMCatalog enabled the robots to collaborate far more efficiently and that they were able to detect and track humans in the scene. Stay tuned for more results…

  • Showtime

    The stage has now been set for our robots to show off their capabilities so far. The robots are to map out an area which they scan for humans using search patterns. Each human detected will be identified using facial recognition. All information is instantly shared with the other robot via the CCMCatalog. Let’s see how they’ll do!

  • Robot jitters before first demo

    Preparation of our robots is now in full swing to get them up and running for our first demo. The on-board cameras all need to run at the same time, which we have been testing, along with getting the robots to communicate through the CCMCatalog. The first trial run went pretty well, but afterwards the processing server decided to (literally) blow up. This meant spending time finding a replacement server, after already having lost precious time to prolonged discussions with the vendor about robot battery delivery.

  • Planning the first demo

    The CCMCatalog is almost done and can handle the sharing and negotiation of observations and tasks between the two robots. Face detection has gone in, and the new body-and-legs detector is being tested. Basic navigation is working, and we are now building the operator interface in the PsyProbe web interface.

  • New cool features in Psyclone 2.0: Python integration

    Psyclone 2.0 already supports user-authored native C++ modules running in a mixed Windows/Linux cluster environment. Now we have added native Python 2.7 and 3.5 support, so our users can create either inline or separate modules written entirely in Python. As part of the CoCoMaps project we are now integrating with ROS (Robot Operating System) via Python.
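
    To give a flavour of the kind of Python-side ROS glue this refers to, here is a minimal sketch using the standard rospy API. It is purely illustrative: the topic name and the forward_to_psyclone callback are hypothetical placeholders, not Psyclone's actual Python module interface.

      # Minimal sketch of a ROS (rospy) node that forwards speech hypotheses
      # to a hypothetical Psyclone-side callback. Topic and callback names are
      # assumptions for illustration only.
      import rospy
      from std_msgs.msg import String

      def forward_to_psyclone(text):
          # Hypothetical placeholder: a real integration would hand the message
          # to Psyclone's own Python module interface.
          rospy.loginfo("Forwarding to Psyclone: %s", text)

      def on_speech(msg):
          forward_to_psyclone(msg.data)

      if __name__ == "__main__":
          rospy.init_node("speech_bridge")
          # Subscribe to an (illustrative) speech-hypothesis topic and spin.
          rospy.Subscriber("speech_hypothesis", String, on_speech)
          rospy.spin()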

  • New Psyclone 2.0 platform going Open Source soon

    Later this year we plan to release version 2.0 of the Psyclone platform. It is a complete rewrite of Psyclone 1.5 with all the old features intact, plus massive performance improvements, merged discrete and streaming data messaging, the introduction of drumbeat signals for simulations, and a completely new web interface, PsyProbe 2.0. You can read more about Psyclone 1 at https://cmlabs.com/products.

  • CoCoMaps at the Hannover Messe 2017

    The CoCoMaps team is in Hannover demonstrating our turn-taking technology for robot-human natural conversations. You can find us in Hall 17 at the ECHORD stand C70. We are here with three other cool robotics projects and attracting a lot of attention.

  • Robot-human turn-taking now running on the Psyclone 2 platform

    We now have turn-taking for two participants (one robot and one human) up and running on the Psyclone 2 platform. We use Nuance for speech recognition and speech generation and can handle interruptions and long pauses. Now we just need to work on starting and ending the conversation 🙂

  • CoCoMaps invited to exhibit at Hannover Messe 2017

    The CoCoMaps project has been invited to exhibit our technology at the Hannover Messe 2017 in Germany. We will be showcasing our turn-taking technology running on the Psyclone 2 platform.

  • CoCoMaps at CES 2017

    The CoCoMaps team is networking at CES 2017 in Las Vegas, meeting up with a number of potential vendors, partners and customers.

  • How Would Siri, Google Assistant Improve With Better Turntaking?

    Although Siri can be quite helpful and at times pleasant to communicate with, she still lacks the flexibility of a human dialogue partner, which gives conversations with her an unnatural flow. In fact Siri – like other robots – is lacking in her turn-taking ability. To communicate somewhat smoothly with robots one needs to understand how they operate and communicate accordingly with them. One needs to: Say the wake-up command. Wait until it claims to be ready. Now speak a single sentence without any pauses or hesitation. If you change your mind half way you MUST finish anyway and start with step 1 again. If it misunderstands anything you MUST finish anyway and start with step 1 again. If it needs clarification you MUST finish anyway and start with step 1 again. Whatever you want, you must ALWAYS finish anyway and start with step 1 again. However, most people do not understand how robots operate and process information. When ‘the Average Joe’ tries to use our Amazon Echo he wants to be able to: Pause to think about the rest of the […]
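
    For illustration only, the rigid "step 1" protocol described above can be written down as a toy finite-state loop; the wake phrase and behaviour below are hypothetical and do not reflect any real assistant's API.

      # Toy sketch of a rigid wake-word protocol: one uninterrupted utterance
      # per turn, then always back to waiting for the wake word.
      def rigid_assistant(utterances):
          state = "WAIT_FOR_WAKE_WORD"
          for utterance in utterances:
              if state == "WAIT_FOR_WAKE_WORD":
                  if utterance == "hey assistant":
                      print("assistant: ready")
                      state = "LISTEN_ONCE"
              elif state == "LISTEN_ONCE":
                  if "..." in utterance:          # any hesitation ends the turn
                      print("assistant: sorry, I didn't get that")
                  else:
                      print(f"assistant: doing '{utterance}'")
                  state = "WAIT_FOR_WAKE_WORD"    # always restart from step 1

      rigid_assistant([
          "hey assistant",
          "play some... hmm",   # hesitation: the request is simply lost
          "hey assistant",
          "play some jazz",
      ])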

  • What is Turntaking in Dialogue?

    In most typical dialogue, humans use turntaking naturally – one person waits for a natural break in the other person’s speech before starting to speak or answer. Computers and robots, however, are not great at this – generally, they are actually very bad, as we have all experienced when calling a phone-based automatic speech recognition system and ending up screaming ‘No, that wasn’t what I meant’ at the automaton. Natural turntaking takes a lot of practice to get right and has to adapt naturally to the person you speak to. The CoCoMaps project takes existing turntaking technology created by IIIM and CMLabs and merges it with the Cognitive Map architecture, allowing the robot (or computer) not only to do natural turntaking with one person but to broaden this out to work with two or more humans and two or more robots. We call this multi-participant turntaking. The technology will allow communication between humans and robots to flow much more naturally and provide enhanced information to the dialogue planning part of the system.

  • Making Robots Smarter With a Better Cognitive Architecture

    Simple robots operate through a perception-action cycle, or loop, which allows them to make decisions and form a response based on the information they receive on each cycle. Adding layers to a robot’s perception-action cycle enables it to “think outside the loop”, supporting parallel information processing on different levels simultaneously, making it more responsive and – other things being equal – smarter. The design of such layers is done in what we call a robot’s cognitive architecture. A cognitive architecture is typically a mixture of software and hardware which facilitates the perception of input from the environment (cameras, microphones, other sensors) and the processing thereof to produce responses that can be executed by the robot’s on-board motors, speakers, and screens. It must be able to process this input in real time, in light of the current set of contexts (goals, tasks, expectations), and plan a suitable set of soft-real-time outputs (visuals, speech, motor commands). The bottom layer of a cognitive architecture deals with physical inputs and outputs and the top layer deals with decision making. By adding layers in between, processing and reacting can progress incrementally, with increased foresight – a broadening of the time and data that […]
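
    As a rough illustration of the layering idea, here is a minimal, hypothetical Python sketch (not the CoCoMaps architecture itself): a fast reactive layer acknowledges each percept immediately, while a slower deliberative layer re-plans over the accumulated context in parallel.

      # Toy two-layer perception-action loop: reactive layer responds per percept,
      # deliberative layer periodically reconsiders the whole context.
      import queue
      import threading
      import time

      percepts = queue.Queue()   # raw sensor readings (bottom-layer input)
      context = []               # shared context the deliberative layer reasons over

      def reactive_layer():
          # Bottom layer: respond to each percept right away (reflex-like behaviour).
          while True:
              percept = percepts.get()
              context.append(percept)
              print(f"react: acknowledging {percept!r} immediately")

      def deliberative_layer():
          # Top layer: re-plan on a slower beat over everything seen so far.
          while True:
              time.sleep(1.0)
              if context:
                  print(f"plan: reconsidering goals given {len(context)} percepts")

      threading.Thread(target=reactive_layer, daemon=True).start()
      threading.Thread(target=deliberative_layer, daemon=True).start()

      # Feed in a few simulated percepts.
      for reading in ["face detected", "speech: 'hello'", "obstacle ahead"]:
          percepts.put(reading)
          time.sleep(0.3)
      time.sleep(1.5)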

  • Why are Robots still so Dumb?

    Ever tried conversing with a robot on the other end of a telephone customer service line? Now try adding a slight delay due to indeterminism and prepare for a frustrating interruption followed by a back and forth between incorrect options, cheerfully pursued by the robot. Our recently launched project CoCoMaps aims to make robots better at processing the information needed for better social skills, and ultimately to make them smarter and more pleasant to interact with. The perception of intelligence in a machine is linked to how ‘human’ its behavior seems. But by that measurement computers are frankly still quite dumb. Providing the ability for a robot or a computer to respond (in soft-real-time) to interruptions, or to partially understand the topic of a dialogue and ask for more targeted clarification, will make the system more useful – and certainly much more pleasant to interact with. CoCoMaps proposes to build a strong foundation for robots to acquire sufficient dialogue skills to be competent participants in human-robot dialogue without annoying protocols or unexpected breakdowns – breakdowns that might find an interesting place in a sci-fi novel but are anything but welcome when you’re trying to get a job done.

  • And so the Journey begins

    Today the CoCoMaps project officially launched! We are very excited to set off on this journey after months of preparation. The aim of the project is to endow robots with the skills to communicate with humans in real time using natural dialogue. If successful, these improved communication skills will enable much more sophisticated human-robot collaboration and open doors for robots to work alongside humans in human-centric environments, such as factories, hospitals and the entertainment business. The possibilities are endless. Stay tuned for updates on the project progress!

  • CoCoMaps – Teaming up with the Robots

    The start of the CoCoMaps project on 1 September 2016 is drawing near – here is a link to a great introduction article by IIIM: CoCoMaps – Teaming up with the Robots

  • CMLabs and IIIM join ECHORD++ project CoCoMaps

    A joint proposal by CMLabs and IIIM for the project CoCoMaps has been accepted by the ECHORD++ EU consortium for funding starting from September 2016.
