If we know exactly what an object is and what its pose is, many opportunities arise. The following is an incomplete list of upgrades that would make Liatris a powerful development library.

  • Improvements in the multi-touch capabilities of resistive touch screens, which can be activated by any material, may eliminate the requirement for metal contact points on objects.
  • Add pitch and roll to the API’s grasp values (the metadata sketch after this list illustrates this and several of the related items below).
  • The API should include alternative grasp plans for cases where the primary plan would result in a collision.
  • Define maximum and minimum grip forces for specific objects.
  • The API could include the object’s expected weight, allowing the robot to infer whether the object is empty by comparing the measured weight to the expected weight.
  • Kivy also supports on_touch_move() and on_touch_up(), which would allow humans to move objects on the touch screen while keeping Liatris in sync (see the Kivy sketch after this list).
  • Multiple companies are working on touch screens that can also read RFID tags.
  • In an ideal world, the LP would also be the RFID tag. This would allow Liatris to identify which point is the LP and therefore eliminate one of the two OPs.
  • Allowed paths can be defined per object. For example, a rule could state that a cup must never be turned on its side while being moved, or that a knife blade must always face toward the robot. The API should be able to express a wide collection of such rules (the constraint sketch after this list shows one way to encode the cup rule).
  • Cameras or LiDAR can be used to determine the state of objects that have multiple possible states. For example, once the exact object and pose are known, a camera can check whether a pot has its lid on, based on rules specified in the API.
  • The API could specify a maximum velocity for moving delicate objects or those carrying liquids.
  • When an object is grasped, it should be added to MoveIt as a collision object attached to the gripper; one option is attach_mesh() from MoveIt’s planning scene interface (see the attachment sketch after this list). This prevents the grasped object itself from colliding with the environment while it is being carried. Similarly, the RFID antenna’s CAD model should be attached to the right gripper.
  • A building could have many capacitive touch screens covering all work surfaces. Each screen would publish its exact location and identity on the local network, allowing a mobile robot to navigate to a specific screen and interact with the objects on it (a discovery sketch follows this list). Any robot, potentially one of many in the building, would then know exactly where everything is. The result would be a true “internet of things” experience with humans and robots working together.
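
Several of the API items above (pitch and roll in grasp values, alternative grasp plans, grip-force limits, expected weight, and velocity caps) amount to richer per-object metadata. As a minimal sketch, with field names that are assumptions and do not exist in Liatris today, such a record could look like this:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class GraspPlan:
    """One way to grasp an object; all fields are hypothetical."""
    x: float      # grasp point in the object frame, meters
    y: float
    z: float
    yaw: float    # radians
    pitch: float  # proposed addition
    roll: float   # proposed addition


@dataclass
class ObjectMetadata:
    """Hypothetical per-object record a Liatris-style API could serve."""
    object_id: str                        # e.g. the RFID tag's unique ID
    primary_grasp: GraspPlan
    alternative_grasps: List[GraspPlan] = field(default_factory=list)
    min_grip_force: float = 0.0           # newtons
    max_grip_force: float = 0.0           # newtons
    empty_weight: float = 0.0             # kilograms, when empty
    max_velocity: Optional[float] = None  # m/s cap for delicate/liquid loads

    def is_probably_empty(self, measured_weight: float,
                          tolerance: float = 0.05) -> bool:
        """Compare a measured weight against the expected empty weight."""
        return abs(measured_weight - self.empty_weight) <= tolerance
```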
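
The on_touch_move() and on_touch_up() callbacks are part of Kivy's standard widget API. The sketch below shows how a surface widget might use them to track objects a human slides across the screen; the print calls stand in for a hypothetical Liatris world-model update:

```python
from kivy.app import App
from kivy.uix.widget import Widget


class LiatrisSurface(Widget):
    """Tracks touches so object motion on the screen stays in sync."""

    def on_touch_down(self, touch):
        # A new contact appeared; Liatris would match it to an object here.
        print(f"contact {touch.uid} down at {touch.pos}")
        return True

    def on_touch_move(self, touch):
        # The contact moved: a human slid an object across the screen,
        # so the object's pose in the world model should be updated.
        print(f"contact {touch.uid} moved to {touch.pos}")
        return True

    def on_touch_up(self, touch):
        # The contact lifted: the object left the screen.
        print(f"contact {touch.uid} up at {touch.pos}")
        return True


class LiatrisApp(App):
    def build(self):
        return LiatrisSurface()


if __name__ == "__main__":
    LiatrisApp().run()
```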
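
One class of per-object rules, such as keeping a cup upright, maps naturally onto MoveIt's path constraints. The following sketch uses MoveIt's real Python API, though the move group name and frame name are assumptions:

```python
import sys

import moveit_commander
import rospy
from moveit_msgs.msg import Constraints, OrientationConstraint

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("liatris_rules_demo")
group = moveit_commander.MoveGroupCommander("right_arm")  # group name is an assumption

# Keep the gripper (and thus the cup) upright while moving.
upright = OrientationConstraint()
upright.header.frame_id = "base_link"        # frame name is an assumption
upright.link_name = group.get_end_effector_link()
upright.orientation.w = 1.0                  # identity: the current upright pose
upright.absolute_x_axis_tolerance = 0.1      # radians of allowed tilt
upright.absolute_y_axis_tolerance = 0.1
upright.absolute_z_axis_tolerance = 3.14     # free rotation about the vertical axis
upright.weight = 1.0

constraints = Constraints()
constraints.orientation_constraints.append(upright)
group.set_path_constraints(constraints)
# ... plan and execute moves here; the planner now rejects tipping motions ...
group.clear_path_constraints()
```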
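
attach_mesh() is provided by moveit_commander's PlanningSceneInterface. The sketch below shows the attachment step; the link names, mesh path, and touch links are placeholders:

```python
import rospy
from geometry_msgs.msg import PoseStamped
from moveit_commander import PlanningSceneInterface

rospy.init_node("liatris_attach_demo")
scene = PlanningSceneInterface()

# Pose of the object relative to the gripper link (identity is an assumption).
pose = PoseStamped()
pose.header.frame_id = "r_gripper_tool_frame"  # link name is an assumption
pose.pose.orientation.w = 1.0

# Attach the grasped object's CAD mesh to the gripper so MoveIt plans around
# it; touch_links lists the links allowed to remain in contact with the object.
scene.attach_mesh(
    link="r_gripper_tool_frame",
    name="grasped_cup",
    pose=pose,
    filename="/path/to/cup.stl",               # placeholder path
    touch_links=["r_gripper_l_finger_tip_link",
                 "r_gripper_r_finger_tip_link"],
)

# The same call could attach the RFID antenna's CAD model to the right gripper.
```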
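
The building-wide touch screen idea implies a discovery protocol so robots can find the screens. As a purely hypothetical sketch, each screen could periodically broadcast its identity and pose over UDP; the message schema and port number below are invented for illustration:

```python
import json
import socket
import time

BROADCAST_PORT = 50037  # arbitrary choice for this sketch

# What one screen announces: its identity and pose in a shared map frame.
announcement = {
    "screen_id": "kitchen-counter-01",  # hypothetical identifier
    "pose": {
        "frame": "building_map",
        "x": 4.20, "y": 1.75, "z": 0.90,
        "yaw": 1.5708,
    },
    "size_m": {"width": 0.60, "height": 0.40},
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    # Mobile robots listening on this port learn where every screen is
    # and can navigate to one to interact with the objects on it.
    sock.sendto(json.dumps(announcement).encode(), ("<broadcast>", BROADCAST_PORT))
    time.sleep(5.0)
```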