Robot Operating System (ROS): The Complete Reference (Volume 1)

Studies in Computational Intelligence, Volume 625
Anis Koubaa (Editor)

Series editor: Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland (e-mail: kacprzyk@ibspan.waw.pl)

About this Series

The series "Studies in Computational Intelligence" (SCI) publishes new developments and advances in the various areas of computational intelligence, quickly and with high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and the life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution, which enable both wide and rapid dissemination of research output. More information about this series at http://www.springer.com/series/7092.

Editor: Anis Koubaa, Prince Sultan University, Riyadh, Saudi Arabia; and CISTER/INESC-TEC, ISEP, Polytechnic Institute of Porto, Porto, Portugal.

ISSN 1860-949X; ISSN 1860-9503 (electronic). ISBN 978-3-319-26052-5; ISBN 978-3-319-26054-9 (eBook). DOI 10.1007/978-3-319-26054-9. Library of Congress Control Number: 2015955867. © Springer International Publishing Switzerland 2016.

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. Printed on acid-free paper. Springer International Publishing AG Switzerland is part of Springer Science+Business Media (www.springer.com).

Preface

ROS is an open-source robotic middleware for the large-scale development of complex robotic systems. Although the research community is quite active in developing applications with ROS and extending its features, the number of references does not reflect the huge amount of work being done. The objective of this book is to provide the reader with comprehensive coverage of the Robot Operating System (ROS), currently considered the main development framework for robotics applications, and the latest related systems. There are 27 chapters organized into eight parts. Part I presents the basics and foundations of ROS. In Part II, four chapters deal with navigation, motion and planning. Part III provides four examples of service and experimental robots. Part IV deals with real-world deployment of applications. Part V presents signal-processing tools for perception and sensing. Part VI provides software engineering methodologies to design complex software with ROS. Simulation frameworks are presented in Part VII. Finally, Part VIII presents advanced tools and frameworks for ROS, including a multi-master extension, network introspection, controllers and cognitive systems. I believe that this book will be a valuable companion for ROS users and developers who want to learn more about ROS capabilities and features.

January 2016
Anis Koubaa

Acknowledgements

The editor would like to acknowledge the support of King Abdulaziz City for Science and Technology (KACST) through the funded research project entitled "MyBot: A Personal Assistant Robot Case Study for Elderly People Care" under grant number 34-75, and also the support of Prince Sultan University.

Acknowledgements to Reviewers

The editor would like to thank the following reviewers for their great contributions to the review process of the book by providing quality feedback to the authors.

Yasir Javed — Prince Sultan University
André S. De Oliveira — Universidade Tecnológica Federal do Paraná
Bence Magyar — PAL Robotics
Alfredo Soto — Freescale Semiconductors
Huimin Lu — National University of Defense Technology
Dirk Thomas — Open Source Robotics Foundation
Walter Fetter Lages — Universidade Federal do Rio Grande do Sul
Roberto Guzman — Robotnik
Joao Fabro — Universidade Tecnológica Federal do Paraná
Ugo Cupcic — Shadow Robot Company Ltd
Rafael Bekrvens — University of Antwerp
Andreas Bihlmaier — Karlsruhe Institute of Technology (KIT)
Timo Röhling — Fraunhofer FKIE
Maram Alajlan — Al-Imam Mohamed bin Saud University
Timm Linder — Social Robotics Lab, University of Freiburg
Fadri Furrer — Autonomous Systems Lab, ETH Zurich
Péter Fankhauser — ETH Zurich
William Woodall — Open Source Robotics Foundation
Jenssen Chang — Gaitech International Ltd
William Morris — Undefined
Markus Achtelik — Autonomous Systems Lab, ETH Zurich
Chienliang Fok — University of Texas at Austin
Mohamedfoued Sriti — Al-Imam Muhammad Ibn Saud Islamic University
Steven Peters — Open Source Robotics Foundation

Contents

Part I — ROS Basics and Foundations
• MoveIt!: An Introduction — Sachin Chitta
• Hands-on Learning of ROS Using Common Hardware — Andreas Bihlmaier and Heinz Wörn (29)
• Threaded Applications with the roscpp API — Hunter L. Allen (51)

Part II — Navigation, Motion and Planning
• Writing Global Path Planners Plugins in ROS: A Tutorial — Maram Alajlan and Anis Koubâa (73)
• A Universal Grid Map Library: Implementation and Use Case for Rough Terrain Navigation — Péter Fankhauser and Marco Hutter (99)
• ROS Navigation: Concepts and Tutorial — Rodrigo Longhi Guimarães, André Schneider de Oliveira, João Alberto Fabro, Thiago Becker and Vinícius Amilgar Brenner (121)
• Localization and Navigation of a Climbing Robot Inside a LPG Spherical Tank Based on Dual-LIDAR Scanning of Weld Beads — Ricardo S. da Veiga, Andre Schneider de Oliveira, Lucia Valeria Ramos de Arruda and Flavio Neves Junior (161)

Part III — Service and Experimental Robots
• People Detection, Tracking and Visualization Using ROS on a Mobile Service Robot — Timm Linder and Kai O. Arras (187)
• A ROS-Based System for an Autonomous Service Robot — Viktor Seib, Raphael Memmesheimer and Dietrich Paulus (215)
• Robotnik—Professional Service Robotics Applications with ROS — Roberto Guzman, Roman Navarro, Marc Beneto and Daniel Carbonell (253)
• Standardization of a Heterogeneous Robots Society Based on ROS — Igor Rodriguez, Ekaitz Jauregi, Aitzol Astigarraga, Txelo Ruiz and Elena Lazkano (289)

Part IV — Real-World Applications Deployment
• ROS-Based Cognitive Surgical Robotics — Andreas Bihlmaier, Tim Beyl, Philip Nicolai, Mirko Kunze, Julien Mintenbeck, Luzie Schreiter, Thorsten Brennecke, Jessica Hutzl, Jörg Raczkowsky and Heinz Wörn (317)
• ROS in Space: A Case Study on Robonaut — Julia Badger, Dustin Gooding, Kody Ensley, Kimberly Hambuchen and Allison Thackston (343)
• ROS in the MOnarCH Project: A Case Study in Networked Robot Systems — João Messias, Rodrigo Ventura, Pedro Lima and João Sequeira (375)
• Case Study: Hyper-Spectral Mapping and Thermal
Analysis — William Morris (397)

Part V — Perception and Sensing
• A Distributed Calibration Algorithm for Color and Range Camera Networks — Filippo Basso, Riccardo Levorato, Matteo Munaro and Emanuele Menegatti (413)
• Acoustic Source Localization for Robotics Networks — Riccardo Levorato and Enrico Pagello (437)

Part VI — Software Engineering with ROS
• ROS Web Services: A Tutorial — Fatma Ellouze, Anis Koubâa and Habib Youssef (463)
• rapros: A ROS Package for Rapid Prototyping — Luca Cavanini, Gionata Cimini, Alessandro Freddi, Gianluca Ippoliti and Andrea Monteriù (491)
• HyperFlex: A Model Driven Toolchain for Designing and Configuring Software Control Systems for Autonomous Robots — Davide Brugali and Luca Gherardi (509)

LIDA Bridge—A ROS Interface to the LIDA … — T. Becker et al. (p. 714)

…the factories' definitions. Further lines define the user-interface property files and tell whether the LIDA user interface will be used; finally, line 11 defines a logging properties file, for setting up the system logs. With all these files created, we have the following project structure:

• myagent
  – src — folder to store all the source code of the agent
  – lib — folder to store the library dependencies
  – configs — folder to store the configuration files:
    basicAgent.xml
    factoryData.xml
    guiPanels.properties
    guiCommands.properties
    logging.properties

We will be editing these files throughout this tutorial, but an example version of each file comes with the LIDA Framework. Now let us turn to the Java source code. In order to get our agent running, we need to define a static main method to serve as the entry point of the system. We will create a file called Run.java with a proper class to hold this main method.

Listing 1.2 Run.java

```java
package myagent;

import edu.memphis.ccrg.lida.framework.initialization.AgentStarter;

public class Run {
    public static void main(String[] args) {
        AgentStarter.main(args);
    }
}
```

In line 7, the method AgentStarter.main is invoked in order to start the LIDA Framework agent; this method will read the configuration files and set the agent running. Our agent won't run right now, because some code and configuration are still missing; let us create them. Earlier in this chapter it was said that the LIDA Framework's Environment module is problem-dependent, and we need to implement this module now. LIDA Bridge comes with an implemented environment module that provides an environment almost ready to use in our ROS/LIDA navigator. Although the LIDA Bridge environment module provides a ROS environment, we still need to specify some robot-specific settings. We will create a class called MyRobot that inherits from lidabridge.ROSEnvironment. Inside this class we will set up the ROS topics we will be subscribing to and publishing from our system, as well as the actions that, when LIDA decides to take them, will affect the ROS system. We will do all this customization in the constructor of our class, because all of these configurations must be ready before our agent starts. The code below shows the Environment module, inherited from ROSEnvironment, with all the customizations needed for our example. The first thing that appears in our constructor (line 21) is the setRosAddr method from the ROSEnvironment class. It sets the URI (Uniform Resource Identifier) which points to the ROS Bridge node, so the agent can communicate with it through web sockets. Lines 13–18 define several objects, each one representing a ROS topic that is advertised or subscribed by the agent: the PoseStampedTopic class manages a ROS topic with the PoseStamped message type, and the StringTopic class manages a ROS topic with the String message type. Line 13 defines a PoseStampedTopic named robotPosition, which will be used to handle the position of our robot. Line 23 instantiates the robotPosition object with the following arguments: the ROS topic name, an alias to be used by LIDA Bridge, and a TopicAccessType that tells if the topic will
be published or subscribed. Besides robotPosition, we create three other topics for managing positions: mainGoal, which handles the final goal the robot must pursue; goal, which handles the temporary positions LIDA uses to send the goal to the ROS Navigation Stack; and person, which represents the position of the person (or any other mobile obstacle). Lines 17 and 18 create two objects of the StringTopic class: the status topic will be used to publish the last action taken by the agent, and the message topic will publish a simple message depending on the action taken. The StringTopic constructor has the same parameters as the PoseStampedTopic constructor. Lines 26–40 instantiate all the other topics, just like robotPosition.

In lines 46–65, two Actions are created and added to the action list of the ROSEnvironment. An Action is a LIDA Bridge class used to define what the agent must do when a decision is taken by LIDA. The actions in this example are created with two anonymous classes that inherit from the Action class. The constructor takes two parameters: the ROSEnvironment object related to the action, and the LIDA command that results from LIDA's action-selection mechanism (commands will be discussed later in this chapter). Every Action must override the doWork method. In this example the two actions correspond to the algorithm.goal and algorithm.stop commands. The first is selected when LIDA decides the robot must continue going to the goal: line 49 sets a simple message, and line 51 sets the goal topic to the mainGoal value; as the goal topic is sent to the Navigation Stack every cycle, the robot keeps moving towards the goal. Line 52 sets the status of the robot to goal, which tells LIDA that the last action taken by the robot was to move towards the goal. The second Action sets the goal to the current robot position, so the robot stops.

Listing 1.3 MyRobot.java

```java
package myagent;

import lidabridge.Action;
import lidabridge.ROSEnvironment;
import lidabridge.topic.PoseStampedTopic;
import lidabridge.topic.StringTopic;
import lidabridge.topic.TopicAccessType;

public class MyRobot extends ROSEnvironment {

    private PoseStampedTopic robotPosition;
    private PoseStampedTopic mainGoal;
    private PoseStampedTopic goal;
    private PoseStampedTopic person;
    private StringTopic status;
    private StringTopic message;

    public MyRobot() {
        setRosAddr("ws://localhost:9090");

        robotPosition = new PoseStampedTopic("odom", "agentpose",
                TopicAccessType.SUBSCRIBED);
        this.getTopics().add(robotPosition);

        mainGoal = new PoseStampedTopic("/goal", "maingoal",
                TopicAccessType.SUBSCRIBED);
        this.getTopics().add(mainGoal);

        goal = new PoseStampedTopic("/move_base_simple/goal", "goal",
                TopicAccessType.ADVERTISED);
        this.getTopics().add(goal);

        person = new PoseStampedTopic("/person_pose", "person",
                TopicAccessType.SUBSCRIBED);
        this.getTopics().add(person);

        message = new StringTopic("/message", "message",
                TopicAccessType.ADVERTISED);
        this.getTopics().add(message);

        status = new StringTopic("/status", "status",
                TopicAccessType.ADVERTISED);
        status.setValue("stop");
        this.getTopics().add(status);

        // Actions
        // Action for the robot to follow the main goal
        this.getActions().add(new Action(this, "algorithm.goal") {
            @Override
            public void doWork() {
                message.setValue("Following towards the goal");
                goal = mainGoal;
                status.setValue("goal");
            }
        });

        // Action for the robot to stop
        this.getActions().add(new Action(this, "algorithm.stop") {
            @Override
            public void doWork() {
                message.setValue("Stop");
                goal.setX(robotPosition.getX());
                goal.setY(robotPosition.getY());
                goal.setOrientation(robotPosition.getOrientation());
                status.setValue("stop");
            }
        });
    }
}
```

Now, thanks to the LIDA Bridge library, our environment communication is ready, and we just need to set up the standard LIDA settings. The next thing we need to do to accomplish this task is to implement our perception codelets. These codelets monitor the environment and, when they recognize a particular situation, activate the corresponding nodes in the Perceptual Associative Memory (PAM). The following code shows a codelet that activates the node ofar (obstacle is far) or onear (obstacle is near) in PAM, depending on the distance between the robot and the person in our environment. In the init method (lines 21–27), the nodes are taken from the PAM by name using the getNode method. The detectLinkables method is responsible for executing the detection and activating the nodes: lines 32 and 35 read the positions of the person and the agent, respectively; then a simple Euclidean distance is computed, and if the distance is less than 1.2 meters the onear node is excited (line 50), otherwise the ofar node is excited (line 53).

Listing 1.4 ObstacleDistanceDetector.java

```java
package myagent.featuredetectors;

import java.util.HashMap;
import java.util.Map;

import lidabridge.topic.PoseStampedTopic;

import edu.memphis.ccrg.lida.pam.PamLinkable;
import edu.memphis.ccrg.lida.pam.tasks.DetectionAlgorithm;
import edu.memphis.ccrg.lida.pam.tasks.MultipleDetectionAlgorithm;

public class ObstacleDistanceDetector extends MultipleDetectionAlgorithm
        implements DetectionAlgorithm {

    private PamLinkable near;
    private PamLinkable far;
    private PamLinkable obstaclenode;
    private final String modality = "";
    private Map<String, Object> detectorParams = new HashMap<>();

    @Override
    public void init() {
        super.init();
        near = (PamLinkable) pam.getNode("onear");
        far = (PamLinkable) pam.getNode("ofar");
        obstaclenode = (PamLinkable) pam.getNode("obstacle");
    }

    @Override
    public void detectLinkables() {
        detectorParams.put("mode", "person");
        PoseStampedTopic obstacle = (PoseStampedTopic)
                sensoryMemory.getSensoryContent(modality, detectorParams);

        detectorParams.put("mode", "agentpose");
        PoseStampedTopic agent = (PoseStampedTopic)
                sensoryMemory.getSensoryContent(modality, detectorParams);

        if ((obstacle == null) || (agent == null))
            return;

        // Compute the Euclidean distance between the agent and the obstacle
        double distance;
        Double dx = Math.pow(obstacle.getX() - agent.getX(), 2);
        Double dy = Math.pow(obstacle.getY() - agent.getY(), 2);
        distance = Math.sqrt(dx + dy);

        pam.receiveExcitation(obstaclenode, 1);
        if (distance < 1.2)
            pam.receiveExcitation(near, 1);
        else
            pam.receiveExcitation(far, 1);
    }
}
```

One more perceptual codelet is needed in our example, to detect the status of the robot and excite the nodes corresponding to the two possible actions of the robot, stop and goal. You can write it yourself or download the full code of this example from the LIDA Bridge page. The last piece of code needed is a main class to start our agent; this is accomplished by invoking the main method of the AgentStarter class, as shown in Listing 1.5. This class reads all the configuration from the XML files and runs the LIDA Framework with the proposed setup.

Listing 1.5 Run.java

```java
package myagent;

import edu.memphis.ccrg.lida.framework.initialization.AgentStarter;

public class Run {
    public static void main(String[] args) {
        AgentStarter.main(args);
    }
}
```

Finally, we need to set up things in LIDA's XML files. In the lidaConfig.properties file we defined a basicAgent.xml. This file is the core of LIDA's agent declaration, where we can tell LIDA how every module will work. The full documentation of the file format can be found in the LIDA project documentation, and an example file can also be found on the LIDA Bridge page. Here we will focus only on what is
specifically associated with the problem proposed in our example.

Listing 1.6 basicAgent.xml (Environment) — only the element values survive in this copy, in order:

    myagent.MyRobot
    1
    defaultTS
    lidabridge.modules.ROSSensoryMemory
    Environment
    defaultTS
    SensoryMemoryBackgroundTask
    5

In the modules section we need to specify the environment module we developed: to do so, in the Environment module configuration we set the class property to our MyRobot class, so that when the agent starts it will use our class as the environment module. Another custom module that must be configured is the SensoryMemory module, because we will be using LIDA Bridge's sensory memory; for this you must set its class to lidabridge.modules.ROSSensoryMemory, as shown in Listing 1.6.

In the PerceptualAssociativeMemory module configuration, we define the nodes that will represent the perceptual model of the environment; we do so with the nodes section of the module configuration. Listing 1.7 shows the nodes section filled with the nodes obstacle, onear, ofar, robot, rstop and rgoal. The node obstacle represents the person moving in the environment; the nodes onear and ofar represent the distance of the agent from the obstacle (as defined before in ObstacleDistanceDetector); the node robot, as the name says, represents the robot itself; and the nodes rstop and rgoal represent the two possible states: the robot has stopped or is moving towards the goal. Still in the PAM configuration, there is a section named links, which defines the links between the nodes in PAM; as the examples below show, each link is written as a node:child pair. In Listing 1.7 we define four links: obstacle:onear, obstacle:ofar, robot:rstop, robot:rgoal. These nodes and links will be enough for our simple example. In order to monitor the sensory data and excite the PAM nodes, we developed two codelets, or feature detectors. The PAM configuration also has to include the feature detectors' information: the perception codelets are declared by adding a task entry for each codelet.

Listing 1.7 basicAgent.xml (PerceptualAssociativeMemory) — element values only, as in Listing 1.6:

    edu.memphis.ccrg.lida.pam.PerceptualAssociativeMemoryImpl
    TransientEpisodicMemory
    0.5
    0.6
    0.8
    1
    1
    0.5
    obstacle, onear, ofar, robot, rstop, rgoal
    obstacle:onear, obstacle:ofar, robot:rstop, robot:rgoal
    defaultTS
    ObstacleDistanceDetector
    3
    onear, ofar
    RobotStatusDetector
    3
    rgoal, rstop, rexcuse
    edu.memphis.ccrg.lida.pam.BasicPamInitializer

The attention module is where the attention codelets are defined; just like the perceptual codelets in the Perceptual Associative Memory, the attention codelets in the Attention module are defined by tasks. In this example we set up three attention codelets:

• AttentionCodelet-01 — observes the nodes robot, ofar and rstop;
• AttentionCodelet-02 — observes the nodes robot, rgoal and onear;
• AttentionCodelet-03 — observes the nodes robot, rstop and onear.

Listing 1.8 shows the Attention module configuration.

Listing 1.8 basicAgent.xml (AttentionModule) — element values only:

    edu.memphis.ccrg.lida.attentioncodelets.AttentionCodeletModule
    Workspace
    GlobalWorkspace
    NeighborhoodAttentionCodelet
    1.0
    -1.0
    0.5
    defaultTS
    NeighborhoodAttentionCodelet
    5
    robot, ofar, rstop
    5
    1.0
    1.0
    NeighborhoodAttentionCodelet
    5
    robot, rgoal, onear
    5
    1.0
    1.0
    NeighborhoodAttentionCodelet
    5
    robot, rstop, onear
    5
    1.0
    1.0

In the Procedural Memory configuration we define the action schemes used by the action-selection mechanism (Listing 1.9). As the two schemes in Listing 1.10 illustrate, a scheme takes the form:

Listing 1.9 Action schemes syntax

    label|(context nodes)(context links)|action|(result nodes)(result links)|base-level activation

Listing 1.10 shows the configuration of the module, where the parameters named scheme.1a and scheme.2a define the action schemes of our procedural memory.

Listing 1.10 basicAgent.xml (ProceduralMemory) — element values only:

    edu.memphis.ccrg.lida.proceduralmemory.ProceduralMemoryImpl
    14
    conditionDecay
    0.1
    1.0
    1.0
    edu.memphis.ccrg.lida.proceduralmemory.SchemeImpl
    Move|(robot, ofar, rstop)()|action.seguir|(rgoal)()|0.2
    Stop|(robot, rgoal, onear)()|action.parar|(rstop, ofar, omoving)()|0.2
    defaultTS
    edu.memphis.ccrg.lida.proceduralmemory.BasicProceduralMemoryInitializer
    myagent.initializers.ProceduralMemoryInitializer

Finally, we will set up the Sensory Motor Memory, in which we map an action (given by the action schemes) to a command (the ones we used to define our actions in the environment class). Listing 1.11 shows the configuration of the SensoryMotorMemory.

Listing 1.11 basicAgent.xml (SensoryMotorMemory) — element values only:

    edu.memphis.ccrg.lida.sensorymotormemory.BasicSensoryMotorMemory
    Environment
    1
    action.stop, algorithm.stop
    action.move, algorithm.move
    action.releasePress, algorithm.releasePress
    defaultTS
    edu.memphis.ccrg.lida.sensorymotormemory.BasicSensoryMotorMemoryInitializer
    10

These are the main configurations that must be set up in order to run LIDA. A complete set of default configuration files can be found inside the LIDA Framework. Now, with our robot configured to read and publish the proper topics, we can run our agent and monitor all the LIDA modules with the LIDA GUI. The following topics are published by the robot and subscribed by the agent; they provide the data that LIDA will use to create the perception model of the environment:

• /pose
• /person
• /goal

While the agent executes, the following topics are published by the agent:

• /status
• /move_base_simple/goal
• /message

The status and message topics are used to show which actions are being selected by the agent; move_base_simple/goal is the topic used by the Navigation Stack, and it points to the position the robot must go to. We can see in Fig. … the rqt interface, showing all the topics used by the agent in the
ROS system. Fig. … presents the LIDA Framework interface showing the graph built from LIDA's current situational model, which represents the actual state of the environment in the form of a graph.

Fig. … RQT interface showing the topics managed by the agent
Fig. … LIDA GUI showing the current situational model graph

Conclusions

The main benefits of developing a LIDA-powered robot are due to its refined attentional system, its learning capabilities and its association potential. The core feature of the conscious system is the attentional system, which is the implementation of the Global Workspace Theory. This feature makes our system more similar to a biological brain in some ways: it can rapidly change its tasks in the conscious and unconscious contexts, just like a human or animal does when a novel or unexpected situation is experienced. LIDA's graph representation of situations is a powerful tool; in our example (due to the space limitations of this chapter), the graph representing the perceptions is very simple and may not be enough to show LIDA's association capabilities. The various memory systems working together give LIDA a high association power, which can infer new information as it experiences diverse environment situations. LIDA Bridge is a very helpful tool because the LIDA Framework is a general-purpose cognitive system: it has no ROS integration nor any robot-specific settings. This simple but useful tool can ease the process of building LIDA-controlled robots, and allows for an easier starting point to an exciting new area of study: conscious robotics.

References

1. G. Landis, Teleoperation from Mars orbit: a proposal for human exploration. Acta Astronautica 61(1), 59–65 (2008)
2. E. Guizzo, How Google's self-driving car works (2011), http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works
3. L. Iocchi, J. Ruiz-del-Solar, T. van der Zant, Advances in domestic service robots in the real world. J. Intell. Robot. Syst. 76(1), 3–4 (2014)
4. B.M. Faria, L.P. Reis, N. Lau, A survey on intelligent wheelchair prototypes and simulators, in New Perspectives in Information Systems and Technologies (Springer International Publishing, Switzerland, 2014), pp. 545–557
5. J. Snaider, R. McCall, S. Franklin, The LIDA framework as a general tool for AGI, in Proceedings of the Fourth Conference on Artificial General Intelligence (AGI-11) (2011)
6. B.J. Baars, S. Franklin, Consciousness is computational: the LIDA model of global workspace theory. Int. J. Mach. Conscious. 1(1), 23–32
7. B.J. Baars, A Cognitive Theory of Consciousness (Cambridge University Press, Cambridge, 1988)
8. F. Crick, C. Koch, A framework for consciousness. Nat. Neurosci. 6(2), 119–126 (2003)
9. D.R. Hofstadter, M. Mitchell, The Copycat project: a model of mental fluidity and analogy-making, in Advances in Connectionist and Neural Computation Theory, Volume 2: Analogical Connections, ed. by K. Holyoak, J. Barnden (Ablex Publishing Corporation, Norwood, 1994), pp. 31–112
10. P. Maes, How to do the right thing. Connect. Sci. J. 1, 291–323 (1989)
11. P. Kanerva, Sparse Distributed Memory (MIT Press, Cambridge, 1988)
12. J.V. Jackson, Idea for a mind. ACM SIGART Bull. (101), 23–26 (1987)
13. S. Franklin, A. Graesser, O. Brent, H. Song, N. Aregahegn, Virtual Mattie—an intelligent clerical agent
14. J. Newman, B.J. Baars, S.-B. Cho, A neural global workspace model for conscious attention. Neural Netw. 10(7), 1195–1206 (1997)
15. S. Franklin, A. Kelemen, L. McCauley, IDA: a cognitive agent architecture, in IEEE Conference on Systems, Man and Cybernetics (1998), pp. 2646–2651
16. O.G. Selfridge, Pandemonium: a paradigm for learning, in Proceedings of the Symposium on Mechanisation of Thought Processes, ed. by D.V. Blake, A.M. Uttley, London (1959), pp. 511–529
17. A.S. Negatu, Cognitively inspired decision making for software agents: integrated mechanisms for action selection, expectation, automatization and non-routine problem solving. Ph.D. thesis, The University of Memphis (2006)
18. R. Capitanio, R.R. Gudwin, A conscious-based mind for an artificial creature, in Artificial Life XII: Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems, Odense, Denmark (2010), pp. 616–623
19. A.S. Negatu, S. Franklin, An action selection mechanism for "conscious" software agents. Cogn. Sci. Q. 2, 363–386 (2002)
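As a recap of the decision logic this chapter wires through LIDA, the following framework-free Java sketch condenses it: the Euclidean distance between the agent and the person (as in ObstacleDistanceDetector) selects between the chapter's two commands, algorithm.stop when the person is nearer than 1.2 m and algorithm.goal otherwise. The class and method names here are illustrative only; they are not part of LIDA Bridge or the LIDA Framework.

```java
// Hypothetical, framework-free sketch of the tutorial's perception/action logic.
// A person closer than 1.2 m should lead to the "algorithm.stop" command,
// otherwise "algorithm.goal" keeps the robot moving towards its goal.
public class DistancePolicy {

    // Same threshold used by ObstacleDistanceDetector (meters)
    static final double NEAR_THRESHOLD_M = 1.2;

    // Euclidean distance on the ground plane, as computed in the codelet
    static double distance(double ax, double ay, double ox, double oy) {
        double dx = ox - ax;
        double dy = oy - ay;
        return Math.sqrt(dx * dx + dy * dy);
    }

    // Maps the perceived situation to the LIDA command the chapter's actions use
    static String selectCommand(double ax, double ay, double ox, double oy) {
        return distance(ax, ay, ox, oy) < NEAR_THRESHOLD_M
                ? "algorithm.stop"
                : "algorithm.goal";
    }

    public static void main(String[] args) {
        // Person ~0.71 m away -> algorithm.stop
        System.out.println(selectCommand(0.0, 0.0, 0.5, 0.5));
        // Person 5 m away -> algorithm.goal
        System.out.println(selectCommand(0.0, 0.0, 3.0, 4.0));
    }
}
```

In the full system this mapping is of course not a single if: the codelet excites PAM nodes, attention codelets bring the situation to the Global Workspace, and the procedural memory's schemes pick the command; the sketch only shows the end-to-end effect.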
