You can acquire knowledge from Wikipedia, books, articles, manuals,
conversations with knowledgeable people, etc. You will then learn when, by
experience, you "wire" this knowledge into your brain. If trials are hard and
time consuming, your wiring will basically be a long (painful) line. But when
trials are cheap and fast, the wiring will spread wider as you are free to
explore many possibilities.
In motion planning, your first-hand experiences will mainly come from software,
so you should get a good development environment. There are several options
available today; the one I have been using during my PhD is OpenRAVE. In my opinion, it is very good for learning as the
scripting level is in Python, which allows you to explore your objects
interactively. (Core functionality is implemented in C++ for better
performance.) For instance, in IPython:
robot.<TAB><TAB> gives the list of methods available for robot;
env.SetViewer? provides documentation on the SetViewer() function of env.
To execute the script, run ipython -i myscript.py. The -i flag
tells IPython to spawn an interactive shell at the end. To get a shorter
command, you can also add the following two lines at the end of the Python
script:
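One pattern that achieves this (a sketch, not necessarily the original two lines: IPython.embed() is IPython's documented embedding call, while the isatty() and ImportError guards are portability additions of mine):

```python
# Sketch: spawn an interactive IPython shell when the script finishes,
# unless we are already inside one. The isatty() check and the
# ImportError fallback are additions so the file also runs headless
# or where IPython is not installed.
import sys

try:
    import IPython
    have_ipython = True
    if sys.stdin.isatty() and IPython.get_ipython() is None:
        IPython.embed()  # drop into the interactive shell
except ImportError:
    have_ipython = False
```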
Then, make the script executable by chmod +x myscript.py. This way,
the interactive shell will always spawn at the end, and you can call your
script directly by ./myscript.py.
In this script, we define three objects: the environment env, the
viewer and the robot. The environment object is your main interface with
OpenRAVE, from which you will access robots, viewer, drawing primitives, etc.
It is defined in an XML file.
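A script along these lines can look as follows (a sketch: the file name env.xml is an assumption, while Environment, Load, SetViewer, GetViewer and GetRobots are standard openravepy calls; the guard keeps the sketch loadable without OpenRAVE installed):

```python
# Minimal script sketch defining the three objects: env, viewer, robot.
try:
    from openravepy import Environment
    env = Environment()           # main interface with OpenRAVE
    env.Load('env.xml')           # environment XML file (assumed name)
    env.SetViewer('qtcoin')       # spawn the GUI viewer
    viewer = env.GetViewer()
    robot = env.GetRobots()[0]    # the double pendulum
except Exception:                 # openravepy missing, or file not found
    env = viewer = robot = None
```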
We define the double pendulum in double-pendulum.xml using OpenRAVE's
XML format. For more complex robots, you can also import COLLADA (1.5) models. The robot
consists of three bodies (Base, Arm0 and Arm1)
connected by two circular joints. We also add some mass information for inverse
dynamics, as well as some colors for readability:
<?xml version="1.0" encoding="utf-8"?>
<Robot name="Pendulum">
  <RotationAxis>0 1 0 90</RotationAxis>  <!-- makes the pendulum vertical -->
  <KinBody>
    <Mass type="mimicgeom">
      <density>100000</density>
    </Mass>
    <Body name="Base" type="dynamic">
      <Translation>0.0 0.0 0.0</Translation>
      <Geom type="cylinder">
        <rotationaxis>1 0 0 90</rotationaxis>
        <radius>0.03</radius>
        <height>0.02</height>
        <ambientColor>1. 0. 0.</ambientColor>
        <diffuseColor>1. 0. 0.</diffuseColor>
      </Geom>
    </Body>
    <Body name="Arm0" type="dynamic">
      <offsetfrom>Base</offsetfrom>
      <!-- translation and rotation will be relative to Base -->
      <Translation>0 0 0</Translation>
      <Geom type="box">
        <Translation>0.1 0 0</Translation>
        <Extents>0.1 0.01 0.01</Extents>
        <ambientColor>1. 0. 0.</ambientColor>
        <diffuseColor>1. 0. 0.</diffuseColor>
      </Geom>
    </Body>
    <Joint circular="true" name="Joint0" type="hinge">
      <Body>Base</Body>
      <Body>Arm0</Body>
      <offsetfrom>Arm0</offsetfrom>
      <weight>4</weight>
      <axis>0 0 1</axis>
      <maxvel>3.42</maxvel>
      <resolution>1</resolution>
    </Joint>
    <Body name="Arm1" type="dynamic">
      <offsetfrom>Arm0</offsetfrom>
      <Translation>0.2 0 0</Translation>
      <Geom type="box">
        <Translation>0.1 0 0</Translation>
        <Extents>0.1 0.01 0.01</Extents>
        <ambientColor>0. 0. 1.</ambientColor>
        <diffuseColor>0. 0. 1.</diffuseColor>
      </Geom>
    </Body>
    <Joint circular="true" name="Joint1" type="hinge">
      <Body>Arm0</Body>
      <Body>Arm1</Body>
      <offsetfrom>Arm1</offsetfrom>
      <weight>3</weight>
      <axis>0 0 1</axis>
      <maxvel>5.42</maxvel>
      <resolution>1</resolution>
    </Joint>
  </KinBody>
</Robot>
After writing the two XML files and running the Python script, the GUI should
pop up and display the pendulum, albeit from above. To move the camera in front of
it, drag the viewpoint with the mouse while in camera mode.
You can switch between camera and interaction modes by pressing the Escape key.
Alternatively, click on the red-arrow icon (first icon from the top)
in the right-hand panel to activate interaction mode.
In interaction mode, left-click on an object to select it. A cube appears
around it, which you can use to perform two operations:
Translation: click on a face of the cube and drag it around. It will
translate the object in the two directions spanned by the selected face of the
cube.
Rotation: click on an edge of the cube and drag it around. It will rotate
the object about the axis parallel to the selected edge and passing
through the center of the cube.
The control cube will vanish if you click again on the object.
Interaction mode is also used to manipulate the joints of a robot directly.
Ctrl + left-clicking a robot's link selects the parent joint of the
link (that is to say, the joint connecting the link to its parent in the
kinematic chain). A cylinder will then appear at the joint, which you can turn
around to rotate the joint. You can also vary the size of the cylinder by
Ctrl-clicking it. Here is what you should see when selecting the link Arm1
of the pendulum:
The text area at the top-left corner of the 3D viewer displays relevant
information on the selected object. Here, it says that the pointer is on the
link Arm1 at the world-frame coordinates (x, y, z) = (0.10,
0.07, 0.27), with the surface normal at this point n = (1., 0., 0.).
The second line tells us that we selected the robot Pendulum, and the
third gives us information on the selected joint: name (Joint1), index
(1) and angle in radians and degrees. The joint index tells you the position of
the joint coordinate in the DOF vectors, which we will use later on to
manipulate the robot's configuration.
Let us go back to our Python script. All functions to manipulate the robot
model are accessible via the robot object. First, to check its number of
degrees of freedom, call robot.GetDOF().
OpenRAVE also calls "DOF" the generalized coordinates of the system, that
is to say here the joint angles of the pendulum. The vector q of
generalized coordinates is called "DOF values", the vector q̇
of generalized velocities is called "DOF velocities", etc. Initially, the two
joint angles are zero, so:
To set the pendulum to a different configuration, for instance q = (π/4, π/4), do:
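A sketch of these calls (GetDOF, GetDOFValues and SetDOFValues are the standard openravepy methods; the guard is an addition so the sketch degrades gracefully where OpenRAVE is not installed):

```python
from math import pi

try:
    from openravepy import Environment
    env = Environment()
    env.Load('double-pendulum.xml')        # robot model from above
    robot = env.GetRobots()[0]
    print(robot.GetDOF())                  # 2: Joint0 and Joint1
    print(robot.GetDOFValues())            # joint angles, zero initially
    robot.SetDOFValues([pi / 4, pi / 4])   # set q = (pi/4, pi/4)
except Exception:                          # openravepy not available
    robot = None
```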
You will see the robot updated to a different pose in the GUI. This operation,
the geometric update of all links of the robot from the joint-angle vector, is
called forward kinematics.
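For this planar pendulum, forward kinematics can even be written out by hand. The sketch below is a self-contained illustration, not OpenRAVE code; the 0.2 m link lengths are read off the XML model (each arm is a box of half-extent 0.1 m):

```python
from math import cos, sin, pi

# Link lengths from the XML model: each arm is 0.2 m long.
L0, L1 = 0.2, 0.2

def tip_position(q0, q1):
    """Planar position of the tip of Arm1 from the joint angles."""
    x = L0 * cos(q0) + L1 * cos(q0 + q1)
    y = L0 * sin(q0) + L1 * sin(q0 + q1)
    return (x, y)

x, y = tip_position(pi / 4, pi / 4)  # the configuration set above
print(round(x, 4), round(y, 4))      # 0.1414 0.3414
```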
You now have your first environment set up and you know how to visualize and
update your robot from a joint-angle vector by forward kinematics. Next, you
will want to compute your joint-angle vectors to achieve a particular goal, for
instance: how to put the tip of the robot at a specific location in space? The
answer to this question is called inverse kinematics (IK).
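To make the notion concrete on the pendulum itself (again a self-contained illustration, not OpenRAVE's solver), the planar two-link case has a closed-form solution:

```python
from math import acos, atan2, cos, sin

L0, L1 = 0.2, 0.2  # link lengths from the XML model

def ik_tip(x, y):
    """One of the two IK solutions placing the pendulum tip at (x, y),
    assuming the target is within reach (|(x, y)| <= L0 + L1)."""
    c1 = (x * x + y * y - L0 * L0 - L1 * L1) / (2 * L0 * L1)
    q1 = acos(max(-1.0, min(1.0, c1)))  # clamp against rounding errors
    q0 = atan2(y, x) - atan2(L1 * sin(q1), L0 + L1 * cos(q1))
    return (q0, q1)

def fk_tip(q0, q1):
    """Forward kinematics of the tip, used to check the solution."""
    return (L0 * cos(q0) + L1 * cos(q0 + q1),
            L0 * sin(q0) + L1 * sin(q0 + q1))

q0, q1 = ik_tip(0.25, 0.2)
x, y = fk_tip(q0, q1)  # recovers (0.25, 0.2) up to rounding
```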
For robotic arms, OpenRAVE provides a closed-form symbolic IK solver that you
can call via the inversekinematics module.
With symbolic IK, you first spend time generating the IK solution for your robot
model as a C++ program. Then, you compile and execute this program at
runtime to solve inverse kinematics faster than with any numerical
method. See Rosen Diankov's PhD thesis for details.
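A usage sketch (InverseKinematicsModel with its load and autogenerate methods is the standard openravepy API for this module; the wrapper function and the import guard are additions of mine, and a 6-DOF manipulator loaded as robot is assumed):

```python
# Sketch: build or load an ikfast solver for a 6-DOF manipulator.
try:
    from openravepy import IkParameterization
    from openravepy.databases import inversekinematics

    def load_ik_solver(robot):
        """Load the compiled ikfast solver for `robot`, generating and
        compiling it (a one-time, possibly long step) if needed."""
        ikmodel = inversekinematics.InverseKinematicsModel(
            robot, iktype=IkParameterization.Type.Transform6D)
        if not ikmodel.load():       # no compiled solver on disk yet
            ikmodel.autogenerate()   # generate + compile the C++ solver
        return ikmodel
except ImportError:                  # openravepy not installed
    load_ik_solver = None
```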
Unfortunately, symbolic IK only applies to robots with up to 6-7 degrees of
freedom. For large-DOF mobile robots such as humanoids, the state of the art
is to use multi-task inverse kinematics
based on quadratic programming (QP) or hierarchical quadratic programming
(HQP). A ready-to-use QP-based IK solver for OpenRAVE is provided in the