2.3 Manipulation control
In , a computed torque control law is used to track a desired trajectory for
the object's center of mass and to maintain the contacts without slipping, pro-
ducing nonzero contact forces which lie within the friction cones; rolling contacts
can also be included in this framework. The proposed control law has three
main components: cancellation of the nonlinear terms, introduction of proportional
and derivative feedback terms so as to obtain a decoupled system, and a term
projected into the null space of the grasp matrix, which regulates the internal
forces in order to avoid slippage.
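The null-space term can be illustrated with a small numerical sketch. The two-finger planar grasp, the grasp matrix, the desired wrench, and the squeeze forces below are illustrative placeholders, not values from the cited work:

```python
import numpy as np

# Hypothetical two-finger planar grasp: fingers touch the object at
# (-0.5, 0) and (0.5, 0). The grasp matrix G maps the stacked contact
# forces f = [f1x, f1y, f2x, f2y] to the object wrench w = [Fx, Fy, tau].
G = np.array([[1.0,  0.0, 1.0, 0.0],
              [0.0,  1.0, 0.0, 1.0],
              [0.0, -0.5, 0.0, 0.5]])

G_pinv = np.linalg.pinv(G)      # least-norm solution for a desired wrench
N = np.eye(4) - G_pinv @ G      # projector onto the null space of G

w_des = np.array([0.0, 9.81, 0.0])        # e.g. support the object's weight
f_int = np.array([5.0, 0.0, -5.0, 0.0])   # desired squeeze along x

# Task-space part plus a null-space term regulating the internal forces.
f = G_pinv @ w_des + N @ f_int

# The internal-force term produces no net wrench on the object, so it can
# be tuned to keep the contact forces inside the friction cones without
# disturbing the tracking of the object's trajectory.
assert np.allclose(G @ (N @ f_int), 0.0)
assert np.allclose(G @ f, w_des)
```

In a complete computed torque law, these contact forces would then be mapped to joint torques through the hand Jacobian.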
A computed torque control law with force feedback for compensating uncer-
tainties of the system is presented in . The commanded joint torques consist
of two parts: one to move the fingers and the other one to grasp the object. The
desired internal forces can be found by optimizing the friction angles, in order
to avoid slipping, so that the fingers can grasp and manipulate the object stably
along a desired trajectory. Moreover, a history-based method relying on force
feedback is proposed to compensate for uncertainties in the system model.
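The friction-angle condition behind this optimization can be sketched as follows: a contact force stays inside the friction cone when its angle from the inward normal does not exceed arctan(mu), and increasing the normal (squeezing) component shrinks that angle. The coefficient and loads below are illustrative, and `min_squeeze` assumes the tangential load is perpendicular to the normal:

```python
import numpy as np

mu = 0.4  # assumed friction coefficient (illustrative)

def friction_angle(f, n):
    """Angle between the contact force f and the inward normal n."""
    f, n = np.asarray(f, float), np.asarray(n, float)
    c = f @ n / (np.linalg.norm(f) * np.linalg.norm(n))
    return np.arccos(np.clip(c, -1.0, 1.0))

def min_squeeze(f_t, mu):
    """Smallest normal-force magnitude keeping the total force in the cone,
    since tan(angle) = |f_t| / f_n <= mu requires f_n >= |f_t| / mu."""
    return np.linalg.norm(f_t) / mu

# A finger must transmit a 2 N tangential load; the inward normal is +x.
n = np.array([1.0, 0.0])
f_t = np.array([0.0, 2.0])
fn = min_squeeze(f_t, mu)      # 2 / 0.4 = 5 N of squeezing
f = fn * n + f_t
assert friction_angle(f, n) <= np.arctan(mu) + 1e-9
```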
Under the assumptions that the fingers are not in a singular configuration, that
the contacts between the fingers and the object are rigid point contacts, and that
the geometry of each body is known, a computed torque law is derived in , includ-
ing the kinematics of both rolling and sliding contacts. The commanded joint torques
compensate the whole dynamics of the system and, furthermore, a PID controller
is employed for the asymptotic tracking of the object's pose, while a PI controller
is used for the asymptotic tracking of the desired internal forces.
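A minimal discrete-time sketch of the two feedback loops just described, a PID on the object-pose error and a PI (derivative gain set to zero) on the internal-force error, with illustrative gains and error signals:

```python
class PID:
    """Textbook discrete PID; a PI controller is obtained with kd = 0."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0       # accumulated integral of the error
        self.prev = None   # previous error, for the derivative term

    def step(self, err):
        self.i += err * self.dt
        d = 0.0 if self.prev is None else (err - self.prev) / self.dt
        self.prev = err
        return self.kp * err + self.ki * self.i + self.kd * d

pose_loop = PID(kp=50.0, ki=5.0, kd=10.0, dt=0.001)   # PID on pose error
force_loop = PID(kp=2.0, ki=20.0, kd=0.0, dt=0.001)   # PI on force error

u_pose = pose_loop.step(0.01)    # e.g. a 1 cm pose error
u_force = force_loop.step(0.5)   # e.g. a 0.5 N internal-force error
```

In the control law described above, these two corrective actions would be combined with the model-based compensation of the full system dynamics before commanding the joint torques.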
As pointed out in , much of human dexterity and adaptability to the changes
in the task or in the environment is due to the ability of using tactile information
to control the process: hence, human manipulation is event-driven. In the same
work, the difficulty of controlling manipulation with a hand equipped with
tactile sensors is studied. These sensors are useful for acquiring information
about the contact and the object's surface. When contact conditions or task
requirements change, the control law has to change too; further, the controller
should provide smooth transitions between phases. In that
work, different control laws (force control, position control, stiffness control) have
been implemented along orthogonal movement directions, and the switching between
these controls has been made according to the change of state of the tactile sensors.
Hence, it is clear that the main role in event-driven control is played by sensors.
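This switching logic can be sketched as a small event-driven state machine. The mode and event names below are illustrative, not those of the cited work:

```python
# Tactile events trigger transitions between control modes; any
# (mode, event) pair not listed leaves the current mode unchanged.
TRANSITIONS = {
    ("position", "contact_made"): "force",
    ("force", "slip_detected"): "stiffness",
    ("stiffness", "contact_lost"): "position",
}

def next_mode(mode, event):
    # Ignore events that are not relevant in the current phase.
    return TRANSITIONS.get((mode, event), mode)

mode = "position"
for event in ["contact_made", "slip_detected", "contact_lost"]:
    mode = next_mode(mode, event)
print(mode)  # back to "position" after the full cycle
```

A real controller would additionally blend the commands across each switch, since, as noted above, the transitions between phases should be smooth.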
In , two figures (Figure 2.1 and Figure 2.2) show, respectively, the use of
touch sensing in manipulation and the control architecture for the integration
of this tactile sensing.
In the event-driven approach, the detection of events is obviously a crucial
step and, in theory, for each of them a proper sensor should be used. In