$$\dot{p}(t) = \dot{p}_c(t) + M_c(t)\,\big(p(t) - p_c(t)\big). \qquad (5)$$

The tensor (of degree 2) $M_c = \dot{A}(t)\,A(t)^{-1}$ can be characterized in terms of differential invariants [13].

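To make this characterization concrete, the following minimal Python sketch (an illustration based on the standard first-order decomposition of a 2x2 motion tensor, not a formula reproduced from this excerpt) extracts the usual differential invariants from $M_c$:

```python
import numpy as np

def tensor_invariants(Mc: np.ndarray):
    """Differential invariants of a 2x2 motion-field tensor Mc.

    Standard first-order decomposition: divergence (isotropic expansion
    of the patch), curl (rigid rotation in the image plane), and two
    deformation (pure shear) components.
    """
    div  = Mc[0, 0] + Mc[1, 1]   # isotropic scaling of the patch
    curl = Mc[1, 0] - Mc[0, 1]   # in-plane rotation rate
    def1 = Mc[0, 0] - Mc[1, 1]   # shear along the image axes
    def2 = Mc[0, 1] + Mc[1, 0]   # shear along the diagonals
    return div, curl, def1, def2
```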
3. Hybrid visual servoing

In this section, a hybrid state space representation of camera–object interaction is first derived, and then a robust control law is synthesized.

3.1. State representation

According to the first-order spatial structure of the motion field of Eq. (5), the dynamic evolution of any image patch enclosing the object has six degrees of freedom, namely the centroid velocity coordinates $v_c$, accounting for rigid translations of the whole patch, and the entries of the $2 \times 2$ tensor $M_c$, related to changes in shape of the patch [13]. Let us choose as the state of the system the 6-vector

$$x = [\,p_{xc},\; p_{yc},\; \psi - \varphi,\; p,\; q,\; z_c\,]^T, \qquad (6)$$

which is a hybrid vector, since it includes both image-space 2D information and 3D orientation and distance parameters. Notice that the choice of $\psi - \varphi$ is due to the fact that this quantity is well defined also in the fronto-parallel configuration, which is a singularity of the orientation representation for the angles $\psi$, $\theta$, and $\varphi$. We demonstrate below that the state space representation of camera–object interaction can be written as

$$\dot{x} = B(x)\, {}^{c}V_{c/o}, \qquad (7)$$

where the notation ${}^{a}V_{b/c}$ stands for the relative twist screw of frame $\langle b \rangle$ with respect to frame $\langle c \rangle$, expressed in frame $\langle a \rangle$. The system described by Eq. (7) is a driftless, input-affine nonlinear system, where ${}^{c}V_{c/o} = {}^{c}V_{c/a} - {}^{c}V_{o/a}$ is the relative twist screw of camera and object. Here ${}^{c}V_{c/a} = [{}^{c}v_{c/a}^T,\; {}^{c}\omega_{c/a}^T]^T$ is the control input, ${}^{c}V_{o/a} = [{}^{c}v_{o/a}^T,\; {}^{c}\omega_{o/a}^T]^T$ is a disturbance input, and $\langle a \rangle$ is an arbitrary reference frame.
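The structure of Eq. (7) can be summarized in a short sketch. The matrix $B(x)$ is derived in the paper and is not reproduced in this excerpt, so it is left as a caller-supplied placeholder; the helper names below are hypothetical:

```python
import numpy as np

def relative_twist(V_c_a: np.ndarray, V_o_a: np.ndarray) -> np.ndarray:
    """Relative twist screw cV_{c/o} = cV_{c/a} - cV_{o/a}.

    Both arguments are 6-vectors [v; w] (linear, angular velocity),
    already expressed in the camera frame <c>.
    """
    return V_c_a - V_o_a

def euler_step(x, B, V_c_a, V_o_a, dt=1e-2):
    """One explicit-Euler step of the driftless input-affine system
    x_dot = B(x) cV_{c/o}; B maps the 6-vector state to a 6x6 matrix."""
    return x + dt * B(x) @ relative_twist(V_c_a, V_o_a)
```

Because the object twist enters only through the difference ${}^{c}V_{c/a} - {}^{c}V_{o/a}$, an unmeasured object motion acts as a disturbance on the same input channel as the control.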

Assuming that the object is almost centered in the visual field, and sufficiently far from the camera plane, it follows that $p_x q_x / f^2 \approx 0$ and $p_y q_y / f^2 \approx 0$ for any two imaged object points $p$ and $q$. The centroid dynamics is then given by the following first-order motion field expression [1]:

$$\dot{p}_c(t) = \begin{bmatrix} -f/z_c & 0 & p_{xc}/z_c & 0 & -f & p_{yc} \\ 0 & -f/z_c & p_{yc}/z_c & f & 0 & -p_{xc} \end{bmatrix} {}^{c}V_{c/o}. \qquad (8)$$
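As a minimal numeric sketch of Eq. (8), assuming pixel units for $f$ and the centroid coordinates (all values below are made-up examples), the simplified 2x6 interaction matrix can be evaluated and applied to a candidate relative twist:

```python
import numpy as np

def centroid_interaction_matrix(p_xc, p_yc, z_c, f):
    """2x6 interaction matrix of Eq. (8): maps the relative twist
    cV_{c/o} = [vx, vy, vz, wx, wy, wz] to the centroid image velocity.
    Valid under the stated approximation (object roughly centered in
    the visual field and far from the camera plane)."""
    return np.array([
        [-f / z_c, 0.0,       p_xc / z_c, 0.0, -f,  p_yc],
        [0.0,      -f / z_c,  p_yc / z_c, f,   0.0, -p_xc],
    ])

# Example: centroid drift induced by a pure translation along the optical axis.
L = centroid_interaction_matrix(p_xc=10.0, p_yc=-5.0, z_c=1.0, f=600.0)
V = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0])
print(L @ V)  # [1.0, -0.5]: the centroid moves radially away from the center
```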

Abstract: In this paper, the visual servoing problem is addressed by coupling nonlinear control theory with a convenient representation of the visual information used by the robot. The visual representation, based on a linear camera model, is very compact so as to comply with active vision requirements. Assuming an exact model and state measurements, the designed control law is proven to ensure global asymptotic stability in the sense of Lyapunov. It is also shown that, in the presence of bounded uncertainties, the closed-loop behavior is characterized by a global attractor. By choosing a hybrid visual state vector that includes both image-space (2D) information and 3D object parameters, the well-known pose ambiguity arising from the use of linear camera models is solved at the control level. A method for on-line visual state estimation that avoids camera calibration is expounded. Simulations and real-time experiments validate the theoretical framework in terms of both system convergence and control robustness. © 1999 Elsevier Science B.V. All rights reserved.
