Research Papers

“Adult” Robot Enabled Learning Process in High Precision Assembly Automation

Author and Article Information
Hongtai Cheng

Institute of Mechatronics Engineering,
Department of Mechanical
Engineering and Automation,
Northeastern University,
3-11 Wenhua Road,
Heping District, Shenyang,
Liaoning 110819, China
e-mail: chenght@me.neu.edu.cn

Heping Chen

Robotics Laboratory,
Ingram School of Engineering,
Texas State University San Marcos,
601 University Dr.,
San Marcos, TX 78666
e-mail: hc15@txstate.edu

1 Corresponding author.

Manuscript received February 5, 2013; final manuscript received November 18, 2013; published online January 16, 2014. Assoc. Editor: Xiaoping Qian.

J. Manuf. Sci. Eng., 136(2), 021011 (Jan 16, 2014) (10 pages); Paper No. MANU-13-1047; doi: 10.1115/1.4026084

Typical robot teaching performed by operators in industrial robot applications increases the operational cost and reduces manufacturing efficiency. In this paper, an “adult” robot enabled learning method is proposed to solve this teaching problem. The method uses an “adult” robot with advanced sensing and decision-making capabilities to teach “child” robots in manufacturing automation. A Markov Decision Process (MDP) that aims to correct the “child” robot's tool position is formulated and solved using Q-Learning. The proposed algorithm was tested on a platform in which a mobile robot with an in-hand camera (the “adult”) teaches an industrial robot (the “child”) to perform a high-accuracy peg-in-hole process. The experimental results demonstrate very robust and stable performance. Because calibration between the “adult” and “child” robots is eliminated, the flexibility of the proposed method is greatly increased; hence, it can easily be applied in industrial settings where robots with limited sensing capabilities are installed.
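
The full MDP formulation is given in the body of the paper; as a rough illustration of the Q-Learning idea described in the abstract, the following is a minimal, self-contained sketch in which the state is an assumed discretized (x, y) tool offset from the hole, the actions are unit correction steps, and all names, step sizes, and hyperparameters are hypothetical rather than taken from the paper.

```python
# Minimal tabular Q-Learning sketch for correcting a robot tool's (x, y) offset
# relative to a hole. Hypothetical illustration only: the state/action/reward
# design, step size, and hyperparameters are assumptions, not the paper's exact
# formulation.
import random
from collections import defaultdict

STEP_MM = 0.5                                   # assumed correction step (mm)
GRID = 10                                       # offsets discretized to +/- GRID steps
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # move the tool one step in x or y

def reward(state):
    """+1 at the hole, otherwise a dense penalty proportional to the distance."""
    if state == (0, 0):
        return 1.0
    return -0.1 * (abs(state[0]) + abs(state[1])) / (2 * GRID)

def step(state, action):
    """Apply one correction step and clip to the discretized workspace."""
    nx = max(-GRID, min(GRID, state[0] + action[0]))
    ny = max(-GRID, min(GRID, state[1] + action[1]))
    return (nx, ny)

def greedy(q, state):
    """Pick the highest-valued action, breaking ties at random."""
    values = [q[(state, i)] for i in range(len(ACTIONS))]
    best = max(values)
    return random.choice([i for i, v in enumerate(values) if v == best])

def train(episodes=3000, alpha=0.5, gamma=0.95, epsilon=0.1, max_steps=200):
    q = defaultdict(float)                      # Q[(state, action_index)]
    for _ in range(episodes):
        state = (random.randint(-GRID, GRID), random.randint(-GRID, GRID))
        for _ in range(max_steps):
            if state == (0, 0):                 # tool aligned with the hole
                break
            if random.random() < epsilon:       # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = greedy(q, state)
            nxt = step(state, ACTIONS[a])
            target = reward(nxt) + gamma * max(q[(nxt, i)] for i in range(len(ACTIONS)))
            q[(state, a)] += alpha * (target - q[(state, a)])   # Q-Learning update
            state = nxt
    return q

if __name__ == "__main__":
    q_table = train()
    offset = (8, -5)                            # assumed initial tool offset (in steps)
    for _ in range(4 * GRID):
        if offset == (0, 0):
            break
        offset = step(offset, ACTIONS[greedy(q_table, offset)])
        print("tool offset (mm):", (offset[0] * STEP_MM, offset[1] * STEP_MM))
```

In the setting described by the paper, the state and reward would presumably be derived from the camera observations of the tool and hole rather than from a simulated offset as in this sketch.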

Copyright © 2014 by ASME


Figures

Fig. 1

Conceived scenario of using one mobile robot to teach a group of robots in a production line. The mobile robot holding a camera acts as the teacher. When a stationary robot needs help, the mobile robot moves there and performs the teaching task.

Fig. 2

Structure of the proposed single-2D-camera-based robot teaching method.

Fig. 3

The view cone concept. The view cone is a cone connecting the camera origin to the target. Objects inside the cone will block the target.

Fig. 4

Coordinate projection from the robot tool coordinate frame and the target coordinate frame to the 2D image coordinate frame. Ch and Ct are the target and robot tool coordinate frames, respectively; ch and ct are their projections in the image.

Fig. 5

View cone block modeling. The projected tool can be described by its blocking property and the gap in pixels (see the illustrative sketch after the figure list).

Fig. 6

Flow chart of the robot training process.

Fig. 7

Flow chart of the robot teaching system. Several subtasks are integrated to accomplish the robot teaching task.

Fig. 8

The platform for automatic robot teaching with the camera-in-mobile configuration. The “child” robot performs a high-precision assembly task.

Fig. 9

Captured image from the camera. It is a 2D image with no depth information. The goal is to insert the peg into the second hole.

Fig. 10

Sample points of the initial robot tool positions. They cover a range of ±10 mm around the real hole location.

Fig. 11

Snapshots of the experiment. In the last frame, the tool has been inserted into the hole.

Fig. 12

The recorded robot tool trajectories starting from three different positions.

Fig. 13

Distribution of the final robot tool locations (the robot tool coordinates when the robot teaching process ends) with Δxh = 0.5 mm, Δyh = 0.5 mm, Δz = 0.5 mm. They fall within ±0.3 mm.

Fig. 14

Distribution of the final robot tool locations (the robot tool coordinates when the robot teaching process ends) with Δxh = 0.25 mm, Δyh = 0.25 mm, Δz = 0.25 mm. They fall within ±0.3 mm.
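
Figures 3-5 summarize how the single 2D image is interpreted: the tool and target are projected into the image, and the projected tool is characterized by whether it blocks the target and by the remaining gap in pixels. The paper's exact modeling is not reproduced here; the snippet below is only a rough sketch assuming a standard pinhole projection, with illustrative intrinsics, points, and threshold.

```python
# Minimal sketch of the single-camera cues suggested by Figs. 3-5: project the tool
# tip and the target into the 2D image with a pinhole model, then report whether the
# tool "blocks" the target (falls inside the target's projected region) and the
# remaining gap in pixels. The intrinsics, points, and threshold below are
# illustrative assumptions, not values from the paper.
import numpy as np

# Assumed pinhole intrinsics: focal lengths (px) and principal point (px)
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0

def project(point_cam):
    """Project a 3D point given in the camera frame onto the image plane (pixels)."""
    x, y, z = point_cam
    return np.array([FX * x / z + CX, FY * y / z + CY])

def blocking_and_gap(tool_cam, target_cam, target_radius_px=6.0):
    """Return (is_blocking, gap_px) for the projected tool tip and target."""
    gap = np.linalg.norm(project(tool_cam) - project(target_cam))
    return gap <= target_radius_px, gap

if __name__ == "__main__":
    target = np.array([0.00, 0.00, 0.50])       # hole, 0.5 m in front of the camera
    tool = np.array([0.004, -0.002, 0.48])      # tool tip slightly offset from the hole
    blocking, gap_px = blocking_and_gap(tool, target)
    print(f"blocking={blocking}, gap={gap_px:.1f} px")
```

In practice the gap and blocking cues would be measured directly from the segmented image of the tool and hole; known 3D coordinates are used here only to keep the sketch self-contained.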
