This code implements intelligent grasp functionality for the ArmPi pro robot. It allows the robot to detect and grasp colored objects (red, green, and blue) placed in front of it using computer vision and robotic arm control.
- visual_processing_node.py: Computer vision node for detecting and tracking the colored objects.
- intelligent_grasp_node.py: The main node that coordinates computer vision and robotic arm control for the intelligent grasp functionality.
- intelligent_grasp_a.py: Alternate version that grasps the block to the right side of the ArmPi pro robot.
- intelligent_grasp_b.py: Alternate version that grasps blocks carrying AprilTag markers #1, #2, and #3.

Usage:
- Make sure the ArmPi pro robot is properly set up and connected to your computer.
- Run the visual_processing_node.py script to start the computer vision module.
- Run the intelligent_grasp_node.py script (or one of the alternate versions) to start the intelligent grasp functionality.
- Place a red, green, or blue object in front of the robot's camera.
- The robot will detect the object, move its arm above it, grasp it, and then place it at a designated location based on the object's color.
- To stop the functionality, terminate the scripts or send the appropriate service call (a sketch of such a call follows this list).
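
The exact service interface depends on how the nodes expose it; the snippet below is only a hedged sketch of calling such services from Python with rospy. The service names /intelligent_grasp/enter and /intelligent_grasp/set_running and their types are assumptions for illustration — list the real ones with rosservice list on the robot.

```python
#!/usr/bin/env python3
# Illustrative only: the service names and types below are assumptions,
# not necessarily those exposed by the grasp nodes.
import rospy
from std_srvs.srv import Trigger, SetBool

rospy.init_node('grasp_control_example')

# Assumed "enter" service that activates the grasp behaviour
rospy.wait_for_service('/intelligent_grasp/enter')
enter = rospy.ServiceProxy('/intelligent_grasp/enter', Trigger)
print(enter().message)

# Assumed toggle service that pauses or resumes the grasp loop
rospy.wait_for_service('/intelligent_grasp/set_running')
set_running = rospy.ServiceProxy('/intelligent_grasp/set_running', SetBool)
set_running(False)  # stop the behaviour without killing the node
```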
- The code uses ROS (Robot Operating System) for communication and coordination between different components.
- It uses computer vision techniques to detect and track the colored objects (a color-detection sketch follows this list).
- PID controllers are used for smooth and accurate arm movement (a PID sketch follows this list).
- The robot arm is controlled using inverse kinematics calculations (an inverse-kinematics sketch follows this list).
- RGB LEDs and a buzzer are used for visual and auditory feedback.
- Various service calls are available for starting, stopping, and controlling the functionality.
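
The detection logic lives in visual_processing_node.py and is not reproduced here; the following is a minimal OpenCV sketch of the general approach (HSV thresholding plus contour extraction). The HSV range shown is a placeholder for a red object, not a calibrated range from the project.

```python
import cv2
import numpy as np

def find_colored_object(bgr_frame, lower_hsv, upper_hsv):
    """Return the centroid (x, y) of the largest blob in the given HSV range, or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    # Remove small speckles before looking for contours
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m['m00'] == 0:
        return None
    return int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])

# Example HSV range for a red object (illustrative values only)
red_lower = np.array([0, 120, 70])
red_upper = np.array([10, 255, 255])
```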
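
For the PID-based arm tracking, a generic discrete PID update is sketched below; the gains and the pixel-error example are placeholders, not the project's tuning.

```python
class PID:
    """Minimal discrete PID controller (illustrative, not the project's tuning)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: nudge the arm toward the image center based on pixel error
pid_x = PID(kp=0.05, ki=0.0, kd=0.01)
correction = pid_x.update(error=320 - 250, dt=0.03)  # target x minus detected x
```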
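
The inverse kinematics for the ArmPi pro arm is handled by its own kinematics code, which is not shown here. As a simplified stand-in, this is the closed-form IK for a two-link planar arm, just to illustrate how joint angles are recovered from a target point.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form IK for a planar 2-link arm reaching point (x, y).
    Returns (shoulder, elbow) angles in radians, or None if unreachable."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1.0:
        return None  # target outside the arm's reach
    elbow = math.acos(cos_elbow)             # elbow-down solution
    k1 = l1 + l2 * math.cos(elbow)
    k2 = l2 * math.sin(elbow)
    shoulder = math.atan2(y, x) - math.atan2(k2, k1)
    return shoulder, elbow

angles = two_link_ik(0.15, 0.05, l1=0.10, l2=0.10)  # link lengths in meters (illustrative)
```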
- ROS (Robot Operating System)
- OpenCV (Open Source Computer Vision Library)
- NumPy
- ArmPi pro robot hardware and drivers
Please refer to the code comments and ROS documentation for more detailed information and customization options.