Our research results have been applied in practical projects such as visual servoing, intelligent mobile robots, autonomous driving, and multi-robot scheduling.
In real industrial production, changes in the environment of a strip-steel laser cutting line degrade cutting precision. For the flexible retrofit of such a line, this project designed a laser cutting control algorithm based on visual servoing: the system precisely tracks the preset cutting trajectory and drives the robotic arm to achieve higher servo accuracy while executing laser cutting tasks, effectively solving this problem in the production process.
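The core of such a loop is the classic image-based visual servoing control law, which maps feature error in the image to a commanded camera (or tool) velocity, v = -λ L⁺ (s - s*). A minimal sketch, with the interaction matrix and feature values as illustrative placeholders rather than this project's actual implementation:

```python
import numpy as np

def ibvs_velocity(features, target_features, interaction_matrix, gain=0.5):
    """Classic image-based visual servoing law: v = -gain * L^+ * (s - s*)."""
    error = features - target_features
    return -gain * np.linalg.pinv(interaction_matrix) @ error

# Toy example: 2 tracked image points (4 coordinates), 6-DoF camera twist.
# The interaction matrix below is a random stand-in for the real L(s, Z).
rng = np.random.default_rng(0)
L = rng.standard_normal((4, 6))
s = np.array([0.10, 0.20, -0.10, 0.05])   # current image features
s_star = np.zeros(4)                      # desired image features
v = ibvs_velocity(s, s_star, L)           # commanded twist [vx, vy, vz, wx, wy, wz]
```

The exponential decay of the feature error under this law is what yields smooth convergence onto the cutting trajectory.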
On production lines and in similar scenarios, mobile assembly by a robotic arm is a key part of task execution. This project uses vision algorithms to estimate the current spatial position of the target, guides the robotic arm to grasp it via visual servoing, and then places the object at its desired position and orientation, completing the assembly step of the line process. Applying visual servoing to a real production line improved both assembly efficiency and assembly precision.
For gangue and coal transported on a conveyor belt, this project identifies the gangue and flexibly separates the moving objects by pulling them aside. In the target scenario, the conveyor belt moves the coal-gangue mixture at a fixed speed; the large load (40-200 kg) makes traditional grasping strategies ineffective, and safety requirements demand that the coal be avoided while the gangue is sorted. Dynamic obstacles introduce complex spatio-temporal and geometric constraints, and the main performance indices are the gangue identification rate and the overall sorting rate. The project extracts visual geometric information to plan an initial time-optimal trajectory, constructs a manifold mapping under joint spatio-temporal and geometric constraints, and designs a joint optimization algorithm for the multi-constraint, multi-objective problem.
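For intuition on the time-optimal planning step: in one dimension, the minimum rest-to-rest traversal time under symmetric velocity and acceleration limits has a closed form (bang-bang acceleration with an optional cruise phase). This is only an illustrative sketch, not the project's multi-constraint joint optimizer:

```python
import math

def time_optimal_duration(distance, v_max, a_max):
    """Minimum rest-to-rest traversal time under symmetric velocity and
    acceleration limits: bang-bang acceleration, with a cruise phase only
    if the distance is long enough to reach v_max (trapezoidal profile)."""
    d = abs(distance)
    d_ramp = v_max ** 2 / a_max        # distance spent accelerating + decelerating
    if d <= d_ramp:                    # triangular profile: v_max never reached
        return 2.0 * math.sqrt(d / a_max)
    return 2.0 * v_max / a_max + (d - d_ramp) / v_max
```

The full problem layers the belt's motion and obstacle geometry on top of such per-axis limits, which is what the constrained manifold mapping handles.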
Real-time dense modeling of local environments with a low-cost RGB-D camera remains challenging: the limited computing resources of mobile platforms, lack of texture, and motion blur caused by fast camera movement make it difficult for mobile robots to obtain high-quality real-time dense perception of the local and global environment. We propose a 3D dense mapping algorithm for the local and global environment based on global 3D reconstruction. Robustness of localization in low-texture planar environments is improved by seeding ICP with an initial value from an RGB-D localization method, and computational efficiency is improved by a device-host swap mechanism and point cloud regularization. Our framework is shown to produce real-time, high-quality dense maps of the local environment and a globally consistent map of the whole environment.
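The role of the pose prior can be seen in a toy point-to-point ICP: the closed-form Kabsch alignment is exact once correspondences are right, and the initial pose (here from a hypothetical coarse localizer) determines whether nearest-neighbour matching finds them. A numpy-only sketch, not the system's actual GPU pipeline:

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form rigid alignment of paired points (point-to-point ICP step)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, R0, t0, iters=10):
    """Point-to-point ICP seeded with an initial pose (R0, t0), e.g. from a
    coarse localizer; brute-force nearest neighbours (fine for tiny clouds)."""
    R, t = R0, t0
    for _ in range(iters):
        moved = src @ R.T + t
        nn = ((moved[:, None] - dst[None]) ** 2).sum(-1).argmin(1)
        R, t = kabsch(src, dst[nn])
    return R, t
```

A good seed keeps the nearest-neighbour step from locking onto wrong matches in low-texture planar scenes, which is exactly where raw ICP tends to drift.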
Compared with LiDAR, cameras are cheaper and provide richer sensor information. However, practical challenges such as illumination changes and motion blur often cause problems for visual SLAM systems, such as sparse image features and feature mismatching. We deployed a stereo camera on a cleaning robot to achieve high-precision estimation of the robot's own pose and of the surrounding environment. The video shows our experiments under different lighting conditions: even under very dim lighting, our algorithm still extracts features effectively and operates normally.
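One common low-cost remedy for dim imagery, applied before any feature extractor runs, is histogram equalization, which stretches the usable intensity range. A plain-numpy sketch of the standard global variant (the robot's actual preprocessing pipeline is not specified here):

```python
import numpy as np

def equalize(img):
    """Global histogram equalization of an 8-bit grayscale image: remap
    intensities through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # cdf value of the darkest present level
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)[img]
```

After remapping, corner and edge responses in dark regions rise enough for ordinary detectors to fire.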
To address the challenges that large scenes, sparse and semi-open features, and drastic illumination changes pose to localization and perception, this system integrates deep-learning-based localization and mapping algorithms with multi-sensor-fusion-based perception algorithms, improving localization and mapping accuracy and the robustness of environment perception. We built a deep-learning-based multi-sensor fusion localization, mapping, and perception system using 2D LiDAR, 3D LiDAR, an IMU, wheel encoders, front- and rear-view cameras, GPS, and a controller. We also designed an autonomous motion control system that enables unmanned, high-precision operation of the trailer.
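As a minimal illustration of the fusion idea (far simpler than the learned system described above), a complementary filter can combine a gyro's high-rate yaw with a drift-free but noisier wheel-odometry heading; the function name and blend factor are illustrative:

```python
import math

def fuse_yaw(gyro_rate, odom_yaw, yaw, dt, alpha=0.98):
    """Complementary filter: integrate the gyro for high-frequency heading,
    then nudge the estimate toward the drift-free wheel-odometry yaw."""
    predicted = yaw + gyro_rate * dt
    # shortest signed angular difference, robust to wrap-around at +/- pi
    err = math.atan2(math.sin(odom_yaw - predicted), math.cos(odom_yaw - predicted))
    return predicted + (1.0 - alpha) * err
```

Full systems replace this scalar blend with a factor graph or Kalman filter over all sensors, but the prediction-then-correction structure is the same.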
Wild grassy environments suffer from sparse features and large motion disturbances, which pose serious challenges to mobile-robot localization and mapping. To address this, we built a multi-sensor fusion framework that combines camera, IMU, wheel odometry, and LiDAR information to improve the robustness and accuracy of the SLAM system, and designed a multi-sensor-fusion localization and mapping algorithm for robots in field environments. In addition, the project deploys object detection and semantic segmentation for mobile robots in grassy scenes, applies pruning and lightweight network design to the perception algorithms, and accelerates deployment with TensorRT on an NVIDIA TX2 NX embedded device, achieving real-time operation and good application results.
Based on a wheel-legged robot platform, this project develops localization, navigation, and target tracking algorithms. The project designs LiDAR-based point cloud map construction and a multi-sensor fusion localization algorithm for accurate indoor localization, together with an accurate and robust obstacle perception algorithm and an autonomous path planning and navigation algorithm, all seamlessly integrated with the robot system. The project achieves autonomous navigation and localization in 3D indoor environments (stairs) and outdoor environments (streets, steps, slopes, uneven terrain) with both static and dynamic obstacles, providing the perception backbone for the robot's locomotion and manipulation demonstrations.
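The global planning inside such a navigation module can be illustrated with textbook A* on a 2-D occupancy grid; this sketch (4-connected, unit step costs, Manhattan heuristic) stands in for whatever planner the project actually uses:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns the cell path from start to goal, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = count()                      # tiebreaker so the heap never compares parents
    open_set = [(h(start), 0, next(tie), start, None)]
    came, best_g = {}, {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came:                # already expanded via a cheaper route
            continue
        came[cur] = parent
        if cur == goal:                # reconstruct by walking parents to start
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, next(tie), nxt, cur))
    return None
```

On uneven 3D terrain the grid becomes a traversability costmap, but the search skeleton is unchanged.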
This project designs an intelligent mobile manipulation robot with integrated locomotion, perception, and manipulation. For mobile localization, the project fuses multi-sensor information to build a high-precision 3D point cloud map, ensuring robust localization in complex and degraded scenes; it performs real-time obstacle avoidance in dynamic scenes based on the perception information, and its navigation module achieves centimeter-level accuracy from the localization information. The project then designs a vision-based object perception algorithm that detects and segments objects in real time, estimates their 3D positions, and generates a grasping strategy. Finally, it designs a planning and control algorithm for dual-arm collaboration that accounts for the robot's pose and the target position to generate a feasible grasping path, with the robotic arm and dexterous hand cooperating closely to complete mobile manipulation tasks.
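Estimating an object's 3D position from a detection typically reduces to pinhole back-projection of the detected pixel together with its depth; a minimal sketch with illustrative intrinsics (the project's actual perception stack is not detailed here):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth Z into camera coordinates
    using the pinhole model: X = (u - cx) / fx * Z, Y = (v - cy) / fy * Z."""
    return ((u - cx) / fx * depth, (v - cy) / fy * depth, depth)
```

The resulting camera-frame point is then transformed by the robot's localized pose into the world frame before grasp planning.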
After long-term use of a vehicle's surround-view camera module, the initial calibration parameters may drift due to bumps, collisions, or maintenance, so the synthesized bird's-eye view becomes misaligned during operation. This application does not rely on a dedicated calibration scene; it completes extrinsic calibration of the surround-view cameras using only ground texture features. It introduces a "two-camera error" model that exploits sparse texture features and is optimized by minimizing photometric error, together with a "point and line fusion" algorithm that makes full use of the characteristics of different feature types to refine the camera extrinsics iteratively, coarse to fine.
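The photometric objective can be illustrated in one dimension: choose the warp (here a pure integer column shift, standing in for the continuous extrinsic parameters) that minimizes the squared intensity difference between overlapping ground views. All names below are illustrative, not this application's API:

```python
import numpy as np

def photometric_error(img_a, img_b, shift):
    """Mean squared intensity difference after shifting img_b by `shift`
    columns -- a 1-D stand-in for warping by the camera extrinsics."""
    w = img_a.shape[1] - abs(shift)
    a = img_a[:, :w] if shift >= 0 else img_a[:, -w:]
    b = img_b[:, shift:shift + w] if shift >= 0 else img_b[:, :w]
    d = a.astype(float) - b.astype(float)
    return float((d ** 2).mean())

def calibrate_shift(img_a, img_b, search=5):
    """Exhaustive search over integer shifts; a real calibrator refines
    continuous extrinsics coarse-to-fine instead."""
    return min(range(-search, search + 1),
               key=lambda s: photometric_error(img_a, img_b, s))
```

In the real system the "shift" is a 6-DoF extrinsic per camera and the overlap is the ground region shared by adjacent views, but the error landscape being minimized has the same shape.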
The 4D visual automated annotation project is an autonomous-driving application of our lab. Based on surround-view video, IMU, GPS, and other driving data, it reconstructs the scene in 3D and outputs annotations for lane lines, traffic signs, obstacles, and other elements, while also generating local maps for aggregation across multiple trips. We developed a road reconstruction algorithm based on the fusion of an explicit grid and implicit coding; the following video demonstrates the color, semantic, and height reconstruction of the road, where lane lines, arrows, and other markings are reconstructed clearly.
Based on surround-view pinhole and fisheye cameras, 3D detection annotations, and LiDAR point clouds, this project realizes NeRF-based reconstruction of dynamic objects and static backgrounds in complex autonomous driving scenes, together with scene recomposition. The application delivers a 3D scene reconstruction tool chain: it reconstructs scenes and synthesizes data from real recordings, and improves object detection by synthesizing unconventional scenes that are difficult to collect in the real world. The video shows the completed reconstruction of the static road background; the images show the completed foreground vehicle editing, where congestion-scene generation, vehicle translation, and vehicle duplication all reconstruct well.
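At the heart of NeRF-style reconstruction is the frequency (positional) encoding that lets an MLP represent high-frequency appearance; a small numpy sketch of the standard encoding, with the frequency count chosen arbitrarily for illustration:

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """NeRF-style frequency encoding gamma(p) = (sin(2^k pi p), cos(2^k pi p))
    for k = 0..num_freqs-1, applied to every input coordinate."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    angles = x[..., None] * freqs                    # (..., D, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)            # (..., D * 2 * num_freqs)
```

Encoded sample points along each camera ray are what the radiance MLP consumes; the rest of the pipeline (ray casting, volume rendering, dynamic-object decomposition) builds on top of this.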
The Autonomous Valet Parking project performs perception, SLAM, and planning in underground parking lots based on cameras, IMUs, vehicle speed, ultrasonic radar, and other sensor data. We implemented a bird's-eye-view multi-task perception network on the surround-view cameras for object detection, a semantic mapping framework based on factor graph optimization with parking-space management, and a multi-sensor-fusion semantic localization framework, enabling semantic mapping and autonomous localization in underground parking lots. We also designed planning and control algorithms for autonomous parking, cruising, and obstacle avoidance, achieving comfortable, smooth autonomous driving in the parking area and automatic parking into the space.
For large-scale four-way shuttle clusters in dense 3D warehousing scenarios, this project develops an intelligent scheduling system based on multi-robot path planning to achieve load balancing and efficient unmanned transportation. Traditional static planning is upgraded to event-triggered dynamic planning and combined with intelligent multi-robot path planning and task allocation methods to schedule large-scale clusters efficiently. The system supports dynamic vehicle selection, avoids deadlock as long as hardware and software do not fail, and achieves efficient task allocation and path planning for the multi-vehicle system.
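A common baseline for such multi-robot planning is prioritized planning with a space-time reservation table: robots plan in priority order, each avoiding the (cell, time) claims of earlier robots. The sketch below is deliberately simplified (vertex conflicts only, no swap conflicts, robots vanish at their goals) and is not the project's production scheduler:

```python
from collections import deque

def plan(grid, start, goal, reserved, max_t=50):
    """Breadth-first search in (cell, time) space; `reserved` holds (cell, t)
    claims of higher-priority robots, and waiting in place is a legal move."""
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        cell, t, path = queue.popleft()
        if cell == goal:
            return path
        if t >= max_t:
            continue
        for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and (nxt, t + 1) not in reserved and (nxt, t + 1) not in seen):
                seen.add((nxt, t + 1))
                queue.append((nxt, t + 1, path + [nxt]))
    return None

def schedule(grid, starts, goals):
    """Prioritized planning: plan robots in order, reserving each found path."""
    reserved, paths = set(), []
    for s, g in zip(starts, goals):
        p = plan(grid, s, g, reserved)
        paths.append(p)
        if p:
            reserved |= {(c, t) for t, c in enumerate(p)}
    return paths
```

Event-triggered replanning corresponds to rebuilding the reservation table only for the robots affected by a new task or a blocked edge, rather than replanning the whole fleet.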
In large-scale industrial scenarios, robots vary in shape and size, and the complexity of scheduling algorithms often grows exponentially with the number of robots. This project therefore studies scheduling algorithms for heterogeneous multi-robot systems, taking real-world scenarios such as smart warehousing and smart manufacturing as examples, to improve system stability and operational efficiency. The overall goal is an efficient multi-robot scheduling algorithm that quickly solves vehicle selection and path planning in large-scale scenarios with heterogeneous robot clusters; the algorithm is tested and deployed in both simulation and production environments.
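At its core, the vehicle-selection subproblem is an assignment problem; for a handful of heterogeneous robots it can even be solved exactly by brute force, as in this sketch with made-up costs (real fleets need the Hungarian algorithm or an auction method to avoid the factorial blow-up mentioned above):

```python
from itertools import permutations

def assign(cost):
    """Exhaustive optimal assignment of n robots to n tasks, where
    cost[i][j] is the cost of robot i performing task j.
    Returns (task index per robot, total cost). Viable only for small n."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Heterogeneity shows up as asymmetric rows: each robot type has its own costs.
tasks, total = assign([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
```

Swapping the brute-force search for a polynomial-time solver is exactly the kind of change that keeps scheduling tractable as the cluster grows.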