Passage-Aware Structural Mapping for RGB-D Visual SLAM
Introduces a passage-aware structural mapping method for RGB-D Visual SLAM that effectively detects doors and traversable openings.
Ali Tourani, Miguel Fernandez-Cortizas, Saad Ejaz et al.
MoT-HRA framework learns human-intention priors from large-scale demonstrations, enhancing motion plausibility and control robustness in robotic manipulation.
Yifan Xie, YuAn Wang, Guangyu Chen et al.
Radar-KISSICP and Radar-IMU improve trajectory estimation in off-road environments.
Shaunak Kolhe, Peng Jiang, Maggie Wigness et al.
ACO-MoE recovers 95.3% performance under dynamic perturbations, enhancing visual RL robustness.
Zhengru Fang, Yu Guo, Fei Liu et al.
Integrates data-driven computational design with feedback-driven co-robotic fabrication for material reuse in architecture.
Arash Adel, Daniel Ruan, Ruxin Xie
Generates complex path vector fields using a Score-Induced Guiding Vector Field (SGVF) to enhance robotic navigation.
Zirui Chen, Shiliang Guo, Shiyu Zhao
GCImOpt learns efficient goal-conditioned policies by imitating optimal trajectories, significantly improving control task success rates and efficiency.
Jon Goikoetxea, Jesús F. Palacián
GazeVLA learns human intention to enhance robotic manipulation, significantly outperforming baseline methods.
Chengyang Li, Kaiyi Xiong, Yuan Xu et al.
RedVLA identifies physical safety risks in VLA models through a two-stage process, achieving an ASR of 95.5%.
Yuhao Zhang, Borong Zhang, Jiaming Fan et al.
LeHome simulation environment achieves high-fidelity manipulation of deformable objects in household scenarios using position-based dynamics (PBD) and the finite element method (FEM).
Zeyi Li, Yushi Yang, Shawn Xie et al.
VLA Foundry: A unified framework for training Vision-Language-Action models, enhancing multi-task tabletop manipulation policies.
Jean Mercat, Sedrick Keh, Kushal Arora et al.
Mask World Model predicts semantic masks instead of pixels, enhancing robust robot policy learning, excelling in LIBERO and RLBench.
Yunfan Lou, Xiaowei Chi, Xiaojie Zhang et al.
MATCH method improves peg-in-hole task success rate by 35% under high noise, reducing average force by 30%.
Hunter L. Brown, Geoffrey Hollinger, Stefan Lee
RAPIDDS framework enhances human-robot teaming efficiency through multi-cycle spatio-temporal adaptation, significantly improving plan fluency and user preference.
Alex Cuellar, Michael Hagenow, Julie Shah
Gesture recognition using OpenCLIP visual learning model improves AcoustoBot swarm interaction accuracy to 87.8%.
Alex Lin, Lei Gao, Narsimlu Kemsaram et al.
The ESKF-PRE-VMPC framework reduces RMSE by 52.63% and 75.04% in UAV pipeline inspection without wind.
Wen Li, Hui Wang, Jinya Su et al.
LiveVLN breaks the stop-and-go loop in vision-language navigation, reducing waiting time by up to 77.7%.
Xiangchen Wang, Weiye Zhu, Teng Wang et al.
DAG-STL framework achieves zero-shot trajectory planning under Signal Temporal Logic (STL) constraints, significantly enhancing complex task planning capabilities.
Ruijia Liu, Ancheng Hou, Xiao Yu et al.
Enhances glass surface reconstruction using a depth prior, improving robot navigation accuracy.
Jiamin Zheng, Jingwen Yu, Guangcheng Chen et al.
Achieves relative state estimation using event-based propeller sensing with error under 3%.
Ravi Kumar Thakur, Luis Granados Segura, Jan Klivan et al.