
【#Tech24H】ByteDance’s Seed team recently released GR-Dexter, which it describes as the first integrated framework extending vision-language-action models to high-degree-of-freedom dexterous hands, tackling three major challenges: the curse of dimensionality, perceptual blind spots, and data scarcity. The system controls a 56-degree-of-freedom dual-arm platform (21 degrees of freedom per hand) and completes long-horizon tasks such as vacuuming and dividing bread. The key to the hands’ precise tactile feedback is a high-density tactile electronic skin, which offers full curved-surface coverage, high sensing precision, and a wide measurement range to meet diverse task requirements. This breakthrough marks a critical step toward robots evolving from task execution to human-like manipulation.
