(last edited March 18th, 2026)
Cloth representations for robotic manipulation
Cloth state representations in robotic manipulation span a spectrum from low-level sensory observations (images and point clouds) to structured geometric models (meshes and graphs) and high-level abstractions (semantic or topological descriptors). Earlier work relied heavily on explicit geometric representations and physics-based models, while more recent approaches increasingly adopt learning-based representations, particularly graph-based models that capture local interactions and enable dynamics prediction. At the same time, there is a clear trend toward compact and task-oriented representations, such as semantic keypoints and topological descriptors, which reduce dimensionality while preserving manipulation-relevant information. These emerging representations aim to bridge perception and planning, enabling more scalable and generalizable cloth manipulation systems.
Summary of Cloth State Representations for Robotic Manipulation
| Representation Type | State Description | Typical Data Structure | Approx. Dim. | Typical Tasks / Best Use Cases | Advantages | Limitations | Representative Papers |
| Observation-based / Raw sensory | State defined directly by sensory observations (RGB or RGB‑D images) | Image tensor | O(HW) pixels | End‑to‑end policy learning, reinforcement learning, imitation learning | Simple pipeline, compatible with deep learning | Hard to interpret, sensitive to visual variability | [1], [2], [3], [15], [16], [25] |
| Point-cloud representation | Cloth represented as a set of 3D surface points from depth sensors | Point set P={p_i} | O(N) points | Grasp detection, geometric perception, manipulation policy learning | Dense geometric information, directly acquired from sensors | No connectivity information, partial observations | [4], [5], [26] |
| Dense geometric (mesh / particle models) | Cloth represented as a discretized surface with connectivity between vertices | Mesh or particle system | O(V) vertices | Physics simulation, dynamics learning, model‑based planning | Physically grounded representation | High dimensionality, difficult to estimate from partial observations | [6], [7], [18], [27] |
| Graph-based representation | Cloth elements represented as nodes connected by edges for message passing | Graph G=(V,E) | O(V)+O(E) | Learning cloth dynamics, predictive models, manipulation planning | Captures local interactions, scalable learning | Requires graph construction and training data | [8], [9], [28] |
| Sparse geometric feature representation | State described using a small set of salient geometric features | Keypoints / feature vector | O(K), K << V | Grasp point selection, unfolding tasks, manipulation primitives | Low dimensional, easier for grasp planning | May lose global cloth structure | [10], [22], [29] |
| Semantic state representation | Cloth classified into discrete task‑relevant states | Discrete labels | O(1) | Task monitoring, manipulation stage recognition, high‑level planning | Very compact representation | Limited geometric detail | [11], [24] |
| Configuration-space / topological representation | Compact descriptors capturing global cloth configuration using topology or boundary features | Reduced coordinate representation | O(1) | State classification, manipulation planning, configuration reasoning | Compact and invariant representation | Hard to derive general coordinates | [12], [13], [14] |
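To make the graph-based row concrete, the sketch below builds a flat rectangular cloth as a particle grid with G=(V,E) connectivity, matching the O(V)+O(E) storage noted in the table. The grid size, spacing, and 4-neighbor (structural) edge pattern are illustrative assumptions, not any cited paper's formulation; GNN dynamics models typically add diagonal and bending edges and attach learned features to nodes and edges.

```python
import numpy as np

def cloth_grid_graph(rows, cols, spacing=0.02):
    """Toy graph representation G=(V, E) of a flat rectangular cloth.

    Nodes store 3D rest positions of a rows x cols particle grid lying
    in the z=0 plane; edges connect 4-neighbors (structural links only).
    This is a hypothetical sketch of the data layout, not a specific
    paper's graph construction.
    """
    # Node positions: regular grid in the z=0 plane, one row per particle.
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    nodes = np.stack(
        [ii * spacing, jj * spacing, np.zeros_like(ii, dtype=float)], axis=-1
    ).reshape(-1, 3)

    def idx(i, j):
        # Flattened node index of grid cell (i, j).
        return i * cols + j

    # Edges: link each particle to its right and bottom neighbor once.
    edges = []
    for i in range(rows):
        for j in range(cols):
            if j + 1 < cols:
                edges.append((idx(i, j), idx(i, j + 1)))
            if i + 1 < rows:
                edges.append((idx(i, j), idx(i + 1, j)))
    return nodes, np.array(edges)
```

A rows x cols grid yields rows*cols nodes and rows*(cols-1) + (rows-1)*cols edges, so both V and E grow linearly with the number of particles — the scalability property the table attributes to graph-based learning.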
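The sparse geometric row reduces an O(N) point set to O(K) salient points with K << N. One generic way to sketch that reduction is farthest point sampling; this is a stand-in chosen for illustration — the cited works use task-specific geometric analysis or learned keypoint detectors rather than FPS.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Downsample a cloth point cloud P = {p_i} (O(N) points) to k
    well-spread representatives (O(K), K << N).

    Greedy FPS: start from a random point, then repeatedly pick the
    point farthest from everything selected so far.
    """
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(points)))]
    # Distance from every point to the nearest selected point so far.
    dists = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        selected.append(nxt)
        dists = np.minimum(
            dists, np.linalg.norm(points - points[nxt], axis=1)
        )
    return points[selected]
```

The trade-off in the table is visible here: the K samples are cheap to reason about for grasp selection, but any global structure not captured by the chosen points is lost.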
References
[1] Matas, J., James, S., & Davison, A. (2018). Sim-to-real reinforcement learning for deformable object manipulation. In Conference on Robot Learning (CoRL).
[2] Tsurumine, Y., Cui, Y., Uchibe, E., & Matsubara, T. (2019). Deep reinforcement learning with smooth policy update: Application to robotic cloth manipulation. Robotics and Autonomous Systems.
[3] Hoque, R., Seita, D., Balakrishna, A., Ganapathi, A., Tanwani, A. K., Jamali, N., Yamane, K., Iba, S., & Goldberg, K. (2021). VisuoSpatial Foresight for physical sequential fabric manipulation. Autonomous Robots.
[4] Schulman, J., Lee, A., Ho, J., & Abbeel, P. (2013). Tracking deformable objects with point clouds. In IEEE International Conference on Robotics and Automation (ICRA).
[5] Garcia-Camacho, I., Borras, J., Calli, B., Norton, A., & Alenya, G. (2022). Household cloth object set: Fostering benchmarking in deformable object manipulation. IEEE Robotics and Automation Letters, 7(3), 5866-5873.
[6] Cusumano-Towner, M., Singh, A., Miller, S., O'Brien, J. F., & Abbeel, P. (2011). Bringing clothing into desired configurations with limited perception. In IEEE International Conference on Robotics and Automation (ICRA).
[7] Baraff, D., & Witkin, A. (2023). Large steps in cloth simulation. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2 (pp. 767-778).
[8] Ma, X., Hsu, D., & Lee, W. S. (2022). Learning latent graph dynamics for visual manipulation of deformable objects. In 2022 International Conference on Robotics and Automation (ICRA) (pp. 8266-8273).
[9] Lin, X., Wang, Y., Huang, Z., & Held, D. (2022). Learning visible connectivity dynamics for cloth smoothing. In Conference on Robot Learning (CoRL).
[10] Triantafyllou, D., Mariolis, I., Kargakos, A., Malassiotis, S., & Aspragathos, N. (2016). A geometric approach to robotic unfolding of garments. Robotics and Autonomous Systems, 75, 233-243.
[11] Tzelepis, G., Aksoy, E. E., Borràs, J., & Alenyà, G. (2024). Semantic state estimation in robot cloth manipulations using domain adaptation from human demonstrations. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications.
[12] Strazzeri, F., & Torras, C. (2021). Topological representation of cloth state for robot manipulation: Deriving the configuration space of a rectangular cloth. Autonomous Robots, 45(5), 737-754.
[13] Coltraro, F., Fontana, J., Amorós, J., Alberich-Carramiñana, M., Borràs, J., & Torras, C. (2023). A representation of cloth states based on a derivative of the Gauss linking integral. Applied Mathematics and Computation, 457, 128165.
[14] Kamat, A., Borràs, J., & Torras, C. (2026). CloSE: A compact shape- and orientation-agnostic cloth state representation. In IEEE International Conference on Robotics and Automation (ICRA).
[15] Seita, D., Jamali, N., Laskey, M., Tanwani, A. K., Berenstein, R., Baskaran, P., … & Goldberg, K. (2019). Deep transfer learning of pick points on fabric for robot bed-making. In The International Symposium of Robotics Research (pp. 275-290).
[16] Seita, D., Florence, P., Tompson, J., Coumans, E., Sindhwani, V., Goldberg, K., & Zeng, A. (2021). Learning to rearrange deformable cables, fabrics, and bags with goal-conditioned transporter networks. In 2021 IEEE International Conference on Robotics and Automation (ICRA) (pp. 4568-4575).
[18] Bridson, R., Marino, S., & Fedkiw, R. (2005). Simulation of clothing with folds and wrinkles. In ACM SIGGRAPH 2005.
[22] Deng, Y., & Hsu, D. (2025). General-purpose clothes manipulation with semantic keypoints. In 2025 IEEE International Conference on Robotics and Automation (ICRA) (pp. 13181-13187).
[24] Doumanoglou, A., Kargakos, A., Kim, T. K., & Malassiotis, S. (2014). Autonomous active recognition and unfolding of clothes using random decision forests and probabilistic planning. In 2014 IEEE International Conference on Robotics and Automation (ICRA) (pp. 987-993).
[25] Mo, K., Xia, C., Wang, X., Deng, Y., Gao, X., & Liang, B. (2022). Foldsformer: Learning sequential multi-step cloth manipulation with space-time attention. IEEE Robotics and Automation Letters, 8(2), 760-767.
[26] De Gusseme, V. L., Lips, T., Proesmans, R., Hietala, J., Lee, G., Choi, J., … & Wyffels, F. (2025). A dataset and benchmark for robotic cloth unfolding grasp selection: The ICRA 2024 Cloth Competition. The International Journal of Robotics Research.
[27] Yoon, K. I., & Lim, S. C. (2025). Real-to-sim high-resolution cloth modeling: Physical parameter optimization using particle-based simulation with robot manipulation data. Journal of Computational Design and Engineering, 12(8), 29-44.
[28] Zhou, C., Xu, H., Hu, J., Luan, F., Wang, Z., Dong, Y., … & He, B. (2025). SSfold: Learning to fold arbitrary crumpled cloth using graph dynamics from human demonstration. IEEE Transactions on Automation Science and Engineering.
[29] Tabernik, D., Muhovič, J., Urbas, M., & Skočaj, D. (2024). Center direction network for grasping point localization on cloths. IEEE Robotics and Automation Letters, 9(10), 8913-8920.
