To deploy autonomous agents in any environment, the agent must be able to perceive its surroundings over time, both the status of the static scene and the states of moving entities. My current research centers on 3D scene representation and active vision, which plans the agent's motion based on the visually perceived scene, with applications such as 3D scene exploration and reconstruction, and 3D object search and classification.
While my primary research focus lies in static 3D scene modelling and scene representation learning, perceiving moving objects, in particular humans, and understanding their dynamics and behaviour patterns is equally important for developing a truly autonomous system. Recently, I have also been involved in a set of projects on visual object detection and tracking, metric inference, and video pattern analysis, which has further broadened my expertise in developing autonomous systems.
All of these projects are funded by industrial or institutional sources, or by EU projects including MARVEL and PROTECTOR.