Charig Yang

I am a fourth (final) year PhD student at the Visual Geometry Group (VGG) at the University of Oxford, advised by Andrew Zisserman and Weidi Xie. I am also interning at Meta from June to December 2024, working on multimodal LLMs, and expect to graduate by April 2025.

I am also part of the Autonomous Intelligent Machines and Systems (AIMS) programme at the University of Oxford, and am generously funded by EPSRC and AIMS.

I did my undergraduate degree in Engineering Science, also at Oxford. During that time, I spent lovely summers at Japan Railways, Metaswitch, True, CP Group, and Oxford's Engineering Department.

Before that, I was born and raised in the suburbs of Bangkok, Thailand.

Email  /  CV  /  Twitter  /  Github  /  Google Scholar

Research

My PhD research focuses on exploring new applications of computer vision using 'time' as self-supervision. I am particularly interested in (i) new applications previously unachievable in computer vision, and (ii) simple and elegant solutions to existing and new problems.

Made to Order: Discovering monotonic temporal changes via self-supervised video ordering
Charig Yang, Weidi Xie, Andrew Zisserman
arXiv, 2024
project page / arXiv

Shuffling and ordering sequences reveals changes that are monotonic over time.

Moving Object Segmentation: All You Need Is SAM (and Flow)
Junyu Xie, Charig Yang, Weidi Xie, Andrew Zisserman
arXiv, 2024
project page / arXiv

SAM + optical flow works remarkably well in two ways: (i) flow as input to SAM, and (ii) flow as a prompt to guide SAM's RGB input.

It's About Time: Analog Clock Reading in the Wild
Charig Yang, Weidi Xie, Andrew Zisserman
CVPR, 2022
project page / arXiv

We show that neural networks can read analog clocks in unconstrained environments without manual supervision.

Self-supervised Video Object Segmentation by Motion Grouping
Charig Yang, Hala Lamdouar, Erika Lu, Andrew Zisserman, Weidi Xie
Short: CVPR Workshop on Robust Video Scene Understanding, 2021 (Best Paper Award)
Full: ICCV, 2021
project page / arXiv

We use transformers to group independently moving parts into layers, yielding self-supervised segmentation.

Betrayed by Motion: Camouflaged Object Discovery via Motion Segmentation
Hala Lamdouar, Charig Yang, Weidi Xie, Andrew Zisserman
ACCV, 2020
project page / arXiv

We consider the task of camouflaged animal discovery and present a large-scale video camouflage dataset.

Teaching

2022-23: A2 (second-year) Electronic and Information Engineering, B14 (third-year) Information Engineering Systems, C18 (fourth-year) Computer Vision and Robotics
2021-22: B14 (third-year) Information Engineering Systems
2020-21: P2 (first-year) Electronic and Information Engineering, A1 (second-year) Mathematics, C19 (fourth-year) Machine Learning

You can find my summary notes for all P and A modules, and some B modules here.


Template gratefully stolen from here.