Charig Yang
I am a PhD student at the Visual Geometry Group (VGG) at the University of Oxford, advised by Andrew Zisserman and Weidi Xie, where I work on computer vision, more specifically video understanding.
I am generously funded by EPSRC and the AIMS CDT. I also spent half a year interning at Meta Reality Labs.
I did my undergraduate degree in Engineering Science, also at Oxford, during which I spent lovely summers at Japan Railways, Metaswitch (since acquired by Microsoft), True, CP Group, and Oxford's Engineering Department.
Before that, I was born and raised in the suburbs of Bangkok, Thailand.
Email  / 
CV  / 
Twitter  / 
Github  / 
Google Scholar
Research
My research focuses on computer vision. I am particularly interested in video understanding, self-supervised learning, segmentation, and new applications.
|
Learning from Time
Charig Yang
PhD Thesis
abstract / thesis (coming soon)
Submitted my thesis! It explores novel methods for learning from, and making use of, temporal signals in videos. I will be defending in April; my examiners are Christian Rupprecht (Oxford) and Bill Freeman (MIT).
|
Made to Order: Discovering monotonic temporal changes via self-supervised video ordering
Charig Yang,
Weidi Xie,
Andrew Zisserman
ECCV, 2024 (Oral Presentation)
project page /
arXiv
Changes happen all the time, but only some are consistent over time. Ordering shuffled sequences reveals the latter.
|
Moving Object Segmentation: All You Need Is SAM (and Flow)
Junyu Xie,
Charig Yang,
Weidi Xie,
Andrew Zisserman
ACCV, 2024 (Oral Presentation)
project page /
arXiv
SAM + Optical Flow = FlowSAM.
|
It's About Time: Analog Clock Reading in the Wild
Charig Yang,
Weidi Xie,
Andrew Zisserman
CVPR, 2022
project page /
arXiv /
tweet (by Lucas Beyer) /
new scientist article
We solve a niche but fun problem: reading analog clocks (which 2025's VLMs still fail at!). We circumvent manual supervision by exploiting the fact that time flows at a constant rate.
|
Self-supervised Video Object Segmentation by Motion Grouping
Charig Yang,
Hala Lamdouar,
Erika Lu,
Andrew Zisserman,
Weidi Xie
Short: CVPR Workshop on Robust Video Scene Understanding, 2021
(Best Paper Award)
Full: ICCV, 2021
project page /
arXiv
Motion can be used to discover moving objects in general. We introduce a self-supervised segmentation method by grouping motion into layers using a transformer.
|
Betrayed by Motion: Camouflaged Object Discovery via Motion Segmentation
Hala Lamdouar,
Charig Yang,
Weidi Xie,
Andrew Zisserman
ACCV, 2020
project page /
arXiv
Camouflaged animals are hard to see, until they move. We present a method for discovering camouflaged objects using motion, along with a large-scale video camouflage dataset.
|
Teaching
2022-23: A2 (second-year) Electronic and Information Engineering, B14 (third-year) Information Engineering Systems, C18 (fourth-year) Computer Vision and Robotics
2021-22: B14 (third-year) Information Engineering Systems
2020-21: P2 (first-year) Electronic and Information Engineering, A1 (second-year) Mathematics, C19 (fourth-year) Machine Learning
You can find my summary notes for all P and A modules, and some B modules here.
|
Template gratefully stolen from here.