Charig Yang
I am a fourth (and final) year PhD student in the Visual Geometry Group (VGG) at the University of Oxford, advised by Andrew Zisserman and Weidi Xie. I am also interning at Meta from June to December 2024, and expect to graduate by April 2025.
I am also part of the Autonomous Intelligent Machines and Systems (AIMS) programme at the University of Oxford, and am generously funded by EPSRC and AIMS.
I did my undergraduate degree in Engineering Science, also at Oxford, during which I spent lovely summers at Japan Railways, Metaswitch, True, CP Group, and Oxford’s Engineering Department.
Before that, I was born and raised in the suburbs of Bangkok, Thailand.
Email / CV / Twitter / Github / Google Scholar
Research
My PhD research explores novel methods for learning and using temporal signals in videos.
I am particularly interested in video understanding, self-supervised learning, segmentation, and new applications.
Made to Order: Discovering monotonic temporal changes via self-supervised video ordering
Charig Yang,
Weidi Xie,
Andrew Zisserman
ECCV, 2024 (Oral Presentation)
project page /
arXiv
Shuffling and ordering sequences reveals changes that are monotonic over time.
Moving Object Segmentation: All You Need Is SAM (and Flow)
Junyu Xie,
Charig Yang,
Weidi Xie,
Andrew Zisserman
ACCV, 2024 (Oral Presentation)
project page /
arXiv
SAM + Optical Flow = FlowSAM
It's About Time: Analog Clock Reading in the Wild
Charig Yang,
Weidi Xie,
Andrew Zisserman
CVPR, 2022
project page /
arXiv
We show that neural networks can read analog clocks without manual supervision.
Self-supervised Video Object Segmentation by Motion Grouping
Charig Yang,
Hala Lamdouar,
Erika Lu,
Andrew Zisserman,
Weidi Xie
Short: CVPR Workshop on Robust Video Scene Understanding, 2021
(Best Paper Award)
Full: ICCV, 2021
project page /
arXiv
We use transformers to group independently moving parts into layers, resulting in self-supervised segmentation.
Betrayed by Motion: Camouflaged Object Discovery via Motion Segmentation
Hala Lamdouar,
Charig Yang,
Weidi Xie,
Andrew Zisserman
ACCV, 2020
project page /
arXiv
We consider the task of camouflaged animal discovery and present a large-scale video camouflage dataset.
Teaching
2022-23: A2 (second-year) Electronic and Information Engineering, B14 (third-year) Information Engineering Systems, C18 (fourth-year) Computer Vision and Robotics
2021-22: B14 (third-year) Information Engineering Systems
2020-21: P2 (first-year) Electronic and Information Engineering, A1 (second-year) Mathematics, C19 (fourth-year) Machine Learning
You can find my summary notes for all P and A modules, and some B modules here.
Template gratefully stolen from here.