Are ground truths really necessary for model accuracy evaluation?

Evaluating model accuracy is an indispensable part of machine learning. To do so, we need a test set comprising test samples and their ground truths. While standard datasets (e.g., ImageNet) satisfy this requirement, many real-world scenarios only provide us with unlabeled test data, rendering common model evaluation methods infeasible. In this talk, I will introduce an important but under-explored problem: automatic model evaluation (AutoEval). Specifically, given a labeled training set and a model, we aim to estimate the model's accuracy on unlabeled test datasets. We design an accuracy regression approach based on a meta-dataset: a dataset composed of datasets generated from the original training set via various image transformations such as rotation and background substitution. Using the synthetic meta-dataset and real-world datasets for training and testing, respectively, we obtain reasonable and promising estimates of model accuracy.
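The accuracy-regression idea above can be sketched in toy form. This is a minimal illustration, not the talk's actual method: the 2-D "model", the cluster-shift stand-in for image transformations, and the mean-feature statistic are all hypothetical choices made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "model": a fixed linear classifier on 2-D features.
w = np.array([1.0, -1.0])

def make_dataset(shift, n=500):
    """Sample a labelled 2-D dataset; `shift` perturbs the class clusters,
    standing in for an image transformation of varying strength."""
    X0 = rng.normal([-1 + shift, 0], 0.8, size=(n, 2))  # class 0
    X1 = rng.normal([1 + shift, 0], 0.8, size=(n, 2))   # class 1
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

def accuracy(X, y):
    pred = (X @ w > 0).astype(int)
    return (pred == y).mean()

# Meta-dataset: for each transformed set, record a distribution
# statistic (here simply the feature mean) and the measured accuracy.
stats, accs = [], []
for shift in np.linspace(0, 2, 20):
    X, y = make_dataset(shift)
    stats.append(X.mean())
    accs.append(accuracy(X, y))

# Accuracy regression: fit accuracy as a function of the statistic.
coeffs = np.polyfit(stats, accs, deg=2)

# AutoEval step: predict accuracy of a new "unlabeled" set from its
# statistic alone; labels are used here only to check the estimate.
X_test, y_test = make_dataset(1.2)
est = np.polyval(coeffs, X_test.mean())
true = accuracy(X_test, y_test)
```

Because the classifier's error grows smoothly with the distribution shift, the regressor recovers accuracy from the unlabeled statistic to within a few percent in this toy setting.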
Meeting ID: 849 6108 8872
Password: 428180

Biography
Dr Liang Zheng is a Senior Lecturer, CS Futures Fellow and DECRA Fellow in the School of Computing, Australian National University. He obtained both his B.S. degree (2010) and Ph.D. degree (2015) from Tsinghua University, China. He is best known for his contributions to object re-identification, and his recent research interest is data-centric computer vision, where leveraging, analysing and improving data, rather than algorithms, is of primary concern. He has been a co-organiser of the AI City workshop series at CVPR and serves as an Area Chair and Senior PC member for CVPR, ECCV, ACM Multimedia, AAAI and IJCAI. He is an Associate Editor for IEEE T-CSVT.