Xinxiao Wu received her Ph.D. from the School of Computer Science, Beijing Institute of Technology, in July 2010. From August 2010 to October 2011, she was a postdoctoral research fellow at Nanyang Technological University, Singapore. She joined the School of Computer Science, Beijing Institute of Technology, in 2012, where she is currently a Professor. She received the Excellent Doctoral Dissertation Award from the Chinese Association for Artificial Intelligence. She has published many papers in top conferences and journals on computer vision and artificial intelligence, including ICCV, CVPR, ECCV, AAAI, IJCAI, ACM MM, IJCV, IEEE TIP, IEEE TMM, IEEE TNNLS, IEEE TCSVT, and IEEE TCYB. As principal investigator, her research has been supported by many grants, including the National Natural Science Foundation of China (NSFC), the Doctoral Fund of the Ministry of Education, and many industry collaboration projects. She also serves on the editorial board of IEEE Transactions on Multimedia. Her current research interests include machine learning, vision and language, and multimedia video understanding.
Students interested in vision and language, machine learning, and artificial intelligence are welcome to join us!
2024-01-23
Yuheng Shi and Hanxi Lin's paper “Commonsense Knowledge Prompting for Few-shot Action Recognition in Videos” was accepted by IEEE Transactions on Multimedia (TMM). Congratulations!

2023-07-26
Shuo Yang and Yongqi Wang's paper “Multi-modal Prompting for Open-vocabulary Video Visual Relationship Detection” was accepted by The 38th AAAI Conference on Artificial Intelligence (AAAI2024). Congratulations!

2023-07-26
Yayun Qi's paper “Relational Distant Supervision for Image Captioning without Image-text Pairs” was accepted by The 38th AAAI Conference on Artificial Intelligence (AAAI2024). Congratulations!

2023-07-26
Shuo Yang and Zirui Shang's paper “Probability Distribution Based Frame-supervised Language-driven Action Localization” was accepted by The 31st ACM International Conference on Multimedia (ACM MM2023). Congratulations!

2023-07-17
Wentian Zhao's paper “Boosting Entity-aware Image Captioning with Multi-modal Knowledge Graph” was accepted by IEEE Transactions on Multimedia (TMM). Congratulations!

2023-04-20
Shitong Shao and Huanran Chen's paper “Teaching What You Should Teach: A Data-Based Distillation Method” was accepted by the International Joint Conference on Artificial Intelligence (IJCAI2023). Congratulations!

2023-03-29
Yubo Zhu's paper “Topic-aware Video Summarization using Multimodal Transformer” was accepted by Pattern Recognition (PR). Congratulations!

2023-03-17
Xiaofeng Ji's paper “Counterfactual Inference for Visual Relationship Detection in Videos” was accepted by the IEEE International Conference on Multimedia and Expo (ICME2023). Congratulations!

2023-02-28
Jin Chen's paper “Meta-causal Learning for Single Domain Generalization” was accepted by the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2023). Congratulations!

2023-01-19
Wentian Zhao and Yayun Qi won the second prize of the “Ingenuity Cup” National Artificial Intelligence Innovation Application Competition! Congratulations!

2023-01-06
Tong Li's paper “Sentimental Visual Captioning using Multimodal Transformer” was accepted by the International Journal of Computer Vision (IJCV). Congratulations!

2022-12-05
Mengxiao Tian's paper “Adaptive Latent Graph Representation Learning for Image-Text Matching” was accepted by IEEE Transactions on Image Processing (TIP). Congratulations!

2022-05-29
Wentian Zhao's paper “Learning Cooperative Neural Modules for Stylized Image Captioning” was accepted by the International Journal of Computer Vision (IJCV). Congratulations!

2022-04-21
Shuo Yang's paper “Entity-Aware and Motion-Aware Transformers for Language-driven Action Localization” was accepted by the International Joint Conference on Artificial Intelligence (IJCAI2022). Congratulations!

2022-03-07
Hanxi Lin's paper “Adaptive Recursive Circle Framework for Fine-grained Action Recognition” was accepted by the IEEE International Conference on Multimedia and Expo (ICME2022). Congratulations!

2021-12-01
Jin Chen and Xiaofeng Ji's paper “Adaptive Image-to-video Scene Graph Generation via Knowledge Reasoning and Adversarial Learning” was accepted by The 36th AAAI Conference on Artificial Intelligence (AAAI2022). Congratulations!

2021-09-29
Wentian Zhao's paper “Multi-modal Dependency Tree for Video Captioning” was accepted by the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS2021). Congratulations!

2021-03-13
Jin Chen's paper “Sequential Instance Refinement for Cross-domain Object Detection in Images” was accepted by IEEE Transactions on Image Processing (TIP). Congratulations!

2021-03-12
Jingyi Hou and Yayun Qi's paper “Chinese Video Captioning via Cross-lingual Knowledge Distillation” (跨语言知识蒸馏的视频中文字幕生成) was accepted by the Chinese Journal of Computers (《计算机学报》). Congratulations!

2021-03-07
Tong Li's paper “Image Captioning with Inherent Sentiment” was accepted by the IEEE International Conference on Multimedia and Expo (ICME2021, Oral). Congratulations!

2020-12-02
Jianwei Zhao and Ruiqi Wang's paper “Anticipating Future Relations via Graph Growing for Action Prediction” was accepted by The 35th AAAI Conference on Artificial Intelligence (AAAI2021). Congratulations!

2020-12-02
Jin Chen's paper “Spatial-temporal Causal Inference for Partial Image-to-video Adaptation” was accepted by The 35th AAAI Conference on Artificial Intelligence (AAAI2021). Congratulations!

2020-11-24
Wentian Zhao's paper “Cross-domain Image Captioning via Cross-modal Retrieval and Model Adaptation” was accepted by IEEE Transactions on Image Processing (TIP). Congratulations!

2020-11-20
Ruiqi Wang's paper “Spatial-Temporal Relation Reasoning for Action Prediction in Videos” was accepted by the International Journal of Computer Vision (IJCV). Congratulations!

2020-09-25
Jin Chen's paper “Domain Adversarial Reinforcement Learning for Partial Domain Adaptation” was accepted by IEEE Transactions on Neural Networks and Learning Systems (TNNLS). Congratulations!

2020-07-26
Jialu Chen's paper “Preserving Global and Local Temporal Consistency for Arbitrary Video Style Transfer” was accepted by ACM Multimedia 2020. Congratulations!