Chao Yang

杨超, Post-doctoral Researcher, Shanghai AI Lab.


I am a Post-doctoral Researcher at Shanghai AI Lab (上海人工智能实验室), where I lead a research group on fundamental Large Model Safety & Decision Intelligence. I also work closely with Yu Qiao, Jing Shao, and Yu Liu at Shanghai AI Lab.

Previously, I received my Ph.D. from Tsinghua University in 2022, advised by Prof. Fuchun Sun. I was fortunate to receive guidance from Prof. Huaping Liu and Prof. Wenbing Huang while pursuing my doctoral degree. I hold a master's degree from the Department of Computer Science and Technology, Tsinghua University, and a bachelor's degree from Sichuan University.

My research interests include Large Language Model Safety, Multi-modal Large Models, and Robotic Embodied Intelligence. Some of my current research keywords are listed below:

  • Large Language Models: Alignment and Fine-tuning, LLM Attacks and Defenses.
  • Multimodal LLMs: Vision-Language Fusion, Video-QA, VQA.
  • Embodied Robotics: Robotic Manipulation, Reinforcement and Imitation Learning.

Please feel free to email me at yangchao9264 [at] 126 [dot] cn or at my Gmail address: yangchaoemigmo [at] gmail [dot] com.

news

Apr 20, 2024 One paper accepted by the IJCAI 2024 Survey Track. :sparkles::sparkles:
Mar 13, 2024 One LLM safety survey paper accepted by NAACL 2024. :sparkles: :smile:
Feb 27, 2024 Two papers accepted by CVPR 2024. :sparkles::sparkles:
Dec 9, 2023 One offline RL paper accepted by AAAI 2024. :sparkles::sparkles:

selected publications

*(Equal contribution), +(Corresponding author).

2024

  1. VideoDistill: Language-aware Vision Distillation for Video Question Answering
    Bo Zou*, Chao Yang*, Yu Qiao, and 2 more authors
    arXiv preprint arXiv:2404.00973, 2024
  2. LLaMA-Excitor: General Instruction Tuning via Indirect Feature Interaction
    Bo Zou*, Chao Yang*, Yu Qiao, and 2 more authors
    arXiv preprint arXiv:2404.00913, 2024
  3. Emulated Disalignment: Safety Alignment for Large Language Models May Backfire!
    Zhanhui Zhou, Jie Liu, Zhichen Dong, and 4 more authors
    arXiv preprint arXiv:2402.12343, 2024
  4. Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey
    Zhichen Dong, Zhanhui Zhou, Chao Yang+, and 2 more authors
    arXiv preprint arXiv:2402.09283, 2024
  5. Safety of Multimodal Large Language Models on Images and Text
    Xin Liu, Yichen Zhu, Yunshi Lan, and 2 more authors
    arXiv preprint arXiv:2402.00357, 2024