Liang Ding (丁亮)

NLP Researcher


E-mail: liangding.liam@gmail.com
Twitter: @liangdingNLP
GitHub: @alphadl
Scholar: Google Scholar Profile
WeChat: Liam_DL1469


Biography

I work as the Chief Scientist at a generative AI research and product startup (raised 50ドルM+), spearheading R&D of foundation models and their product applications. Previously, I was a Research Scientist and led the NLP research group [photo] at JD Explore Academy, where I was a member of the Doctoral Management Trainee (DMT) program (a top-tier talent program at JD.com, Inc.). I received my Ph.D. from The University of Sydney, supervised by Prof. Dacheng Tao (IEEE/ACM Fellow).

I have published over 70 papers in top-tier AI/NLP venues (e.g., NeurIPS, ICLR, ACL, EMNLP, TPAMI), with a recent focus on large language models (training, alignment, evaluation, multilinguality, multimodality) and their agentic applications in the real world. My work has been recognized with several honors, including the WAIC SAIL Award (the highest honor at the World AI Conference), the JD Technology Golden Award (the highest tech award at JD.com, Inc.), and an ACL Best Paper Nomination. I also led the development of models that secured 1st place in world-renowned challenges, including SuperGLUE (surpassing human performance), GLUE, WMT (2019-2022), and IWSLT 2021.

I am an IEEE Senior Member (elevated in 2025). I actively serve the community as an Area Chair for major AI/NLP conferences, including NeurIPS, ACL, EMNLP, and NAACL, and have been recognized as a Distinguished Reviewer for ACM TWEB and an Outstanding Reviewer for KDD 2025. I previously served as a researcher at SIAS, Zhejiang University.

I am always open to collaborations!

📣 NEWS: I have several full-time and intern positions on frontier topics, e.g., RLHF, adaptive rubric reward modeling, long-horizon memory, and lifelong learning for Agentic AI. Please reach out if you're interested in joining.


News

  • Sept. 2025: Three papers about {adversarial robustness, compression, self-evolution learning on forgetting} of language models are accepted by NeurIPS 2025, congrats to my students and coauthors.
  • Aug. 2025: 🎉 Six papers about {knowledge editing, dynamic KV caching, safety, agent-initialization, agent-early-exiting} of language models are accepted by EMNLP 2025, congrats to my interns and coauthors.
  • Aug. 2025: Invited to serve as the Senior PC (meta reviewer) for AAAI 2026.
  • Jul. 2025: I have been elevated to an IEEE Senior Member.
  • Jul. 2025: One paper about Healthcare Copilot is accepted by Nature Partner Journal npj Artificial Intelligence.
  • Jun. 2025: One paper about contrastive decoding for code generation is accepted by IEEE Transactions on Fuzzy Systems.
  • May 2025: 🎉 Six papers about {enhancing in-context learning, domain alignment, multilingual synchronization, multimodal reasoning, multi-agent, and eye-tracking-based intervention} of language models are accepted by ACL 2025, congrats to my interns and coauthors.
  • May 2025: Two papers about multimodality (retrieval-augmented perception) and RLHF (mitigating reward hacking) of language models are accepted by ICML 2025, with one oral, congrats to my interns.
  • Apr. 2025: Invited to serve as the Area Chair for NeurIPS 2025 and EMNLP 2025.
  • Mar. 2025: Two papers about {human-centric autoML system & mixture-of-experts compression} for foundation models are accepted by Nature Partner Journal npj Artificial Intelligence and Transactions on Machine Learning Research, respectively.
  • Mar. 2025: One paper about recursively summarizing in building long memory in LLMs is accepted by Neurocomputing.
  • Feb. 2025: Two papers about medical language models are accepted by Neurocomputing and IEEE Transactions on Emerging Topics in Computational Intelligence, respectively.
  • Jan. 2025: A paper about improving the lexical choice of non-autoregressive translation is accepted by Computer Speech & Language.
  • Dec. 2024: Two papers about {multimodal benchmark on high-resolution images and complex reasoning} of language models are accepted by AAAI 2025, congrats to my interns.
  • Nov. 2024: Three papers about {distillation for translation, jailbreak defense, translation evaluation} of language models are accepted by COLING 2025, congrats to my interns.
  • Oct. 2024: Invited to serve as the Area Chair for NAACL 2025.
  • Oct. 2024: One survey about efficient training of large models is accepted by ACM Computing Surveys, check out the [paper] & [media coverage by 新智元].
  • Sept. 2024: One paper about mitigating reward hacking in RLHF is accepted by NeurIPS 2024, congrats to my intern Yuchun.
  • Sept. 2024: Four papers about {catastrophic forgetting, distillation for CodeGen, speech modality expansion, and watermark} of language models are accepted by EMNLP 2024, congrats to my interns and coauthors.
  • Sept. 2024: One paper about data augmentation for multilabel classification is accepted by IEEE Transactions on Multimedia.
  • Aug. 2024: One paper about understanding multimodal alignment for MLLM is accepted by ACM ToMM.
  • Jul. 2024: One paper about multimodal fusion for MLLM is accepted by ACM MM 2024.
  • Jul. 2024: One paper about orthogonal optimizer for MoE is accepted by ECAI 2024.
  • May 2024: 🎉 Ten papers about {alignment, in-context learning, compression, evaluation, safety, and downstream adaptations} of language models are accepted by ACL 2024.
  • Mar. 2024: One paper about sparse graph Transformer is accepted by Neural Networks.
  • Mar. 2024: Two papers about {prompt transfer tuning & sheared backpropagation tuning of foundation models} are accepted by IEEE Transactions on Knowledge and Data Engineering and CVPR 2024, respectively.
  • Feb. 2024: Two papers about {prompt bias in LMs and multimodal translation dataset} are accepted by COLING 2024.
  • Dec. 2023: Invited to serve as the Area Chair for EMNLP 2024.
  • Dec. 2023: Invited to serve as the Area Chair for NAACL 2024, ACL 2024, & EACL 2024.
  • Dec. 2023: Two papers about {Seq2Seq LM pretraining & alleviating exposure bias for DiffModel} are accepted by IEEE Transactions on Knowledge and Data Engineering and AAAI 2024, respectively.
  • Oct. 2023: One paper about training LM with adaptive sharpness-aware optimizer is accepted by Neural Networks.
  • Oct. 2023: Five papers about {high (data & model) efficiency, cross-modal alignment in speech translation, LLM quantization, and ChatGPT for machine translation} are accepted by EMNLP 2023, congrats to my interns and coauthors.
  • Sept. 2023: One paper about parameter-efficient knowledge distillation is accepted by IEEE Transactions on Multimedia.
  • Jul. 2023: Three papers about {cross-modal contrastive learning, knowledge alignment, and federated optimizer} of model training are accepted by ECAI 2023, IEEE TASLP, and TPAMI, respectively.
  • May 2023: 🎉 Nine papers about {training, evaluation, robustness, and downstream adaptation} of large models are accepted by ACL 2023, including two oral papers and one Best Paper nomination, congrats to my interns and coauthors.
  • May 2023: Two papers about GNN sparse training and a healthcare dataset are accepted by IEEE Transactions on Neural Networks and Learning Systems and Information Fusion, respectively.
  • Apr. 2023: One paper about flatter optimization for fedML is accepted by ICML 2023 (oral).
  • Mar. 2023: We released reports to better understand and harness the power of ChatGPT for language understanding (NLU), machine translation (MT), and MT evaluation. Enjoy!
  • Mar. 2023: 🥂 I led the R&D of the Vega series of Large Language Models (织女系列自然语言大模型), which won the 2022 Technology Golden Award ("京东集团技术金项奖", the highest tech award at JD.com, Inc.), see internal media coverage.
  • Feb. 2023: One paper about knowledge-grounded multiview learning is accepted by IEEE Transactions on Knowledge and Data Engineering, congrats to my intern Qihuang.
  • Jan. 2023: Invited to serve as the Session Chair for AAAI 2023.
  • Jan. 2023: One paper about federated learning is accepted by ICLR 2023.
  • Jan. 2023: One paper about dynamic contrastive distillation is accepted by IEEE Transactions on Multimedia, congrats to my intern Jun.
  • Nov. 2022: One paper about memory-efficient pipeline parallelism of mixture-of-experts is accepted by IPDPS 2023, congrats to my intern Zheng.
  • Nov. 2022: Invited talk at China National Computer Congress 2022 (CNCC'22), check out the schedule.
  • Nov. 2022: One paper about simultaneous translation is accepted by AAAI 2023, congrats to my intern Hexuan.
  • Oct. 2022: 🏆 Our Vega v2 achieved 1st place on SuperGLUE, one of the most challenging general language understanding leaderboards! Check out the tech report.
  • Oct. 2022: Invited talk about Towards Efficient NLP Foundation Models -- Pretrain, Downstream Adaptation, and Beyond at Nankai Univ. and Univ. Chinese Academy of Sciences.
  • Oct. 2022: Two papers are accepted by EMNLP 2022, congrats to my interns Qihuang and Shwai.
  • Sept. 2022: 📖 Our co-authored "White Paper on Artificial Intelligence Generated Content" was published, check out the [Chinese version] & [media coverage].
  • Aug. 2022: Two papers are accepted by COLING 2022, congrats to my interns Changtong and Bing.
  • Aug. 2022: 🥂 Our project "Super Deep Learning of JD Explore Academy" won a 2022 SAIL (Superior AI Leader / 卓越人工智能引领者) Award Top 30 at the World Artificial Intelligence Conference, see media coverage.
  • Jul. 2022: 🏆 Our Vega-MT ranked 1st (Chinese<=>English, German<=>English, Czech<=>English, English=>Russian), 2nd (Russian=>English, Japanese=>English), and 3rd (English=>Japanese) in the General Translation Task of WMT 2022.
  • Apr. 2022: One paper is accepted by NAACL 2022.
  • Apr. 2022: One paper is accepted by SIGIR 2022, congrats to my interns Jun and Fei.
  • Mar. 2022: One paper is accepted by CVPR 2022.
  • Feb. 2022: Submitted my Ph.D. thesis "Neural Machine Translation with Fully Information Transformation", covering sufficient (adequate translation) and efficient (fast translation) information transformation.
  • Feb. 2022: One paper is accepted by ACL 2022.
  • Jan. 2022: 🏆 Our Vega v1 achieved 1st place on the General Language Understanding Evaluation (GLUE) benchmark! Check out the [tech report] & [media coverage].
  • Dec. 2021: Invited to serve as the Area Chair for ACL 2022.
  • Dec. 2021: Our Vega (织女) achieved SOTA performance on two GLUE tasks, surpassing human performance.
  • Aug. 2021: Two papers are accepted by EMNLP 2021 and its findings.
  • Aug. 2021: We organized the course "Advanced Topics of AI" at the School of Gifted Young, USTC; I was the lecturer for the NLP part.
  • Jul. 2021: 🏆 Ranked 1st in Swahili-English Speech Translation Task in IWSLT 2021.
  • Jul. 2021: Two papers are accepted by IWSLT 2021.
  • May. 2021: Three papers are accepted by ACL 2021 and its findings.
  • Mar. 2021: Invited to serve as the Session Chair for SDM 2021.
  • Jan. 2021: Two papers are accepted by ICLR 2021.
  • Jan. 2021: One paper is accepted by ICASSP 2021.
  • Sept. 2020: One paper is accepted by COLING 2020.
  • Sept. 2020: Two papers are accepted by WMT 2020.
  • Sept. 2020: One paper is accepted by EMNLP 2020.
  • Aug. 2020: 🏆 Ranked 2nd in German-English Chat Translation Task in WMT 2020.
  • Apr. 2020: One paper is accepted by ACL 2020.
  • Apr. 2019: 🏆 Ranked 1st in Finnish-English News Translation Task in WMT 2019.

Publications

† indicates an intern/student under my supervision. ✉️ indicates the corresponding author.

Competitions and Shared Tasks

  • SuperGLUE Benchmark, ranked 1st with an average score of 91.3 (since Oct. 8, 2022).
  • WMT 2022, ranked 1st on Chinese<=>English, German<=>English, Czech<=>English, and English=>Russian, 2nd on Russian=>English and Japanese=>English, and 3rd on English=>Japanese General Translation Tasks, respectively.
  • GLUE Benchmark, ranked 1st with an average score of 91.3 (since Jan. 1, 2022).
  • IWSLT 2021, ranked 1st on Swahili-English speech translation task.
  • WMT 2020, ranked 2nd on German-to-English chat translation shared task.
  • Tencent AI Innovation Competition, ranked 3rd on the "input tips in human-computer interaction translation" track.
  • WMT 2019, ranked 1st on Finnish-to-English news translation shared task.
  • CWMT 2017, ranked 3rd on the Japanese-to-Chinese patent translation shared task.

Professional Services

  • Area Chair: NeurIPS 2025, ACL (2022-2025), EMNLP (2022-2025), NAACL (2024-2025), EACL (2023-2025), ACL Rolling Review.
  • Program Committee (Selected): ICLR, ICML, NeurIPS, KDD (2025 Outstanding Reviewer), CVPR, ICCV, ECCV, AAAI, IJCAI.
  • Journal Reviewer (Selected): Nature Communications, AIJ, TACL, IEEE TKDE, IEEE TMM, ACM TWEB (Distinguished Reviewer), CL, IEEE TNNLS, IEEE/ACM TASLP, Neural Networks.
  • Membership: IEEE (Senior Member 2025-; Member 2020-2025), ACL (2020-).

Selected Awards

  • 2025: Senior Member, IEEE.
  • 2025: Outstanding Reviewer, KDD.
  • 2023: Best Paper Nomination, ACL.
  • 2023: Technology Golden Award (京东集团技术金项奖), JD.com, Inc. (Highest technical award at JD.com).
  • 2022: SAIL Award (Superior AI Leader), World Artificial Intelligence Conference (WAIC).
  • 2022: Distinguished Reviewer, ACM Transactions on the Web.
  • 2018: Outstanding Graduate, Beijing Municipal Education Commission.
  • 2013: National Scholarship, Ministry of Education of China.

*Last updated in October 2025.
