Ying Zhang's Homepage

About Ying Zhang (张莹)


I am a postdoc in the Natural Language Understanding Team at the RIKEN Center for Advanced Intelligence Project (AIP), under the supervision of Prof. Kentaro Inui at Tohoku University.

Thank you for your interest in my profile! I work in natural language processing. To follow my latest publications, you can also visit my ORCID, Semantic Scholar, or Google Scholar.

Latest Publications

  1. Shiyin Tan, Dongyuan Li, Renhe Jiang, Ying Zhang*, and Manabu Okumura. “Community-Invariant Graph Contrastive Learning,” In Proceedings of the Forty-first International Conference on Machine Learning (ICML 2024). [preprint]

  2. Dongyuan Li, Ying Zhang*, Yusong Wang, Kotaro Funakoshi, and Manabu Okumura. “Active Learning with Task Adaptation Pre-training for Speech Emotion Recognition,” Journal of Natural Language Processing (JNLP 2024). [preprint]

  3. Ying Zhang*, Hidetaka Kamigaito, and Manabu Okumura. “Bidirectional Transformer Reranker for Grammatical Error Correction,” Journal of Natural Language Processing (JNLP 2024), Volume 31, Issue 1. [paper]

  4. Ying Zhang*, Hidetaka Kamigaito, and Manabu Okumura. “Bidirectional Transformer Reranker for Grammatical Error Correction,” In Findings of the Association for Computational Linguistics: ACL 2023, pp. 3801-3825, long paper. [paper, bib, presentation, poster, code]

  5. Ying Zhang*, Hidetaka Kamigaito, Tatsuya Aoki, Hiroya Takamura, and Manabu Okumura. “Generic Mechanism for Reducing Repetitions in Encoder-Decoder Models,” Journal of Natural Language Processing (JNLP 2023), Volume 30, Issue 2, pp. 401-431. [paper, bib, code]

  6. Ying Zhang*, Hidetaka Kamigaito, and Manabu Okumura. “A Language Model-based Generative Classifier for Sentence-level Discourse Parsing,” In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), pp. 2432-2446, long paper. [paper, bib, presentation, poster, code]

  7. Ying Zhang*, Hidetaka Kamigaito, Tatsuya Aoki, Hiroya Takamura, and Manabu Okumura. “Generic Mechanism for Reducing Repetitions in Encoder-Decoder Models,” In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP 2021), pp. 1606-1615. [paper, bib, presentation, code]

Research Interests


My research focuses on natural language processing, which uses computers to understand text the way humans do. Specifically, I am passionate about developing general functions and mechanisms that can be easily applied to pretrained large language models across various natural language generation tasks to improve their results.

For instance, I am particularly interested in addressing the issue of text degeneration: when likelihood is used as the decoding objective, deep neural networks tend to prefer bland and repetitive texts, so the goal is to make them assign higher probabilities to natural and grammatical texts instead (Holtzman et al. 2020). This topic is the focus of my Ph.D. dissertation: Addressing Text Degeneration of Discriminative Models with Re-ranking Methods.
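To make the re-ranking idea concrete, here is a minimal, self-contained Python sketch. It is purely illustrative and not the method from my papers: the repetition_penalty feature, the weight of 5.0, and the candidate sentences and scores are all invented for this example. It shows how a likelihood-based decoder can rank a repetitive candidate highest, while a reranker that penalizes repetition recovers the fluent one.

```python
from collections import Counter

def repetition_penalty(text: str) -> float:
    """Toy reranker feature: fraction of repeated tokens in a candidate."""
    tokens = text.split()
    counts = Counter(tokens)
    repeated = sum(c - 1 for c in counts.values())
    return repeated / max(len(tokens), 1)

def rerank(candidates):
    """Combine the decoder's log-likelihood with a repetition penalty
    (the weight 5.0 is an arbitrary choice for this illustration)."""
    return sorted(
        candidates,
        key=lambda c: c["log_likelihood"] - 5.0 * repetition_penalty(c["text"]),
        reverse=True,
    )

# Hypothetical decoder output: the repetitive candidate has the higher likelihood.
candidates = [
    {"text": "the cat sat on the mat", "log_likelihood": -6.2},
    {"text": "the cat the cat the cat", "log_likelihood": -5.8},
]

best = rerank(candidates)[0]
print(best["text"])  # -> "the cat sat on the mat"
```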

Academic/Career


Thank you for your interest in my work!

If you have any questions about our papers or code, please feel free to contact me via