Lili Mou, PhD

Calendar

Email:
lmou [at] ualberta [dot] ca
doublepower [dot] mou [at] gmail [dot] com
Note: Please send to only one of my email addresses.


Dr. Lili Mou is an Assistant Professor in the Department of Computing Science, University of Alberta. He is also an Alberta Machine Intelligence Institute (Amii) Fellow and a Canada CIFAR AI (CCAI) Chair. Lili received his BS and PhD degrees in 2012 and 2017, respectively, from the School of EECS, Peking University. After that, he worked as a postdoctoral fellow at the University of Waterloo and as a research scientist at Adeptmind (a startup in Toronto, Canada). His research interests include deep learning applied to natural language processing as well as to programming language processing. He has publications in top conferences and journals, including AAAI, ACL, CIKM, COLING, EMNLP, ICASSP, ICLR, ICML, IJCAI, INTERSPEECH, NAACL-HLT, NeurIPS, and TACL (in alphabetical order). He has also presented tutorials at EMNLP-IJCNLP'19 and ACL'20.

Admitting

I am admitting students at all levels, as well as postdocs and visiting scholars.
MSc and PhD applications should be submitted through the University portal.

Requirements for all students: see here for details.

New Initiative for MSc Applicants:
To consider a wider range of prospective students, I am starting a new initiative for interviewing MSc applicants.
As a first step, please describe a project that you have done with the following requirements: please read a good example and a bad example.
Your files may be reviewed by the PI and current students.

Teaching

CMPUT 272: Formal Systems and Logic [Winter 2021]

CMPUT 651: Deep Learning for NLP [Winter 2021] [Fall 2019]

CMPUT 466/566: Machine Learning [Fall 2021] [Fall 2020] [Winter 2020]

CMPUT 463/563: Probabilistic Graphical Models [Fall 2021]


I allow eClass guest access with UofA accounts for all my courses. No email request is needed for accessing eClass.


Independent Study:
I am happy to offer the Independent Study course for both undergraduate and graduate students.
A student interested in such a research course should write me a letter of motivation.
The project idea could come from either the student or the instructor, and ideally both.

Seminars

In our seminars, we discuss machine learning theory, algorithms,
and applications (with a special interest in NLP). We start from the foundations
and move to the frontiers.

Please visit here for the contents.

Publications

Copyright notice: Copyrights of published papers may be held by the publishers. All rights are reserved for other materials, including drafts, slides, and source code. Whenever not conflicting with copyright laws, I permit free use for non-commercial purposes. Please cite my papers if you use them for research. In particular, if a paper is accompanied by source code, the URL is given in the paper. Please note, however, that there is no guarantee that my code will run in your environment.

Useful links: Complete list, Google Scholar, DBLP

Book

  • Lili Mou, Zhi Jin. Tree-Based Convolutional Neural Networks: Principles and Applications, Springer, 2018. [url]

New

  • Yunli Wang, Yu Wu, Lili Mou, Zhoujun Li, Wenhan Chao. Formality style transfer with shared latent space. To appear in COLING, 2020.

  • Kashif Khan, Gaurav Sahu, Vikash Balasubramanian, Lili Mou, Olga Vechtomova. Adversarial learning on the latent space for diverse dialog generation. To appear in COLING, 2020.

  • Jingjing Li, Zichao Li, Lili Mou, Xin Jiang, Michael Lyu, Irwin King. Unsupervised text generation by learning from search. In Advances in Neural Information Processing Systems (NeurIPS), 2020. [pdf]

  • Yixing Luan, Bradley Hauer, Lili Mou, Grzegorz Kondrak. Improving word sense disambiguation with translations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4055--4065, 2020. [pdf]

  • Lili Mou, Olga Vechtomova. Stylized text generation: Approaches and applications. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL): Tutorial Abstracts, pages 19--22, 2020. [pdf, video]

  • Dhruv Kumar, Lili Mou, Lukasz Golab, Olga Vechtomova. Iterative edit-based unsupervised sentence simplification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 7918--7928, 2020. [pdf]

  • Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova, Katja Markert. Discrete optimization for unsupervised sentence summarization with word-level extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 5032--5042, 2020. [pdf]

  • Xianggen Liu, Lili Mou, Fandong Meng, Hao Zhou, Jie Zhou, Sen Song. Unsupervised paraphrasing by simulated annealing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 302--312, 2020. [pdf]

  • Nabiha Asghar,1 Lili Mou,1 Kira A Selby, Kevin D Pantasdo, Pascal Poupart, Xin Jiang. Progressive memory banks for incremental domain adaptation. In Proceedings of the International Conference on Learning Representations (ICLR), 2020. [pdf]

  • Zeyu Sun, Qihao Zhu, Yingfei Xiong, Yican Sun, Lili Mou, Lu Zhang. TreeGen: A tree-based Transformer architecture for code generation. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), pages 8984--8991, 2020. [pdf]

  • Xianggen Liu, Lili Mou, Haotian Cui, Zhengdong Lu, Sen Song. Finding decision jumps in text classification. Neurocomputing, vol 371, pages 177--187, 2020. [url]

Selected Refereed Papers

[Selection is NOT based on the venue or length of a paper.]
  • Bowen Li, Lili Mou, Frank Keller. An imitation learning approach to unsupervised parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL-short), pages 3485--3492, 2019. [pdf, slides (courtesy of BL)] (Best paper nomination)

  • Hareesh Bahuleyan, Lili Mou, Hao Zhou, Olga Vechtomova. Stochastic Wasserstein autoencoder for probabilistic sentence generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT short), pages 4068--4076, 2019. [pdf, slides]

  • Ning Miao, Hao Zhou, Lili Mou, Rui Yan, Lei Li. CGMH: Constrained sentence generation by Metropolis-Hastings sampling. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI), pages 6834--6842, 2019. [pdf]

  • Hareesh Bahuleyan,1 Lili Mou,1 Olga Vechtomova, Pascal Poupart. Variational attention for sequence-to-sequence models. In Proceedings of the International Conference on Computational Linguistics (COLING), pages 1672--1682, 2018. Also presented at TADGM Workshop @ICML, 2018. [pdf, slides]

  • Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin. Coupling distributed and symbolic execution for natural language queries. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 2518--2526, 2017. Also presented at the ICLR Workshop, 2017. [pdf, slides (courtesy of ZL), slides]

  • Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, Zhi Jin. How transferable are neural networks in NLP applications? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 478--489, 2016. [pdf]

  • Yunchuan Chen, Lili Mou, Yan Xu, Ge Li, Zhi Jin. Compressing neural language models by sparse word representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 226--235, 2016. [pdf]

  • Lili Mou, Ge Li, Lu Zhang, Tao Wang, Zhi Jin. Convolutional neural networks over tree structures for programming language processing. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI), pages 1287--1293, 2016. [pdf]

  • Lili Mou,1 Hao Peng,1 Ge Li, Yan Xu, Lu Zhang, Zhi Jin. Discriminative neural sentence modeling by tree-based convolution. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2315--2325, 2015. [pdf, slides]

  • Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1785--1794, 2015. [pdf, slides (courtesy of YX)]

1=equal contribution.

Academic Service

Primary reviewer

NAACL'16 (best reviewers), COLING'16, ACL'17, AAAI'18, NAACL'18, ACL'18, COLING'18 (PC and Mentor),
ACL'19, IJCAI'19, NeurIPS'19, Computer Speech & Language, TKDE

Selected Talks

  • Lili Mou, Olga Vechtomova. Stylized Text Generation: Approaches and Applications.
    Tutorial at ACL, 2020. [abs, url, slides, video]

  • Lili Mou. Search-Based Unsupervised Text Generation.
    Amii AI Seminar, 2020. [video, slides]

  • Lili Mou, Hao Zhou, Lei Li. Discreteness in Neural Natural Language Processing. Tutorial at EMNLP-IJCNLP, 2019. [slides I, II, III; videos I, II]


Guest Lectures

Guest Lecture of the "Deep Learning Techniques and Applications" course (11 May 2017):
Neural Networks in NLP: The Curse of Indifferentiability [slides: I, II, III]
Mini-Project Tutorial for Undergrad. Res. Opportunities Conf. @ U Waterloo (22 and 23 Sep 2017):
Adversarial Training and Security in Machine Learning [slides, code]
Guest Lecture of the "Text Analytics" course (4 July 2019):
Sampling and Stochastic Search for Text Generation [pdf]


Entertainment

Lili Mou's major hobbies include practicing calligraphy, watching yueju (Yue opera),
visiting traditional Chinese architecture, and taking MOOCs, among many others.

Gallery