Lili Mou, PhD

------------------------------------------------------------------------
Note: I am moving my homepage from
http://sei.pku.edu.cn/~moull12 to https://lili-mou.github.io/
because the PKU server is unavailable from time to time.
The HTML style remains unchanged, however, so the page
should still be familiar to all my friends.
------------------------------------------------------------------------

Email: doublepower [dot] mou [at] gmail [dot] com


Lili Mou is currently a research scientist at AdeptMind Inc. (Toronto). He received his BS and PhD degrees from the School of EECS, Peking University, in 2012 and 2017, respectively. He then worked as a postdoctoral fellow at the University of Waterloo, Canada. His research interests include deep learning applied to natural language processing as well as to programming language processing. He has publications at top conferences and journals such as AAAI, ACL, CIKM, COLING, EMNLP, ICML, IJCAI, INTERSPEECH, and TACL (in alphabetical order). [CV]

Seminars

In our seminars, we discuss machine learning theories, algorithms,
and applications (with a special interest in NLP). We start from the
foundations and move to the frontiers.

Please visit here for the contents.

Selected Publications

Copyright announcement: Copyrights of published papers may be held by the publishers. All rights are reserved for other materials, including drafts, slides, and source code. Where this does not conflict with copyright law, I permit free use for non-commercial purposes; please cite my papers if you use these materials for research. If a paper is accompanied by source code, the URL is given in the paper. Note, however, that there is no guarantee my code will run in your environment.

Useful links: Complete list, Google Scholar, DBLP

Book

  • Lili Mou, Zhi Jin. Tree-Based Convolutional Neural Networks: Principles and Applications, Springer, 2018. [url]

Papers

  1. Zeyu Sun, Qihao Zhu, Lili Mou, Yingfei Xiong, Ge Li, Lu Zhang. A grammar-based structural CNN decoder for code generation. To appear in AAAI, 2019. [pdf]

  2. Ning Miao, Hao Zhou, Lili Mou, Rui Yan, Lei Li. CGMH: Constrained sentence generation by Metropolis-Hastings sampling. To appear in AAAI, 2019.

  4. Hareesh Bahuleyan,1 Lili Mou,1 Olga Vechtomova, Pascal Poupart. Variational attention for sequence-to-sequence models. In Proceedings of the International Conference on Computational Linguistics, pages 1672--1682, 2018. Also presented at TADGM Workshop @ICML, 2018. [pdf, slides]

  4. Xianggen Liu, Lili Mou, Haotian Cui, Zhengdong Lu, Sen Song. Jumper: Learning when to make classification decision in reading. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), pages 4237--4243, 2018. [pdf, slides (courtesy of XL)]

  5. Zaixiang Zheng,1 Hao Zhou,1 Shujian Huang, Lili Mou, Xin-Yu Dai, Jiajun Chen, Zhaopeng Tu. Modeling past and future for neural machine translation. Transactions of the Association for Computational Linguistics (TACL), vol. 6, pp. 145--157, 2018. [pdf] (Presented at ACL-18)

  6. Nabiha Asghar, Pascal Poupart, Jesse Hoey, Xin Jiang, Lili Mou. Affective neural response generation. In Proceedings of European Conference on Information Retrieval (ECIR), pages 154--166, 2018. [pdf]

  7. Bolin Wei,1 Shuai Lu,1 Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, Zhi Jin. Why do neural dialog systems generate short and meaningless replies? A comparison between dialog and translation. arXiv preprint arXiv:1712.02250. [pdf]

  8. Chongyang Tao, Lili Mou, Dongyan Zhao, Rui Yan. RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), pages 722--729, 2018. [pdf]

  9. Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, Zhifang Sui. Order-planning neural text generation from structured data. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), pages 5414--5421, 2018. [pdf, slides (courtesy of LS)]

  10. Zhao Meng, Lili Mou, Zhi Jin. Towards neural speaker modeling in multi-party conversation: The task, dataset, and models. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI student poster), pages 8121--8122, 2018. [pdf]

  11. Zhao Meng, Lili Mou, Zhi Jin. Hierarchical RNN with static sentence-level attention for text-based speaker change detection. In Proceedings of the 2017 ACM Conference on Information and Knowledge Management (CIKM-short), pages 2203--2206, 2017. [pdf]

  12. Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin. Coupling distributed and symbolic execution for natural language queries. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 2518--2526, 2017. Also presented in ICLR Workshop, 2017. [pdf, slides (courtesy of ZL), slides]

  13. Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, Dongyan Zhao. How to make context more useful? An empirical study on context-aware neural conversational models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL-short), volume 2, pages 231--236, 2017. [pdf]

  14. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, Zhi Jin. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of the 26th International Conference on Computational Linguistics (COLING), pages 3349--3358, 2016. [pdf, slides]

  15. Yan Xu,1 Ran Jia,1 Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin. Improved relation classification by deep recurrent neural networks with data augmentation. In Proceedings of the 26th International Conference on Computational Linguistics (COLING), pages 1461--1470, 2016. [pdf]

  16. Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, Zhi Jin. How transferable are neural networks in NLP applications? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 478--489, 2016. [pdf]

  17. Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang, Zhi Jin. Distilling word embeddings: An encoding approach. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management (CIKM-short), pages 1977--1980, 2016. Also presented in RL4NLP Workshop @ACL, 2016. [pdf]

  18. Yiping Song, Lili Mou, Rui Yan, Li Yi, Zinan Zhu, Xiaohua Hu. Dialogue session segmentation by embedding-enhanced TextTiling. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH), pages 2706--2710, 2016. [pdf]

  19. Yunchuan Chen, Lili Mou, Yan Xu, Ge Li, Zhi Jin. Compressing neural language models by sparse word representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 226--235, 2016. [pdf]

  20. Lili Mou,1 Rui Men,1 Ge Li, Yan Xu, Lu Zhang, Rui Yan, Zhi Jin. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL-short), volume 2, pages 130--136, 2016. [pdf]

  21. Xiang Li, Lili Mou, Rui Yan, Ming Zhang. StalemateBreaker: A proactive content-introducing approach to automatic human-computer conversation. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI), pages 2845--2851, 2016. [pdf, Reported by UK Daily Mail, China Computer Federation (CCF), Peking University, and several other media]

  22. Lili Mou, Ge Li, Lu Zhang, Tao Wang, Zhi Jin. Convolutional neural networks over tree structures for programming language processing. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI), pages 1287--1293, 2016. [pdf]

  23. Lili Mou,1 Hao Peng,1 Ge Li, Yan Xu, Lu Zhang, Zhi Jin. Discriminative neural sentence modeling by tree-based convolution. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2315--2325, 2015. [pdf, slides]

  24. Hao Peng,1 Lili Mou,1 Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin. A comparative study on regularization strategies for embedding-based neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP-short), pages 2106--2111, 2015. [pdf]

  25. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1785--1794, 2015. [pdf, slides (courtesy of YX)]

  26. Yiyang Hao, Ge Li, Lili Mou, Lu Zhang, Zhi Jin. MCT: A tool for commenting programs by multimedia comments. In Proceedings of the 2013 International Conference on Software Engineering (ICSE-demo), pages 1339--1342, 2013. [pdf]

1=equal contribution.

Academic Service

Primary reviewer: NAACL'16 (best reviewers), COLING'16, ACL'17, AAAI'18, NAACL'18, ACL'18, COLING'18 (PC and Mentor), Computer Speech & Language, TKDE
Subreviewer: FSE'16, BigData'17, AAAI'17, Cognitive Computing, JCST


Teaching Experience

Co-Supervised Students

2014.7--2015.6   Hao Peng (undergraduate), with publication at EMNLP-15.
2015.7--2016.3   Rui Men (undergraduate), with publication at ACL-16.
2015.7--2017.6   Zhao Meng (undergraduate), with publications at EMNLP-16, CIKM-17, & AAAI-18 (student poster).
2015.7--         Bolin Wei (undergraduate -> master student), with arXiv preprint.
2016.1--2017.1   Yiping Song (PhD student), with publication at INTERSPEECH-16.
2017.9--         Hareesh Bahuleyan (master student), with arXiv preprint.

Teaching Assistant

2012.9--2013.1   TA of Introduction to Computing (undergraduate course)
Lecture (Dec 2012): Minimax and Alpha-Beta Pruning [slides in Chinese]
2013.2--2013.6 TA of Java Programming (graduate course)
2013.9--2014.1 TA of Introduction to Programming Languages (undergraduate course)
               TA of Introduction to Computing (MOOC)
2014.9--2015.1 TA of Introduction to Computing (MOOC)
2015.3--2015.6 TA of Deep Learning Techniques and Applications (graduate course)
2016.2--2016.6 TA of Deep Learning Techniques and Applications (graduate course)
Lecture (5 May 2016): Neural Networks for Natural Language Processing [slides]
Guest Lecture of "Deep Learning Techniques and Applications" course (11 May 2017):
Neural Networks in NLP: The Curse of Indifferentiability [slides: I, II, III]
Mini-Project Tutorial for Undergrad. Res. Opportunities Conf. @ U Waterloo (22 and 23 Sep 2017):
Adversarial Training and Security in Machine Learning [slides, code]

Entertainment

Lili Mou's major hobbies include practicing calligraphy, watching Yue opera (yueju),
visiting traditional Chinese architecture, and taking MOOCs, among others.

Gallery