Representation learning / deep learning
How should we represent the semantics of words and sentences on computers? This has long been an outstanding problem in Natural Language Processing (NLP). Recently, deep learning has been applied to NLP to learn vector representations of words from large corpora and to compose vector representations of sentences from those of their constituent words. In addition, we conduct research on grounding text in existing knowledge bases, e.g., automatically associating mentions of people and organizations with Wikipedia articles.
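As a minimal illustration of the composition idea, the sketch below builds a sentence vector by averaging word vectors. The toy vectors here are made-up values; real systems learn them from a large corpus with methods such as word2vec or GloVe.

```python
import numpy as np

# Toy word vectors (hypothetical values for illustration only;
# real systems learn these from a large corpus).
word_vectors = {
    "deep":     np.array([0.2, 0.7, 0.1]),
    "learning": np.array([0.3, 0.6, 0.2]),
    "is":       np.array([0.1, 0.1, 0.1]),
    "fun":      np.array([0.5, 0.2, 0.8]),
}

def sentence_vector(tokens, vectors):
    """Compose a sentence vector as the mean of its word vectors,
    the simplest composition scheme."""
    known = [vectors[t] for t in tokens if t in vectors]
    if not known:
        # No known words: fall back to the zero vector.
        return np.zeros(next(iter(vectors.values())).shape)
    return np.mean(known, axis=0)

vec = sentence_vector(["deep", "learning", "is", "fun"], word_vectors)
print(vec)
```

Averaging discards word order, which is why more expressive composition functions (e.g., recurrent or attention-based encoders) are an active research topic.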
Humans draw on commonsense knowledge, such as “pass an exam” -> “delighted”, in communication. We explore approaches for automatically acquiring such commonsense knowledge from large corpora such as the Web, Wikipedia, and SNS.
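A very simplified sketch of pattern-based acquisition: scan sentences for a causal cue and collect (event, emotion) pairs like the example above. The corpus and the regular expression are hypothetical stand-ins; real acquisition operates on Web-scale text with far richer patterns.

```python
import re

# Toy corpus (hypothetical); real work mines the Web, Wikipedia, or SNS.
corpus = [
    "I passed the exam, so I was delighted.",
    "She passed the exam and felt delighted.",
    "He missed the train this morning.",
]

# A crude lexical pattern pairing an event with an emotion word.
pattern = re.compile(r"(passed the exam).*?(delighted)")

pairs = []
for sentence in corpus:
    match = pattern.search(sentence)
    if match:
        # Record the (event, emotion) pair as a commonsense candidate.
        pairs.append((match.group(1), match.group(2)))

print(pairs)
```

In practice, candidate pairs harvested this way are noisy, so statistical filtering over many occurrences is needed before they can be treated as commonsense knowledge.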
We can realize various intelligent applications such as quizzes, translation, conversation, and opinion mining through a standardized process in which computers produce an appropriate response to a given input.
Applications in the real world
NLP has a broad range of applications, including social listening on SNS posts and automatic generation/revision of text. Our laboratory is also interested in applying research outcomes to real-world problems.
Examples: March 13, 2013 (P2), July 3, 2013 (P6), and July 26, 2013 (P9) in the Asahi Shimbun. Do not copy the figures without prior authorization from the Asahi Shimbun (authorization number 17-6975).