Paper reading (2022)

このセミナーについて (About this seminar)

担当者が見つけた面白い研究を紹介するセミナーです.以下の内容で構成されます. In this seminar, presenters introduce interesting studies they have found. The seminar consists of the following formats.

  1. 論文紹介 (Paper presentation): 面白いと思う論文(基本的にLong Paper)を選んで,その内容を著者の代わりになったつもりで発表する. A presenter selects a paper (a long paper is strongly preferred) that they find interesting and presents its content as if they were one of its authors.
  2. 著者プレゼンの上映と解説 (Watching talk videos): 担当者は面白いと思う研究の発表動画を選び,その論文の内容を把握する.担当者の補足説明を聞きながら,参加者みんなで発表動画を鑑賞する. A presenter selects the talk video of a study they find interesting and reads the paper to understand its content. All attendees then watch the author's presentation video while listening to the presenter's supplementary explanations about the research.

前者 (1) はプレゼンテーションの練習を兼ねています.後者 (2) は「よい」プレゼンテーションを鑑賞しながら,英語でのプレゼンテーションやディスカッションに慣れることを狙っています. The former (1) also serves as presentation practice. The latter (2) aims to build familiarity with English presentations and discussions while we enjoy "good" talks.

発表者は一人目は (1) を,二人目は (1) か (2) のどちらかを担当します. The first presenter takes (1); the second presenter may choose either (1) or (2).

座長 (chair) は,セミナーの司会進行をおこないます. The chair hosts the seminar. 座長の仕事を参考にして,円滑に議論が進むように心がけてください. Please refer to the Chair's Job and help keep the discussion active and running smoothly.

発表時間 (Presentation Time)

  • Presentation: 15 ~ 20 minutes
  • QA: 5 ~ 10 minutes

日時 (Date and time)

  • 3Q: 12:30~ (Mon)
  • 4Q: 12:30~ (Mon)

参加者 (Attendee)

  • 全員 (All members)

:exclamation: 発表を登録するときは,著者名,発表年,タイトル,会議名/ジャーナル名,(巻,号,ページ番号など)を必ず記入してください. :exclamation: When you register a presentation, please be sure to include the author names, publication year, title, conference/journal name, and (where applicable) volume, issue, and page numbers.


  • リモートミーティングでビデオプレゼンテーションを行う場合は,画面およびオーディオを共有してください.
  • When you stream a video presentation in a remote meeting, please share both your screen and the system audio.
    • For Zoom: You don't need to do anything special. Just share your screen.
    • For Microsoft Teams: Please enable the "Share system audio" feature.

今後の予定 (Planned Seminars)

2022-05-30(Mon) 12:30~

  • presenter
    • ma
    • muraoka
  • chair
    • maeda

2022-06-06(Mon) 12:30~

  • presenter
    • niwa
    • iida
  • chair
    • kuo

2022-06-13(Mon) 12:30~

2022-06-20(Mon) 12:30~

2022-06-27(Mon) 12:30~

2022-07-04(Mon) 12:30~

2022-07-11(Mon) 12:30~

2022-07-18(Mon) 12:30~

2022-07-25(Mon) 12:30~

2022-08-01(Mon) 12:30~

Past seminars

2022-04-18(Mon) 12:30~

  • presenter
    • liu
      • Neural Machine Translation with Monolingual Translation Memory. Deng Cai, Yan Wang, Huayang Li, Wai Lam, and Lemao Liu. ACL 2021.
      • [Paper] [Slide]
    • yang
      • Attention Bottlenecks for Multimodal Fusion. Arsha Nagrani, Shan Yang, Anurag Arnab, Aren Jansen, Cordelia Schmid, Chen Sun. NeurIPS 2021.
      • [Paper] Attention Bottlenecks for Multimodal Fusion.pdf (896.2 kB)
  • chair
    • ishikawa

2022-04-25(Mon) 12:30~

  • presenter
    • maeda
      • Data Augmentation of Incorporating Real Error Patterns and Linguistic Knowledge for Grammatical Error Correction, Xia Li and Junyi He, CoNLL 2021
      • [paper] [slide]
    • takase
  • chair
    • maruyama

2022-05-09(Mon) 12:30~

  • presenter
    • loem
      • Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting (EMNLP 2021)
      • Wangchunshu Zhou, Tao Ge, Canwen Xu, Ke Xu, Furu Wei
      • [Paper] [Slide]
    • kaneko
      • Reframing Human-AI Collaboration for Generating Free-Text Explanations (NAACL 2022)
      • Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, Yejin Choi
      • Paper, Slide
  • chair
    • taniguchi

2022-05-16(Mon) 12:30~

  • presenter
    • Erick
      • On Vision Features in Multimodal Machine Translation. ACL 2022.
      • Bei Li, Chuanhao Lv, Zefan Zhou, Tao Zhou, Tong Xiao, Anxiang Ma, Jingbo Zhu.
      • paper, presentation
    • kuo
      • Testing the Ability of Language Models to Interpret Figurative Language (NAACL 2022)
      • Emmy Liu, Chenxuan Cui, Kenneth Zheng, Graham Neubig
      • paper, slides
  • chair
    • loem