Post-Event Report: ASPIRE Computer Vision Workshop at Tokyo

Published April 28, 2026, 15:40


【Event Report】

On April 7, 2026, the “ASPIRE Computer Vision Workshop at Tokyo” was held at Kuramae Hall on the Ookayama Campus of the Institute of Science Tokyo. Organized in collaboration with FunAI Lab (Nuremberg Tech) and VISLAB (University of Amsterdam), the workshop served as a platform to discuss both the “present” and the “future” of computer vision research.

The primary goal of this workshop was to bring together researchers from institutions in Japan, Germany, and the Netherlands to share their diverse research perspectives and approaches. By deepening discussions and fostering exchange on the future direction of the field, we believe the event successfully contributed to strengthening our international research network.

The program featured an inspiring lineup of speakers, beginning with a keynote by Yuki M. Asano (Nuremberg Tech), followed by presentations from a team at the Institute of Science Tokyo, Go Irie (Tokyo University of Science), Yoshihiro Fukuhara (AIST / CADDi), Kuniaki Saito (OMRON SINIC X), and Noa Garcia (The University of Osaka). In addition, the poster session showcased over 30 presentations, sparking vibrant and insightful discussions throughout the venue.

Despite being held on a weekday, approximately 100 participants attended the event. The high level of engagement further highlighted the immense potential for future breakthroughs in this field. 

Thank you all for being part of this workshop!!

■EVENT OVERVIEW

ASPIRE Computer Vision Workshop at Tokyo

●General Chair:

●Program Chair:

●Organizer:
National Institute of Advanced Industrial Science and Technology (AIST)|産業技術総合研究所
Institute of Science Tokyo|東京科学大学
Tokyo University of Science|東京理科大学

●Support:

The workshop was conducted primarily in English.

SCHEDULE April 7, 2026 (Tue), 10:30–18:00
※18:15– Networking & Banquet.
FORMAT Hybrid (in-person and online via Zoom)
VENUE Kuramae Hall, Kuramae-Kaikan Building A,
Institute of Science Tokyo, Ookayama Campus
ADDRESS 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8550, Japan
https://www.somuka.titech.ac.jp/ttf/contact/index.html (in Japanese only)
BANQUET Royal Blue Hall, next to the venue


■PROGRAM

10:30 – 10:40
  • Opening remarks (Ryosuke Yamada)
10:40 – 11:20
  • Invited talk 1: [talk 30min・FAQ 10min]
  • Title:「Learning More from Less」

  • Yuki M. Asano
  • Full Professor, University of Technology Nuremberg
11:20 – 12:00
  • Invited talk 2: [talk 30min・FAQ 10min]
  • Title:「ASPIRE Research Project: Building Multimodal AI Foundation Models Under Limited Resources」

  • The presenters and titles are as follows:
  • Title:「Referring Expression Comprehension for Small Objects」

  • Takumi Hirose
  • Institute of Science Tokyo
  • Title:「Towards Building a Sign Language Foundation Model: Challenges, Progress and Future Directions」

  • Zhaoyi An
  • Institute of Science Tokyo
  • Title:「Rethinking Positional Embeddings in NeRF: Generalization, Anti-Aliasing, and Beyond」

  • Mizuki Kojima
  • Institute of Science Tokyo
12:00 – 13:30 LUNCH (90min)
13:30 – 13:50
  • Invited talk 3-1: [talk 15min・FAQ 5min]
  • Title:「What AI Should See, Forget, and Preserve – Sensing and Learning for Safe and Trustworthy AI -」

  • Go Irie
  • Associate Professor, Tokyo University of Science
13:50 – 14:10
  • Invited talk 3-2: [talk 15min・FAQ 5min]

  • Yoshihiro Fukuhara
  • AIST / CADDi
14:10 – 14:50
  • Invited talk 4: [talk 30min・FAQ 10min]
  • Title:「How Can We Efficiently Supervise Vision-Language Model?」

  • Kuniaki Saito
  • Researcher, OMRON SINIC X
14:50 – 15:30
  • Invited talk 5: [talk 30min・FAQ 10min]
  • Title:「Evaluation in Visual Recognition: What Are We Really Measuring?」

  • Noa Garcia
  • Associate Professor, The University of Osaka
15:30 – 15:40 Coffee Break (10min)
15:40 – 17:50
  • Poster Presentations
  1. Daichi Otsuka, “Building a 3D Spatial Understanding Model Using Multi-view Images and Language Information”
  2. Edgar Josafat Martinez-Noriega, Truong Thao Nguyen, Jason Haga, Yusuke Tanimura, and Rio Yokota, “A Tensorized Fractal Generation for Fast Image-Free Vision Transformer Pre-Training”
  3. Yuki Hirakawa, Takashi Wada, Ryotaro Shimizu, Takuya Furusawa, Yuki Saito, Ryosuke Araki, Tianwei Chen, Fan Mo, and Yoshimitsu Aoki, “Reference-Free Image Quality Assessment for Virtual Try-On via Human Feedback”
  4. Nakamasa Inoue, Kanoko Goto, Masanari Oi, Martyna Gruszka, Mahiro Ukai, Takumi Hirose, and Yusuke Sekikawa, “DISCODE: Distribution-Aware Score Decoder for Robust Automatic Evaluation of Image Captioning”
  5. Ryota Ishizaki and Go Irie, “Debiasing Vision-Language Models without Catastrophic Forgetting”
  6. Yusuke Kuwana, Takashi Shibata, Kiyoharu Aizawa, and Go Irie, “Discrete Prompt Search for Black-Box Unlearning”
  7. Hiroto Tsunoda, Hibiki Hariguchi, Yu Mitsuzumi, Akisato Kimura, Kiyoharu Aizawa, and Go Irie, “Illuminance Sensing for Human Action Recognition”
  8. Yuta Takahashi and Naoshi Kaneko, “Human Mesh Recovery”
  9. Kohei Torimi, Jyun-Ting Song, Kris Kitani, Yoshimitsu Aoki, and Takuma Yagi, “Assembly State Recognition Utilizing Latent Representations of 3D Reconstruction Models”
  10. Yukinori Yamamoto, Kazuya Nishimura, Tsukasa Fukusato, Hirokazu Nosato, Tetsuya Ogata, and Hirokatsu Kataoka, “FDIF: Formula-Driven Supervised Learning with Implicit Functions for 3D Medical Image Segmentation”
  11. Jona Ruthardt, Manu Gaur, Deva Ramanan, Makarand Tapaswi, and Yuki M. Asano, “Steerable Visual Representations”
  12. Zhiyang Li, Ruijiang Jin, Yusuke Sekikawa, and Nakamasa Inoue, “PyraMatch: Multi-Head Pyramid Scan for Mamba-Based Image Matching”
  13. Ryota Tazawa, “Online Temporal Action Interval Detection Using Large-Scale Pre-trained Video Models”
  14. Yutaro Koyama and Rei Kawakami, “Emotion Recognition with Intermediate Features of Vision Language Models”
  15. Mizuki Kojima, Rei Kawakami, and Masatoshi Okutomi, “Few-shot View Synthesis Based on Geometric and Semantic Consistency”
  16. Zhaoyi An and Rei Kawakami, “Teach me sign: stepwise prompting LLM for sign language production”
  17. Masaru Yajima, Yuma Shin, Rei Kawakami, Asako Kanezaki, and Kei Ota, “Touch2Insert: Zero-Shot Peg Insertion by Touching Intersections of Peg and Hole”
  18. Keio University, “Semantic Gesture Dataset via Video Generation”
  19. Hina Otake, Shinei Arakawa, Keitaro Tanaka, Yoshihiro Fukuhara, Hirokatsu Kataoka, and Shigeo Morishima, “FDSML: Formula-Driven Supervised Metric Learning with Parameter-Aware Triplet Loss”
  20. University of Technology Nuremberg, “Bitune: Leveraging Bidirectional Attention to Improve Decoder-Only LLMs”
  21. Takumi Hirose, Kanoko Goto, Mahiro Ukai, Shuhei Kurita, and Nakamasa Inoue, “Referring Expression Comprehension for Small Objects”
  22. Shashanka Venkataramanan, Valentinos Pariza, Mohammadreza Salehi, Lukas Knobel, Elias Ramzi, Spyros Gidaris, Andrei Bursuc, and Yuki M. Asano, “Franca: Nested Matryoshka Clustering for Scalable Visual Representation Learning”
  23. Yutaro Aze, “Dynamic Inference in Event-based Semantic Segmentation”
* Presentations are listed in no particular order; only information authorized for publication is included.
17:50 – 18:00
  • Closing remarks (Daichi Otsuka)
18:15 –
  • Networking & Banquet [Registration Required]

The banquet was held at the “Royal Blue Hall,” next to the venue. Detailed directions were provided during the closing remarks for those attending in person.

■RELATED WORKSHOP

Upcoming Related Event:

[ASPIRE Computer Vision Workshop at Oxford VGG]
DATE & VENUE: April 24, 2026 (GMT), Magdalen College, University of Oxford
More Info: https://aspire-oxfordcv-workshop-2026apr.limitlab.xyz/

Following “ASPIRE Computer Vision Workshop at Tokyo”, the next workshop will be held at the University of Oxford on April 24, 2026.
Through these workshops, we aim to foster meaningful connections among researchers from the UK, Japan, and many other countries. By providing opportunities for both structured and informal networking, we hope to build long-lasting professional relationships that extend well beyond the conclusion of these events. We believe this will serve as a catalyst for innovative ideas and new collaborations that push the frontiers of the field.

■CONTACT

For inquiries regarding this event, please contact us via the link below.

ResearchPort
Kohei Hasebe
https://research-p.com/contactform/
 
