Special Session

Keynote

Prof. Yoko Yamanishi (Kyushu University)

Date and time: Monday, 29th September, from 14:10 - 15:10
Title: Emergent functions of electrically induced bubbles

Abstract
Cell poration technologies offer opportunities not only to understand the activities of biological molecules but also to investigate possibilities for genetic manipulation. Unfortunately, transferring large molecules that can carry huge amounts of genomic information is challenging. In this presentation, I will introduce electromechanical poration using a core–shell-structured microbubble generator, consisting of a fine microelectrode covered with a dielectric material. By introducing a microcavity at its tip, we could concentrate the electric field with the application of electric pulses and generate microbubbles for electromechanical stimulation of cells. Specifically, the technology enables transfection with molecules of thousands of kDa even into osteoblasts and Chlamydomonas, which are generally considered difficult to inject. Notably, we found that the transfection efficiency can be enhanced by adjusting the viscosity of the cell suspension, presumably through remodeling of the membrane cytoskeleton. The applicability of the approach to a variety of cell types opens up numerous emerging gene engineering applications. I will also present the mechanism of electrically induced bubbles and their wide range of applications, including electromechanical poration.

Speaker Introduction
Yoko Yamanishi received her Ph.D. from Imperial College London in 2003. She became Assistant Professor in the Dept. of Bioengineering and Robotics, Tohoku University, in 2006 and, in 2008-2009, was mainly engaged in research on BioMEMS and micro multiphase flow applied to biomedical science and engineering. She became Associate Professor in the Dept. of Micro-Nano Systems Engineering and the Dept. of Mechanical Science & Engineering at Nagoya University, was a member of PRESTO, JST during 2010-2012, and started her work on electrically induced bubbles (a needle-free bubble injector). She was Associate Professor in the Dept. of Mechanical Engineering at Shibaura Institute of Technology during 2013-2015, where her work expanded to plasma-induced bubbles and their biomedical applications. She moved to Kyushu University in April 2016, where she is Professor in the Dept. of Mechanical Engineering and leads the biomedical fluid engineering laboratory. She is a principal investigator of CREST, JST and a project manager of the Moonshot program.

Educational Session

Prof. Koon Ho Rha (NAVER Healthcare Lab, Korea)

Date and time: Monday, 29th September, from 10:30 - 12:00
Title: Digital Healthcare 2025: Age of Generative AI

Abstract
With the advent of OpenAI's ChatGPT in 2022 and Google's Bard in 2023, interest in digital healthcare is increasing not only across society but also in the medical field. The history of digital healthcare begins with the Electronic Health Record (EHR) of the 1980s, which was adopted for storing records and prescriptions and developed through the early 2000s with the digitization of patient records. In the early 2000s, health information began to be exchanged online, and with the spread of smartphones and mobile devices starting in the late 2000s, mobile healthcare (mHealth) brought a different dimension to health care: individuals can now monitor their health conditions and manage chronic diseases using mobile apps and wearable devices.

Current digital healthcare has the following representative aspects:
1. Remote Patient Monitoring
2. Digital Therapeutics
3. Health Information Exchange (HIE) and Interoperability
4. Digital health companies

Generative AI in healthcare will:
1. Improve efficiency in healthcare delivery
2. Strengthen support for medical decision-making
3. Improve access to medical services
4. Enable human-centered service delivery
Currently, only a few countries, including the US, UK, China, Israel, France, and South Korea, have generative AI capabilities. The barriers to entry are also very high, as hundreds of top-tier engineers and enormous infrastructure are needed to implement such generative AI technology. Global perspectives and current practices are introduced.

Assistant Prof. Tianqi Huang (Shanghai Jiaotong University, China)

Date and time: Monday, 29th September, from 10:30 - 12:00
Title: Light Field 3D Display for Medical Visualization

Abstract
The continued advancement of three-dimensional visualization technology presents new opportunities to revolutionize the representation of medical data. Light field 3D display technology stands out as a novel tool, paving a new path for enhancing depth perception, precision, and interactivity in medical imaging. This report delves into the application of light field displays within medical data visualization, with a specific focus on its transformative potential in diagnostics, therapeutics, and medical education. Initially, the fundamental principles of light field 3D displays are elucidated, detailing how they create an immersive visual experience through the capture and reproduction of light rays. Subsequently, the discourse shifts to the technology's practical applications in medical imaging. It particularly underscores how light field displays significantly augment the expressiveness of medical images by enriching depth perception and parallax effects when visualizing complex anatomical structures. Furthermore, this report presents targeted case studies and experimental results to substantiate the distinct advantages of light field displays over traditional 2D and standard 3D modalities. These advantages encompass more accurate spatial comprehension, heightened diagnostic accuracy, and a more immersive training experience for medical practitioners. Our research concludes that light field 3D display technology possesses considerable potential to elevate the standard of medical data visualization, promising significant improvements in patient outcomes and the optimization of clinical workflows.

Associate Prof. CHUI Chee Kong (National University of Singapore)

Date and time: Monday, 29th September, from 10:30 - 12:00
Title: Shaping Robotics Education: Curriculum Innovation and Hands-On Learning

Abstract
In contemporary engineering education, practical hands-on experience is essential to bridging theoretical concepts with real-world applications. This talk emphasizes an active learning approach through meaningful, practical project assignments designed to deepen students' understanding and engagement.

Drawing from my extensive teaching experience in mechatronics and robotics, I will share specific examples of successful student projects, such as simulating complex robot motions and developing image processing algorithms to recognize text on microchips. These projects not only enhance students' technical skills but also foster critical thinking, creativity, and problem-solving abilities essential for real-world engineering practice.

I will discuss my role as a member of the Academic Affairs Committee (AAC) and the Engineering Accreditation Board (EAB), representing the Control & Mechatronics Division in the Department of Mechanical Engineering. In these capacities, I contribute to training the next generation of AI-assisted robotics professionals through curriculum development and the implementation of various multi- and interdisciplinary robotics programs, including the Specialization in Robotics, MSc (Robotics), and the newly launched BEng (Robotics and Machine Intelligence).

Prof. Jackrit Suthakorn (Mahidol University, Thailand)

Date and time: Monday, 29th September, from 10:30 - 12:00
Title: Advancing Patient Care through Medical Robotics in Surgery, Rehabilitation, and Hospital Services

Abstract
Medical robotics is helping improve patient care by increasing precision in surgery, supporting rehabilitation, and improving hospital services. This keynote presentation highlights recent developments in surgical robotics that allow for minimally invasive procedures with high accuracy, leading to better outcomes and shorter recovery times. Rehabilitation robots provide personalized support to help patients regain movement and independence. Hospital service robots assist with routine tasks, helping reduce the workload of healthcare staff.

By combining artificial intelligence, sensor technology, and biomechanical design, medical robotics systems are becoming safer, more efficient, and better suited to meet patient needs. These technologies support more accessible and effective care in a range of clinical settings.
The Center for Biomedical and Robotics Technology (BART LAB) plays a key role in this progress. BART LAB focuses on developing robotic solutions to solve real clinical problems. By working closely with hospitals and medical professionals, the research is aligned with practical needs. From the beginning, each system is designed with patient safety in mind and follows ISO standards and regulatory requirements.
All technologies developed at BART LAB undergo clinical trials to make sure they are safe and effective before being used in actual healthcare environments. This process helps move research from the lab to real patient care.

Medical robotics continues to advance in surgery, rehabilitation, and hospital operations. These systems are helping improve patient outcomes and healthcare efficiency, supporting the development of intelligent, practical, and patient-centered solutions.

Rising Star Session

Assistant Prof. Minho Hwang (DGIST, Korea)

Date and time: Sunday, 28th September, from 15:00 - 17:00
Title: Flexible Endoscopic Robots and Surgical Task Automation

Abstract
In this talk, I will present two key topics:
1. Flexible surgical robotics and
2. AI-assisted control and automation.

First, while the da Vinci system is widely used, its rigid instruments pose limitations in navigating confined and curved anatomical regions such as the oral cavity, gastrointestinal tract, ENT, and gynecological areas. To overcome these challenges, we are developing flexible robotic systems like PETH and K-FLEX, which embed miniature robotic arms into flexible endoscopes to enhance dexterity and access.
Second, we aim to build AI-enabled surgical robots that support surgeons by automating repetitive subtasks, such as suturing, to improve precision, reduce fatigue, and enable remote procedures. I will share our lab's recent progress in developing AI-driven control frameworks for such systems.

Associate Professor CHUA Chin Heng Matthew (National University of Singapore, Singapore)

Date and time: Sunday, 28th September, from 15:00 - 17:00
Title: Driving Next-Generation Robotic Surgery for Future Sustainable and Value-Driven Care in Singapore

Abstract
The increasing complexity of robotic-assisted minimally invasive surgery (MIS) necessitates a paradigm shift in surgical training. Current simulators lack the physiological fidelity, adaptability, and performance feedback mechanisms required to replicate real intraoperative conditions. In this Rising Star keynote, Prof Matthew Chua presents the development of a Modular, Multi-Layered Robotic Surgical Simulator designed to advance competency-based training through a high-fidelity, sensor-integrated, and feedback-rich environment. The system introduces novel physical dynamics, including breathing-induced organ motion via inflatable chambers, perfusion systems to simulate haemorrhage, and force-responsive tool interactions, allowing for the replication of real-time anatomical and procedural variability. Augmented reality overlays provide intraoperative visual guidance, while embedded sensors and actuators track precision, force application, and workflow adherence. At the core is a virtual mentor system powered by adaptive learning algorithms that assess skill progression and deliver corrective interventions in real time. This supports shared control of surgical tools during training, enhancing both technical proficiency and decision-making under simulated stress. The modular design further enables rapid reconfiguration for different surgical specialties. This work contributes to the broader vision of sustainable, value-driven care in Singapore, demonstrating how robotics, AI, and simulation can converge to meet the demands of next-generation digital healthcare.

Dr. Nantida Nillahoot (Mahidol University, Thailand)

Date and time: Sunday, 28th September, from 15:00 - 17:00
Title: SurgySim: Bridging Surgical Simulation and Innovation – A Journey from Research to Impact

Abstract
This talk presents a decade-long journey of developing SurgySim, a surgical training platform that merges haptic force feedback with image-based simulation. Evolving from academic research into a translational innovation, SurgySim exemplifies the integration of engineering precision, biomedical insight, and innovation strategy to advance surgical education. The system originated from efforts to capture realistic surgical force profiles using handheld sensors and robotic manipulators on porcine tissue. These data enabled high-fidelity simulations with responsive haptics, later enriched with AR and VR modules. Over time, the project expanded into exploring ISO standards and medical device testing protocols, using SurgySim as a model for product validation, even though it is not classified as a clinical device. Beyond technical development, this journey reflects a personal evolution from early-stage researcher to determined innovator. The speaker's transition involved exploring commercialization models, entrepreneurship training, and cross-disciplinary collaboration.

Through leadership roles in international events such as RoboCup, and multiple awards, including national thesis honors and innovation prizes, she exemplifies a rising generation of biomedical engineers reshaping the landscape of surgical simulation. This talk offers not only technical insights but also a personal narrative of growth, resilience, and vision, intended to inspire young researchers to embrace innovation, navigate complexity, and translate their research into real-world impact.

Assistant Professor D.S.V. Bandara (Kyushu University, Japan)

Date and time: Sunday, 28th September, from 15:00 - 17:00
Title: Bridging Human and Machine Intelligence for Safer Surgical Environments

Abstract
Surgical environments are becoming more complicated due to new robotics, imaging, and data technologies. As the technology continues to improve, so do the opportunities for intelligent support to improve safety, facilitate clinical decision making, and reduce the cognitive load on surgeons. This talk will explore the current landscape and future opportunities for surgical support systems to make surgery safer, smarter, and more adaptive. Further, I will outline important considerations for surgical support systems, such as context, anticipation of events in the surgical procedure, and real-time responses. While many existing technologies have provided opportunities for automation and assistance with simpler equipment-related tasks, the next generation of systems must reason at a higher level to provide value and assist with decision making under uncertainty, the same way humans do. A significant focus of this talk will be how AI can be used to model surgical activity, recognize patterns in that activity, and predict risks before they occur. Similarly, we will consider how intelligent systems could be designed to work together with human expertise to create a collaborative and transparent operating room. Furthermore, we will discuss wider challenges in surgical support systems, such as clinical validation, interpretability, and ethical considerations. By bridging the gap between technical innovation and clinical need, we can begin to shape a future where intelligent systems not only assist with surgery but actively contribute to its safety, efficiency, and effectiveness.
