2:00pm-2:45pm: Seminar by Ziyang Wang
2:45pm-3:30pm: Seminar by Lin Wu
3:30pm-4:30pm: Networking [Common room EM2.40]
4:30pm-5:15pm: Seminar by Kazu Akiyama
5:15pm-6:00pm: Seminar by Ander Biguri
Ziyang Wang
Title: AI and Robotics for Healthcare and Beyond: Developing Solutions under Real-Life Conditions
Abstract: AI and Robotics offer transformative opportunities to address pressing challenges in healthcare. In this talk, I will present my research journey in developing applied AI and robotics under realistic constraints such as imperfect data and low-cost hardware resources. I will first discuss advances in medical image analysis, where robust algorithms must cope with noisy, incomplete, or biased data to support reliable clinical decisions. I will then highlight contributions in medical robotics, focusing on designing efficient and affordable systems that can operate effectively under hardware and computational limitations. Finally, I will briefly touch on how these principles extend beyond healthcare to other domains such as space exploration, outlining my vision for AI and robotics as enablers of engineering solutions in real-world environments.
Lin Wu
Title: AI-Powered Insights: Tackling Large Visual Language Models, Articulated Pose Estimation and Generative AI
Abstract: The rapid evolution of artificial intelligence has brought us closer to systems that can both understand and generate complex multimodal content, from natural language to biological structures. In this seminar, I will present a unifying perspective on my research spanning three frontiers: large language models, embodied perception, and generative design. First, I will discuss advances in pose estimation and embodied AI, where category-level 6D object understanding enables more adaptive interaction between intelligent systems and the physical world. Building on this foundation, I will introduce generative AI frameworks for video editing, where natural language prompts drive real-time content creation and manipulation, offering new opportunities for creative industries while addressing challenges of misinformation through deepfake detection. Finally, I will extend the discussion to protein design, where multimodal and generative modeling principles are applied to capture structural dynamics and accelerate discovery in healthcare. By connecting these seemingly diverse domains, I argue for a research vision where cross-modal learning, interpretability, and generative modeling converge to drive breakthroughs in both digital creativity and scientific discovery.
Kazu Akiyama
Title: First Imaging of Black Holes: Computational Algorithms Driving the Scientific Breakthroughs
Abstract: In April 2019 and May 2021, hundreds of front pages worldwide featured the first images of the supermassive black holes M87* and Sgr A*, revealing the shadows cast by their event horizons, the visible edge of space-time. These breakthroughs yielded the second most-cited ground-based astronomy result of the past decade and were delivered by the Event Horizon Telescope (EHT), a global network of (sub)millimetre radio telescopes. The EHT achieves the sharpest angular resolution of any existing astronomical instrument by computationally synthesising an Earth-sized aperture through very long baseline interferometry, a technique of radio interferometry (RI).
Central to this success are innovative RI imaging algorithms tailored to the EHT, whose development I have led over the past decade. RI imaging is an underdetermined inverse problem: one must reconstruct an image from incomplete, noisy Fourier-space measurements. The EHT adds two acute challenges: sparse Fourier-plane coverage and severe data corruption due to the scarcity of suitable calibrators at EHT resolution. The limitations of traditional methods prompted the development of dedicated techniques now known as regularised maximum likelihood (RML) methods, which couple the power of regularisation with efficient data models robust to calibration errors. RML approaches have since been widely adopted across radio astronomy, well beyond the EHT.
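To make the RML idea concrete, a schematic form of such an objective (generic notation, not the exact functional used by any particular EHT pipeline) is

    \hat{I} = \arg\min_{I \ge 0} \; \sum_{j} \frac{\lvert V_j - F_j(I) \rvert^2}{\sigma_j^2} \;+\; \sum_{k} \lambda_k \, R_k(I),

where V_j are the observed visibilities (sparse, noisy Fourier-domain samples), F_j(I) is the model visibility of image I at the j-th (u, v) point, \sigma_j is the noise level, and the R_k are regularisers (for example sparsity, smoothness, or entropy terms) weighted by \lambda_k. In practice, robustness to calibration errors is often obtained by fitting calibration-insensitive data products such as closure phases and closure amplitudes instead of, or alongside, the raw visibilities.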
This technical research talk will introduce radio interferometry through the lens of computational imaging, present the algorithms underpinning black-hole imaging, and outline the emerging AI-enabled frontier designed to meet the demands of next-generation facilities. I will also highlight cross-disciplinary relevance to other inverse problems in the physical sciences, including medical imaging, and conclude with TomoGrav—an international, interdisciplinary programme at Heriot-Watt University—now being developed for multi-million-pound fellowship bids to the Royal Society and UK Research and Innovation (UKRI).
Ander Biguri
Title: Translational Research on Tomographic Reconstruction: Applied mathematics, computational sciences and real-world imaging
Abstract: For decades, the inverse-problems optimization literature has been rapidly producing sophisticated mathematical tools, both variational-regularization and machine-learning based. However, these developments often struggle to leave the applied-mathematics literature and make the jump into real-world tomographic applications (such as medical imaging or non-destructive testing), typically because of computational cost or because mathematical assumptions break down in real-world settings. This talk summarizes my research over the last ten years on closing this gap. The talk will introduce the TIGRE toolbox (github.com/CERN/TIGRE), which I develop and maintain, and which allows for straightforward reconstruction of medical and industrial Cone Beam Computed Tomography (CBCT) datasets with a wide collection of large-scale optimization methods; it is now widely used in academia, industry, and healthcare. We will explore mathematical, computational, and applied tomographic research that arose from this project. Finally, I will briefly introduce the LION toolbox (github.com/CambridgeCIA/LION), an in-development tool that aims to close the same gap for data-driven methods, and the latest advances I have been involved with in applying these methods to real imaging applications.
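To give a flavour of what this looks like in practice, the sketch below is modelled on the published TIGRE Python examples: simulate cone-beam projections of a toy volume, then reconstruct them with one analytic and one iterative solver. It assumes a CUDA-capable GPU, and the exact function names, geometry defaults, and array conventions may differ between TIGRE versions, so treat it as illustrative rather than canonical.

    import numpy as np
    import tigre
    import tigre.algorithms as algs

    # Default cone-beam geometry and a circular scan of 100 projection angles.
    geo = tigre.geometry_default(high_resolution=False)
    angles = np.linspace(0, 2 * np.pi, 100)

    # Toy test object: a bright cube centred in an otherwise empty volume.
    shape = tuple(int(n) for n in geo.nVoxel)
    vol = np.zeros(shape, dtype=np.float32)
    lo = [s // 4 for s in shape]
    hi = [3 * s // 4 for s in shape]
    vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = 1.0

    # Simulate projection data with the forward operator, then reconstruct.
    proj = tigre.Ax(vol, geo, angles)             # cone-beam forward projection
    rec_fdk = algs.fdk(proj, geo, angles)         # analytic FDK reconstruction
    rec_sart = algs.ossart(proj, geo, angles, 20) # 20 iterations of OS-SART

In the same spirit, swapping the solver for one of the toolbox's other large-scale optimization methods is intended to be a one- or two-line change.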
Dr. Ziyang Wang is a Lecturer in the Department of Applied AI & Robotics, School of Computer Science and Digital Technologies, Aston University. He specializes in AI and Robotics for engineering applications in healthcare and space, particularly under realistic data and hardware constraints. Dr. Wang has authored 40 papers as first or corresponding author and co-authored 10 additional papers. The code for all his first-author papers has been made publicly available across 15 GitHub repositories with over 1,100 stars. Dr. Wang serves as an Associate Editor for Neurocomputing and has reviewed more than 400 manuscripts for journals and conferences, including Nature Communications, IEEE TPAMI, IEEE TMI, IEEE TIP, IEEE JBHI, ICLR, and ICRA. He has received the 2024 DAAD AInet Fellowship, the 2023 MICCAI DEMI Best Paper Award, and the 2023 IEEE SPS Grant. He has also collaborated on AI-driven projects with industry partners such as Cisco, Ford, Silverstream Technologies, Olesinski, GE Vernova, and Jaguar Land Rover.
Dr Lin Yuanbo Wu earned her PhD from The University of New South Wales, Kensington, Sydney, Australia. She is driven by a passion for addressing real-world challenges in computer vision and machine learning. With over 90 peer-reviewed publications, including two book chapters, in premier journals and proceedings, she has established herself as a leading researcher. Dr Wu's research spans various computer vision tasks and machine learning domains, focusing on video content understanding, 6-DoF pose estimation, Generative AI, and AI-based advancements for medical studies.
She was awarded a pilot research project (Generative AI edge for object removal/generation) with Airbus Defence and Aerospace, UK. She received the 2nd Place Award in the 5th Large-Scale Video Object Segmentation Challenge (VIS Track) at ICCV 2023, the 2nd Place Award in the Pixel-level Video Understanding in the Wild Challenge (VPS Track) at IEEE/CVF Computer Vision and Pattern Recognition (CVPR) 2023, and the Outstanding Paper Nomination Award at the International Conference on PRAI 2024. She currently serves as an Associate Editor for prestigious journals including IEEE Transactions on Cybernetics, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Multimedia, IEEE Transactions on Emerging Topics in Computational Intelligence, IEEE Transactions on Big Data, Pattern Recognition, and Pattern Recognition Letters. Her expertise also extends to leadership roles as an Area Chair for ICASSP 2025, the British Machine Vision Conference (BMVC) 2024, and ACM Multimedia 2025 (Dublin), 2024 (Melbourne), 2023 (Ottawa), and 2022 (Lisbon). Dr Wu is a Senior Member of IEEE.
Kazunori Akiyama is a Research Scientist and principal investigator at the Massachusetts Institute of Technology (MIT) Haystack Observatory. For more than a decade he has held international leadership roles in black-hole imaging with the Event Horizon Telescope (EHT), developing advanced algorithms and high-performance software for data processing and computational imaging in radio interferometry. He leads the international EHT Collaboration as its Deputy Project Scientist and holds multiple leadership roles in designing antenna arrays and computing architectures for next-generation ground- and space-based extensions. His work has expanded to next-generation astronomy projects, including the Square Kilometre Array Observatory (headquartered in Manchester, UK) and AtLAST (with the UK Astronomy Technology Centre in Edinburgh as a leading partner), as well as to non-astronomy applications such as the international space-geodesy programme that underpins the terrestrial reference frame at millimetre precision for broad civilian and defence uses, including global sea-level monitoring. He earned his BSc in Physics from Hokkaido University in 2010, and his MSc and PhD in Astronomy from the University of Tokyo in 2012 and 2015, respectively. After his PhD, he moved to MIT as an Overseas Fellow of the Japan Society for the Promotion of Science (2015), later becoming a Jansky Fellow at the National Radio Astronomy Observatory (2017). He was appointed a Research Scientist at MIT in 2020 and has since served as a principal investigator. Dr Akiyama’s honours include the Young Astronomer Award from the Astronomical Society of Japan (2020), the Young Scientists’ Prize from Japan’s Ministry of Education, Culture, Sports, Science and Technology (2020), “20 Under 40: Young Shapers of the Future (Science & Technology)” by Encyclopedia Britannica (2021), the Frontiers of Science Award from the International Congress of Basic Science (2025) recognising an EHTC paper he co-led, and the Breakthrough Prize in Fundamental Physics (2019) as a co-recipient.
Ander Biguri earned his PhD at the University of Bath and CERN, where he worked on iterative reconstruction of CBCT data on GPUs for 4D radiation-therapy imaging. Alongside his research output, he developed the TIGRE toolbox there, which led to a Research Fellow position at the University of Southampton's mu-Vis laboratory, now part of the National Facility for X-ray and Computed Tomography (NXCT). There, he further developed high-performance computing applications for inverse problems in tomography, together with research on shape-based regularization methods and applications.
Afterwards, he became a Research Associate at UCL's Institute of Nuclear Medicine (at UCL Hospitals), where he researched dynamic whole-body Positron Emission Tomography. Since 2022, he has been part of the Cambridge Image Analysis (CIA) group at the University of Cambridge, where he was recently promoted to Assistant Research Professor; he supervises research students and teaches courses on medical imaging and image analysis. His current main interests are data-driven inverse-problem algorithms and their applications (or lack thereof) in real tomographic scanners.