Title: “Digital Retina – Improvement of Cloud Artificial Vision System from Enlighten of HVS Evolution” by Wen Gao, Peking University, China
Time: 2019/12/17 09:00~10:00 Location: Auditorium
Abstract: The smart-city wave is driving more and more video devices in cloud vision systems to be upgraded from traditional video cameras into edge video devices. However, there are arguments over how much intelligence the device should carry and how much the cloud should keep. The human visual system (HVS) took millions of years to reach its present highly evolved state; it may not yet be perfect, but it is far better than any existing computer vision system. Most artificial vision systems consist of a camera and a computer, analogous to the human eye and brain, but the pathway between the two parts is very low-level compared with that of humans. The human pathway between eye and brain, evolved through natural selection, is quite complex yet energy-efficient and comprehensively accurate. In this talk, I will discuss a new idea about how we can improve the cloud vision system with an HVS-like pathway model, called the digital retina, to make the cloud vision system more efficient and smarter. Furthermore, the biological vision system encodes the world into spike trains, a form quite different from conventional video, which inspires us to explore a totally new visual technical system, from new visual sensors to new vision models.
Bio: Wen Gao is a Boya Chair Professor at Peking University. He has also served as president of the China Computer Federation (CCF) since 2016. He received his Ph.D. degree in electronics engineering from the University of Tokyo in 1991. He was with the Harbin Institute of Technology from 1991 to 1995, and with the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS) from 1996 to 2005. He joined Peking University in 2006. Prof. Gao works in the areas of multimedia and computer vision, on topics including video coding, video analysis, multimedia retrieval, face recognition, multimodal interfaces, and virtual reality. His most cited contributions are model-based video coding and feature-based object representation. He has published seven books, over 280 papers in refereed journals, and over 700 papers in selected international conferences. He is a fellow of IEEE, a fellow of ACM, and a member of the Chinese Academy of Engineering.
Title: “Multimodal Health Surveillance” by Ramesh Jain, UCI, USA (ngs.ics.uci.edu)
Time: 2019/12/18 09:00~10:00 Location: Auditorium
Abstract: Though it evokes mixed feelings, surveillance has been a dominant motivator for computer vision and related technologies. The primary purpose of an intelligent surveillance system is to detect and monitor evolving situations in real time using all observed data, with the implicit goal of helping manage those situations for the benefit of humans. Since health is the most important factor in determining quality of human life, one wonders how we could use surveillance technology in this area. Techniques ranging from diverse medical imaging to video-based facial expression detection and gait analysis are already being used to understand an individual’s health. We can go far beyond these early applications by using various biomarkers and wearable devices to understand personal health through a multimodal surveillance approach. The central element of personalization is a model of the person from a health perspective. Deep personal models require personal chronicles of events not only from cyberspace, as used by many current search systems and social networks, but also from physical, environmental, and biological aspects. Episodic models are too shallow for personalization. Multimodal processing inspired by surveillance-system principles, including computer vision, plays a key role in creating detailed personal chronicles, aka Personicles, for such emerging applications. We are building such Personicles for health applications using smartphones, wearable devices, different biological sensors, cameras, and social media. These Personicles and other relevant event streams may then be used to build personal models using event mining and related AI approaches. In this presentation, we will discuss and demonstrate an approach to building Personicles from diverse data streams and show how this can yield deeper personal models for applications like personal health navigators.
Bio: Ramesh Jain is an entrepreneur, researcher, and educator. He is a Donald Bren Professor in Information & Computer Sciences at the University of California, Irvine. His research interests have covered control systems, computer vision, artificial intelligence, and multimedia computing. His current research passion is addressing health issues using cybernetic principles, building on progress in sensors, mobile computing, processing, artificial intelligence, computer vision, and storage technologies. He is the founding director of the Institute for Future Health at UCI. He is a Fellow of AAAS, ACM, IEEE, AAAI, IAPR, and SPIE. Ramesh co-founded several companies, managed them in their initial stages, and then turned them over to professional management. He enjoys new challenges and likes to use technology to solve them. He is participating in addressing the biggest challenge for us all: how to live long in good health.