Prof. Sam Kwong
IEEE Fellow
City University of Hong Kong, Hong Kong, China
Sam Kwong
received his B.Sc. degree from the State
University of New York at Buffalo, M.A.Sc. in
electrical engineering from the University of
Waterloo in Canada, and Ph.D. from
Fernuniversität Hagen, Germany. Kwong is
currently a Chair Professor at the CityU
Department of Computer Science, where he
previously served as Department Head and
Professor from 2012 to 2018. Prof. Kwong is an
associate editor of leading IEEE Transactions
journals, including IEEE Transactions on
Evolutionary Computation, IEEE Transactions on
Industrial Informatics, and IEEE Transactions on
Cybernetics.
He has filed more than 20 US patents, of which
13 have been granted. Kwong has a prolific
research record. He has co-authored three
research books, eight book chapters, and over
300 technical papers. According to Google
Scholar, his works have been cited more than
25,000 times with an h-index of 70. In 2014, he
was elevated to IEEE Fellow for his
contributions to optimization techniques in
cybernetics and video coding. In 2022, he was
also elected a fellow of the Asia-Pacific
Artificial Intelligence Association (AAIA).
Currently, he serves
as the President of the IEEE SMC Society.
Speech Title: Intelligent Video Coding by
Data-driven Techniques and Learning Models
Abstract: On June 6, 2016,
Cisco released the white paper [1], VNI Forecast
and Methodology 2015-2020, which reported that 82
percent of Internet traffic would come from video
applications such as video surveillance and
content delivery networks by 2020. It also
reported that in 2015 Internet video surveillance
traffic nearly doubled, virtual reality traffic
quadrupled, Internet TV traffic grew 50 percent,
and other applications saw similar increases.
Annual global traffic was forecast to exceed the
zettabyte (ZB; 1,000 exabytes [EB]) threshold for
the first time in 2016 and to reach 2.3 ZB by
2020, implying that about 1.886 ZB would be
video data. Thus, in
order to relieve the burden on video storage,
streaming and other video services, researchers
from the video community have developed a series
of video coding standards. Among them, the most
up-to-date is the High Efficiency Video Coding
(HEVC) standard, also known as H.265, which has
successfully halved the coding bits of its
predecessor, H.264/AVC, without a significant
increase in perceived distortion. With the rapid
growth of network transmission capacity,
enjoying high definition video applications
anytime and anywhere with mobile display
terminals will be a desirable feature in the
near future. Due to the lack of hardware
computing power and limited bandwidth, video
coding schemes with lower complexity and higher
compression efficiency are still desired. To
achieve higher compression performance, the key
optimization problems, mainly decision making
and resource allocation, must be
solved. In this talk, I will present the most
recent research and new developments on deep
neural network based video coding and its
applications such as saliency detection,
perceptual visual processing and others. This is
very different from the traditional approaches
in video coding. We hope that applying these
intelligent techniques to video coding could
allow us to go further and have more choices in
trading off between cost and resources.
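The decision-making problem mentioned in the abstract is classically formulated as Lagrangian rate-distortion optimization. The sketch below is a simplified illustration of that idea only; the candidate modes, their distortion/rate figures, and the lambda value are made-up placeholders, not material from the talk.

```python
# Illustrative sketch: mode decision in a video encoder is commonly cast
# as minimizing the Lagrangian rate-distortion cost J = D + lambda * R.

def rd_cost(distortion, rate, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate

def choose_mode(candidates, lam):
    """Pick the coding mode with the lowest RD cost.

    candidates: list of (mode_name, distortion, rate_in_bits) tuples.
    """
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

# Hypothetical candidate modes for one block: (name, distortion, bits).
modes = [("intra", 120.0, 300), ("inter", 90.0, 520), ("skip", 200.0, 10)]
best = choose_mode(modes, lam=0.5)
print(best[0])  # → skip
```

The same cost structure underlies the resource-allocation side as well: a rate-control scheme distributes a bit budget across blocks or frames so that the total Lagrangian cost is minimized.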
Prof. Xudong Jiang
IEEE Fellow
Nanyang Technological University, Singapore
Xudong Jiang, IEEE
Fellow, received the B.Eng. and M.Eng. from
University of Electronic Science and Technology
of China (UESTC), and the Ph.D. degree from
Helmut Schmidt University, Hamburg, Germany.
During his work in UESTC, he received two
Science and Technology Awards from the Ministry
for Electronic Industry of China. From 1998 to
2004, he was with the Institute for Infocomm
Research, A-Star, Singapore, as a Lead Scientist
and the Head of the Biometrics Laboratory, where
he developed a system that achieved the highest
efficiency and the second-highest accuracy at the
International Fingerprint Verification
Competition in 2000. He joined Nanyang
Technological University (NTU), Singapore, as a
Faculty Member, in 2004, and served as the
Director of the Centre for Information Security
from 2005 to 2011. Currently, he is a professor
at NTU. Dr Jiang holds 7 patents and has
authored over 200 papers with over 40 papers in
IEEE journals, including 14 papers in IEEE T-IP
and 6 papers in IEEE T-PAMI. His publications
are well cited, with an h-index of 55, and 4 of
his papers were listed among the top 1% most
highly cited papers in the field of Engineering by
Essential Science Indicators. He has served as a
member of the IFS Technical Committee of the IEEE
Signal Processing Society, as an Associate Editor
for IEEE SPL and IEEE T-IP, and as a founding
editorial board member of IET Biometrics.
Currently, Dr Jiang
is an IEEE Fellow and serves as Senior Area
Editor for IEEE T-IP and Editor-in-Chief for IET
Biometrics. His current research interests
include image processing, pattern recognition,
computer vision, machine learning, and
biometrics.
Speech Title: Towards Explainable AI: How
Deep CNN Solves Problems of ANN
Abstract: Discovering knowledge from data
has many applications in various artificial
intelligence (AI) systems. Machine learning is a
way to extract the right information from
high-dimensional data. It is thus no surprise
that learning-based approaches have emerged in
various AI applications. The power of machine
learning was already proven 30 years ago during
the boom of neural networks, but its successful
application to the real world has come only in
recent years, after deep convolutional neural
networks (CNNs) were developed. This is because
machine learning alone can only solve problems
within the training data, whereas the system is
designed for unknown data outside the training
set. This gap can be bridged by regularization:
guiding or constraining the machine learning
with human knowledge. This
speech will analyze these concepts and ideas,
from traditional neural networks to today's
popular deep CNNs. It will answer two questions:
why traditional neural networks failed to solve
real-world problems even after 30 years of
intensive research and development, and how deep
CNNs solve the problems of traditional neural
networks and are now very successful in solving
various real-world AI problems.
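The regularization idea in the abstract can be made concrete with a minimal sketch. All numbers below are hypothetical, and L2 weight decay is just one common form of the human-knowledge guidance the speech refers to (here, the prior "prefer simpler models with smaller weights").

```python
# Illustrative sketch: regularization adds a penalty on model complexity
# to the data-fit loss, nudging learning toward solutions that are more
# likely to generalize beyond the training set.

def regularized_loss(predictions, targets, weights, lam):
    """Mean squared error plus an L2 (weight-decay) penalty.

    lam controls how strongly the human prior ("prefer small weights")
    constrains the fit to the training data.
    """
    mse = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
    l2 = sum(w * w for w in weights)
    return mse + lam * l2

# Two hypothetical models with identical training error: the penalty
# prefers the one with smaller weights.
preds, targets = [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]
small_w, large_w = [0.5, 0.2], [5.0, 4.0]
print(regularized_loss(preds, targets, small_w, lam=0.1) <
      regularized_loss(preds, targets, large_w, lam=0.1))  # → True
```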
Prof. Habib Zaidi
IEEE Fellow
University of Geneva, Switzerland
Professor Habib Zaidi
is Chief Physicist and Head of the PET
Instrumentation & Neuroimaging Laboratory at
Geneva University Hospital and faculty member at
the medical school of Geneva University. He is
also a Professor at the University of Groningen
(Netherlands) and the University of Southern
Denmark. His research is supported by the Swiss
National Science Foundation, private foundations
and industry (totaling US$8.8 million) and
centres on hybrid imaging instrumentation (PET/CT
and PET/MRI), computational modelling, radiation
dosimetry and deep learning. He has been guest
editor for 13
special issues of peer-reviewed journals and
serves on the editorial board of leading
journals in medical physics and medical imaging.
He has been elevated to the grade of fellow of
the IEEE, AIMBE, AAPM, IOMP, AAIA and the BIR.
His academic accomplishments in the area of
quantitative PET imaging have been well
recognized by his peers: he is a recipient of
many awards and distinctions, among them the
prestigious 2010 Kuwait Prize in Applied Sciences
(US$100,000; known as the Middle Eastern Nobel
Prize). Prof. Zaidi has given over 160 invited
keynote lectures and talks at the international
level, has authored over 360 peer-reviewed
articles (h-index 69, more than 18,000 citations)
in prominent journals, and is the editor of four
textbooks.
Speech Title: New Horizons
in Deep Learning-assisted Multimodality Medical
Image Analysis
Abstract: Positron emission tomography
(PET), x-ray computed tomography (CT) and
magnetic resonance imaging (MRI) and their
combinations (PET/CT and PET/MRI) provide
powerful multimodality techniques for in vivo
imaging. This talk presents the fundamental
principles of multimodality imaging and reviews
the major applications of artificial
intelligence (AI), in particular deep learning
approaches, in multimodality medical imaging. It
will inform the audience about a series of
advanced developments recently carried out at the
PET Instrumentation & Neuroimaging Lab of Geneva
University Hospital and other active research
groups. To this end, the applications of deep
learning in five generic fields of multimodality
medical imaging are discussed: imaging
instrumentation design; image denoising
(low-dose imaging); image reconstruction,
quantification and segmentation; radiation
dosimetry; and computer-aided diagnosis and
outcome prediction. Deep learning
algorithms have been widely utilized in various
medical image analysis problems owing to the
promising results achieved in image
reconstruction, segmentation, regression,
denoising (low-dose scanning) and radiomics
analysis. This talk reflects the tremendous
increase in interest in quantitative molecular
imaging using deep learning techniques in the
past decade to improve image quality and to
obtain quantitatively accurate data from
dedicated standalone (CT, MRI, SPECT, PET) and
combined PET/CT and PET/MRI imaging systems.
Deploying AI-based methods on test datasets that
differ from the training data requires ensuring
that the developed model has sufficient
generalizability. This is an important part of
the quality control measures required prior to
implementation in the clinic. Novel deep
learning techniques
are revolutionizing clinical practice and are
now offering unique capabilities to the clinical
medical imaging community. Future opportunities
and the challenges facing the adoption of deep
learning approaches and their role in molecular
imaging research are also addressed.
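A generalizability check of the kind the abstract describes might, in its simplest form, compare model performance on an internal and an external test cohort before deployment. The sketch below is purely illustrative; the scores and the acceptable-drop threshold are hypothetical, not clinical guidance from the talk.

```python
# Illustrative sketch: a minimal quality-control gate that flags a model
# whose performance drops too far on data from an unseen site.

def generalization_check(internal_score, external_score, max_drop=0.05):
    """Return True if the score drop on the external cohort is acceptable."""
    return (internal_score - external_score) <= max_drop

# Hypothetical segmentation scores on internal vs. external test cohorts.
print(generalization_check(0.91, 0.88))  # → True  (drop of 0.03: acceptable)
print(generalization_check(0.91, 0.80))  # → False (drop of 0.11: investigate)
```

In practice such a gate would sit alongside broader QC measures (calibration checks, per-site audits) rather than replace them.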