DAY ONE
Fundamentals of Deep Learning
We will introduce the workshop with an overview of the contents and how everything ties together. In this first lecture we will start from the basics of neural networks to build the foundation for the more advanced topics. The main mathematical notation will be introduced here, and we will provide a short refresher on the necessary mathematical background, such as basic linear algebra, derivatives and tensors. From there we will analyze simple shallow networks and learn the concepts of stochastic gradient descent, forward propagation and backpropagation on these simple examples. We will then look at the typical hyperparameters: learning rate, momentum, batch size, etc. This will give you the tools and understanding for the following, more advanced topics.
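To make the update rule concrete, here is a minimal sketch (in numpy, with illustrative variable names, not code from the lecture) of a stochastic gradient descent step with momentum, using the learning rate and momentum hyperparameters mentioned above:

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update on a parameter array w.

    velocity accumulates an exponentially decaying sum of past gradients;
    lr (learning rate) and momentum are two of the hyperparameters above.
    """
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2*w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(100):
    w, v = sgd_momentum_step(w, grad=2 * w, velocity=v)
print(w)  # moves close to [0, 0]
```

In a real network the gradient would come from backpropagation over a mini-batch, with the batch size being another hyperparameter to tune.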
Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are currently the state of the art for classification and object detection problems. In this lecture, we will give an introduction to CNN architectures, covering the main components such as convolutional, pooling, ReLU, normalization and fully connected layers. Alongside these main components, we will present case studies from both the Computer Vision and Medical Imaging communities.
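As a rough illustration of how the convolutional, ReLU and pooling components fit together, the following numpy sketch (toy sizes, illustrative names, no training) chains them into a single feature map:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D convolution (cross-correlation) of image x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0)          # element-wise non-linearity

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    H, W = x.shape
    H, W = H - H % size, W - W % size
    return x[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

x = np.random.randn(8, 8)            # toy grayscale "image"
k = np.random.randn(3, 3)            # one learnable filter
feature_map = max_pool(relu(conv2d(x, k)))
print(feature_map.shape)             # (3, 3)
```

A real CNN stacks many such filters and layers, often with normalization in between, and ends with fully connected layers that produce the classification scores.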
AutoEncoders
With autoencoders (AEs), we are able to learn an efficient hierarchical representation of a data set in an unsupervised fashion. Autoencoders are trained to reconstruct their inputs and can serve as efficient feature extraction and dimensionality reduction methods. In this lecture, we will cover the basics of autoencoders and take a deeper look at variants such as denoising, variational, stacked and convolutional AEs.
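The sketch below (plain numpy, toy dimensions, gradients written out by hand and scaled only up to a constant factor) shows the basic autoencoder training loop: encode into a small code, reconstruct, and minimize the reconstruction error:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(200, 20)                      # toy data: 200 samples, 20 features
n_hidden = 5                               # bottleneck forces a compressed code

W1 = 0.1 * rng.randn(20, n_hidden); b1 = np.zeros(n_hidden)   # encoder
W2 = 0.1 * rng.randn(n_hidden, 20); b2 = np.zeros(20)         # decoder
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    H = sigmoid(X @ W1 + b1)               # code (hidden representation)
    X_hat = H @ W2 + b2                    # reconstruction of the input
    err = X_hat - X                        # grad of squared error w.r.t. X_hat (up to a constant)
    # Backpropagate the reconstruction error through decoder and encoder.
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = err @ W2.T * H * (1 - H)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print(np.mean((X_hat - X) ** 2))           # reconstruction error decreases
```

A denoising AE would feed a corrupted copy of X into the encoder while still reconstructing the clean X; stacked and convolutional variants change the architecture rather than the objective.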
Recurrent Neural Networks
Many problems in industry and science deal with sequential data, i.e. discrete or continuous data in the (spatio-)temporal domain. Examples are natural language processing, object tracking and activity recognition from video data. Recurrent Neural Networks (RNNs) are designed specifically to deal with data of a sequential nature. Our talk will cover the basic structure and theory of RNNs and long short-term memory (LSTM) networks, and then provide an overview of state-of-the-art RNN research.
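To show what "recurrent" means in practice, here is a minimal forward pass of a vanilla RNN in numpy (illustrative sizes and names; LSTMs replace this simple tanh update with gated cells):

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h):
    """Run a vanilla RNN over a sequence xs (list of input vectors).

    The same weights are reused at every time step; the hidden state h
    carries information forward through the sequence, which is what makes
    RNNs suited to sequential data.
    """
    h = np.zeros(W_hh.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)   # recurrent state update
        hs.append(h)
    return hs

rng = np.random.RandomState(0)
xs = [rng.randn(4) for _ in range(10)]            # toy sequence of length 10
W_xh = 0.1 * rng.randn(8, 4)                      # input-to-hidden weights
W_hh = 0.1 * rng.randn(8, 8)                      # hidden-to-hidden weights
hs = rnn_forward(xs, W_xh, W_hh, b_h=np.zeros(8))
print(hs[-1].shape)                               # final hidden state: (8,)
```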
DAY TWO |
Advances in Convolutional Neural Networks |
This talk will provide a detailed look at the current state of the art in CNN research. We will introduce a variety of methods that provide insight into trained networks, trying to answer questions like “What has my network learned?” or “Why did it not learn anything useful?”. Many different architectures have been proposed in recent years, but which of them make sense for which task or amount of available data? This lecture will provide an overview of typical architectures for many different tasks. From there we will analyze different loss functions and their influence on the training and performance of the network, providing you with intuitions on how to tune your architecture.
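As a small, self-contained example of how the choice of loss function changes what the network is penalized for, the sketch below (numpy, toy scores, not taken from the lecture) compares softmax cross-entropy with a multiclass hinge loss on the same class scores:

```python
import numpy as np

def softmax_cross_entropy(scores, y):
    """Cross-entropy of the softmax distribution; penalizes low probability of the true class y."""
    scores = scores - scores.max()                 # numerical stability
    log_probs = scores - np.log(np.sum(np.exp(scores)))
    return -log_probs[y]

def multiclass_hinge(scores, y, margin=1.0):
    """SVM-style hinge loss; zero once the true class wins by at least `margin`."""
    margins = np.maximum(0.0, scores - scores[y] + margin)
    margins[y] = 0.0
    return margins.sum()

scores = np.array([2.0, 0.5, -1.0])                # toy class scores from a network
print(softmax_cross_entropy(scores, y=0))
print(multiclass_hinge(scores, y=0))
```

Cross-entropy keeps pushing the probability of the true class towards one, while the hinge loss is satisfied as soon as the margin is met; such differences influence training dynamics and final performance.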
Domain Adaptation in Deep Learning |
All learning algorithms, including autoencoders and CNNs, are trained on a specific dataset and thus develop a bias towards it. Domain adaptation (DA) deals with taking models trained on a source domain and transferring the knowledge into a new target domain. In this lecture, we will introduce you to the main facets of domain adaptation, take a closer look at dataset bias and delve into approaches to adapt autoencoders and neural networks. |
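One common and simple adaptation strategy, shown here only as an illustration and not as a specific method from the lecture, is fine-tuning: reuse a network trained on the source domain and re-train only its last layers on target-domain data. A minimal Keras sketch with hypothetical layer sizes:

```python
# Fine-tuning sketch: freeze the generic source-domain features and
# re-train only the task-specific head on the target domain.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(256, activation='relu', input_dim=784),   # generic features (source)
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),                 # task-specific head
])
# ... assume `model` has already been trained on the source domain here ...

for layer in model.layers[:-1]:
    layer.trainable = False                          # freeze source features

model.compile(optimizer='sgd', loss='categorical_crossentropy')
# model.fit(X_target, y_target, epochs=10)           # fine-tune on target data
```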
Deep Learning for Image Segmentation in Medicine
Segmentation of object boundaries in images is a classical problem in computer vision and medical imaging. Deep Learning methods, in particular CNNs, have achieved significant breakthroughs in the past few years. In this talk, we briefly cover several works in the medical domain, including techniques like semantic segmentation, OverFeat, and combined CNN-RNNs. Finally, we take a look at our recently proposed Hough-CNN method for segmentation of multi-modal medical image data, which is able to learn the shape and appearance of anatomies from a small number of training examples. |
AggNet: Deep Learning from Crowds |
Collecting annotations for large datasets via crowdsourcing is a convenient means of generating training data, but it generally introduces “noisy” annotations. This problem intensifies for ambiguous and highly imbalanced data, which are common in the medical domain. Handling such datasets is tedious, time-consuming and expensive. Our recently proposed network, AggNet, enables the user to fully benefit from crowdsourced annotations, as it implicitly handles the aforementioned issues.
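For context only, the snippet below shows a naive way of handling crowd labels: simple majority voting, which ignores annotator reliability and class imbalance. This is not the AggNet method; names and shapes are illustrative.

```python
import numpy as np

def majority_vote(crowd_labels):
    """Naive aggregation: each sample gets the label most annotators chose.

    crowd_labels: array of shape (n_samples, n_annotators) with integer labels.
    Ignoring annotator reliability and imbalance is exactly what makes noisy
    crowd data hard and motivates learned aggregation schemes.
    """
    aggregated = []
    for votes in crowd_labels:
        values, counts = np.unique(votes, return_counts=True)
        aggregated.append(values[np.argmax(counts)])
    return np.array(aggregated)

votes = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 1]])   # 3 samples, 3 annotators
print(majority_vote(votes))                            # [1 0 1]
```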
Activity Recognition and Event Detection in Multimodal Medical Surveillance |
Visual perception and interpretation of human activities is a fundamental skill required for the understanding of our environment. While this task is routinely performed by the brain in a few milliseconds, its automation for surveillance applications has not been solved and is an ongoing area of research. CNNs have considerably improved on the state of the art for activity recognition and event detection in image and video data. In this talk, we present the most relevant architectures and show an application of CNNs to multimodal patient monitoring for real-time detection of epileptic seizures.
Deep Learning for 3D Computer Vision
This talk gives an overview of the fields in computer vision based on 3D and RGB-D data that have recently been influenced by Deep Learning. It includes both techniques aimed at 3D reconstruction, such as stereo matching and depth-from-monocular, and techniques aimed at 3D perception, such as object detection and pose estimation from RGB-D data, 3D object retrieval and 3D human pose estimation.
DAY THREE |
Introduction to MatConvNet |
MatConvNet is an implementation of Convolutional Neural Networks (CNNs) for MATLAB. It is designed for simplicity and efficiency in developing new network layers and architectures. It supports both CPU and GPU computation with a C++ back-end for the basic network operations. We will give an introduction and explain the basic concepts of developing with MatConvNet.
Introduction to TensorFlow |
Google recently released TensorFlow to the public in an effort to boost Machine Learning research, and the announcement was extremely well received by companies and universities worldwide. TensorFlow is a general-purpose framework to model, train and execute computational graphs using Python and/or C++. We will give an introduction, explain the basic concepts of TensorFlow, and show how it can be used as a Deep Learning suite.
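To give a flavour of the computational-graph style of programming, here is a minimal TensorFlow 1.x-style sketch (a toy softmax layer; shapes and names are illustrative) that builds a graph in Python and then runs it in a session:

```python
import tensorflow as tf

# Build a computational graph: a placeholder for input and one dense softmax layer.
x = tf.placeholder(tf.float32, shape=[None, 4], name='input')
W = tf.Variable(tf.random_normal([4, 2]), name='weights')
b = tf.Variable(tf.zeros([2]), name='bias')
y = tf.nn.softmax(tf.matmul(x, W) + b, name='output')

# The graph is only executed inside a Session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]})
    print(out)   # class probabilities for one toy input
```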