
This seminar introduces the foundations of multimodal machine learning and its applications to simulated and
physical robot platforms. It covers the basics of machine learning, preliminary concepts of sensors (e.g., depth
and color cameras, IMUs, position encoders), sensor fusion techniques, and the specifications of robot platforms
(e.g., Nao and Pepper) and simulators. Students will work in teams on a topic in (deep) multimodal learning
for robotics and present their experimental results in oral presentations and written reports.

**Update:** In accordance with HU's summer semester policy and due to COVID-19, the lectures, assignments, and presentations
will be held online via Zoom.

Some experience with machine learning, programming, and robotics is required. Although we will not assume that you have an extensive background in these fields, a basic understanding of the items listed below will be necessary to follow the content of the course.

- **Linear Algebra:** Throughout the course, we will make extensive use of matrix and vector operations to construct our machine learning pipelines. You should be familiar with the basics and notation of matrix/vector calculus.
- **Probability and Statistics:** You should have at least a sound knowledge of basic probability and statistics concepts: random variables, probability distributions, conditional probability, mean, variance, etc.
- **Programming:** The programming language of this course is Python 3. If you are comfortable with other programming languages (e.g., C++/Matlab), the tutorials will help you transition to Python.
- **Tools/Libraries:** Since this is an online course, you should be familiar with common software development tools and libraries: Git, TensorFlow, Keras, NumPy, Matplotlib (or Seaborn), etc.
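As a rough self-check of the linear algebra and tooling prerequisites, you should be able to read a NumPy snippet like the following (a minimal illustration, not course material) without difficulty:

```python
import numpy as np

# A matrix-vector product: the basic building block of the
# machine learning pipelines used throughout the course.
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # a 2x2 weight matrix
x = np.array([0.5, -1.0])    # an input vector

y = W @ x                    # matrix-vector product: y_i = sum_j W_ij * x_j
print(y)                     # [-1.5 -2.5]
```

If the `@` operator and the array shapes here are unfamiliar, the NumPy basics should be reviewed before the course starts.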

Upon completion of this seminar course, students will be able to:

- develop hands-on programming skills to build multimodal machine learning pipelines
- understand state-of-the-art machine learning algorithms: perceptron, K-Means, self-organizing maps, Q-learning, and deep neural networks
- grasp the working principles of different sensors: color and depth cameras, range sensors such as sonar and infrared, laser scanners, etc.
- deploy their machine learning models on actual (and virtual) robots
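To give a flavor of the algorithms listed above, here is a minimal sketch of the simplest one, the perceptron, trained on a toy linearly separable problem (an illustrative example only; the course materials cover the algorithms in depth):

```python
import numpy as np

def perceptron_train(X, y, epochs=10, lr=1.0):
    """Classic perceptron learning rule; labels y must be +1 or -1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
                w += lr * yi * xi        # nudge weights toward the example
                b += lr * yi
    return w, b

# Toy data: logical AND, which is linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])

w, b = perceptron_train(X, y)
preds = np.sign(X @ w + b)   # recovers the AND labels
```

Because the data are linearly separable, the perceptron convergence theorem guarantees this loop finds a separating hyperplane in finitely many updates.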

Dr. Murat Kirtay
Prof. Dr. Verena V. Hafner

The banner image was downloaded from here and is subject to copyright by SoftBank Robotics.

The template was modified by Murat Kirtay using Mike Pierce's.
© Conference Organizers.