Self-Driving Cars with Duckietown is the first robotics and AI MOOC with scale-model self-driving cars. Learn state-of-the-art autonomy hands-on: build your own real robot (Duckiebot) and get it to drive autonomously in your scaled city (Duckietown). This self-study course is not actively moderated. You can view the course for free, but questions will not be answered and there is no guarantee that the content will be available or updated.

Course Details

Language English
Duration 14 weeks
Effort 4-10 hrs/week
Description

Robotics and AI are all around us and promise to revolutionize our daily lives. Autonomous vehicles have huge potential to impact society in the near future, for example by making private vehicle ownership unnecessary!


Have you ever wondered how autonomous cars actually work?


With this course, you will start from a box of parts and finish with a scaled self-driving car that drives autonomously in your living room. In the process, you will use state-of-the-art approaches, the latest software tools, and real hardware in an engaging hands-on learning experience.


Self-Driving Cars with Duckietown is a practical introduction to vehicle autonomy. It explores real-world solutions to the theoretical challenges of autonomy, including their translation into algorithms and their deployment in simulation as well as on hardware.


Using modern software architectures built with Python, Robot Operating System (ROS), and Docker, you will come to appreciate the complementary strengths of classical architectures and modern machine-learning-based approaches. The goal of this introductory course is to take you from zero to a self-driving car driving safely in a Duckietown.
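
For a flavor of what this software stack looks like in practice, here is a minimal ROS node written in Python. It is an illustrative sketch only: the topic name cmd_vel and the Twist message are generic ROS conventions, and the Duckietown software uses its own message types and topic names.

    #!/usr/bin/env python3
    # Minimal ROS publisher node (illustrative; not the course's actual interface).
    import rospy
    from geometry_msgs.msg import Twist

    def main():
        rospy.init_node("simple_driver")                       # register with the ROS master
        pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)  # velocity command topic
        rate = rospy.Rate(10)                                  # loop at 10 Hz
        while not rospy.is_shutdown():
            cmd = Twist()
            cmd.linear.x = 0.2    # slow forward motion (m/s)
            cmd.angular.z = 0.0   # no rotation (rad/s)
            pub.publish(cmd)
            rate.sleep()

    if __name__ == "__main__":
        main()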


This course is presented by professors and scientists who are passionate about robotics and accessible education. It uses the Duckietown robotic ecosystem, an open-source platform created at the MIT Computer Science and Artificial Intelligence Laboratory and now used by over 150 universities worldwide.


We support a track for learners to deploy their solutions in a simulation environment, and an additional option for learners who want to engage in the challenging but rewarding, tangible, hands-on experience of making the theory come to life in the real world. The hardware track is streamlined through an all-inclusive, low-cost, Jetson Nano-powered Duckiebot kit that includes a city track.


This course is made possible thanks to the support of the Swiss Federal Institute of Technology in Zurich (ETH Zurich), in collaboration with the University of Montreal (Prof. Liam Paull), the Duckietown Foundation, and the Toyota Technological Institute at Chicago (Prof. Matthew Walter).

What you will learn

After this course, you will be able to program your Duckiebot to navigate (without accidents!) the road lanes of a model city with rubber-duckie pedestrians as obstacles, using predominantly computer-vision-based techniques.


Moreover, you will:



  • recognize essential robot subsystems (sensing, actuation, computation, memory, mechanical) and describe their functions

  • make your Duckiebot drive along user-specified paths

  • understand how to command a robot to reach a goal position

  • make your Duckiebot take driving decisions autonomously according to "traditional approaches", i.e., following the estimation, planning, and control architecture

  • make your Duckiebot take driving decisions autonomously according to "modern approaches" (reinforcement learning)

  • process streams of images

  • be able to set up an efficient software environment for robotics with state-of-the-art tools (Docker, ROS, Python)

  • program your Duckiebot and make it safely drive in empty road lanes

  • program your Duckiebot and make it recognize and avoid rubber duckie obstacles

  • submit your robot agents (a.k.a. "robot minds") to public challenges, and test your skills against your peers


Additional goals (require hardware)



  • independently assemble a Duckiebot and a Duckietown

  • remotely operate your Duckiebot and see with its eye(s)

  • be able to discuss differences between theory, simulation, and real-world implementation for different approaches

  • experience the challenges of deploying complex autonomous robots in the real world, and reap the rewards of getting it to work

Prerequisites

Basic Linux, Python, Git:



  • We are going to use a terminal interface, so basic knowledge of Bash is required (cd, ls, mkdir, ...)

  • We are going to write "autonomy" code in Python

  • We are going to pull, fork, push, and branch repositories, etc.


Elements of linear algebra, probability, and calculus:



  • We are going to use matrices to represent coordinate systems

  • We are going to use notions of probability (marginalization, Bayes' theorem) to derive perception algorithms for the Duckiebot

  • We are going to write down equations of motion, which involve ODEs (recognizing the acronym is a good start!); a small sample appears below
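
For a taste of the mathematics involved, here are Bayes' theorem and a standard differential-drive kinematic model. These are common textbook forms, not the course's exact derivations:

    % Bayes' theorem: update the belief about a state x given a measurement z
    P(x \mid z) = \frac{P(z \mid x)\, P(x)}{P(z)}

    % Differential-drive kinematics: pose (x, y, \theta), linear speed v,
    % angular speed \omega
    \dot{x} = v \cos\theta, \qquad
    \dot{y} = v \sin\theta, \qquad
    \dot{\theta} = \omega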


Computer with native Ubuntu installation



  • We are going to use Ubuntu 22.04 with a native (e.g., dual-boot) installation

  • Minimum requirements: quad-core CPU at 1.8 GHz, 4 GB RAM, 60 GB of disk space, GPU compatible with OpenGL 2.1+

  • Recommended setup: quad-core CPU at 2.1 GHz, 8 GB RAM, 120 GB of disk space, GPU compatible with OpenGL 2.1+

  • A broadband internet connection: we are going to upload and download gigabytes of data (exercises, activities, agent submissions)

Plan

Module 0: Welcome to the course



  • Welcome to the course, by Prof. Emilio Frazzoli

  • You will familiarize yourself with the logistics and navigation interface of the course resources

  • You will start a learning journey in the world of robot autonomy with Duckietown


Module 1: Introduction to self-driving cars



  • The potential and challenges

  • Levels of autonomy

  • The vision for autonomous vehicles (AVs)

  • Activities: You will set up your learning environment and your Duckiebot, and make your first challenge submission


Module 2: Towards autonomy



  • Making a robot

  • Sensorimotor architectures

  • Stateful architectures

  • Logical and physical architectures

  • Application: You will create a reactive "Braitenberg" agent to avoid duckies and see how your agent compares to other submissions; a minimal sketch of the idea follows below
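
To make the idea concrete, here is a toy Braitenberg-style step function in Python. It is a sketch under simplifying assumptions: image brightness stands in for duckie detections, and the wiring and gains are illustrative, not the course's reference solution.

    import numpy as np

    def braitenberg_step(image: np.ndarray) -> tuple:
        """Toy reactive ("Braitenberg") agent: each image half excites the
        wheel on the same side, so the robot turns away from bright stimuli."""
        h, w, _ = image.shape
        brightness = image.mean(axis=2) / 255.0      # per-pixel brightness in [0, 1]
        left_stim = brightness[:, : w // 2].mean()   # average stimulus, left half
        right_stim = brightness[:, w // 2 :].mean()  # average stimulus, right half
        base, gain = 0.3, 0.4                        # nominal speed and coupling (made up)
        v_left = base + gain * left_stim             # same-side ("fear") wiring:
        v_right = base + gain * right_stim           # the robot veers away from the stimulus
        return v_left, v_right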


Module 3: Modeling and Control



  • Introduction to control systems

  • Representations and models

  • PID control

  • Application: You will design an odometry function and a PID controller to command your Duckiebot's angular velocity; a toy PID implementation is sketched below
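
As a preview, here is a minimal textbook PID controller in Python. The gains and the heading-control usage are illustrative assumptions, not the course's tuned solution.

    class PID:
        """Minimal PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""
        def __init__(self, kp: float, ki: float, kd: float):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def update(self, error: float, dt: float) -> float:
            self.integral += error * dt              # accumulate the error over time
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical use: regulate heading by commanding angular velocity.
    # pid = PID(kp=2.0, ki=0.0, kd=0.1)
    # omega = pid.update(theta_ref - theta_measured, dt=0.05)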


Module 4: Robot Vision



  • Introduction to projective geometry

  • Camera modeling and calibration

  • Image processing


  • Application: You will develop the image processing techniques necessary for visual lane servoing: controlling your Duckiebot to drive within the lane markings (see the sketch below)
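
As a preview of this kind of pipeline, here is a classical color-plus-edge lane-marking detector using OpenCV. The HSV thresholds are rough guesses for white and yellow tape, not the calibrated values used in the course.

    import cv2
    import numpy as np

    def detect_lane_markings(bgr: np.ndarray) -> np.ndarray:
        """Return a binary mask of likely lane-marking pixels."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        white = cv2.inRange(hsv, (0, 0, 150), (180, 60, 255))    # low saturation, high value
        yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))  # yellow hue band
        color_mask = cv2.bitwise_or(white, yellow)
        edges = cv2.Canny(bgr, 80, 200)                          # gradient-based edges
        return cv2.bitwise_and(color_mask, edges)                # edges on lane-colored pixels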




Module 5: Object Detection



  • Introduction to neural networks

  • Convolutional neural networks

  • One- and two-stage object detection

  • Application: You will train a convolutional neural network (CNN) to detect duckies and integrate your model with ROS to run onboard your Duckiebot and avoid duckies; a toy CNN is sketched below
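
For a sense of scale, here is a toy CNN in PyTorch that classifies fixed-size image crops as duckie or background. A real one- or two-stage detector also regresses bounding boxes; the layer sizes here are illustrative.

    import torch
    import torch.nn as nn

    class TinyDuckieNet(nn.Module):
        """Toy two-class CNN for 64x64 RGB crops (illustrative sizes)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)  # duckie vs. background

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(1))

    # logits = TinyDuckieNet()(torch.randn(1, 3, 64, 64))  # shape (1, 2)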


Module 6: State Estimation and Localization



  • Bayes filtering framework

  • Parameterized methods (Kalman filter)

  • Sampling-based methods (particle and histogram filters)

  • Application: You will build a state estimation algorithm combining the dynamics and sensor data of your Duckiebot in order to predict its pose as it travels through the world; a one-dimensional sketch of the idea follows
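
The essence of the module is easiest to see in one dimension. Here is a single predict/update cycle of a 1-D Kalman filter; the noise levels are illustrative placeholders.

    def kalman_1d(mu, var, u, z, q=0.1, r=0.5):
        """One predict/update cycle of a 1-D Kalman filter.
        mu, var: current belief (mean, variance); u: commanded displacement;
        z: position measurement; q, r: process and measurement noise."""
        # Predict: apply the motion model and let the uncertainty grow.
        mu_pred = mu + u
        var_pred = var + q
        # Update: blend prediction and measurement by relative confidence.
        k = var_pred / (var_pred + r)          # Kalman gain in [0, 1]
        mu_new = mu_pred + k * (z - mu_pred)
        var_new = (1.0 - k) * var_pred
        return mu_new, var_new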


Module 7: Planning I



  • Formalization of the planning problem

  • Application: You will create a collision checker to determine whether your Duckiebot would crash into an obstacle (a simple disc-based sketch follows)
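
A common simplification, sketched below, is to approximate the robot footprint and each obstacle as discs; the course's checker may use richer footprints.

    import math

    def in_collision(robot_xy, robot_radius, obstacles):
        """True if a disc-shaped robot overlaps any disc obstacle.
        obstacles: iterable of (x, y, radius) tuples."""
        rx, ry = robot_xy
        for ox, oy, orad in obstacles:
            # Two discs overlap when the center distance is below the radius sum.
            if math.hypot(rx - ox, ry - oy) < robot_radius + orad:
                return True
        return False

    # in_collision((0.0, 0.0), 0.1, [(0.5, 0.0, 0.2)])  # -> False (0.5 > 0.3)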


Module 8: Planning II



  • Graphs

  • Graph search algorithms

  • Application: You will tackle a variety of path-planning challenges and leverage all the skills you've built so far to navigate your Duckiebot through different simulated environments; a minimal graph-search sketch follows below
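
As a minimal example of the search ideas involved, here is breadth-first search on an adjacency-list graph. It returns a fewest-edges path; planners like Dijkstra or A* generalize this with edge costs and heuristics.

    from collections import deque

    def shortest_path(graph, start, goal):
        """Fewest-edges path via breadth-first search, or None if unreachable.
        graph: dict mapping node -> list of neighbor nodes."""
        parent = {start: None}
        frontier = deque([start])
        while frontier:
            node = frontier.popleft()
            if node == goal:                   # walk parents back to the start
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for nbr in graph.get(node, []):
                if nbr not in parent:          # first visit fixes the shortest path
                    parent[nbr] = node
                    frontier.append(nbr)
        return None

    # shortest_path({"A": ["B"], "B": ["A", "C"], "C": ["B"]}, "A", "C")
    # -> ["A", "B", "C"]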


Module 9: Learning by Reinforcement



  • Markov decision processes

  • Value function

  • Policy gradients

  • Domain randomization

  • Application: You will explore the capabilities and limitations of reinforcement learning models when applied to real-world robotics tasks such as lane following; a minimal sketch of the underlying update rule follows
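
To hint at what "learning by reinforcement" means mechanically, here is a single tabular Q-learning update. Tabular Q-learning is a simpler relative of the policy-gradient methods covered in the module; the learning and discount rates are illustrative.

    import numpy as np

    def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        """One tabular Q-learning step on an MDP.
        Q: (num_states, num_actions) array; s, a: current state and action;
        r: observed reward; s_next: resulting state."""
        target = r + gamma * np.max(Q[s_next])   # bootstrapped return estimate
        Q[s, a] += alpha * (target - Q[s, a])    # nudge the estimate toward the target
        return Q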

Course instructors

Andrea Censi

Andrea Censi is a senior researcher at ETH Zurich. He obtained a Ph.D. in Control & Dynamical Systems at Caltech in 2012.

Emilio Frazzoli

Emilio Frazzoli is a professor at ETH Zurich. His previous and current roles include:


  • '06-'16: Professor of Aeronautics and Astronautics, MIT

  • '13-to date: Chief Scientist, nuTonomy / Aptiv / Motional


Jacopo Tani

Jacopo Tani's background includes:


  • Ph.D. in Aerospace Engineering, Rensselaer Polytechnic Institute (RPI)

  • Postdoctoral Associate, Massachusetts Institute of Technology (MIT)

  • Senior Researcher, Swiss Federal Institute of Technology (ETHZ)

Andrea Francesco Daniele

Andrea Francesco Daniele is a Ph.D. candidate at the Toyota Technological Institute at Chicago.


Matthew Walter

Matthew Walter is an assistant professor at the Toyota Technological Institute at Chicago. His background includes:


  • Ph.D. in Robotics, MIT (2007)

  • Research Scientist at MIT until 2014


Liam Paull

Liam Paull is an assistant professor at Université de Montréal. His background includes:


  • Ph.D. in Marine Robotics, University of New Brunswick (2013)

  • Research Scientist at MIT until 2017

ETH Zurich

Freedom and individual responsibility, entrepreneurial spirit and open-mindedness: ETH Zurich stands on a bedrock of true Swiss values. Our university for science and technology dates back to the year 1855, when the founders of modern-day Switzerland crea…


Toyota Technological Institute at Chicago

https://www.ttic.edu/


Université de Montréal

https://www.umontreal.ca/
