Self-Driving Cars

with ROS and Autoware

Self-driving cars will transform the way we travel and commute.

This technology merges robotics, machine learning, engineering, and modern software development methods.

What is the course about?

 

Developing production-grade autonomous driving systems requires a stack of interrelated technologies. This course brings together all the significant parts into a practical step-by-step guide to architect, develop, test, and deploy an autonomous system.

This intermediate-level course uses the popular open-source robotics framework ROS 2 and Autoware.Auto algorithms, and covers, over the course of 14 lectures, state-of-the-art techniques that combine hardware, software, algorithms, methodologies, tools, and data analytics.

What will I learn?

 

You’ll learn a modern approach to developing complex systems for autonomy that the most innovative automotive companies are adopting. The teachers are experienced professionals who have contributed open-source materials that move the industry towards higher standards of design, engineering, and safety.

 

Who should take the course?

This is an intermediate-level course for individuals who develop pre-production autonomous driving systems. Participants should have knowledge of C++ (including testing), robotics frameworks, and system integration.

Get notified when new lectures are released!

Autoware Course Lecture 1 - Part 1
Autoware Course Lecture 1 - Part 2
Autoware Course Lecture 2
Autoware Course Lecture 3 - Part 1
Autoware Course Lecture 3 - Part 2
Autoware Course Lecture 4 - Part 1
Autoware Course Lecture 4 - Part 2
Autoware Course Lecture 5 - Part 1
Autoware Course Lecture 5 - Part 2
Autoware Course Lecture 6
Autoware Course Lecture 7
Autoware Course Lecture 8
Autoware Course Lecture 9 - Part 1
Autoware Course Lecture 9 - Part 2
Autoware Course Lecture 10
Autoware Course Lecture 11 - Part 1
Autoware Course Lecture 11 - Part 2
Autoware Course Lecture 12 - Part 1
Autoware Course Lecture 12 - Part 2
Autoware Course Lecture 13
Autoware Course Lecture 14

Course Overview

Lecture 1 | Development Environment


You can get the slides of the speakers at the following links:
Part 1 slides: https://gitlab.com/ApexAI/autowarecla...
Part 2 slides: https://gitlab.com/ApexAI/autowarecla...
Led by Dejan Pangercic and Tobias Augspurger
Duration: 60-90min
This lecture will provide three major take-aways:
1. A concise overview of all 14 lectures
2. The development environment in which you will reproduce the labs
3. The methods used by the course to develop software for safety-critical systems
Part 1
1. Course intro
· Autoware in a video
· Why are we offering this class
· What will the students learn
· How will the students learn
· Walk through the syllabus
2. Quick start - development environment
· Install ADE
· Install ROS 2
· Install Autoware.Auto
· Run object detection demo
· Edit and compile your code
Part 2
1. Development of complex and safety-critical software - the theory
· Communication goal of this document
· Safety and security in automotive
· Can you get sued for bad code if somebody gets injured?
· Formal safety development standards in automotive software systems
· Popular software development models
· Classical ISO 26262 development and the conflict with urban autonomous driving
· Agile system engineering
· Operational design domain
· Fail-safe vs. fail-operational
· Continuous engineering
· Separation of concerns
2. Development of complex and safety-critical software - the practice (see the unit test sketch after this outline)
· General design
· Develop in a fork
· Designs and requirements
· Validation
· Unit testing and structural code coverage
· Integration testing
· Verification by operational design domain
· Continuous integration and DevOps
· Conclusion and the next lecture
3. Conclusion and the next lecture
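Unit testing with structural code coverage (Part 2 above) is central to the safety-critical workflow this lecture describes. As a minimal sketch of what such a test looks like with GTest, the test framework commonly used in ROS 2 and Autoware.Auto packages; the helper clamp_speed() and its limits are hypothetical examples, not course material:

#include <gtest/gtest.h>

// Hypothetical safety-relevant helper: limit a commanded speed to [0, max].
double clamp_speed(double commanded_mps, double max_mps)
{
  if (commanded_mps < 0.0) {return 0.0;}
  return commanded_mps > max_mps ? max_mps : commanded_mps;
}

TEST(ClampSpeed, WithinLimitsIsUnchanged) {
  EXPECT_DOUBLE_EQ(clamp_speed(5.0, 10.0), 5.0);
}

TEST(ClampSpeed, OutOfRangeInputsAreClamped) {
  EXPECT_DOUBLE_EQ(clamp_speed(-1.0, 10.0), 0.0);   // never command reverse
  EXPECT_DOUBLE_EQ(clamp_speed(25.0, 10.0), 10.0);  // cap at the limit
}

int main(int argc, char ** argv)
{
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}

In a colcon workspace, such a test would typically be registered with ament_add_gtest in CMakeLists.txt and executed with colcon test.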




Lecture 2 | ROS 2 101


You can get the slides at the following links:
Slides in pdf format: https://gitlab.com/ApexAI/autowarecla...
Slides in html format: https://gitlab.com/ApexAI/autowarecla...
Led by Katherine Scott
Duration: 60-90min
1. Intro
2. Getting help
3. Unofficial resources
4. ROS Intro
· Brief intro to ROS
· Core concepts of ROS
· Environment setup
· Colcon nomenclature
5. Nodes and Publishers (see the publisher sketch after this outline)
· Overview of topics
· Building and running a node
· Simple publisher build and run
· Modify the publisher
· Building a subscriber
· Pub/Sub working together
6. Services
· Concept overview
· Review basic service
· Running basic services
· Calling services from command line
· Building a service client
· Executing service server/client
7. Actions
· Action overview
· Action file review
· Basic action review
· Running / calling an action
· Action client review
· Running action server with client
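Publishers and subscribers are the central pattern of this lecture. Below is a minimal rclcpp publisher sketch in the spirit of the standard ROS 2 examples; the node name, topic name, and message content are illustrative, not taken from the course labs:

// Minimal rclcpp publisher sketch; node/topic names are illustrative.
#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

using namespace std::chrono_literals;

class MinimalPublisher : public rclcpp::Node
{
public:
  MinimalPublisher()
  : Node("minimal_publisher")
  {
    publisher_ = create_publisher<std_msgs::msg::String>("chatter", 10);
    // Publish a message every 500 ms.
    timer_ = create_wall_timer(500ms, [this]() {
        std_msgs::msg::String msg;
        msg.data = "hello";
        publisher_->publish(msg);
      });
  }

private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MinimalPublisher>());
  rclcpp::shutdown();
  return 0;
}

A matching subscriber is symmetric: create_subscription<std_msgs::msg::String>("chatter", 10, callback) invokes the callback for each received message while the node spins.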




Lecture 3 | ROS 2 Tooling


You can get the slides at the following links:
Slides in pdf format: https://gitlab.com/ApexAI/autowarecla...
Slides in html format: https://gitlab.com/ApexAI/autowarecla...
Led by Katherine Scott
Duration: 60-90min
1. Overview and motivating concepts
· The Command Line
· Environment Variables
2. Setting up our toy environment
3. ROS 2 – help
4. Command line tooling in ROS 2:
· ros2 run: execute a program
· ros2 node: inspect a node
· ros2 node list
· ros2 node info
5. "Sniffing the Bus": Examining topics
· ros2 topic list
· ros2 topic echo
· ros2 topic hz
· ros2 topic info
· ros2 msg show
· ros2 topic pub
6. GUI equivalents
· RQT
· rqt_graph
· ros2 topic pub
7. Parameters (see the node sketch at the end of this outline)
· ros2 param list
· ros2 param get
· ros2 param set
8. Services: Making things happen
· ros2 service list
· ros2 service type
· ros2 srv show
· ros2 service call
9. Actions
· ros2 action list
· ros2 action info
· ros2 action send_goal
· ros2 action show
· More complex calls
10. Logging data: Secure the bag
· What's a bag?
· ros2 bag record
· ros2 bag record -- selecting topics
· ros2 bag info
· ros2 bag play
11. Wrap up and homework
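The tooling above only has something to inspect once a running node exposes topics, parameters, or services. As a hedged sketch of the parameter side (item 7), here is a node declaring a parameter that ros2 param list/get/set can then manipulate at runtime; the node and parameter names are hypothetical:

// Minimal node declaring a parameter visible to the ros2 param tools.
// Node and parameter names are hypothetical examples.
#include <memory>
#include "rclcpp/rclcpp.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("tooling_demo");
  // Declare a parameter with a default value; it now shows up in
  // `ros2 param list /tooling_demo`.
  node->declare_parameter("max_speed_mps", 10.0);
  double max_speed = node->get_parameter("max_speed_mps").as_double();
  RCLCPP_INFO(node->get_logger(), "max_speed_mps: %.1f", max_speed);
  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}

With the node running, ros2 param set /tooling_demo max_speed_mps 5.0 changes the value live.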




Lecture 4 | Platform (HW, RTOS, DDS)


You can get the slides at the following links:
First part slides (ECU & RTOS) in pdf format: https://gitlab.com/ApexAI/autowarecla...
Second part slides (DDS Explained) in pdf format: https://gitlab.com/ApexAI/autowarecla...
Led by Angelo Corsaro and Stephane Strahm
Duration: 60-90min
1. ECU
· Which automotive ECUs are available today
· Terminology: BSP, ECU, SoC, interfaces
· Importance of ECUs for safety-critical applications
2. RTOS
· What is an RTOS, compared to e.g. vanilla Linux
· Which RTOSes are available and suitable for AD
· Micro vs. monolithic kernel
· Scheduling policies
· Safe memory management
· Spatial and temporal separation
· Support for HW compute accelerators
3. DDS Explained (see the QoS sketch after this outline)
· DDS foundations
· DDS: selected advanced concepts
· DDS features for robotic applications
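ROS 2 exposes the DDS quality-of-service policies discussed in the third part directly in its API. A minimal rclcpp sketch showing how history, reliability, and durability settings, which map to the corresponding DDS QoS policies of the underlying middleware, are configured on a publisher; node and topic names are illustrative:

// Sketch: configuring DDS-backed QoS in rclcpp; names are illustrative.
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("qos_demo");
  // Keep the last 5 samples, deliver reliably, and retain them for
  // late-joining subscribers (DDS HISTORY, RELIABILITY, DURABILITY).
  rclcpp::QoS qos(rclcpp::KeepLast(5));
  qos.reliable().transient_local();
  auto pub = node->create_publisher<std_msgs::msg::String>("status", qos);
  std_msgs::msg::String msg;
  msg.data = "late joiners still receive this (transient local)";
  pub->publish(msg);
  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}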




Lecture 5 | Autonomous Driving Stacks


You can get the slides at the following links:
First part slides: https://gitlab.com/ApexAI/autowarecla...
Second part slides: https://gitlab.com/ApexAI/autowarecla...
Third part slides: https://gitlab.com/ApexAI/autowarecla...
Led by Daniel Watzenig and Markus Schratter
Duration: 60-90min
1. Motivation to use AD stacks
· Complexity
· State-of-the-art reference implementation
2. Architecture of an AD stack
· Sense - Plan - Act
· Overview of the building blocks for AD (sensors, actuators, perception, localization, map, planning …)
· How the blocks relate to each other
3. Other AD stacks
· NVIDIA DriveWorks
· Apollo
4. Integration of Autoware into a research vehicle
· What is needed?
· Hardware and integration overview
· Providing AD functionality for research projects
5. Use case: Roborace
· Using parts of Autoware components




Lecture 6 | Autoware 101


You can get the slides at the following link: https://gitlab.com/ApexAI/autowarecla...
Led by Josh Whitley
Duration: 25min
Students will learn about The Autoware Foundation and its two primary projects: Autoware.ai and Autoware.Auto. Subtopics covered:
· The Autoware Foundation structure
· The history and capabilities of Autoware.ai
· Current status and future goals of Autoware.Auto
· Architectural overview of Autoware.Auto
· The Autoware.Auto development process/how to contribute




Lecture 7 | Object Perception: LIDAR


Led by Christopher Ho
Duration: 60-90min
Students will learn the purpose and role of object detection in the autonomous driving stack and understand the design of an object detection stack and the space of algorithms within. Students will also develop a detailed knowledge of the Autoware.Auto object detection stack, including how it works, how to use it, and how to tune it.
1. Object detection and the autonomous driving stack
2. A classical lidar-based object detection stack
3. Preprocessing LiDAR data
4. Ground filtering (see the sketch after this outline)
5. Clustering/object detection
6. Shape extraction
7. Using detected objects
8. Lab: the Autoware.Auto object detection stack
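Ground filtering (item 4) separates road-surface returns from obstacle candidates before clustering. Below is a deliberately naive height-threshold sketch for intuition only; it is not the Autoware.Auto algorithm, which classifies ground far more robustly:

// Naive ground-filter sketch: split a cloud by a height threshold.
// A teaching simplification, not the Autoware.Auto implementation.
#include <vector>

struct PointXYZ { float x, y, z; };

// Partition points into ground and non-ground by z threshold (meters,
// in a frame whose origin sits on the road surface).
void filter_ground(
  const std::vector<PointXYZ> & cloud, float ground_z_max,
  std::vector<PointXYZ> & ground, std::vector<PointXYZ> & nonground)
{
  for (const auto & p : cloud) {
    (p.z <= ground_z_max ? ground : nonground).push_back(p);
  }
}

Real implementations must handle sloped roads, sensor mounting angles, and sparse returns, which is why the production classifiers reason per lidar ray rather than with a single global threshold.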




Lecture 8 | Object Perception: Camera


Led by Michael Reke, Stefan Schiffer, Alexander Ferrein
Duration: 60-90min
Cameras are one of the key sensor systems for autonomous driving. In this lecture you will learn how to use camera images to detect real-world objects. After a brief introduction to camera technology in general, you will learn which steps are necessary to calibrate your camera system to compensate for lens distortion. You will see how neural networks can be used to detect objects such as lanes, vehicles, and pedestrians, and which toolboxes you might use for that. Finally, you will create your own lane detection node in ROS 2.
1. Camera basics
· Basic KPIs: resolution, ...
· Calculating real-world points
· Monovision systems
· Stereovision systems
· Epipolar coordinates
2. Camera calibration (see the undistortion sketch after this outline)
· Installing the camera system
· Calibration procedure with a chessboard pattern
· Calculation of intrinsic/extrinsic parameters
3. Object detection
· Basics of neural networks
· Examples of available DNNs: YOLO, etc.
· Available data sets for training: KITTI, Ford, etc.
· The computation problem (real-time?)
4. Available toolboxes
· Basic algorithm toolbox: OpenCV
· GPU deployment toolbox: CUDA
· Higher-level integrated toolboxes: e.g. NVIDIA AD toolbox
5. Example: Lane detection
· Basics of lane detection
· Polynomial lane fitting for data reduction
· Step-by-step hands-on
6. Hands-on course
· Example lane detection based on real data
· Read data from data stream
· Calculate lanes' real-world coordinates
· Polynomial fitting of detected lanes
· Generating ROS 2 messages
· Visualization, e.g. rviz2
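The calibration step (item 2) produces an intrinsic camera matrix and distortion coefficients, which are then used to undistort every incoming frame before geometric calculations such as lane fitting. A minimal OpenCV sketch; the file names are illustrative and assume calibration results were previously saved with cv::FileStorage:

// Sketch: undistorting a camera image with OpenCV using previously
// computed calibration results; file names are illustrative.
#include <opencv2/opencv.hpp>

int main()
{
  // Intrinsics and distortion coefficients as produced by
  // cv::calibrateCamera() with a chessboard pattern.
  cv::Mat camera_matrix, dist_coeffs;
  cv::FileStorage fs("calibration.yaml", cv::FileStorage::READ);
  fs["camera_matrix"] >> camera_matrix;
  fs["distortion_coefficients"] >> dist_coeffs;

  cv::Mat raw = cv::imread("frame.png");
  cv::Mat rectified;
  cv::undistort(raw, rectified, camera_matrix, dist_coeffs);
  cv::imwrite("frame_rectified.png", rectified);
  return 0;
}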




Lecture 9 | Object Perception: Radar


Led by Michael Reke, Stefan Schiffer, Alexander Ferrein
Duration: 60-90min
Radar is the most important sensor system for collision avoidance. In this lecture students will learn how radar sensors work and how they can be used for object detection. Today's automotive-grade radar sensors provide a lot of internal signal processing and integrated object detection. You will learn how to parametrize such sensors, and you will finally create your own radar ROS 2 node.
1. Radar basics
· What is radar...?
· Basic sensor setup with multiple internal receivers
· Measurement of the radar cross section (RCS)
· Difficult sensing constellations
2. Object detection
· Sensor-internal object detection
· Need for ego car velocity
· Measurement of object parameters: distance, dimension, velocity
· Object filtering
3. Available sensors
· Different frequency bands (SRR and LRR)
· Internal signal processing and object detection
· Sensor parametrization
· CAN as data interface
4. Example: Integration of a radar sensor as a ROS 2 node (see the SocketCAN sketch after this outline)
· Continental ARS-408 as example
· Step-by-step hands-on
5. Hands-on course: example radar sensor integration
· Read data from data stream
· Sorting and filtering of object list
· Generating ROS 2 messages
· Visualization, e.g. rviz2
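Radar object lists typically arrive over CAN (item 3). The sketch below shows how a driver node might read raw frames via Linux SocketCAN; note that the message ID and byte layout are purely hypothetical stand-ins, not the real Continental ARS-408 protocol, which is documented by the vendor:

// Sketch: reading raw CAN frames on Linux (SocketCAN) as a radar driver
// would. The message ID and byte layout are HYPOTHETICAL examples.
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
  int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
  ifreq ifr {};
  std::strcpy(ifr.ifr_name, "can0");
  ioctl(s, SIOCGIFINDEX, &ifr);
  sockaddr_can addr {};
  addr.can_family = AF_CAN;
  addr.can_ifindex = ifr.ifr_ifindex;
  bind(s, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));

  can_frame frame {};
  while (read(s, &frame, sizeof(frame)) > 0) {
    if (frame.can_id == 0x123) {  // hypothetical object-list message ID
      // Hypothetical layout: bytes 0-1 hold distance in 0.1 m units.
      const double distance_m = ((frame.data[0] << 8) | frame.data[1]) * 0.1;
      std::printf("object distance: %.1f m\n", distance_m);
    }
  }
  close(s);
  return 0;
}

A real driver would decode the full object list, filter it, and republish it as ROS 2 messages for visualization in rviz2.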




Lecture 10 | State Estimation for Localization


Led by Josh Whitley, Steve Macenski, Yunus Emre Çalışkan
Duration: 60-90min
Students will gain an understanding of the transform architecture and localization methods implemented in Autoware.Auto and how they relate to ROS standards.
1. Introduction
· Localization for self-driving cars
· The Autoware.Auto Transform Tree
· Localization in Autoware.Auto: An Example of Flexible Design
2. Odometry state estimator (see the Kalman filter sketch after this outline)
· Kalman filters
· What if your system is nonlinear?
· EKF and UKF
· Robot localization
· Setup and use in Autoware.Auto
3. Environmental sensor localizer
· NDT algorithm in 2D
· NDT algorithm in 3D
· NDT class implementation
· NDT node implementation
· Setup and use in Autoware.Auto
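The odometry state estimator builds on the Kalman filter (item 2). Here is a minimal scalar predict/update sketch to fix the idea; real estimators such as the EKF and UKF generalize this to nonlinear, multidimensional vehicle models, and all noise values below are illustrative:

// Minimal scalar Kalman filter sketch: predict/update for a 1-D state.
// All noise parameters are illustrative, not tuned values.
#include <cstdio>

struct Kalman1D
{
  double x = 0.0;   // state estimate (e.g., position along a line)
  double p = 1.0;   // estimate variance
  double q = 0.01;  // process noise variance
  double r = 0.25;  // measurement noise variance

  void predict(double u) {x += u; p += q;}  // u: commanded displacement
  void update(double z)                     // z: position measurement
  {
    const double k = p / (p + r);  // Kalman gain
    x += k * (z - x);
    p *= (1.0 - k);
  }
};

int main()
{
  Kalman1D kf;
  const double measurements[] = {1.1, 2.0, 2.9, 4.2};
  for (double z : measurements) {
    kf.predict(1.0);  // assume one unit of commanded motion per step
    kf.update(z);
    std::printf("x=%.3f p=%.3f\n", kf.x, kf.p);
  }
  return 0;
}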




Lecture 11 | LGSVL Simulation


Led by Dmitry Zelenkovsky
Duration: 60-90min
1. Installation of the simulator
· System requirements
· GPU drivers and libraries
2. Getting started
· Basic simulator concepts
· Maps, vehicles, clusters, simulations
· How to start a simulation
· Simulation parameters
3. Running a simulation with Autoware.Auto
· Different sensor configurations
· Setting up the ROS 2 bridge
· Visualizing sensor information
4. Automation and Python API
· Controlling the environment using the Python API
· Controlling non-ego vehicle actors
· Controlling custom objects (controllables)
· Callbacks and custom sensor plugins
· EXAMPLE: Collecting data for model training
· EXAMPLE: Judging ride comfort
5. Advanced topics
· New environment creation
· New vehicle creation
· New sensor creation




Lecture 12 | Motion Planning & Control


Led by Stefano Longo, Sandro Merkli, Takamasa Horibe
Duration: 60-90min
1. Hierarchical architecture in autonomous driving
· General architecture for autonomous driving
· Hierarchy of decision-making modules
· Alternative architectures
2. Decision making in autonomous driving
· Route planner
· Path planner
· Behavior selector
· Motion planner
· Obstacle avoider
· Controller
3. Motion planning in autonomous driving - introduction
· Scope of motion planning
· Hierarchy of requirements
· Classification of algorithms based on outputs, space-time properties and mathematical domain
4. Motion planning in autonomous driving - algorithms
· Space configuration
· Pathfinding algorithms
· Attractive and repulsive forces
· Parametric and semi-parametric curves
· Artificial intelligence
· Numerical optimization
5. Model Predictive Control – FAQs (see the sketch after this outline)
· Feedback control
· Optimal control
· Relationship between LQR, LQG and MPC
· Relationship with DP and RL
6. Motion planning advanced methods
· Path planning and tracking as a single problem
· A nonlinear MPC formulation
7. Testing MPC with Autoware.Auto
8. Autoware parking planner
· What the parking planner does
· Where to find it in the repository
· How to call it
· How to inspect the results
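Model predictive control (items 5-7) repeatedly optimizes over a short horizon and applies only the first input before replanning. A toy receding-horizon sketch for a 1-D double integrator; the candidate-sampling "optimizer", dynamics, and cost weights are illustrative simplifications, not a real MPC solver:

// Toy receding-horizon (MPC-style) controller for a 1-D double
// integrator: evaluate a few candidate accelerations over a short
// horizon and apply the first input of the cheapest rollout.
#include <array>
#include <cstdio>
#include <limits>

struct State { double pos, vel; };

static State step(State s, double acc, double dt)
{
  s.pos += s.vel * dt;
  s.vel += acc * dt;
  return s;
}

// Quadratic cost over an N-step horizon, holding the input constant.
static double rollout_cost(State s, double acc, double ref, int n, double dt)
{
  double cost = 0.0;
  for (int i = 0; i < n; ++i) {
    s = step(s, acc, dt);
    const double e = s.pos - ref;
    cost += e * e + 0.1 * s.vel * s.vel + 0.01 * acc * acc;
  }
  return cost;
}

int main()
{
  const std::array<double, 5> candidates = {-2.0, -1.0, 0.0, 1.0, 2.0};
  State s {0.0, 0.0};
  const double ref = 10.0, dt = 0.1;
  for (int t = 0; t < 50; ++t) {
    double best_acc = 0.0, best_cost = std::numeric_limits<double>::max();
    for (double a : candidates) {
      const double c = rollout_cost(s, a, ref, 10, dt);
      if (c < best_cost) {best_cost = c; best_acc = a;}
    }
    s = step(s, best_acc, dt);  // apply only the first input, then replan
  }
  std::printf("final pos=%.2f vel=%.2f\n", s.pos, s.vel);
  return 0;
}

Production MPC replaces the candidate sampling with a numerical optimizer over the full input sequence and a vehicle model with steering dynamics and constraints.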




Lecture 13 | Data Storage and Analytics


Led by Florian Friesdorf
Duration: 60-90min
1. Set up MARV; find and process ROS 1 and ROS 2 bags
2. Add metadata files with a custom scanner
3. Use metadata for filtering and listing
4. Write custom nodes to process data streams
5. Use TensorFlow to detect objects in video




Lecture 14 | HD Maps


Led by Simon Thompson
Duration: 60-90min
1. What is a map?
· Layers
· Localization
· Navigation
· Traffic rules
2. How maps are created
· Aerial imagery, LiDAR, cameras and GNSS
3. Industry standards for maps
· Physical storage formats
· Navigation Data Standard (NDS)
· OpenDRIVE
· OSM XML
· Logical model
· OpenLaneModel
· Lanelets
Using Maps in Autoware (Tier IV) (30-45min)
1. Map architecture: Lanelet2 integration into Autoware.Auto - broad architecture of map usage in Autoware, map provision (see the loading sketch after this outline)
2. Perception - map as a virtual sensor - use in traffic light detection
3. Localization - PCD map for localization
4. Navigation - map use in global and local planning
5. Traffic regulations - stop lines/crosswalks and traffic lights in Autoware
6. Demo - short demo of loading a PCD/Lanelet2 map in Autoware + video example of use in Autoware (T4B)
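Lanelet2 is the semantic map format integrated into Autoware.Auto (item 1 of the Tier IV part). A short sketch of loading and iterating a Lanelet2 map following the library's documented usage; the file name and projection origin are illustrative:

// Sketch: loading a Lanelet2 map and iterating its lanelets. The file
// name and origin coordinates are illustrative examples.
#include <lanelet2_io/Io.h>
#include <lanelet2_projection/UTM.h>
#include <cstdio>

int main()
{
  // Project WGS84 (lat/lon) map data into a local metric frame.
  lanelet::projection::UtmProjector projector(
    lanelet::Origin({49.0, 8.4}));  // illustrative map origin
  lanelet::LaneletMapPtr map = lanelet::load("map.osm", projector);

  for (const auto & ll : map->laneletLayer) {
    std::printf("lanelet %lld with %zu centerline points\n",
      static_cast<long long>(ll.id()), ll.centerline().size());
  }
  return 0;
}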





CONTACT

Join the Autoware community discussion forum on Slack:

https://autoware.herokuapp.com

 

Discourse - Autoware project-specific discussions:

https://discourse.ros.org/c/autoware/ 

 

To join the Autoware Foundation:

auto@autoware.org

© The Autoware Foundation 2020. All rights reserved. “Autoware” is a trademark of the Autoware Foundation.

DISCLAIMER and Autoware Trademark Policy