Updating your distributed Autoware.Auto
Following the deployment of Autoware.Auto across multiple boards in a scalable manner, a natural question arises: how do I go about updating a particular software module? Fear not, this is a very straightforward process through the k8s infrastructure.
To illustrate the process we will update the sensing-rear-lidar deployment, whose pod container is named rear-pod-lidar. Since we need an image different from the one currently running on the pod, we have created sensing-1.0.0_v2, which is simply a retag of the sensing-1.0.0 image, and sensing-1.0.0_bad, which is a retag of the udpreplay image and therefore crashes on boot, since the Velodyne drivers are not available in it.
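Retagged images like these can be produced with plain docker tag commands. A minimal sketch follows; the registry prefix is a placeholder, as the source does not state where the images are hosted:

```shell
# Retag the working sensing image under a new tag (hypothetical registry prefix).
docker tag my-registry/sensing:1.0.0 my-registry/sensing:1.0.0_v2
docker push my-registry/sensing:1.0.0_v2

# Retag udpreplay as a deliberately broken "sensing" image for the rollback demo.
docker tag my-registry/udpreplay:latest my-registry/sensing:1.0.0_bad
docker push my-registry/sensing:1.0.0_bad
```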
Now, to perform the update we just need to use the
$ kubectl set image
command, and we can check the progress of the rollout with the
$ kubectl rollout status
command, as shown below. Upon completion we can verify that the newly created pod is running the updated image.
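Concretely, the update for our example deployment can be sketched as follows. The registry path and the pod label selector are assumptions for illustration; the deployment, container, and tag names come from the example above:

```shell
# Point the rear-pod-lidar container at the retagged image (hypothetical registry path).
kubectl set image deployment/sensing-rear-lidar rear-pod-lidar=my-registry/sensing:1.0.0_v2

# Watch the rolling update until the new pod is ready.
kubectl rollout status deployment/sensing-rear-lidar

# Confirm which image the replacement pod is running (assumed label selector).
kubectl get pods -l app=sensing-rear-lidar -o jsonpath='{.items[*].spec.containers[*].image}'
```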
The above process works really well when the new image has been built appropriately, but what happens if the updated image contains a bug, or if for some reason we want to roll the update back? Well, let's try using the sensing-1.0.0_bad image, which leads to an Error state when k8s tries to start it.
As we see below, k8s keeps the previous image running while the new one is stuck. Once we identify the issue we can easily roll back the update using the
$ kubectl rollout undo
command, which restores the previous pod that was running fine.
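The failed update and its rollback can be sketched like this, again assuming a hypothetical registry path for the image:

```shell
# Attempt the update with the broken image (hypothetical registry path).
kubectl set image deployment/sensing-rear-lidar rear-pod-lidar=my-registry/sensing:1.0.0_bad

# The new pod never becomes ready; the old one keeps running.
kubectl get pods

# Optionally inspect the deployment's revision history first.
kubectl rollout history deployment/sensing-rear-lidar

# Roll back to the previous, working revision.
kubectl rollout undo deployment/sensing-rear-lidar
```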
Turning now to Rviz to visualize the lidar data: after a successful update of the front lidar pod's container image, using the same process described above, we get a clean feed of the point cloud data within Rviz.
With this we are now ready to fully manage a distributed Autoware.Auto deployment using k8s, as we can roll updates out or back as needed for the individual software modules. The full step-by-step walkthrough is available in the blog post here.