
Hardware overview

Software overview

This year we again used a camera for line tracking; more detailed information can be found in our TDP from last year. In addition, we added a second USB camera that looks much further ahead, so that we could recognize the victims in the rescue area even from a large distance (about 60 cm). For this purpose we used a self-trained neural network that returned the position of each victim and whether it was alive or dead. After round shadows caused us problems last year, we finally achieved acceptable results. We also mounted two distance sensors on the side of our robot to calculate the angle to the wall using basic trigonometry, allowing us to align easily with the walls. Additional details regarding the hardware and software of this year's robot can also be found in our GitHub repository.
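The wall-alignment idea can be sketched with basic trigonometry: two side-mounted distance sensors read different distances when the robot is tilted relative to the wall, and the angle follows from the difference of the readings and the spacing between the sensors. The sensor spacing and function names below are assumptions for illustration, not our actual code.

```python
import math

# Assumed spacing between the two side-mounted distance sensors (mm).
SENSOR_SPACING_MM = 80.0

def wall_angle_deg(front_mm: float, rear_mm: float) -> float:
    """Angle between the robot's side and the wall, in degrees.

    Positive when the front of the robot points away from the wall,
    zero when both sensors read the same distance (robot is parallel).
    """
    return math.degrees(math.atan2(front_mm - rear_mm, SENSOR_SPACING_MM))

# If the front sensor reads 100 mm and the rear sensor 60 mm, the robot
# is tilted away from the wall by roughly 27 degrees.
print(wall_angle_deg(100.0, 60.0))
print(wall_angle_deg(50.0, 50.0))  # parallel to the wall -> 0.0
```

With this angle the robot can turn until the reading is close to zero, at which point it is parallel to the wall.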

Image of our robot



Video of our robot