Building blocks of an autonomous mobile platform may sound futuristic, but ever since I was a child I have dreamed of building robots, so I feel fortunate to be alive and involved in science and engineering in an age when this is all possible on a budget. Here is the start of my journey.
I started looking into Computer Vision because I want to build rovers and drones that are not only remotely operated but also aware of their surroundings. The initial goal is automated BIM capture as a commercially viable platform, with a view to expanding into emergency services once the core platform is established.
The mechanical aspects are an obvious hurdle and require knowing what role the platform will take (i.e. land, water or aerial based), what work is to be done, what accessories are needed, and so on. It all starts to take physical shape working back from the design brief / CTQs / goals or scope of the design vision.
Once you have the basic mechanical concept, the electrical components start to take shape: they provide the mechanical structure with the motion it requires for the length of time needed between charges, and they define the charging / automated battery-swap criteria. I use "battery swap" loosely, as when designing electric-powered vehicles I do not rule out liquid (flow) batteries, which drain and refill electrically charged fluids rather than swapping a solid lump of a battery.
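The time-between-charges criterion above comes down to simple arithmetic: battery energy divided by average power draw. Here is a minimal sketch; the figures (a 4S LiPo pack and a 60 W average draw) are illustrative assumptions, not numbers from any specific build.

```python
def runtime_hours(capacity_wh: float, avg_draw_w: float,
                  usable_fraction: float = 0.8) -> float:
    """Estimate run time from battery capacity and average power draw.

    usable_fraction accounts for not draining the pack fully, which
    protects the cells and leaves a safety reserve.
    """
    return capacity_wh * usable_fraction / avg_draw_w

# Example: a 4S LiPo, 5.0 Ah at a nominal 14.8 V, is roughly 74 Wh.
capacity_wh = 14.8 * 5.0  # volts * amp-hours
print(f"{runtime_hours(capacity_wh, 60):.2f} h")  # about 0.99 h
```

Crude as it is, a calculation like this is enough to sanity-check whether a chosen pack can actually cover the mission length in the design brief before any metal gets cut.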
Once the bulk of the electrics are designed, you can start placing the control electronics and sensing devices (camera, LiDAR, ultrasonic, bump switches, etc.), modifying the electrics to suit.
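To give a flavour of how raw sensor signals become usable readings, here is the maths behind an ultrasonic ranger (the HC-SR04 is a common example, though any time-of-flight sensor works the same way): the controller measures the echo's round-trip time and converts it to distance. This is a minimal sketch of the conversion only, not driver code for any particular board.

```python
SPEED_OF_SOUND_M_S = 343.0  # in dry air at roughly 20 degrees C

def echo_to_distance_m(echo_time_s: float) -> float:
    # The pulse travels to the obstacle and back again,
    # so halve the round-trip time before converting.
    return echo_time_s * SPEED_OF_SOUND_M_S / 2.0

print(echo_to_distance_m(0.002))  # a 2 ms round trip -> 0.343 m
```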
That just about sums up the overview of robotics hardware, which for an engineer is not easy, but not an impossible challenge either.
Now for control software. We could start from scratch using Java (which will not be freeware for much longer) or Python (which would be great), but for most standard platforms there is already open-source robotic control software (flight software) ready to be tweaked. For rovers (land based) see ArduPilot.org, and in fact it will do every vehicle type; for aerial platforms, also look at px4.io or dronecode.org, as these are industry supported and in active development. I will also mention that you need compatible autopilot hardware, which for me, with a Raspberry Pi 3, will be the Navio2. These kits tend to come with a GNSS antenna for high-precision positioning.
There is also plenty of open-source ground control and mission software, such as QGroundControl (qgroundcontrol.com) or Mission Planner, which communicate with the vehicle over MAVLink. If using a PC, you will need a telemetry transmitter and receiver kit (433 or 868 MHz for the UK, 900 MHz for the USA and Canada). There are numerous free offerings for tablets too, in Android or iOS flavours.
OK, so we now have the blocks to create a fully functioning, remotely controlled, semi-autonomous vehicle, but how do we make it autonomous? That, ideally, would take LiDAR and Computer Vision with OpenCV.
LiDAR is an option at this point, but with limited open-source options we will leave it for a more advanced robot, "the Mark 2".
So let's talk about Computer Vision. This is the route car manufacturers are going down, with the support of LiDAR, and it is all about detecting hazards, picking data out of the camera feed and turning that image data into usable sensory information that can be processed by the controller. To do this we can use an open-source library called OpenCV. I will mention that OpenCV can also process LiDAR data, so we can expand the capabilities later.
This turns the image data into arrays we can interact with using Python code.
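The core idea can be shown without even installing OpenCV, because OpenCV represents images in Python as NumPy arrays: a grayscale frame is just a 2-D grid of pixel values. This sketch builds a synthetic frame, thresholds it (the same job `cv2.threshold` does on a real camera image), and extracts a piece of sensory information, namely where the bright object is.

```python
import numpy as np

# A synthetic 100x100 grayscale "camera frame": all dark, with one
# bright rectangular object in it.
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:60, 70:90] = 255

# Threshold: keep only the bright pixels.
mask = frame > 127

# Turn pixels into usable sensory information: the object's centre.
ys, xs = np.nonzero(mask)
cx, cy = xs.mean(), ys.mean()
print(f"object centre at ({cx:.1f}, {cy:.1f})")  # (79.5, 49.5)
```

With a real camera, `frame` would come from `cv2.VideoCapture(...).read()` instead of being built by hand, but everything downstream of that line works exactly the same way.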
At this point I will mention that I am not going to create videos on how to use OpenCV myself, because I have found an abundance of YouTube courses that are perfect, so why reinvent the wheel? Instead I am going to compile pages with other people's videos and supplementary information to help cover all bases. This means that I can write faster, and the kind people who have taken the time to create content get the boost from more views on their videos.
I will, however, embed the videos and also provide the code from each video, which I have tested and commented: yes, we are all doing this together.
So without further ado, let's get on with learning OpenCV and really bring our robots to life. Once we understand how this all works, we will come back to the design brief capabilities and then, hopefully, on with the design and build. This should be fun, as is most hands-on engineering, but please share the posts with colleagues / friends and comment back with suggestions. Don't forget you can always email me privately at email@example.com.
If you are looking for all the current lessons, please look under this page's dropdown in the navigation menu to the left.