Autonomous Platform – The Intro

Building the blocks of an autonomous mobile platform may sound futuristic, but ever since I was a child I have dreamed of building robots, so I feel very fortunate to be alive and working in science and engineering in an age when this is all possible on a budget. Here is the start of my journey.


I started looking into Computer Vision because I want to build rovers and drones that are not only remotely operated but also aware of their surroundings. Automated BIM capture is the starting point as a commercially viable platform, with a view to expanding these platforms into emergency services once the core platform is established.

The mechanical aspects are an obvious hurdle and require knowing what role the platform will take, i.e. land, water or aerial based, what work is to be done, what accessories are needed and so on. It all starts to take physical shape working back from the design brief / CTQs / goals or scope of the design vision.

Once you have the basic mechanical concept, the electrical components start to take shape: providing the mechanical structure with the motion it requires for the length of time required between charges, along with the charging or automated battery-swap criteria. I use "battery swap" loosely because, when designing electric-powered vehicles, I do not rule out flow batteries that drain and refill electrically charged fluids rather than swapping a solid lump of a battery.
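
As a back-of-the-envelope illustration of the "length of time between charges" question, here is a rough sizing sketch. Every number in it is a made-up example, so swap in your own platform's figures.

```python
# Rough battery sizing sketch with made-up example numbers - swap in your own
# platform's figures. Energy (Wh) = average power draw (W) x runtime (h),
# plus headroom because you never want to run lithium cells flat.

motor_draw_w = 120.0      # assumed average draw of drive motors under load (W)
electronics_w = 15.0      # assumed draw of controller, sensors and radios (W)
runtime_h = 1.5           # target runtime between charges/swaps (hours)
usable_fraction = 0.8     # only discharge to ~80% to protect the pack

required_wh = (motor_draw_w + electronics_w) * runtime_h / usable_fraction
pack_voltage = 14.8       # e.g. a 4S lithium pack (V)
required_ah = required_wh / pack_voltage

print(f"Required capacity: {required_wh:.0f} Wh ({required_ah:.1f} Ah at {pack_voltage} V)")
```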

Once the bulk of the electrics are designed, you can start placing the control electronics and sensing devices (camera, LiDAR, ultrasonic, bump switches etc.), modifying the electrics to suit.

That just about sums up the overview of robotics hardware, which for an engineer is not easy, but not an impossible challenge either.

Now for control software. We could start from scratch using Java (which is not free for much longer) or Python (which would be great), but for most standard platforms there is already open-source robotic control software (flight software) ready to be tweaked. For rovers (land based) see Ardupilot.org, which in fact will do every vehicle type, but for aerial platforms also look at px4.io or dronecode.org, as these are industry supported and in active development. I will also mention that you need compatible autopilot hardware, which for me, with a Raspberry Pi 3, will be the Navio2. These tend to come with a GNSS antenna for high location precision.

There is also numerous open-source ground control and mission software, such as QGroundControl.com (using MAVLink) or Mission Planner. If using a PC, you will need a telemetry transmitter and receiver kit (433 or 868 MHz for the UK, 900 MHz for the USA and Canada). There are numerous free offerings for tablets too, in Android or iOS flavours.
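
For a feel of what talking MAVLink from a PC looks like, here is a minimal sketch using the open-source pymavlink library. The connection string and port are assumptions, so substitute your own telemetry radio's serial port (e.g. a COM port at 57600 baud) or UDP endpoint.

```python
# Minimal MAVLink telemetry listener using pymavlink (pip install pymavlink).
# The UDP connection string below is an assumption - replace it with your
# telemetry radio's serial port or whatever endpoint your setup exposes.
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()   # blocks until the autopilot is heard
print(f"Heartbeat from system {master.target_system}, component {master.target_component}")

while True:
    msg = master.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
    # lat/lon arrive in degrees * 1e7, altitude in millimetres
    print(f"lat={msg.lat / 1e7:.6f} lon={msg.lon / 1e7:.6f} "
          f"alt={msg.relative_alt / 1000:.1f} m")
```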

OK, so we now have the blocks to create a fully functioning, remotely controlled, semi-autonomous vehicle, but how do we make it autonomous? Ideally, that would take LiDAR and Computer Vision with OpenCV.

LiDAR is an option at this point, but with limited open-source options we will leave it for a more advanced robot, 'The Mark 2'.

So let's talk about Computer Vision. This is the route that car manufacturers are going down, with the support of LiDAR, and it is all about detecting dangers, picking out data from the camera and turning that image data into usable sensory information that can be processed by the controller. To do this we can use a piece of open-source software called OpenCV. I will mention that OpenCV will also process LiDAR data, so we can expand the capabilities later.

This turns the image data into structured data we can interact with using Python code.
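
As a hedged taster of what that looks like, here is a minimal OpenCV sketch that grabs frames from a webcam and runs simple edge detection – the sort of low-level output a controller could act on. It assumes OpenCV is installed and a camera is available at index 0.

```python
# Minimal OpenCV sketch: grab frames from a camera and turn them into data
# the controller can act on (here, simple Canny edge detection).
# Assumes OpenCV is installed (pip install opencv-python) and camera index 0.
import cv2

cap = cv2.VideoCapture(0)                 # open the default camera
while cap.isOpened():
    ok, frame = cap.read()                # frame is a NumPy array of pixels
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)     # edges we could feed to navigation logic
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord('q'): # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```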

At this point I will mention that I am not going to create videos on how to use OpenCV myself, because I have found an abundance of YouTube courses which are perfect, so why reinvent the wheel? Instead I am going to compile pages with other people's videos and supplementary information to help cover all the bases. This means that I can write faster, and the kind people who have taken the time to create content get a boost from more views on their videos.

I will, however, embed the videos and also give the code from each video, which I have tested and added comments to – yes, we are all doing this together.

So without further ado, let's get on with learning OpenCV and really bring our robotics to life. Once we understand how this all works, we will come back to the design brief capabilities and then, hopefully, on with the design and build. This should be fun, as is most blue-collar engineering, but please share the posts with colleagues and friends and comment back with suggestions. Don't forget you can always email privately at contact@vulcanrealm.com.

If you are looking for all the current lessons then please look under this page’s dropdown in the navigation menu to the left.


POLICING AND BIM – A WIN WIN PROPOSAL

With ever-increasing pressures on policing budgets for more officers on the beat, and such an overwhelming task for utilities and businesses to become BIM Level 2 and PAS 256 compliant, there is the potential for a partnership that benefits private industry while providing an expandable framework for the police and public services.


I will start by asking you to be open minded, dismiss any Big Brother concepts or conspiracies and look at the potential for good. There is nothing wrong with technology, only with those that use it for bad.

I recently posted a short piece on 360 video for photogrammetry, in which utilities could model the travelled world by giving a targeted group of selfie customers some 360 cameras and free hosting, with the proviso that the utilities could extract the built environment data. It can be found here – upgrade to 360 video cameras. This post expands on that, but changes the operating user for better utilisation of funds.

As discussed in the previous article, point clouds and models can be created from the 360 videos, uploaded to a central server and post-processed to identify things like telegraph poles and manhole covers, giving point-to-point locations of sewers, buried telephone and power lines and other services. This is nothing new, but the software and algorithms get better every day.
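
As a simplified starting point for that pipeline, here is a hedged sketch of the very first step: pulling still frames out of a 360 or action-cam video so a photogrammetry package can work with them. The filename and sampling interval are assumptions – adjust them to your own footage.

```python
# Sketch: extract still frames from a 360/action-cam video at a fixed interval
# so they can be fed into photogrammetry software. The filename and interval
# are placeholders for illustration only.
import cv2

video_path = "survey_360.mp4"     # hypothetical input video
every_n_frames = 30               # roughly one frame per second at 30 fps

cap = cv2.VideoCapture(video_path)
count, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if count % every_n_frames == 0:
        cv2.imwrite(f"frame_{saved:05d}.jpg", frame)  # save the frame to disk
        saved += 1
    count += 1
cap.release()
print(f"Saved {saved} frames for photogrammetry")
```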

So why are we not doing this already? We are, but it costs a fortune in labour and meetings to target the priorities of that small survey labour force. This is crazy, as we have to get it done anyway, and it is always the end customer that pays for it in the long run, whether through taxes or recovered through bills for the end product, like SMART meters – or did bills just go up at the same time by coincidence? I am not on the inside on that one. There was a report by BEIG, but I refer you to the UK parliament paper to research yourself, on how they are not as good as they are made out to be. There are also the issues of electro smog (see the difference in radio emission limits between the west and the east plus Switzerland), and once you have one, they won't remove it – my personal experience with my energy provider is the same.

OK, back to topic. So we have two issues: companies not currently investing enough in labour or infrastructure to do this, and multiple organisations duplicating labour. What other issues do we have? One large one is an underfunded police force. So let's partner with the police to capture the imagery – they go everywhere, and repeatedly, so it is always up to date.

I know what you are saying – Big Brother State. I hate to burst the bubble, but we are already there technologically, and it is only the control measures and procedures that stop a 1984 dystopia – and there is no reason for that to change, especially in the UK.


The technicalities of the police capturing 360 footage can be as basic as a helmet- or selfie-pole-mounted 360 camera (attached to the uniform), or you could go the whole hog and mount an infrared camera directly above the 360 unit, maximising the opportunity.

This gives the benefit of capturing the built environment for BIM and PAS 256, but the police forces also benefit from 360 recording and reporting of police activities.

So why would this benefit the police in the long run? Before we get to the technical advantages, there is the fact that money would be diverted to the police forces for a surveying service / data source. This would be an easy sell to private industry, as the police are omnipresent and thus have the resources ready to go now without immediate investment in staff. This increase in finance would help the police to increase staffing levels, providing a better policing service and a more up-to-date data source. That benefits the whole of society and the public purse, because you move the policing sector from being an economic drain on the public purse to a revenue-generating, self-supporting sector with a positive business investment plan.

This product can then be used for the police service's own gain by creating mixed-reality software and hardware that gives an officer an automated threat detection system, much like technology applied to military aircraft. Something basic to start with, like detecting someone about to attack from behind, should not be too hard to incorporate, giving health and safety benefits to police officers on the beat. Having just mentioned the military – just think of the benefits of this same technology to peacekeepers; maybe they already have something they could share to aid this development.


Expanding on this as an Agile development, you could develop the ability to combine a locally processed, geolocated video stream with comparative video overlap to locate geographic features and detect what has changed over time. This has obvious law enforcement and military applications, but if we extend its application to search and rescue, the Fire Service and environment agencies, we have a full cross-sector data reference system for a virtual world construct that can automatically detect areas of risk or interest depending on the criteria. Imagine being able to direct flood rescue personnel to search certain areas, with a geographic location and an image on their helmet display of what they should expect to see underfoot, in real time.
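
As a rough illustration of the change-detection idea, the sketch below compares two images of the same location with a simple per-pixel difference. Real footage would first need to be geolocated and registered to a common viewpoint, which is assumed to have already been done here; the filenames are placeholders.

```python
# Rough change-detection sketch: compare a current image against an earlier
# reference of the same location and highlight what has changed. Registration
# and geolocation of the two views is assumed to have happened beforehand.
import cv2

before = cv2.imread("site_before.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
after = cv2.imread("site_after.jpg", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(before, after)                    # per-pixel difference
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
changed_fraction = cv2.countNonZero(mask) / mask.size

print(f"{changed_fraction:.1%} of the scene has changed")
cv2.imwrite("change_mask.png", mask)                 # white pixels mark changes
```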

I know people are saying it's too heavy and will not work, but we can combine a micro computer (think Raspberry Pi) with different operating systems, i.e. Android / Win10 IoT, that support something like OpenCV, and pre-load areas onto it in a mesh format, which is much lighter than a point cloud. So we are not far off technologically, thanks to open-source gurus with ethics.

OK, I have strayed from the original point with good reason. Whenever I talk about a virtual world I come up against "Who will use it?", and if I answer future growth planners, maintenance or construction, I always get the answer that they already do something and it's their budget. That creates a great deal of pain for me, as the public or the customer pays for everything in the end, so I look at the best efficiency for them. Yes, we need to combine this or tailor it to the application, and I do not have all the technical answers – maybe the people behind OpenCV and mapping companies like ESRI do – but this is just a concept to be enhanced. If we all worked together in a cross-sector way for society, which we are all members of, rather than for self-image or local profit and reward, we might get somewhere – he says, knowing he will place ads in this post.

Moving on with the police service benefits, you can incorporate feature recognition and OCR (Optical Character Recognition). The feature recognition aspects may be as low key as identifying a truck or car for traffic enforcement, but could be as advanced as identifying persons of interest from the crime database. Some algorithms can even detect behavioural cues, which could help when dealing with people on a Friday night or, being 360-degree vision, detect someone needing help in the distance behind an officer who is distracted by a lower-priority task.
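
As a hedged illustration of what entry-level feature recognition looks like in code, here is a minimal sketch using the Haar cascade face detector that ships with OpenCV. The input image name is a placeholder, and anything beyond detection (such as matching against a database) is out of scope here.

```python
# Minimal face detection sketch using the Haar cascades bundled with OpenCV.
# This only finds faces in a frame - matching them against any database would
# be a separate (and heavily regulated) step.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("street_scene.jpg")            # hypothetical bodycam still
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw boxes
print(f"Detected {len(faces)} face(s)")
cv2.imwrite("detections.jpg", frame)
```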

The OCR feature could obviously help track vehicles, but it could also report an officer's location to control, which helps in built-up areas where satellite navigation systems have reduced accuracy.
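
For the OCR side, a minimal sketch using the open-source Tesseract engine via pytesseract might look like the following. The pre-cropped plate image is an assumption – in practice you would first have to localise the plate or street sign within the 360 frame.

```python
# OCR sketch using pytesseract (pip install pytesseract, plus the Tesseract
# engine itself). The cropped plate image is a placeholder; localising the
# plate within a full frame is assumed to have been done already.
import cv2
import pytesseract

plate = cv2.imread("cropped_plate.jpg", cv2.IMREAD_GRAYSCALE)
# Otsu thresholding cleans up lighting variation before recognition
_, plate = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
text = pytesseract.image_to_string(plate, config="--psm 7")  # single text line
print("Plate text:", text.strip())
```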

With feature recognition software you also get facial recognition, so, moving away from beat policing: if a crime had been committed and you had officers walking the beat, the software could pick out regular faces who may provide leads on past cases. Not just recent ones either – if you think of the transference of geographic habits, i.e. someone at 18 Dortmund Street always buys a pint of milk at the corner shop of Never Was Ere Lane, then 30 years ago the resident of that address is likely to have done something similar, which may help gain witness leads to close cold cases.

One other point to consider on recent crime locations is that the video would support accident and first-responder investigation – never miss a thing, even if someone removes evidence later. As long as someone was there, you have a full record of the crime scene. This would also aid other services like the AAIB when they fly out to plane crashes in the middle of densely populated forests. The software could even be tailored to search areas instead of relying on the skill and attention of the searcher. Imagine the time it would save identifying the four corners of the aircraft if you could send locals out with a camera-equipped hat before the pillaging starts.

OK, I think I have nailed why the police should be considered the geographic photogrammetric surveyors of the world – if you agree, then please share this post on social media and help get them the funding to police society properly.

I wanted to share this last note with those who have read this and are scared of too much technology in the state's hands. I had a conversation with someone the other day, and as much as we disagree on so many things, we both agreed that the way to avoid an autocratic 'Big Brother State' is to keep the 'Bobby on the beat'. The simplest way to achieve this is for us all to make them an economic benefit to the state, as industry invests in profit channels, and I believe this is one such channel that does not corrupt law enforcement's high standards – it just needed some high-level lateral thinking…

Now imagine if we equipped several police vehicles with high-speed laser scanners above the light bar – well, maybe you should read https://vulcansrealm.com/2019/04/14/driverless-autonomous-vehicles-how/

On the other hand, if I am in jail next week – the system is corrupt, there is no freedom of speech, it's a conspiracy and I am innocent!!!!

Upgrade to 360 Video Cameras

So we are all familiar with action cameras and taking video on our smartphones, but what should we use, and when? Why not just use a smartphone for everything? Actually, some phones do have the ability to clip on a 360 camera and share the video instantly, which is great if you have a modern mobile phone, but I am a dinosaur and love dedicated units.

So let's go back to basics. First we had physical film as a recording medium, which we could later digitise in post-processing. Soon we started recording digitally on the camera, i.e. DVD-R camcorders, which were then followed by non-volatile memory storage like CompactFlash. Until recently it has only been the recording medium that changed, while the capture device has remained fairly unchanged in principle, i.e. film SLR cameras became digital SLRs and so on.

The physical size of 1 GB of storage has been reduced through advancements in storage formats (not the same as recording formats) and in the physical media themselves, which is impressive, but the biggest advancements have been in the actual fixed lens development.

So why is this important?

Today we use photography and videography for so much, from AI, traffic enforcement and policing to 3D modelling of the natural and built environment, and even detecting environmental problems with drones. All of this is only possible because of good photographic capture and recording for processing. It really is true that if you put bad data in you get bad data out, and a digital picture is exactly that – DATA. So the better the lens, the better the data.
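
To make the "a picture is DATA" point concrete, here is a tiny sketch that opens an image with OpenCV and shows it really is just an array of numbers; the filename is a placeholder for any photo you have to hand.

```python
# A digital picture really is just data - a quick look at one with OpenCV.
# "sample_photo.jpg" is a placeholder for any image file you have available.
import cv2

img = cv2.imread("sample_photo.jpg")
print("Shape (rows, cols, channels):", img.shape)
print("Data type:", img.dtype)                 # typically uint8, 0-255 per channel
print("Pixel at (0, 0) as BGR values:", img[0, 0])
```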

So I love my DSLR, but it is getting old now and 24 megapixels is only just cutting it these days, so I am looking into other ways of capturing good-quality images with fewer moving parts. This brings me to action cameras.

I have a GoPro Hero 5 and it is amazing. I use it for my job to capture asset data, but I struggle to get photogrammetry results that rival the DSLR with many software packages, and I have been reluctant to get into spherical (360) photography due to past issues with the inability to measure and scale accurately. These problems have, in the main, been ironed out – and continue to be – with modern image-stitching techniques and equirectangular processing/conversion.

From my perspective, I am upgrading my GoPro with a forward-looking light in the short term, so I can improve the reliability of focus on close-up image data in low-light conditions. But I can't help thinking how useful it would be to capture everything in 360 for total environment capture, even if it requires 360 fill lighting, and let it all get straightened out in post-processing – and there is my problem: POST-PROCESSING.

Post-processing normally requires lots of power, and it is recommended to have something like an Intel i7-6700 (equivalent or better) and an NVIDIA GTX GPU (equivalent or better). So although entry-level cameras are affordable, you also have to think of the computing power needed to process the 360 images into equirectangular (normal) video. This is normally done by the manufacturer's own software, but once converted you can use your normal equipment and video editing suite.

However, the benefit of never missing anything when you leave a job is worth it alone. On top of that, you can re-use the data for photogrammetry (results depend on the processing software and hardware, and the images also need to be processed into equirectangular format), and many manufacturers support an app to display the footage in VR headsets, which is cool and a great way to save money on hazard or industrial site familiarisation training.

Obviously action cams are always thought of as the adrenaline junkie's tool on the end of a selfie stick, but they really do have a productive side, especially when up to 5.7K video is paired with some lateral thinking, software and IT skill.

Some 360 cameras out there include:

  • Insta360
  • Insta 360 nano
  • GoPro Fusion 360
  • Garmin Virb 360
  • Ladybug 5
  • NCTech iStar
  • Ricoh Theta
  • Ricoh Theta S

There is so much more to say about the advantages of 360, many of which are obvious, like:

  • Being able to take a panoramic with one click and not have to worry about setting up and leveling a tripod.
  • Having one security camera unit cover a 360 view, reducing the number of camera positions needed to remove blind spots.
  • Driver cam – capture the road and the driver's mental state reflected in their body language.
  • Photogrammetry time saving.
  • Work Site/job familiarisation and safety training.
  • Contractor insight – being able to give a contractor an eye into, say, your plumbing issue, while also being able to see you, might be a bonus.
  • Incident/accident/crime investigation – the ability to capture everything in relation to everything else gives investigators (e.g. crash or accident investigators) perspectives that previously only laser scanning could provide. Imagine the evidence that could be captured on a wet rainy night that would normally be washed away.
  • Finally – that awesome one in a million wave or pipe.

The remaining big issue is sharing the data, as cloud data space on many sharing platforms can be costly, so you could set up your own website or private cloud. Alternatively you could put it on social media, but there could be a mutually beneficial opportunity here.

If there was a central cloud service that was free to store as much data as you liked from action cam and especially 360 video capture, we could pretty much model a vast part of the travelled world in a decade at zero survey cost to business. We could speed that up if customers of utility companies were given something like a 360 action cam.

Why? Photogrammetry and feature recognition with cloud-hosted AI. We have the technology and can scale up the cloud to handle the data. Someone once said to me that no one will pay to survey the country; I say we do not need to pay. Vloggers and the general public will do it – just give them something mutually beneficial, i.e. a 360 action cam and unlimited storage and sharing space for those videos, with the proviso that the host can extract the built environment data, which would satisfy any privacy concerns. Imagine Google Earth VR in one year.

So before we become batteries in the Matrix (showing my age), I will end this, but give it some thought and get on the 360 train.


UGEE M708 GRAPHICS TABLET

I always wanted an all-singing, all-dancing Wacom graphics tablet for my 3D modelling, but the thought of paying something close to the price of a new laptop got me thinking about the Lenovo Yoga series of laptops, with the foldable screen hinge, as you can see and interact directly with your model or digital art.

I also wanted to make sure it was what I wanted, so I thought "go cheap import to test before investing a huge wedge" and found this UGEE M708 graphics tablet for £60 two years back, and it is still going well. I have the older battery-powered pen version, but the one from Amazon below has a self-recharging pen, which is cool. Having said that, the battery normally lasts me two months, which is fine for a single AAA battery. I will also mention there are others, like XP-Pen, which are cheaper still and look the same, but that's all I can say about them.

Click Here to See the UGEE M708 on Amazon

OK, so after wanting one of these for ages, I now have it. It's roughly A4 paper size, so quite large, so I should be good to go. I mean, I can use Blender and 3DS with a mouse, right, and this was supposed to be more natural? That's the problem: I have no natural art skill, so that's my first thing to say – you can have all the tools, but don't expect to be pulling a Van Gogh in five minutes. Now I am glad I only spent £60 and not £400 on its Wacom equivalent.

OK, so I never give up, and I slowly started to see what I could do with simple spray-can art, which was great – I managed to make some beautifully coloured boxes in Blender. Without realising it, I slowly started replacing my mouse with the pad and pen for normal computer use, as you can do all two-button mouse operations with it without the mouse-click RSI. It effectively gives a normal computer the benefit of a tablet at a fraction of the cost.


The tablet itself is easy to use, as the pad corners (the white brackets) are mapped to your screen, so there is no dragging and scrolling like on the old tablets. You just hover the pen over where you want the cursor to appear on the screen and, when it is close enough to the tablet, about 15 mm, sure enough the cursor appears on screen. Sure, it would be great to have one of the £700+ tablets with a screen underneath, but then we go back to my original point about buying a Lenovo Yoga. Amazon also have them: Lenovo Yoga 720.

The following video shows how easy it is to use with Blender, plug and play. I am only using the pen here, but there are six more configurable buttons that can be tailored as you desire; you will have to load the driver disk (supplied) to configure them. You can also download the driver from the UGEE website if you do not have a disc drive.

As you can see, it is very easy to use, and all of this is via USB plug and play. Talking of which, the pen is compatible with Windows 10 Ink, so it is customisable in Windows 10 too.

In short, it is a cheap and cheerful tablet that can give you an extra few years out of your old PC when you want to expand your skills into 3D modelling, painting, photo editing, drawing or handwriting, or even if you just want to add some signature functions.

OK, the nitty-gritty:

  • Active Area – 10″ x 6″
  • Resolution – 5080 LPI
  • Report Rate – 230 RPS
  • Accuracy – ±0.01″
  • Pressure sensitivity – 2048 Levels
  • Connection – USB

Raspberry Pi 3B+ – Getting Started

Picture of the Raspberry Pi and accessory box

OK, so we got the above in the mail – it's all in bits and I have to add the heat sinks myself – PANIC!!!!!!!

First off, if you are a man like me, you tossed the instructions to one side… There should be an A6-size leaflet on how to set up your Raspberry Pi for first-time use, but in case you have sourced yours elsewhere, you should have the following:

  • Raspberry Pi 3B+ (from now on called the RPI3B+)
  • Micro SD card – 16GB minimum for this section, though you can use just a 4GB card to make the RPI3B+ work. The speed of the card is important and needs to be at least Class 4; this is the number on the card encircled by a C, and it indicates the write speed in MB per second. My card is a SanDisk Ultra 16GB C10 (this is the class speed) MicroSD HC I, which was supplied in the package.
  • Power adaptor (micro USB) with a switch.
  • HDMI cable if you are connecting to a monitor, in addition to a USB keyboard and mouse (I would suggest a wireless set, as we will need the USB ports later, but cross that bridge when we get there). We will be controlling through a remote desktop to give you the feel of IoT and server interfacing.
  • A case is supplied in the kit and is technically not required, but it does protect the board nicely when kids and pets are running around.
  • Card reader for installing the Operating System (OS) onto the memory card.
  • Heat sink for the CPU
  • Heat sink for the network chip

If you purchased a kit, then your SD card should be preloaded with the OS and you are itching to go, but STOP!!!!!!! You must fit the heat sinks or your RPI3B+ is going to get hot enough to cook eggs. Ideally I would like a fan too, but maybe that's a project for later.

Your heat sinks should have nice self-adhesive pads, so, one at a time, peel off the pad cover foil and stick the big castellated one on the CPU and the smaller flush metal plate on the network chip, as shown below. (Mine is already in its case, so ignore that continuity error for the moment.)

Raspberry Pi 3B+ with lid off

OK, now pat yourself on the back and then gently clip the board into its case, first locating it on the base and then clipping both into the walls of the case. Then clip on the lid, but be careful not to force it, as it is meant to have that 2-3 mm gap.

Raspberry Pi 3B+ in its case

Now it's in its case, let's insert that SD card.

It slips in here

Raspberry Pi 3B+ SD card slot

like this

Raspberry Pi 3B+ SD card slot this way up

OK, so connect up your power supply, and if you have a monitor and keyboard then you can just power up and you are ready for the next section. If not, and you are going the same route as me and remotely accessing your RPI3B+, then you have to connect it to the LAN (network) port on your router (internet hub in layman's speak, i.e. a Sky Q hub). This requires a LAN cable, which is normally supplied with your router; if you need one, it is called a CAT5 network cable. If you are having networking problems with it, check that each of the 8 contacts has a wire by looking through the plastic – some really cheap ones only have 4 wires and never work. If your ISP (Internet Service Provider) gave you one of those, you may have grounds to question the service they will provide over that locked-in contract.
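
If you are going the headless route, it can help to confirm the RPI3B+ is actually visible on the network before fighting with remote desktop clients. Below is a minimal Python sketch (run from your PC) that simply checks whether something answers on the Pi's SSH port; the hostname "raspberrypi.local" and port 22 are assumptions based on a default Raspbian image with SSH enabled, so adjust if your setup differs.

```python
# Quick reachability check for a headless Raspberry Pi on the LAN.
# "raspberrypi.local" is the default hostname on a stock Raspbian image and
# port 22 is SSH - both are assumptions, so adjust if you changed them or
# have not yet enabled SSH.
import socket

host, port = "raspberrypi.local", 22
try:
    with socket.create_connection((host, port), timeout=3):
        print(f"{host} is answering on port {port} - ready for remote access")
except OSError as err:
    print(f"Could not reach {host}:{port} - check cabling and power ({err})")
```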

Anyway, turn the power on and you should see a green light come on by the power connector. Awesome stuff – you are ready to move on. If you do not have a green light, the board might be faulty, but more likely you have no power to the RPI3B+: either a bad power adaptor or a bad supply.

OK, so that's the end of this page; the next will deal with the OS setup and first-time configuration. If you like this, then please like and share so I know, or even leave a comment.