OpenCV – Select, Capture And Save Camera Feed

This video shows how to capture and save the camera feed using OpenCV. It also details how to select between different cameras.

As always, if you have issues, I have included notes and the completed scripts at the bottom of the page so you can compare them with your own. You can view these online or download them from my Google Drive. The scripts also contain notes as comments, which appear after a #.

Remember, if you can’t view fullscreen, you can right click the ‘YouTube’ icon in the bottom right and select ‘View on YouTube.com’.

Notes

The supplementary information can be found on the official OpenCV website https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html

The # is used in Python to comment out everything after it on the same line. Commenting matters because the interpreter ignores comments when running the code, but anyone who comes to work with your code later can see why you wrote it the way you did, or read any notes you left about modifying it.

Some people say that “””triple-quoted strings””” are fine for multi-line comments, but this can get you into trouble: a triple-quoted string is actually a ‘docstring’, which is treated slightly differently. More info on docstrings and their rules can be found at https://www.python.org/dev/peps/pep-0257/, but that is a slightly more advanced topic you should save for later. For now, just use a ‘#’ at the start of each comment line.

Download Links

Github OpenCV repository https://github.com/opencv/opencv/

Completed Scripts

Below is the link to the Google Drive repository for this lesson. You can either view online or download.

https://drive.google.com/drive/folders/190fHm-aJZfLM41rWjFPHxJycKteIq5zW?usp=sharing

OpenCV – Read and Write Images

This video shows how to open, read and write images using OpenCV. As I mentioned in the last post, he switches back to the PyCharm IDE for code editing, so definitely go to https://www.jetbrains.com/pycharm/ and download and install PyCharm, which at present is free in its Community Edition.

If you have issues, I have included notes and the completed scripts at the bottom of the page so you can compare them with your own, either online or as a download from my Google Drive.

Remember, if you can’t view fullscreen, you can right click the ‘YouTube’ icon in the bottom right and select ‘View on YouTube.com’.

Notes

When copying lena.jpg, depending on how your file structure has formed, you may get an error when running the script. Try right clicking the ‘venv’ folder and copying the image there, or drag and drop it from the main folder location. The script should then run and display the ‘lena.jpg’ image.

Keyboard key values can be found at http://www.asciitable.com/. Use the value from the ‘DEC’ column for each key; the key description is in the last column.

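The ‘DEC’ column is the same number Python’s built-in ord() returns, which is why cv2.waitKey() results get compared against ord() values. A quick check (mine, not from the video):

```python
# The ASCII table's 'DEC' column matches Python's ord() for each key.
for key in ("q", "s", " "):  # two letters and the spacebar
    print(repr(key), "->", ord(key))

# In an OpenCV loop the check would look like:
#   if cv2.waitKey(0) & 0xFF == ord("s"):  # 's' is DEC 115 - save the image
```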

Download Links

You can download Pycharm from https://www.jetbrains.com/pycharm/

You do not need it, but if you want to download Microsoft’s awesome free code editor Visual Studio Code, this is the link: https://code.visualstudio.com/. All of the lessons are completed in PyCharm, though, which I would advise using to follow along; you can convert to VS Code later when you are more experienced.

Github OpenCV repository https://github.com/opencv/opencv/

Completed Scripts

So I hit another WordPress file-type issue and can’t share script files directly, so below is the link to the Google Drive repository for this lesson. You can either view online or download.

https://drive.google.com/drive/folders/190fHm-aJZfLM41rWjFPHxJycKteIq5zW?usp=sharing

Opening Downloaded Files

Sometimes your browser will not let you open downloaded files from the bar at the bottom of the browser, especially if they are .zip files. Either use the drop-down arrow next to the downloaded file and select ‘open in explorer’, or go to your ‘Downloads’ folder and find it there. If it is a .zip file, right click it and select ‘Extract All’, which creates an uncompressed (larger) folder version of the zip file that will open normally.

OpenCV – Installing on Windows

If you are using a Windows operating system, i.e. Windows 10, then this is the video showing you how to install OpenCV, which is crucial to the rest of the course. If, however, you are using a Linux Ubuntu system, then please look at this video: https://youtu.be/cGmGOi2kkJ4

The topics covered in this video are also useful for learning background features in Windows 10.

Please see the links and text below to follow along.

Download Links

You can download Python from https://www.python.org/

You do not need it, but if you want to download Microsoft’s awesome free code editor Visual Studio Code, this is the link: https://code.visualstudio.com/. The rest of the lessons are completed in PyCharm, though, which I would advise using to follow along; you can convert to VS Code later when you are more experienced.

Commands for command prompt

  • python --version
  • pip --version
  • pip install opencv-python
  • python
  • import cv2
  • cv2.__version__

Opening Downloaded Files

Sometimes your browser will not let you open downloaded files from the bar at the bottom of the browser, especially if they are .zip files. Either use the drop-down arrow next to the downloaded file and select ‘open in explorer’, or go to your ‘Downloads’ folder and find it there. If it is a .zip file, right click it and select ‘Extract All’, which creates an uncompressed (larger) folder version of the zip file that will open normally.

Command Prompt

Windows 10 is what they call ‘evergreen’, which in layman’s terms means it is always updating and changing. This is normally behind-the-scenes stuff, but it does affect how the menus work. At the time of writing, to get the command prompt by the method shown in the video, you have to

  • Right Click the Windows Icon (Menu Icon)
  • Select ‘Run’
  • Type ‘cmd’ in the box
  • Click ‘OK’ or press ‘Enter’

OpenCV – Introduction

OpenCV is all about processing images from still photography or film: it converts the data in the images into a form a computer can work with and gives you the ability to manipulate the raw image data.

That’s a big piece to get your head around, but I have found a set of YouTube learning videos that starts with the basics of how computers see images and takes you through it in bite-size pieces, right up to reading lane markings for self-driving vehicles. I am also adding notes and supplementary material after the videos to help you understand, as this site is geared towards taking you from knowing nothing through to being capable yourself without needing to attend college. I myself am self-taught on most things, with the guidance and material provided by others on the internet or in libraries. The downside is that you do not have a piece of paper for a job interview, but you will have demonstrable skills, so stick with it as you can learn anything.

The YouTube video below gives a good introduction to OpenCV and how computers see images. It also includes a brief introduction to ‘numpy’, which is a highly optimised set of Python objects for working with images and numbers.
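To see the numpy idea in miniature before watching, this little sketch (mine, not from the video) shows how an image is just a grid of numbers:

```python
import numpy as np

# A computer "sees" an image as a grid of numbers. Here is a tiny 2x3
# greyscale image: 0 is black, 255 is white, values in between are greys.
img = np.array([[0, 128, 255],
                [255, 128, 0]], dtype=np.uint8)

print(img.shape)  # (2, 3) -> 2 rows (height) by 3 columns (width)
print(img.max())  # 255, the brightest pixel
```

A colour image is the same idea with three such grids, one each for blue, green and red.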

Please see the links and text below to follow along.

If you have never coded with Python and don’t quite follow along, then please have a look at the following tutorial; but if you just stick it out, you will pick it up. Think of coding like learning to drive or ride a bike: scary at first but second nature after a while.

YouTube – freeCodeCamp.org: Learn Python in 4.5 hours

ATMOS UAV | Fixed Wing VTOL Drones for Mapping & Surveying

I saw this VTOL drone in the flesh at the GeoData exhibition in London a couple of weeks back and was very impressed, as it takes photogrammetric modelling flights into the realm of one-hour endurance in the sub-£20k bracket.

This tool is perfectly tailored to DEM mapping, farming and urban mapping.

The skin is crash resilient, but that should not be a problem as all flights are pre-planned and mapped.

Definitely worth further research and look at their website for further details. The link is below.

ATMOS MARLIN

Autodesk Recap Photo – part of Recap Pro

This is a much underrated product in the AEC community, but it has its strengths and is great for those who need good results that can be exported to most mesh formats.

OK, it costs £42 a month as a subscription to Recap Pro, which also handles registering laser-scan data from many manufacturers. With this you get a certain amount of cloud credit for processing your images or point clouds into mesh models. This is not great: at last check, 1 cloud credit was $1 and up to 300 photos will cost you 12 credits, so even though you pay that £42 a month, every project will cost you between £10 and £50 depending on how many images you use.

This brings me on to the next issue: photo models or close-ups can only use 20–300 images, and UAV/drone models can only be derived from a maximum of 1,000 images. Considering photogrammetry tends to need at least a 30% overlap of image data to create tie points between images, you are not going to cover a great deal of area for your £50.

OK, that’s the negative side, and why it quite rightly gets a slam from the AEC community over the costs, but there are so many good things about it, like:

  • It is fairly accurate time after time
  • You only need a standard cheap laptop so there is a cost saving.
  • It has a simple GUI system to upload your project images to the cloud and download the result.
  • Once uploaded to the cloud, you no longer need to tie up your computer processing images into models and can get on with other aspects of your business. This is the strength of cloud-based solutions: if you stop work, you probably lose 5 or 6 times more money than the processing cost. The downside is that if the result is not as expected, you have to pay again to process another project – no refunds for bad models.
  • You can optimise the export format of the model for many pieces of software, including Blender, so there is no lock-in to Autodesk products.
  • Autodesk supports students and education facilities with free access to many of their products on an Education Licence.
  • It ties in seamlessly with other Autodesk software like Maya and 3DS which are used extensively within many Industries so it adds to a great workflow.

There is also a 30 day trial available on Autodesk products so it is worth trying out for yourself.

On test it produced a better model than non-CUDA open-source software, to the extent of picking out the cigarette butts that had been discarded in the trough, which is pretty good for that set of images. Please have a look at the video below, which illustrates what you can expect. Also note how much extra background imagery has been converted too.

I also liked how it computed a bottom for the trough, even though there were no deliberate images of the bottom.

For a complete look at this software go to Autodesk Recap Page

Please Like and Share this page with colleagues if you found it helpful.

Geospatial Modelling For Free

This post has been really fun to research, and what a sense of satisfaction I had when I found out how to do it. I apologise to all who follow, as it has taken up all my time on this one issue, so let’s crack on as I am so excited.

So what did I want to prove?

  1. I wanted to prove that you could geospatially model straight from a game engine like Unity or Armory3D.
  2. This could be done for free without having to hard code – a spreadsheet is fine, as most who read this will be able to use basic spreadsheets, but few will be able to go off writing translation programs, so it had to be a solution for all.
  3. No proprietary lock-in to a platform.
  4. It also has to be able to make the modelling program geospatial so you can tie the world together.

Where Did I start?

First I wanted to see what was possible with what I had: a FARO S70 laser scan, FARO SCENE LT 2019 (yes, it’s free and meshes point clouds beautifully if you have the hardware), Blender 2.8, and Meshlab.

I will first say that the laser scan I had was geolocated as a project. If you are inserting a non-geolocated laser scan, you can adjust the geolocation in the properties section. Even so, this is not a problem, as I will also explain how to geolocate a standard mesh too, so you don’t need a laser scan; it’s just that this software makes it a cleaner process.

So I meshed out the laser scan in FARO SCENE and exported as OBJ, PLY, STL etc., but none of the formats would show up in the Blender viewport. With so much being said about Blender 2.8 bugs at the moment, I just dismissed it as an option, so I opened the file in Meshlab and voila, there it was: a lovely laser scan of the building. So I saved it, producing a new OBJ file almost twice the size of the first, and imported it into Unity, where on completion it was nowhere to be seen.

After much faffing with this and that – trying this, standing on one leg with fingers crossed, you get the picture – and clutching at any last notes from the net, I thought: OBJ is an old format, I wonder if I can open it in Notepad.

Sure enough, it is a text file format, and the first thing I recognised was that the X, Y, Z values looked an awful lot like decimal-degree co-ordinates for the area, but out of range.

While researching GNSS systems, I had to get familiar with the different historical mapping systems in use, and their origins, as well as the technical aspects of the satellites themselves – good times :-). The point is that I was reminded not to look at the world as a globe but as a flat paper map. Flat land maps (not naval charts) used to be, and often still are, in a system using Cartesian co-ordinates, commonly known as Eastings and Northings.

So I went to the Ordnance Survey Site and inserted the

  • X value in the Eastings field,
  • Y value in the Northings field,
  • Z value in the Height field.

BINGO – exactly where the scan was taken.

So if it is there, why can’t I work on it in my modelling suite, i.e. 3DS or Blender? That’s simple now we know Eastings and Northings are in metres, like Blender and 3DS: an Easting of 583947.75m would be 583.94775km east of the 0 point on the X axis, quite some way away, so of course you will see nothing.
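That unit conversion in a couple of lines of Python, using the Easting quoted above:

```python
# An Easting in metres is hundreds of kilometres from Blender's origin,
# which is why the imported model is far outside the visible viewport.
easting_m = 583947.75
print(easting_m / 1000.0)  # 583.94775 km east of the 0 point on X
```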

In Blender you can correct this easily: first select the imported object in the Hierarchy, move the cursor over the viewport and right click. This gives you a menu where you can select ‘Set Origin’, which opens a sub-menu; select ‘Set Origin To Geometry’.

You should now see that the Transform component has a huge number in X and Y, but Z will hopefully be the altitude, so much smaller.

MAKE A COPY OF THIS AS YOU WILL NEED TO PUT IT BACK.

OK, now go into Edit Mode (press Tab) and set the Local Median co-ordinates to 0 (it will not allow exactly zero and will go to some smallish number). This only sets the centre of the model to 0, or near enough the middle, which is close enough as it is all relative from now on.

Go back into Object Mode and, hopefully, your Transform component numbers have not changed; now set them to zero. Your model should now be visible.

Do your modelling and, once finished, put the old Transform co-ordinates that you copied back into the Transform component to relocate the model into its geolocated space.

The link between XYZ and Eastings/Northings is really powerful when we import our model into a game engine like Unity or Armory3D, as we can use a simple script to read the current XYZ of the player or a model and convert it into global decimal degrees, which can be linked out to all mapping systems, smartphones and everything geolocatable, thus creating the link between the virtual world and the real world.
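As a sketch of that script (mine, not from the post): assuming the project uses British National Grid, the pyproj library (`pip install pyproj`) can do the Easting/Northing to decimal-degree conversion. The Easting below is the one quoted earlier; the Northing is a made-up illustration value.

```python
from pyproj import Transformer

# EPSG:27700 = OSGB36 British National Grid (Eastings/Northings in metres)
# EPSG:4326  = WGS84 latitude/longitude in decimal degrees
to_wgs84 = Transformer.from_crs("EPSG:27700", "EPSG:4326", always_xy=True)

# Game-engine X/Y = Easting/Northing. The Northing here is hypothetical.
x, y = 583947.75, 184000.0
lon, lat = to_wgs84.transform(x, y)
print(round(lat, 6), round(lon, 6))  # decimal degrees for any mapping system
```

The same call could run every frame against a player’s transform to stream a live real-world position out of the game.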

This new XYZ can also become an asset identifier.

OK, but I have a non-geolocated model – how can I merge this into the geospaced game?

Ideally you would have GNSS equipment like the SPECTRA SP20 GNSS device for BIM, or one of their other high-accuracy units, but if you are at home you could use a web map to derive a rough location (it still might be better than your phone): Google Maps (double left click a location to drop a pin, click on the decimal-degree output in the pop-up at the bottom, then copy and paste from the left-hand sidebar) or Bing Maps (right click a location, select ‘copy’ below the decimal-degree co-ordinates, and press CTRL + C).

Put these co-ordinates into the Transform Tool’s Decimal Degrees section at Ordnance Survey (if using web maps, you will only get lat and long without a height value; set the Ellipsoid Height to 45.734 to pin to the ground at 0m height), which will give you an Easting and Northing.

OK, this is your reference position. In Blender we can simply enter it in the Global Transform component (Object Mode), after setting the Local Median (Edit Mode), for the object that is to sit at that geolocation.

If you are not using Blender, then in the worst case, to adjust your vertex points, we need them in a spreadsheet for editing.

Open the OBJ file in Notepad (right click -> Open with -> select Notepad).

We need to copy all lines beginning with a ‘v’, and there might be a few thousand of them. Each line has six values: position x, y, z and colour x, y, z (the colour values range from 0 to 1).

Select the start of the first ‘v’ line, scroll to the end of the ‘v’ lines, then hold Shift and click at the end of the last one, which will select everything in between.

Open your spreadsheet editor and paste. You need to split the text into separate columns – there are lots of YouTube videos on how to do this – using the space character to identify where to separate.

Once you have separated the values into separate cells, write a formula to add the reference Easting to the first value, the reference Northing to the second value and the reference height to the third value. Then copy the cell all the way down so it auto-populates the new figures.

Now concatenate (join) the values back into one single line of text, with the spacing and colour values added back in. Again, there are lots of YouTube videos on how to do this.

Copy the new data into the OBJ Notepad file, replacing the old ‘v’ lines, and save as a new OBJ. If the file extension is not available, select ‘Save As’ and type the filename followed by ‘.obj’ without the quotes.
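If you prefer, the spreadsheet steps above can be done with a few lines of Python instead. This is a sketch: the reference values are examples (the Northing and height are made up), and any colour values on each ‘v’ line are passed through untouched.

```python
# Shift every vertex ('v') line of an OBJ by the reference Easting,
# Northing and height; all other lines pass through unchanged.
# The reference values below are examples - substitute your own.
REF_E, REF_N, REF_H = 583947.75, 184000.0, 45.734

def shift_obj_line(line: str) -> str:
    parts = line.split()
    if parts and parts[0] == "v":
        x, y, z = (float(p) for p in parts[1:4])
        colours = parts[4:]  # optional r g b values, kept as-is
        shifted = [f"{x + REF_E:.3f}", f"{y + REF_N:.3f}", f"{z + REF_H:.3f}"]
        return " ".join(["v"] + shifted + colours)
    return line.rstrip("\n")

# A two-line sample OBJ: one vertex with colour, one face.
sample = ["v 1.0 2.0 3.0 0.5 0.5 0.5", "f 1 2 3"]
shifted = [shift_obj_line(ln) for ln in sample]
print(shifted[0])  # v 583948.750 184002.000 48.734 0.5 0.5 0.5
print(shifted[1])  # f 1 2 3
```

To process a whole file, read it with open(), map shift_obj_line over the lines, and write the result out as a new .obj file.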

Now this OBJ will open in its new geolocation in Eastings-and-Northings – sorry, I mean XYZ – co-ordinates.

To conclude: we can now write a script that tracks the XYZ of anything we need to track, either following its movement in the virtual world or even being affected by the movement of something in the real world, giving a truly real-time virtual simulator.

Imagine if a bridge was raised prematurely: you could not only see a visualisation of what is happening but also use AI to alleviate the problems, as it can read and control the virtual world more easily than the real one, yet with full control of the real by proxy.

VIRTUAL WORLD IS POSSIBLE EASILY, CHEAPLY AND RELIABLY

NOW!!!!

Please share and like, as I do not have a virtual billboard. That will be the next big thing, immersive advertising – I can hear George Carlin now.