TEXT EXTRACTION WITH FreeOCR

Optical Character Recognition (OCR) software has been around for a while now and is used in many applications, from number plate recognition to scanning documents into text. The big opportunity comes when it is incorporated into surveying and robotics. For this, many companies turn to big cloud computing products such as Google Cloud or Azure, but there is a cheaper way if you are prepared to do some computer legwork and are not after a one-stop-shop solution. If you want one stop, then GCP or Azure is the place to go, but you will pay for it.

Alternatively, you can combine smaller products, as below, to achieve similarly good results for efficient mass processing. I have kept this generic rather than naming brands, as either open source or proprietary products will return good results.

The big hold-up has been getting good, free, open source OCR software. We now have it in the form of FreeOCR, downloadable from Paperfile at http://www.paperfile.net.

FreeOCRIcon

This software allows you to scan documents straight into Word format, extracting the text automatically, and it works with pictures too. It uses the Tesseract OCR engine (project page: http://code.google.com/p/tesseract-ocr/), which can also be compiled into your own software creation for the aspiring coders amongst us.
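
If you would rather call the same Tesseract engine from your own scripts instead of the FreeOCR GUI, a minimal sketch looks like the one below. It assumes the pytesseract wrapper and a local Tesseract install, which are not part of FreeOCR itself, and the file name is just an illustration.

```python
# Minimal sketch: OCR a single image with the Tesseract engine via the
# pytesseract wrapper (assumes Tesseract and pytesseract are installed).
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scanned_page.png"))
print(text)
```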

Let's say you want to scan CCTV footage for the registration plates of vehicles coming and going through a gate. Simply turn the video into images (maybe using VLC), load the saved images into FreeOCR, hit the OCR button, and it will convert any text in the images to a text file.
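
As a rough sketch of that video-to-text idea, the loop below pulls every 25th frame from a recording with OpenCV and runs Tesseract over it. The file names and sampling interval are illustrative assumptions, not a prescribed workflow.

```python
# Sketch: sample frames from CCTV footage and OCR each one.
# Assumes OpenCV (cv2), pytesseract and Tesseract are installed;
# "gate_camera.mp4" and the 25-frame interval are illustrative only.
import cv2
import pytesseract

cap = cv2.VideoCapture("gate_camera.mp4")
frame_no = 0
with open("ocr_output.txt", "w") as out:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % 25 == 0:  # roughly one frame per second at 25 fps
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            text = pytesseract.image_to_string(gray).strip()
            if text:
                out.write(f"frame {frame_no}: {text}\n")
        frame_no += 1
cap.release()
```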

This might be a good idea for automated scanning of CCTV footage after a crime to find witnesses.

Another alternative use would be for BIM and scanning asset tags or data plates. Let's say you have some georeferenced images taken with something like a Spectra SP20; you would be able to cross-refer the OCR-recovered model and serial numbers with the geotag in each image's metadata in an automated way, geolocating the asset data in the database.

You would already need an asset database to query, but you could add new assets this way too.

Maybe you could automate this cheaply using a GoPro Hero5 (or later) set to Linear or Medium field of view, 50 frames per second and good forward lighting. You would also need a piece of software that uses Tesseract and records the frame or picture number against the OCR output and the image metadata, including GPS data. I mention this method with a GoPro example as you could use the same video footage that you take for photogrammetric modelling, where the results improve with a better camera that also records the geolocation. If you need a GoPro, they now supply the Hero7 12MP on Amazon. Click the image below to see the listing and the specs.
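
A sketch of that asset-tag idea is below: OCR each georeferenced photo and keep the GPS block from its EXIF alongside the recognised text, ready to match against an asset database. The tag number 34853 is the standard EXIF GPSInfo IFD; everything else (file paths, the record format) is an illustrative assumption.

```python
# Sketch: pair OCR output with the GPS metadata embedded in each photo.
# Assumes Pillow and pytesseract are installed; file paths are illustrative.
import glob
from PIL import Image, ExifTags
import pytesseract

def gps_tags(img):
    """Return the EXIF GPS block as a readable dict (empty if none)."""
    exif = img._getexif() or {}
    gps_raw = exif.get(34853, {})  # 34853 = GPSInfo IFD
    return {ExifTags.GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

records = []
for path in glob.glob("asset_photos/*.jpg"):
    img = Image.open(path)
    records.append({
        "file": path,
        "ocr_text": pytesseract.image_to_string(img).strip(),
        "gps": gps_tags(img),
    })

for r in records:
    print(r["file"], r["gps"].get("GPSLatitude"), r["ocr_text"][:60])
```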

This photogrammetric modelling would later provide the basis for the 3D virtualised world engine for self-operated robots.

Moving on from data collection: imagine if robots could read languages and understand, orientate and operate themselves using OCR or feature extraction with 360 cameras. Then we would be close to robots operating automatically in changing environments.

Now combine that with other work on driverless cars and the virtualised world engine, and we are talking about fully autonomous vehicles or self-operated machines in a variable world.

This helps make the future exciting as we change how we apply current technologies to deliver futuristic capabilities today.


Screen Record with VLC Media Player

The Screen Record with VLC Media Player video is now live and can be viewed on YouTube or below.

This is an awesome feature of VLC Media Player, as it gives you the ability to record your desktop and everything that is displayed on it – nudge nudge, wink wink – without the great expense of proprietary software or the restrictions of the Windows 10 Game Bar.
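
If you ever need to script a capture rather than drive VLC by hand, here is a rough alternative sketch using the mss screen-capture library with OpenCV. This is my own workaround, not part of the VLC workflow; the library choice, frame rate and output settings are assumptions.

```python
# Sketch: record the primary monitor to an MP4 file with mss + OpenCV.
# An alternative to VLC's screen capture, not the VLC method itself.
import cv2
import numpy as np
import mss

FPS = 20  # illustrative frame rate

with mss.mss() as sct:
    monitor = sct.monitors[1]  # primary monitor
    size = (monitor["width"], monitor["height"])
    writer = cv2.VideoWriter("desktop.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"), FPS, size)
    try:
        while True:
            frame = np.array(sct.grab(monitor))          # BGRA capture
            writer.write(cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR))
    except KeyboardInterrupt:
        pass                                             # Ctrl+C to stop
    finally:
        writer.release()
```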

Please watch the video and leave a comment if it was useful to you and you want to see more like this. Don't forget to share, please!

 

 

Rotating Videos With VLC Media Player

When you are out and about and just want to capture the moment, you are not thinking about which orientation to record in, so you often start in portrait (vertical) mode and rotate the phone for a better shot later. This looks fine on the phone, because its screen rotates automatically, but the video is usually locked in the orientation you started recording in, so on playback you find it playing at 90 degrees, as below.

videorotate1.PNG

This is easy to correct with VLC Media Player, for free and without watermarks, as shown in this video; I have also detailed the steps below it.

First of all you need VLC Media Player, which can be downloaded from https://www.videolan.org/vlc/

Once installed, open it up and select MEDIA -> OPEN FILE

videorotate2.PNG

Select Your File and Click OPEN

videorotate3.png

The video should now auto-play; let it play until you get to the part that needs rotating. I will say at this point that VLC converts the whole video file, so I suggest you save a copy and trim out only the parts you need rotating in a video editing suite first. If you have Windows 10, you have the Video Editor tool, located in the Start menu under V; you can easily cut up a video and stitch it back together later with this free tool. Alternatively, there are other free tools out there.

OK, so we have a point where the video has rotated. Pause the video here and, in the top menu bar, select TOOLS -> EFFECTS AND FILTERS

videorotate4.PNG

This will display a pop-up box as below. Select the following tabs:

VIDEO EFFECTS -> GEOMETRY

videorotate5

You will now want to tick the TRANSFORM check box and select how you want to rotate the video from the drop-down menu (see image below). You will notice that the video rotates in the background to confirm you have the correct transform applied. I have selected 270 degrees, which is the same as -90 degrees.

videorotate6.png

Once you are happy, click on Close.

OK, we now have a rotated video, but we need to export it (save it into a new, independent file), so let's go to the top menu bar and select

MEDIA -> CONVERT/SAVE

videorotate7.PNG

This will produce a pop-up box. Click the ADD button.

videorotate8

This will open the file explorer, so select your file and click OPEN.

 

This will return you to the Open Media pop-up, so now, at the bottom, click on the drop-down menu (arrow) next to Convert/Save and select CONVERT.

videorotate10

Select the profile (output format); I always leave it as MP4 as this is widely used. Now, in the Destination section, click the BROWSE button.

videorotate11

You now need to select the output folder (this is where your video will be found after conversion). I tend to keep the same folder and filename but add what has happened to the file name – in this case, 'rotated 270 degrees'. Click the SAVE button.

videorotate12

Nearly there. You will be returned to the CONVERT pop-up, and now you simply click the START button.

videorotate13

You will now see a blank screen, as below, with the blue bar increasing from left to right to show progress.

videorotate14

Once complete, the screen will go back to the blank opening/launch screen.

You should now have a converted file in the location specified, but play it back in another media player, or close VLC and re-open it. If you try to play the video straight away in VLC, you may find it at the wrong orientation because the Effects Transform is still set to rotate it. Alternatively, go back to EFFECTS and set the transform back to none.
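
If you would rather script the rotation than click through the menus, the same 270-degree (i.e. -90 degree) transform can be done with a few lines of OpenCV. This is a sketch, not the VLC method; note it re-encodes the video only and drops any audio, and the file names are placeholders.

```python
# Sketch: rotate a whole video by 270 degrees (i.e. -90) with OpenCV.
# Unlike the VLC route, this re-encodes video only and drops any audio.
import cv2

cap = cv2.VideoCapture("clip_portrait.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Width and height swap because the frame is turned on its side.
out = cv2.VideoWriter("clip_rotated_270.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (h, w))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE))

cap.release()
out.release()
```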

If you liked this and found it helpful then please like, share and comment. Please also use the contact form with any suggestions of topics you want us to investigate.

You can also follow this website on Facebook / LinkedIn / Twitter and videos on YouTube.

 

VISUALISATION WITH PANOPTIKON

PANOPTIKON – it is Swedish and, as far as I can remember, the translation means 'to observe'. It comes from the Greek for 'all-seeing', and in English a panopticon is a prison with cells arranged around a central point so they can be seen at all times – very colonial. I prefer the Swedish, as it is simply clear and just looks cool, which is much the story with the PANOPTIKON visualisation software.

I saw these guys at GeoBusiness 2019 and was blown away by the potential this has to engage with communities in an open and clear way across any language barrier. In short, it is a really good video/augmented reality communication and engagement tool for BIM.

We have all seen videos of proposed construction sites – 'this is what it will look like if we get the OK' – as dry architectural drawings, but this incorporates all the BIM model data, so it releases the power of the digital age to show exactly how the site is now and exactly how it will be built.

It does this by combining drone footage with BIM data. The following two screenshots are from their video, but they show the effect. You can view the video on their website, www.bjorkstromrobotics.com

The next applications I see are in farming and infrastructure development, to aid positive engagement with communities in developing their land.

Imagine a community or utility has a sudden issue with over-development, i.e. too much home building (postage-stamp houses), and blocks further development, maybe because there are no parking spaces or playgrounds. You need to address their concerns and convince them that there is a plan to resolve the issue.

This is the communication tool to do just that.

They also have another really useful tool for a tablet, which shows where all the utility services run and will visualise, in real space (on the tablet), everything that is proposed, with accurate GNSS-derived positioning.

So if we combine the two products, we are looking at the ability to show a video demonstration of the project life cycle at the build location, and then, after the video, to engage with the audience in live augmented reality to answer their questions – and maybe discover new ones – so it is a full engagement system. They can even walk around with the tablet to see what the new area will look like and get a feel for it. I am hopeful that they will develop for full mixed reality soon, but they seem keen to develop the product to fulfil their partners' needs – and I like that partner attitude; 'customer' just sounds disposable to me.

The other uses I see are in forensics, health and safety, or engineering: reconstructing events on site, whether it is an accident where you need to investigate viewpoints on site or an augmented reconstruction, or maybe you just want to maximise the aircraft parking in your hangar.

OK, so I am not on commission, so I will now refer you to their website for further information and videos: www.bjorkstromrobotics.com/

I will however say that it was an insightful discussion I had on the day and they seem like a very helpful and positive group to deal with.

UGEE M708 GRAPHICS TABLET

I always wanted an all-singing, all-dancing Wacom graphics tablet for my 3D modelling, but the thought of paying as much as a new laptop got me thinking about the Lenovo Yoga series of laptops, with the foldable screen hinge, as you can see and interact directly with your model or digital art.

I also wanted to make sure it was what I wanted, so I thought 'go for a cheap import to test before investing a huge wedge' and found this UGEE M708 graphics tablet for £60 two years back, and it is still going well. I have the older battery-powered pen version, but the one from Amazon below has a self-recharging pen, which is cool; having said that, the battery normally lasts me two months, which is OK for a single AAA battery. I will also mention there are others, like XP-Pen, which are cheaper still and look the same, but that's all I can say.

Click Here to See the UGEE M708 on Amazon

OK, so after wanting one of these for ages, I now have it. It is roughly A4 paper size, so quite large, so I should be good to go. I mean, I can use Blender and 3DS with a mouse, right, and this was supposed to be more natural. That's the problem: I have no natural art skill, so that's my first thing to say – you can have all the tools, but don't expect to be pulling a Van Gogh in five minutes. Now I am glad I only spent £60 and not £400 on its Wacom equivalent.

OK, so I never give up, and I slowly started to see what I could do with simple spray-can art, which was great, and I managed to make some beautifully coloured boxes in Blender. Without realising it, I slowly started replacing my mouse with the pad and pen for normal computer use, as you can do all two-button mouse operations with it without the mouse-click RSI; it effectively gives a normal computer the benefit of a tablet at a fraction of the cost.

img_20190614_201452.jpg

IMG_20190614_201129

The tablet itself is easy to use, as the pad corners (white brackets) are mapped to your screen, so there is no dragging and scrolling like on the old tablets. You just hover the pen over where you want the cursor to appear and, when it is close enough to the tablet, about 15 mm, sure enough the cursor appears on the screen. Sure, it would be great to have one of the £700+ tablets with a screen underneath, but then we go back to my original idea of buying a Lenovo Yoga. Amazon also has the Lenovo Yoga 720.

The following video shows how easy it is to use with Blender, plug and play. I am only using the pen here, but there are six more configurable buttons that can be tailored as you desire; you will have to load the driver disk (supplied) to configure them. You can also download the driver from the UGEE website if you do not have a disc drive.

As you can see, it is very easy to use, and all of this via USB plug and play. Talking of which, the pen is compatible with Windows Ink, so it is customisable in Windows 10 too.

In short, it is a cheap and cheerful tablet that can give you an extra few years out of your old PC when you want to expand your skills into 3D modelling, painting, photo editing, drawing or handwriting, or even if you just want to add a signature function.

OK, the nitty-gritty:

  • Active Area – 10″ x 6″
  • Resolution – 5080 LPI
  • Report Rate – 230 RPS
  • Accuracy – ±0.01″
  • Pressure sensitivity – 2048 Levels
  • Connection – USB

 

3DF Zephyr Photogrammetry

OK, so we keep running into that NVIDIA CUDA requirement, but a vast number of laptops only have Intel or AMD graphics. This is a big problem, but there is 3DF Zephyr, which is free for personal use and is not too expensive if you go Pro.

First of all, the free version only allows 50 pictures or frames, but it is enough to learn with, and if you need more then 150 euros gets you the next level, for up to 500 images or frames. The next two pricing tiers turn this software into a one-stop shop for all photogrammetry and laser-scan processing, but it would need trialling well to warrant the expense.

Here, though, we are testing the basic free version on a Windows 10 laptop with an Intel i5-4310 2.6 GHz CPU, 8 GB of RAM and Intel 4000 graphics.

The pictures were taken with a Nikon D3200 24-megapixel DSLR, which has now been replaced by the D3500 and can be seen here on Amazon.

OK, so while typing, I have been processing a batch of photos of a memorial trough in 3DF Zephyr (Free). It took about five minutes to install from the website https://www.3dflow.net/3df-zephyr-free/

Once installed, click File -> New Project

This will open a dialogue to upload pictures or video. Remember the free version is limited to 50 frames or pictures, so if you are using a video camera, maximise your settings for quality, as you want the most pixels in each frame. Even so, you are likely to extract fewer than 12 frames per second (standard video is 24 fps), so at the slowest that is about 4 seconds of video. Alternatively, there are many open source or free programs, like VLC or OpenShot, that will convert your video to images, but some people have had issues with the codec not being recognised from their device, which then requires converting the video in HandBrake first. There is always Free Video to JPG Converter, which works with most codecs and is simple to use, although I did lose all EXIF and GPS data when I tried it. This can be a problem if you are creating automated geolocated point clouds or meshes, which will then require manual geolocation.
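
If you want to stay inside the 50-frame limit while still covering the whole clip, a small script can pull evenly spaced frames instead of a continuous burst. Below is a sketch with OpenCV; the file names are illustrative, the 50-frame cap mirrors the free-tier limit above, and note that, like the converters mentioned, it writes plain JPEGs with no EXIF or GPS data.

```python
# Sketch: export up to 50 evenly spaced frames from a video for 3DF Zephyr Free.
# Assumes OpenCV is installed; paths are illustrative. Output JPEGs carry no EXIF.
import os
import cv2

MAX_FRAMES = 50
os.makedirs("frames", exist_ok=True)

cap = cv2.VideoCapture("memorial_trough.mp4")
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
step = max(1, total // MAX_FRAMES)

saved = 0
for i in range(0, total, step):
    cap.set(cv2.CAP_PROP_POS_FRAMES, i)   # seek to the sampled frame
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"frames/frame_{saved:03d}.jpg", frame)
    saved += 1
    if saved >= MAX_FRAMES:
        break
cap.release()
print(f"Saved {saved} frames")
```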

Once the images have been selected, you need to tell the software how they were taken. I took these standing up in an urban environment, so I selected Urban with everything else at standard. I then clicked Next and selected what the output process would be; I selected everything, as I want to export a mesh for modelling, so I have to have a dense point cloud first. Then hit Run, which is at the top of the page.

Needless to say it took a while, approximately 1.5 hours, but it was fully automated. I did note that it took 85% of the disk buffer and kept peaking at 100% memory and CPU. Even so, it only used 15 images (called cameras) out of the 30 selected. This has led to issues with the final render, but below is the video of the generated sparse point cloud, dense point cloud, mesh and textured mesh.

The results were good considering it did not use any of the images from the back side, so I think the next thing to try is a video, and to select the images more carefully from that.

I hope this was a good intro to 3DF Zephyr; now let's try to find out how to improve the reliability of the completed model output. I believe it is down to technique, so I shall devote a little time to this and share when I have the answer/method.

Meshroom Photogrammetry

2019 – what a wonderful year for photogrammetry. Not only have we had great advances in hardware like UAVs, but the long-awaited Meshroom has also hit the web in open source form, using the AliceVision framework.

This means that the little person can use a good piece of software to create mesh models from pictures – providing that the camera is good enough.

For too long we have had to either pay heavy amounts of money or use the command line for any serious photogrammetry processing, but now this promises to change all that, with a full professional GUI and editor combined with the AliceVision SfM system.

Obviously we need to convince big business that open source is worth supporting so please all spread the word.

Over the next few days I will be trialling the software and uploading some samples, and if it produces great results like theirs, we will use it to create open source models to share. So watch this space, and let's reduce that modelling time for games.

Here's to hope, but if you want to investigate for yourself then please look at their website: https://alicevision.github.io/#