Screen Record with VLC Media Player

The Screen Record with VLC Media Player video is now live and can be viewed on YouTube or below.

This is an awesome feature of VLC Media Player, as it gives you the ability to record your desktop and everything displayed on it – nudge nudge wink wink – without the great expense of proprietary software or the restrictions of the Windows 10 Game Bar.

Please watch the video and leave comments if this was useful to you and you want to see more like this. Don't forget to share, PLEASE.


Rotating Videos With VLC Media Player

When you are out and about and just want to capture the moment, you are not thinking about which orientation to record in, so you often start in portrait (vertical) mode and rotate the phone for a better shot later. This looks fine on the phone, as its screen rotates automatically, but the video is often locked in the original orientation you started recording in, so on playback you find it playing at 90 degrees, as below.

videorotate1.PNG

This is easy to correct with VLC Media Player, for free and without watermarks, as detailed in this video, but I have also set out the steps below the video.

First of all you need VLC Media Player, which can be downloaded here: https://www.videolan.org/vlc/

Once installed, open it up and select MEDIA -> OPEN FILE

videorotate2.PNG

Select Your File and Click OPEN

videorotate3.png

The video should now auto-play; let it play until you get to the part that needs rotating. I will say at this point that VLC converts the whole video file, so I suggest you save a copy and trim out only the parts you need rotating in a video editing suite. If you have Windows 10, you have the Video Editor tool, located in the menu under V. You can easily cut up a video with this free tool and stitch it back together later. Alternatively, there are other free tools out there.

OK, so we have a point where the video has rotated. Pause the video here and, in the top menu bar, select TOOLS -> EFFECTS AND FILTERS

videorotate4.PNG

This will display a pop-up box as below. Select the following tabs:

VIDEO EFFECTS -> GEOMETRY

videorotate5

You will now want to tick the TRANSFORM check box and select how you want to rotate the video from the drop-down menu. See the image below. You will note that the video rotates in the background, confirming you have the correct transform applied. I have selected 270 degrees, which is the same as -90 degrees.

videorotate6.png

Once you are happy, Click on Close.

OK, we now have a rotated video, but we need to export it (save it into a new, independent file), so let's go to the top menu bar and select

MEDIA -> CONVERT/SAVE

videorotate7.PNG

This will produce a pop-up box. Click the ADD button.

videorotate8

This will open the file explorer, so select your file and click OPEN.

 

This will return you to the Open Media pop-up, so now, at the bottom, click on the drop-down menu (arrow) next to Convert / Save and select CONVERT.

videorotate10

Select the Profile (output format); I always leave this as MP4, as it is widely supported. Now, in the Destination section, click the BROWSE button.

videorotate11

You now need to select the output folder (this is where your video will be found after conversion). I tend to keep the same folder and filename but add what has happened to the file, in this case "rotated 270 degrees". Click the SAVE button.

videorotate12

Nearly there. You will be returned to the CONVERT pop-up, and now you simply click the START button.

videorotate13

You will now see a blank screen, as below, with the blue bar increasing from left to right to show progress.

videorotate14

Once complete, the screen will return to the blank opening/launch screen.

You should now have a converted file in the location specified, but play it back in another media player, or close and re-open VLC. If you try to play the video straight away, you may find it at the wrong orientation because the Effects Transform is still set to rotate it. Alternatively, go back to EFFECTS and set the Transform to none (0 degrees rotation).
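If you do this often, the same rotate-and-convert job can be driven from VLC's command line instead of the GUI, using its stream-output (sout) transcode chain with the transform video filter. This is a minimal sketch, assuming `vlc` is on your PATH; the filenames and codec choices are placeholders, and the transform filter only supports fixed steps such as 90, 180 and 270 degrees.

```python
import subprocess

def build_vlc_rotate_cmd(src, dst, angle=270):
    """Build a command line asking VLC to rotate `src` by a fixed angle
    (90, 180 or 270) and transcode the result into `dst` as an MP4.
    Codec and mux choices are illustrative, not the only options."""
    sout = ("#transcode{vcodec=h264,acodec=mp4a,"
            "vfilter=transform{type=" + str(angle) + "}}"
            ":std{access=file,mux=mp4,dst=" + dst + "}")
    # -I dummy runs without a window; vlc://quit exits when done
    return ["vlc", "-I", "dummy", src, "--sout", sout, "vlc://quit"]

cmd = build_vlc_rotate_cmd("holiday.mp4", "holiday-rotated-270.mp4")
# subprocess.run(cmd, check=True)  # uncomment to run the conversion
```

Unlike the GUI route, this leaves no lingering Effects Transform set in the player afterwards.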

If you liked this and found it helpful then please like, share and comment. Please also use the contact form with any suggestions of topics you want us to investigate.

You can also follow this website on Facebook / LinkedIn / Twitter and videos on YouTube.

 

VISUALISATION WITH PANOPTIKON

PANOPTIKON – it is Swedish, and as far as I can remember the translation means 'to observe'. It comes from the Greek for 'all-seeing', and in English refers to a prison with cells arranged around a central point so they can be seen at all times. Very colonial; I prefer the Swedish, as it is simply clear and just looks cool, which is much the story with the PANOPTIKON visualisation software.

I saw these guys at GeoBusiness 2019 and was blown away by the potential this has to engage with communities in an open and clear way, across any language barrier. In short, it is a really good video / augmented reality communication and engagement tool for BIM.

We have all seen videos of proposed construction sites ('this is what it will look like if we get the OK') built from dry architectural drawings, but this incorporates all the BIM model data, so it releases the full power of the digital age to show exactly how the site is now and exactly how it will be built.

It does this by combining drone footage with BIM data. The following two screenshots are from their video, but they show the effect. You can view the video on their website: www.bjorkstromrobotics.com

The next applications I see are in farming and infrastructure development, to aid positive engagement with communities developing their land.

Imagine a community or utility has a sudden issue with over-development, i.e. too much home building (postage-stamp houses), and blocks further development; maybe there are no parking spaces or playgrounds. You need to address their concerns and convince them that there is a plan to resolve the issue.

This is the communication tool to do just that.

They also have another really useful tool for a tablet which shows where all the utility services run and will visualise in real space (on the tablet) everything that is proposed with accurate GNSS derived positioning.

So if we combine the two products, we are looking at the ability to show a video demonstration of the project life cycle, at the build location, and then, after the video, engage with the audience in live augmented reality to answer any of their questions, and maybe discover new ones; it is a full engagement system. They can even walk around with the tablet to see what the new area will look like and get a feel for it. I am hopeful that they will develop for full mixed reality soon, but they seem keen to develop the product to fulfil their partners' needs, and I like that partner attitude, not just customer; customer just sounds disposable to me.

The other uses I see are in forensics, health and safety, and engineering: reconstructing events on site, whether an accident where you need to investigate viewpoints, an augmented reconstruction, or maybe you just want to maximise the aircraft parking in your hangar.

OK, I am not on commission, so I will now refer you to their website for further information and videos: www.bjorkstromrobotics.com/

I will however say that it was an insightful discussion I had on the day and they seem like a very helpful and positive group to deal with.

Autodesk Recap Photo – part of Recap Pro

This is a much underrated product in the AEC community, but it has its strengths and is great for those who need good results that can be exported into most mesh formats.

OK, it costs £42 a month as a subscription to ReCap Pro, which also handles registration of laser scan data from many manufacturers. With this you get a certain amount of cloud credit for processing your images or point clouds into mesh models. This is not great: at last check, 1 cloud credit was $1 and up to 300 photos will cost you 12 credits, so even though you pay that £42 a month, every project will cost you between £10 and £50 depending on how many images you use.

This brings me to the next issue: photo models (close-ups) can only use 20-300 images, and UAV/drone models can only be derived from a maximum of 1,000 images. Considering photogrammetry tends to need at least 30% overlap between images to create tie points, you are not going to cover a great deal of area for your £50.
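Using the figures quoted above (roughly $1 per cloud credit, 12 credits per batch of up to 300 photos), a quick sanity check of per-project cost is easy to script. This is a back-of-envelope sketch under the assumption that credits scale linearly per batch; check Autodesk's current pricing before relying on it.

```python
import math

def recap_photo_credits(num_photos, photos_per_batch=300, credits_per_batch=12):
    """Estimate cloud credits for a ReCap Photo job, assuming 12 credits
    per started batch of up to 300 photos (figures quoted in the text)."""
    batches = math.ceil(num_photos / photos_per_batch)
    return batches * credits_per_batch

print(recap_photo_credits(300))   # 12 credits, roughly $12
print(recap_photo_credits(1000))  # 48 credits for a full 1,000-image drone job
```

So even the smallest 20-image close-up costs the same 12 credits as a full 300-image batch, which is where the "every project costs £10 to £50" figure comes from.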

OK, that's the negative, and why it quite rightly gets a slamming from the AEC community over costs, but there are so many good things about it, like:

  • It is fairly accurate, time after time.
  • You only need a standard, cheap laptop, so there is a cost saving.
  • It has a simple GUI for uploading your project images to the cloud and downloading the result.
  • Once uploaded to the cloud, your computer is no longer tied up processing images into models, so you can get on with other aspects of your business. That is the strength of cloud-based solutions: if you stop work, you probably lose 5 or 6 times more money than the processing cost. The downside is that if the result is not as expected, you have to pay again to process another project; no refunds for bad models.
  • You can optimise the export format of the model for many pieces of software, including Blender, so there is no lock-in to Autodesk products.
  • Autodesk supports students and education facilities with free access to many of its products on an Education Licence.
  • It ties in seamlessly with other Autodesk software like Maya and 3ds Max, which are used extensively across many industries, so it adds to a great workflow.

There is also a 30 day trial available on Autodesk products so it is worth trying out for yourself.

On test it produced a model better than that of non-CUDA open-source software, to the extent of picking out the cigarette butts that had been discarded in the trough, which is pretty good for that set of images. Please have a look at the video below, which illustrates what you can expect. Note also how much extra background imagery has been converted.

I also liked how it computed a bottom for the trough even though there were no deliberate images of the bottom.

 

 

For a complete look at this software, go to the Autodesk ReCap page.

Please Like and Share this page with colleagues if you found it helpful.

3DF Zephyr Photogrammetry

OK, so we keep running into that NVIDIA CUDA requirement, but a vast number of laptops only have Intel or AMD graphics. This is a big problem, but hey, there is 3DF Zephyr, which is free for personal use and not too expensive if you go pro.

First of all, the free version only allows 50 pictures or frames, but it's enough to learn, and if you need more, then 150 euros gets you the next level, with up to 500 images or frames. The next two pricing tiers turn this software into a one-stop shop for all photogrammetry and laser scan processing, but it would need trialling well to warrant the expense.

Here, though, we are testing the basic free version on a Windows 10 laptop with an Intel i5-4310 2.6 GHz CPU, 8 GB of RAM and Intel 4000 graphics.

The pictures have been taken with a Nikon D3200 24-megapixel DSLR, which has now been replaced by the D3500 and can be seen here on Amazon.

OK, so while typing, I have been processing a batch of photos of a memorial trough in 3DF Zephyr (free). It took about 5 minutes to install from the website: https://www.3dflow.net/3df-zephyr-free/

Once installed, click File -> New Project

This will open a dialogue to upload pictures or video. Remember, the free version is limited to 50 frames or pictures, so if you are using a video camera, maximise your quality settings, as you want as many pixels in each frame as possible. Even so, you are likely to get fewer than 12 frames per second (standard is 24 fps), so at the slowest that is about 4 seconds of video. Alternatively, there are many open-source or free-to-use programs, like VLC or OpenShot, that will convert your video to images. Some people have had issues with the codec not being recognised from their device, which then requires converting the video in HandBrake first, but there is always Free Video to JPG Converter, which works with most codecs and is simple to use, although I did lose all EXIF and GPS data when I tried it. This can be a problem if you are creating automatically geolocated point clouds or meshes, which will then require manual geolocation.
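When a clip has far more frames than the free version's 50-picture limit, an even spread across the whole clip usually beats taking the first 50. A sketch of the frame-index arithmetic, independent of any video library (the extractor tool you use would then be told to keep only these indices):

```python
def frame_indices(total_frames, max_frames=50):
    """Pick an evenly spaced subset of frame indices so a long clip
    fits within a max_frames cap (50 for 3DF Zephyr Free)."""
    if total_frames <= max_frames:
        return list(range(total_frames))
    step = total_frames / max_frames  # fractional stride across the clip
    return [int(i * step) for i in range(max_frames)]

# a 10-second clip at 24 fps has 240 frames; keep roughly every 5th one
idx = frame_indices(240)
print(len(idx), idx[:4])  # 50 [0, 4, 9, 14]
```

This way the 50 frames still cover the full camera path around the subject, which matters more for tie points than frame density in one spot.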

Once the images have been selected, you need to tell the software how they were taken. I took these standing up in an urban environment, so I selected Urban with everything at standard. I then clicked Next and selected what the output process will be; I selected everything, as I want to export a mesh for modelling, so I need a dense point cloud first. Then hit Run, which is at the top of the page.

Needless to say, it took a while, approximately 1.5 hours, but was fully automated. I did note that it took 85% disk buffer and kept peaking at 100% memory and CPU. Even so, it only used 15 images (called cameras) out of the 30 selected. This led to issues with the final render, but below is the video of the generated sparse point cloud, dense point cloud, mesh and textured mesh.

The results were good considering it did not use any of the images from the back side, so I think the next thing to try is a video, and to select the images more carefully from that.

I hope this was a good intro to 3DF Zephyr. Let's now try to find out how to improve the reliability of the completed model output; I believe it is down to technique, so I shall devote a little time to this and share when I have the answer/method.

Meshroom Photogrammetry

2019 – what a wonderful year for photogrammetry. Not only have we had great advances in hardware like UAVs, but the long-awaited Meshroom also hit the web in open-source form, using the AliceVision framework.

This means that the little person can use a good piece of software to create mesh models from pictures, providing the camera is good enough.

For too long we have had to either pay heavy amounts of money or use the command line for any serious photogrammetry processing, but now this promises to change all that, with a full professional GUI and DOPE editor combined with the AliceVision SfM system.

Obviously we need to convince big business that open source is worth supporting so please all spread the word.

Over the next few days I will be trialling the software and uploading some samples. If it produces great results like theirs, we will use it for creating open-source models to share, so watch this space, and let's reduce that modelling time for games.

Here's to hope, but if you want to investigate for yourself, then please look at their website: https://alicevision.github.io/#

Geospatial Modelling For Free

This post has been really fun to research, and what a sense of satisfaction I had when I found out how to do it. I apologise to all who follow, as it has taken up all my time, dedicated to this one issue, so let's crack on, as I am so excited.

So what did I want to prove?

  1. I wanted to prove that you could do geospatial modelling straight from a game engine like Unity or Armory3D.
  2. It could be done for free without having to hard-code anything; a spreadsheet is fine, as most readers will be able to use basic spreadsheets, but few will be able to go off writing translation programs, so it had to be a solution for all.
  3. No proprietary lock-in to a platform.
  4. It also had to make the modelling program geospatial, so you can tie the world together.

Where Did I start?

First I wanted to see what was possible with what I had: a FARO S70 laser scan, FARO SCENE LT 2019 (yes, it's free, and it meshes point clouds beautifully if you have the hardware), Blender 2.8 and MeshLab.

I will first say that the laser scan I had was geolocated as a project. If you are inserting a non-geolocated laser scan, you can adjust the geolocation in the properties section. Even so, this is not a problem, as I will also explain how to geolocate a standard mesh, so you don't need a laser scan; it's just that this software makes it a cleaner process.

So I meshed the laser scan in FARO SCENE and exported it as OBJ, PLY, STL etc., but none of the formats would show up in the Blender viewport. With so much being said about Blender 2.8 bugs at the moment, I just dismissed it as an option, so I opened the file in MeshLab and voilà, there it was: a lovely laser scan of the building. So I saved it, making a new OBJ file almost twice the size of the first, and imported it into Unity, where on completion it was nowhere to be seen.

After much faffing with this and that, standing on one leg with fingers and arms crossed while exporting (anyway, you get the picture), and clutching at any last notes from the net, I thought: OBJ is an old format, I wonder if I can open it in Notepad.

Sure enough, it is a text file format, and the first thing I recognised was that the X, Y, Z values looked an awful lot like decimal-degree co-ordinates for the area, but out of range.

Now, while researching GNSS systems, I had to get familiar with the different historical mapping systems in use, and their origins, as well as the technical aspects of the satellites themselves – good times :-). The point is that I was reminded not to look at the world as a globe but as a flat paper map. Flat land maps, not naval charts, used to be (and often still are) in a system using Cartesian co-ordinates, commonly known as Eastings and Northings.

So I went to the Ordnance Survey Site and inserted the

  • X value in the Eastings field,
  • Y value in the Northings field,
  • Z value in the Height field.

BINGO: exactly where the scan was taken.

So if it is there, why can't I work on it in my modelling suite, i.e. 3ds Max or Blender? That's simple now we know: Eastings and Northings are in metres, like Blender and 3ds Max units, so an Easting of 583947.75 m would be 583.94775 km east of the 0 point on the X axis, a little way away, so of course you will see nothing.

In Blender, you can correct this easily: first select the imported object in the hierarchy, move the cursor over the viewport and right-click. This will give you a menu where you can select 'Set Origin', which opens a sub-menu; select 'Set Origin To Geometry'.

You should now see that the Transform component has a huge number in X and Y, but Z will hopefully be the altitude, so much smaller.

MAKE A COPY OF THIS, AS YOU WILL NEED TO PUT IT BACK.

OK, now go into Edit Mode (press Tab) and set the local median co-ordinates to 0 (it will not allow exactly zero and will go to some smallish number). This only sets the centre of the model to 0, or near enough the middle, but that is close enough, as it is all relative from now on.

Go back into Object Mode and, hopefully, your Transform component numbers have not changed; now set them to zero. Your model should now be there.

Do your modelling and, once finished, put the old Transform co-ordinates that you copied back into the Transform component to relocate the model into its geolocated space.

The link between XYZ and Eastings and Northings is really powerful when we import our model into a game engine like Unity or Armory3D, as we can use a simple script to read the player's or a model's current XYZ and convert it into global decimal degrees, which can be linked out to all mapping systems, smartphones and everything geolocatable, thus creating the link between the virtual world and the real world.
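That link is just a fixed offset: store the scene's reference Easting/Northing/height once, then any in-game position converts to real-world grid co-ordinates by addition, and surveyed points convert back by subtraction. A minimal sketch with made-up reference values (and note that axis conventions differ per engine; Unity is Y-up while Blender is Z-up, so you may need to swap axes):

```python
# Reference origin of the game scene in OS Eastings/Northings (hypothetical values)
REF_E, REF_N, REF_H = 583947.75, 152000.00, 45.0  # metres

def local_to_grid(x, y, z):
    """Convert an engine-local position (metres) to Easting/Northing/height."""
    return (REF_E + x, REF_N + y, REF_H + z)

def grid_to_local(easting, northing, height):
    """Inverse: place a surveyed real-world point into the game scene."""
    return (easting - REF_E, northing - REF_N, height - REF_H)

print(local_to_grid(10.0, -2.5, 1.25))  # (583957.75, 151997.5, 46.25)
```

The resulting Easting/Northing can then be fed to a grid-to-latitude/longitude conversion (such as the Ordnance Survey transform mentioned above) for mapping systems that expect decimal degrees.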

This new XYZ can also become an asset's identifier.

OK, but I have a non-geolocated model; how can I merge this into the geo-spaced game?

Ideally you would have GNSS equipment like the SPECTRA SP20 GNSS device for BIM, or one of their other high-accuracy units, but if you are at home you could use a web map to derive a rough location (still probably better than your phone): Google Maps (double left-click a location to drop a pin, click on the decimal-degree output in the pop-up at the bottom, then copy and paste from the left-hand sidebar) or Bing Maps (right-click a location, select Copy below the decimal-degree co-ordinates, or press CTRL + C).

Put these co-ordinates in the Decimal Degrees section of the Ordnance Survey transform tool (if using web maps, you will only get latitude and longitude without a height value; set Ellipsoid Height to 45.734 to pin to the ground at 0 m height), which will give you an Easting and Northing.

OK, this is your reference position. We can simply enter it in the global Transform component (Object Mode), after setting the local median (Edit Mode) to the point on the object that should sit at that geolocation.

If you are not using Blender, then in the worst case, to apply the offset to your vertex points, we need them in a spreadsheet for editing.

Open the OBJ file in Notepad (right-click -> Open with -> Notepad).

We need to copy all lines beginning with 'v', and there might be a few thousand. Each line has six values: position x, y, z and colour r, g, b (the colour values range from 0 to 1).

Select the first line, scroll to the end of the 'v' lines, then hold Shift and click at the end of the last line, which will select everything in between.

Open your spreadsheet editor and paste. You need to split the text into separate columns; there are lots of YouTube videos on how to do this, and you can use the space character to identify where to split.

Once you have separated the values into separate cells, write a formula to add the reference Easting to the first value, the reference Northing to the second value and the reference height to the third value. Then copy the cell all the way down for it to auto-populate the new figures.

Now concatenate (join) the values back into one single line of text, with the spaces and colours added back in. Again, there are lots of YouTube videos on how to do this.

Copy the new data into the OBJ file in Notepad, replacing the old 'v' values, and save as a new OBJ. If the file extension is not available, select Save As and type the filename followed by '.obj', without the quotes.

Now this OBJ will open in its new geolocation, in Easting and Northing (sorry, I mean XYZ) co-ordinates.
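If you are comfortable with a few lines of Python, the whole spreadsheet round trip above collapses into one small script. This is a sketch that assumes vertex lines follow the 'v x y z r g b' layout described in the text; any other OBJ lines (faces, normals, etc.) are passed through untouched.

```python
def geolocate_obj(lines, ref_e, ref_n, ref_h):
    """Add a reference Easting/Northing/height to every vertex ('v')
    line of an OBJ file, leaving colours and other lines unchanged."""
    out = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "v":
            x, y, z = (float(v) for v in parts[1:4])
            colours = parts[4:]  # optional r g b, passed through as-is
            out.append(" ".join(["v", f"{x + ref_e:.6f}", f"{y + ref_n:.6f}",
                                 f"{z + ref_h:.6f}", *colours]))
        else:
            out.append(line.rstrip("\n"))
    return out

obj = ["v 1.0 2.0 0.5 0.2 0.2 0.2", "f 1 2 3"]
print(geolocate_obj(obj, 583947.75, 152000.0, 45.0)[0])
# v 583948.750000 152002.000000 45.500000 0.2 0.2 0.2
```

To use it on a real file, read the source with `open("model.obj").readlines()`, pass the lines through `geolocate_obj` with your reference position, and write the result out joined with newlines as a new `.obj` file.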

Just to conclude: we can now write a script that tracks the XYZ of anything we need to track, following its movement in the virtual world, or even have it affected by the movement of something in the real world, to give a truly real-time virtual simulator.

Imagine a bridge was raised prematurely: you could not only see a visualisation of what is happening, but also use AI to alleviate the problems, as it can read and control the virtual more easily than the real, yet with full control of the real by proxy.

A VIRTUAL WORLD IS POSSIBLE EASILY, CHEAPLY AND RELIABLY

NOW!!!!

Please share and like, as I do not have a virtual billboard. That will be the next big thing, immersive advertising – I can hear George Carlin now.