Dienstag, 5. August 2014

Automate HTML creation with the Linux dash shell

Overview

This article describes the basic concepts of HTML generation with the Linux dash command interpreter.


Example: Video Collection

The dash command line interpreter can be used to generate a video player web page for each video in a directory.


Dash commands


for - loop through files

Use the for command to loop through a set of files. The example below loops through all .mp4 files in the current directory.
for filename in *.mp4; do echo "$filename"; done

Inside such a loop, a video format converter such as ffmpeg can also be invoked to generate video thumbnails and background images for each file.


while - loop through the lines of a file

Use the while command to loop through every line of a file. The example below reads the HTML template file template.html line by line and replaces the SSI-style placeholder *VARIABLE* (e.g. in a template line like <p>*VARIABLE*</p>) with the value of the dash variable $variable. In this example the output file is vidcenter.html.
# Read template.html line by line, substitute the placeholder, write the output.
while IFS= read -r line; do
    line=$(printf '%s' "$line" | sed "s/\*VARIABLE\*/$variable/")
    printf '%s\n' "$line" >> "vidcenter.html"
done < "template.html"

Conclusion

The 'for' and 'while' commands make dash a handy Linux tool for generating HTML from templates, replacing variables and running video converters on the fly.


Montag, 7. Juli 2014

Creating an image histogram with Excel

 

Introduction

An image histogram

 

About Histograms

In computer vision and photography, a histogram shows the pixel counts of an image grouped by brightness. This report is usually presented as a bar chart.
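
To make the brightness grouping concrete before turning to Excel, here is a minimal Python sketch of the same counting (assuming the Pillow imaging library and a placeholder file name "photo.jpg"; the rest of this article uses Excel and a .NET wrapper instead):

from PIL import Image

# Load the image and convert it to 8-bit grayscale ("L" mode).
image = Image.open("photo.jpg").convert("L")

# Count how many pixels fall into each of the 256 brightness levels.
counts = [0] * 256
for brightness in image.getdata():
    counts[brightness] += 1

# Pillow can also return the same counts directly via image.histogram().
for level, count in enumerate(counts):
    print(level, count)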

PATENT INFRINGEMENT WARNING!

This article shows how to load image data into an Excel spreadsheet. The algorithm may be protected by patents in your country.


Loading image statistics into a spreadsheet

Required components

To load image information into Excel, the following components are required:
  • An Excel spreadsheet with macros enabled (*.xlsm)
  • A .NET Framework wrapper library (*.dll), which can be generated with Visual Studio


Installing the wrapper library (*.dll)

The wrapper library needs to be COM-visible and must be registered for COM interop in the Windows registry. It provides access to the System.Drawing.Bitmap class, which offers basic image-access routines, in particular the GetPixel(int x, int y) method.

Loading the wrapper library with Excel macros

In the VBA window of Excel, the wrapper library has to be added as a reference. The wrapper class can then be declared as a variable and instantiated with the New keyword.


Finally, the image data can be inserted into a spreadsheet table.



Dienstag, 6. Mai 2014

Sky Observation Tips

Things to consider when observing the sky in European cities

Stars and celestial bodies can be photographed either from your window or from a location outside. The main reason for choosing an outside location is the better visibility of the sky.

If you choose to go outside, there are a few things you should keep in mind.

Public nuisance: the use of astronomical equipment in public places may require authorization from the local authorities, and its use may disturb others and can even lead to your arrest.

Wildlife: if you intend to leave the urban area at night, try to avoid dangerous animals such as bears and wolves, because encounters may cause injury or death. According to internet sources, wolves prefer to travel along roads, while bears roam mountains and fields. Both cut off their prey's path when it tries to reach a waterhole.

Montag, 28. April 2014

The limits of optical zoom


How image quality is affected by optical zoom
To record videos, images need to be projected onto the camera's image sensor. Earlier models used electron tubes; today's cameras use semiconductor-based sensors, for example CMOS.
The optical zoom is basically a telescope in front of the camera. Typical magnification factors are 25x for TV cameras, 50x for sports lenses and 100x for telescopes; in principle, though, there is no upper limit to optical zoom.
However, image quality decreases with increasing zoom level. Two problems occur when zooming in: chromatic aberration and blur.
Chromatic aberration splits the image into its component colors, comparable to the colors of a rainbow. Because the lens has a different index of refraction for each color, each color channel of the image appears at a slightly different position. For video cameras only red, green and blue are of importance, so chromatic aberration causes the three color channels to appear at different positions on screen.
There are three ways to reduce chromatic aberration:
  • moving the color channels to the correct position in software,
  • using longer focal lengths, which makes the camera bigger,
  • using so-called “optical glass” for the lens, which is lighter than normal glass and therefore refracts less.
Pure quartz glass is a very good optical glass. Non-optical glasses are also made of quartz glass, but with additives that lower the production cost.
Blur is the other problem that occurs when zooming in. It has to do with the wave nature of light: when the diameter of the aperture is too small, not enough light gets through for the image to appear sharp. Photographers might experience the opposite effect, because small apertures enhance the depth of field. When the aperture is too large, too much light enters and the image again fails to appear sharp.
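As rough background (standard optics, not from this article): the small-aperture blur is diffraction, and the smallest resolvable angle is given by the Rayleigh criterion θ ≈ 1.22 · λ / D. For green light (λ = 550 nm) and an aperture of D = 25 mm, θ ≈ 1.22 · 550·10⁻⁹ m / 0.025 m ≈ 2.7·10⁻⁵ rad, about 5.5 arc seconds; halving the aperture doubles the blur angle.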
Shorter exposure times can compensate for too much light; long exposure times, however, cannot compensate for too small an aperture and may add noise to the image. For RGB cameras, orange objects cause the most image noise, while white objects such as walls and buildings cause the least.
Finally, it’s hard to tell which camera zoom objective to use for a certain purpose. Trying before buying is highly recommended, because there are other quality aspects and design issues to consider.


Samstag, 26. April 2014

How does video image stabilization work?

Video Image Stabilization explained

The process of video image stabilization removes undesired vibrations from a video recording. There are two types of stabilizers: hardware based and software based. Hardware-based stabilizers use electromagnets to stabilize the image by moving optical lenses and prisms. Software-based stabilization detects image features, such as object contours, highlights and shadows, and tracks their movement. This article explains software-based stabilization.

Software-based video image stabilization takes place in three steps: feature detection, movement calculation and movement correction.

First, notable features of an image are detected. Features are regions of an image that catch the feature detector's attention. Several feature detectors have been publicly available since the 1980s, for example “good features to track”.

The movement detector compares two or more images and calculates the movement of each feature.

The movement correction step then uses the movement information from the detector to stabilize the image by shifting it in the opposite direction.
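
As a rough illustration of these three steps, here is a minimal Python sketch using OpenCV (the frame file names and the parameter values are assumptions for illustration, not from this article; OpenCV's goodFeaturesToTrack implements the “good features to track” detector mentioned above):

import cv2
import numpy as np

# Step 1: feature detection on the previous frame.
prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
features = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                   qualityLevel=0.01, minDistance=10)

# Step 2: movement calculation with pyramidal Lucas-Kanade optical flow.
moved, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, features, None)
ok = status.flatten() == 1
dx, dy = np.median(moved[ok] - features[ok], axis=0).ravel()

# Step 3: movement correction: shift the frame in the opposite direction.
h, w = curr.shape
shift = np.float32([[1, 0, -dx], [0, 1, -dy]])
stabilized = cv2.warpAffine(curr, shift, (w, h))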

Corrected this way alone, however, the image moves out of the screen and disappears after some time. Unless you want to do photo stitching, such image movement is unwanted when recording videos. The measured image movement can, however, be used to gauge camera rotation: the resolution of video cameras is far higher than the accuracy of potentiometers or acceleration-measurement chips, and video cameras can detect movements of just a few arc seconds.
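
To put those arc seconds in perspective, a back-of-the-envelope example (the numbers are assumptions for illustration): a camera with a 2° horizontal field of view imaged onto 1920 pixels covers 7200 arc seconds / 1920 ≈ 3.75 arc seconds per pixel, and sub-pixel feature tracking resolves a fraction of that.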

To avoid moving the image out of the visible screen, the movement detector has to distinguish between intended camera movement and vibrations. This is done by statistical analysis of the movement, comparable to separating the volatility from the moving average of a stock chart. Sophisticated image stabilizers use fast Fourier or cosine transforms to move the image into the right position before a shock occurs.
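
The stock-chart analogy can be sketched in a few lines of Python (a toy example with made-up per-frame shifts, assuming the movement detector has already measured them):

import numpy as np

# Horizontal shift measured between consecutive frames (toy data).
shifts = np.array([1.0, 1.2, 0.8, 1.1, 5.0, -3.0, 1.0, 0.9])

# The accumulated trajectory plays the role of the stock chart.
trajectory = np.cumsum(shifts)

# A moving average estimates the intended camera movement ...
window = np.ones(3) / 3
smooth = np.convolve(trajectory, window, mode="same")

# ... and the residual is the vibration the stabilizer should remove.
vibration = trajectory - smooth
print(vibration)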

Moving objects confuse the image stabilizer, so they have to be excluded from the stabilization process. By discriminating regions with different movement directions, the image stabilizer can detect moving objects such as cars, clouds and swarms of birds, and recognize them even in front of a moving background.

Camera rotation also confuses the image stabilizer and is far more difficult to exclude from the stabilization process than the problems mentioned above. Because the center of rotation may lie outside the field of view and the background may move while the camera is rotating, additional statistical analysis is required, which can slow the image stabilizer down. When there is not sufficient computing power, the software can only make a guess.

Finally, stabilized video images are much easier on the eye and compress better in video streams and files. Watching stabilized videos can reduce stress, and the improved compression helps lower the cost of disk storage and data bandwidth.