
Mayan Anaglyph · 2009-12-06 13:26 by Black in

My semester thesis in Software Engineering was titled Mayan. The name by itself is as non-descriptive as it gets, but it is basically an improved method for anaglyph stereo. The improvement allows for better color perception and preservation while achieving superior fusion.

Stereoscopic Images?

Stereoscopic images are pictures that can be seen in 3D; they contain data from both eyes’ viewpoints. Various technologies exist to produce, store, display and perceive such images. In this article I am writing about anaglyph, color-encoded stereo.

How does it work?

Almost all color display technology these days works by mixing three color channels: red, green and blue. Due to the limited sensory equipment humans possess, this is enough to imitate a wide range of the visible color spectrum. Traditional anaglyph exploits these three channels: it displays the image for one eye in red only, and the image for the other eye in green and blue, which together appear cyan. (There are many methods to mix the images, each preserving different aspects of the image.)

Mayan displays the left image in the magenta plane (red and blue) and the right image in the cyan plane (green and blue). The blue channel is therefore shared. Having two channels for each eye makes color perception much better. Of course, the visibility of both sides’ blue causes crosstalk, but due to shortcomings in human physiology, blue cannot be perceived as sharply as other colors, so the impact is lessened.
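The channel assignments can be sketched per pixel. Note that the plain average used for the shared blue channel below is my own simplification for illustration; the thesis describes the actual mixing in detail:

```python
# Per-pixel channel assignment for the two anaglyph variants.
# Pixels are (r, g, b) tuples of 0-255 integers.

def traditional_anaglyph(left, right):
    """Left eye gets red only; right eye gets green and blue (cyan)."""
    lr, lg, lb = left
    rr, rg, rb = right
    return (lr, rg, rb)

def mayan_anaglyph(left, right):
    """Left eye in the magenta plane (red + blue), right eye in the
    cyan plane (green + blue). The shared blue channel is assumed to
    be a plain average here, which is a simplification."""
    lr, lg, lb = left
    rr, rg, rb = right
    return (lr, rg, (lb + rb) // 2)
```

Applied to every pixel pair of a stereo image, this yields the full anaglyph frame.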

Mayan example image

Fusion vs. Rivalry

Fusion is achieved when the left and right images are each perceived by the correct eye, and the brain accepts them as actual data seen from different viewpoints and fuses them into a single 3D impression. It is easy to fuse images that contain fine structure present in both viewpoints.

Rivalry occurs when the images cannot be fused and the brain alternates between perceiving only the left and only the right image. It can happen when the two viewpoints differ too much, or when there is not enough structure to fuse on.

The Mayan algorithm has a tuning parameter that trades easier fusion against better color perception. It influences how much the pictures are desaturated before being mixed into the respective side’s channels.
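A minimal sketch of what such a desaturation parameter could look like; the Rec. 601 luma weights and the linear blend here are my assumptions for illustration, not the thesis’ actual formula:

```python
# Fusion/color trade-off sketch: alpha = 0 keeps the full color,
# alpha = 1 fully desaturates the pixel toward its luma before the
# anaglyph channel mixing. Assumed formula, not the thesis' exact one.

def desaturate(pixel, alpha):
    r, g, b = pixel
    luma = 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luma weights
    return tuple(round((1 - alpha) * c + alpha * luma) for c in (r, g, b))
```

Desaturated pixels carry less conflicting color between the eyes, which eases fusion at the cost of color fidelity.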

And after all that talk, here’s my thesis paper. It contains a more detailed description of the mixing and some analyses of the crosstalk and possible mitigation strategies.


Creating new user with PowerShell · 2009-12-05 19:35 by Black in

Exchange Server 2007 has removed the Active Directory integration of previous versions: creating a user in AD no longer also creates and links a mailbox. To create everything properly, the Exchange Management Console or PowerShell has to be used.

This PowerShell script creates a new user with parameters set in a GUI. .NET is used to display a dialog box; the text boxes are then evaluated and used to create a new user. After that, the user is modified to reflect the remaining settings.

Additional features include the creation of file shares on the server, automatic generation of the e-mail address with some limited character-set cleaning, a live-updating UI, setting a user expiration date, and more. The whole script is quite customized to the environment it is used in, but I am sure the core can be used by anyone.

An interesting concept this PowerShell script shows is the creation and event-based updating of .NET widgets. updateUID is a function that is registered as the event handler for the text box; it can perform input validation, update other parts of the UI, or do anything else. (See the linked source for more context):

new-user.ps1 [20.72 kB]

$form = new-object System.Windows.Forms.Form
$form.Text = "Exchange 2007 User Create Form"
$form.Size = new-object System.Drawing.Size(440,550)
$form.AutoSize = $true
$form.AutoSizeMode = "GrowOnly"

### FirstName
$posY += $lineHeight

# Add FirstName Box
$firstNameTextBox = new-object System.Windows.Forms.TextBox
$firstNameTextBox.Location = new-object System.Drawing.Size($posXControl,$posY)
$firstNameTextBox.Size = new-object System.Drawing.Size($controlWidth,$controlHeight)
$firstNameTextBox.add_TextChanged({updateUID})
$form.Controls.Add($firstNameTextBox)

Creating a new user with PowerShell is easy thanks to the new-mailbox cmdlet that the Exchange integration installs. But setting some of the properties was rather complicated. For some, an AD object has to be obtained:

new-user.ps1 [20.72 kB]

# General stuff (alternative: use set-user for some of those)
$user = get-user -identity $upn
$aduser = [ADSI]("LDAP://" + $user.DistinguishedName)
if ($desc -ne "")
{
  $aduser.description = $jobDescDrop.Text
}
if ($phone -ne "")
{
  $aduser.telephonenumber = $phone
}
if ($webpage -ne "")
{
  $aduser.wwwhomepage = $webpage
}
$aduser.company = $company
$aduser.department = $department
# FS
$aduser.profilePath = $pathPro + $alias + "\%osversion%"
$aduser.homeDrive = "P:"
$aduser.homeDirectory = $pathBase + $alias + "$"
# Commit settings
$aduser.SetInfo()

Others, such as setting the expiration date of an account to “never expires”, require a more arcane syntax:

new-user.ps1 [20.72 kB]

# Hard-to-change expiration date is set directly
#$aduser.psbase.InvokeGet("AccountExpirationDate")
$aduser.psbase.InvokeSet("AccountExpirationDate", $validUntil)
$aduser.psbase.CommitChanges()


Encoding Movies with x264 and mplayer · 2009-12-04 17:09 by Black in

An easy way to encode movies is mencoder. Unfortunately, it is fairly outdated and its muxers are mostly problematic. A better way is to use the tools directly: mplayer for decoding, x264 for encoding the video, a suitable audio encoder, and a muxer for the desired container format.

I wrote an encoder script to handle all this easily. The core uses named pipes, constructs that act like files but do not actually store content; they relay it:

Encoding Core

mkfifo fifo.y4m fifo.wav
x264 fifo.y4m -o out.mkv &
faac -q 128 fifo.wav -o out.m4a &
mplayer in.mpg -vo yuv4mpeg:file=fifo.y4m -ao pcm:file=fifo.wav:fast
mkvmerge -o result.mkv out.mkv out.m4a

What this does: first it creates two named pipes with the desired names. Then the encoder programs for video and audio are started; they read from the named pipes, which blocks until another process writes data into them. Last, mplayer is started, writing raw output into the two named pipes using the file-based video and audio output modules. After encoding is complete, the resulting video and audio streams are merged. If everything went well, the streams should fit together perfectly.
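The blocking behaviour that makes this pipeline synchronize can be demonstrated in a few lines of Python; this is just a toy stand-in for the mplayer/x264 pair, not part of the actual script:

```python
import os
import tempfile
import threading

# Demonstrate the blocking behaviour of a named pipe: the reader
# blocks until a writer opens the other end and sends data.
fifo = os.path.join(tempfile.mkdtemp(), "demo.fifo")
os.mkfifo(fifo)

def writer():
    # Opening the pipe for writing unblocks the waiting reader.
    with open(fifo, "w") as f:
        f.write("raw frames would flow here")

t = threading.Thread(target=writer)
t.start()
with open(fifo) as f:  # blocks until the writer opens its end
    data = f.read()
t.join()
os.remove(fifo)
```

The same mechanism lets x264 and faac sit waiting until mplayer starts feeding them raw video and audio.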

The script also contains a lot of maintenance code that handles encoding and decoding in their own screen sub-sessions. That way the output from all tools can be seen easily… but it’s not really useful… :)

Problems with this method are the inability to handle variable-framerate content and, depending on the muxer, the loss of metadata such as framerate and view aspect ratio/display size.


About Me · 2009-12-04 15:59 by Black in

I am a Master of Science ETH in Computer Science and graduated in fall 2008. My current occupation is doing whatever I want (also known as holidays), so I spend my time coding, playing around with new software and playing World of Warcraft.

The past months I spent serving my country in end-user support, which was often quite boring, but where I also had the opportunity to do some scripting. I’ll post some of the results here.

During the holidays I started working on my master thesis project again, ExaminationRoom. Changes include actual OpenGL error handling and a new shader-based renderer.


ArtPad Vector alpha · 2008-01-13 05:12 by Black in

I’ve written about it before, and also implemented it some time ago: drawing in ArtPad is now vector based. The downside of the new version is that the eraser does not work yet. But it uses much less memory, and drawing is also faster. Not to mention that it looks great.

How does it work?

A line is drawn by drawing a rectangular texture. The texture consists of a single line in a square file (32×32 pixels). To draw lines at a given angle, the texture is rotated. This is done by transforming the texture coordinates with the inverse rotation matrix. Because texture mapping infinitely repeats the border pixels for all points outside the texture area, the texture can also be scaled down for long lines.
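The coordinate transform can be sketched as follows. This rotates about the texture origin for simplicity; the actual implementation may rotate about the texture center:

```python
import math

def rotate_tex_coord(u, v, angle):
    """Apply the inverse (i.e. negative-angle) 2D rotation to a texture
    coordinate (u, v). Rotating the texture coordinates by -angle makes
    the sampled line appear rotated by +angle on screen."""
    c, s = math.cos(-angle), math.sin(-angle)
    return (c * u - s * v, s * u + c * v)
```

Applying this to the four corners of the quad is enough; the GPU interpolates the coordinates across the interior.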

The newest version can be downloaded here.
