Friday, October 21, 2011

File Formats - TIFF (Tagged Image File Format)

So the first installment in my File Formats series is TIFF (Tagged Image File Format).

We all know JPEG is a widely used file format these days because it is supported by a huge number of software packages and applications.

But over the last few years the TIFF file format has been used a lot. TIFF is one of the most versatile and widely supported raster formats: it ports easily between platforms and is used for many different purposes. So I am here to discuss a few pros, cons, facts, figures and benefits of the TIFF format.

 I am breaking down the information so that it's easy to understand.

1) TIFF stands for Tagged Image File Format. Now, what is 'tagged' about it? Is it like Facebook image tagging, or a hyperlink to something? No, nothing like that. The image files we use, like JPEGs, PNGs and others, contain information such as image dimensions, color space (grayscale, bitmap, CMYK, etc.), bit depth/data type (8-bit, 16-bit or 32-bit), compression algorithm, copyright information and other technical specifications. In those formats this information is saved as fixed fields. In a TIFF file, however, the information fields are saved as 'tags', which can hold everything the other formats do but are also designed to hold your own application-specific information. The TIFF specification defines a framework for an image header called the IFD (Image File Directory), which is essentially a flexible set of exactly those tags that the TIFF-writing software wishes to specify.
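To make the idea of tags concrete, here is a minimal sketch of reading a TIFF's IFD, assuming Python with the Pillow library installed and a hypothetical file named example.tif:

```python
# Minimal sketch: list the tags in a TIFF's Image File Directory (IFD).
# Assumes Pillow is installed; "example.tif" is a hypothetical input file.
from PIL import Image
from PIL.TiffTags import TAGS  # maps numeric tag IDs to human-readable names

with Image.open("example.tif") as im:
    # tag_v2 is a dictionary-like view of the IFD: tag ID -> stored value
    for tag_id, value in im.tag_v2.items():
        name = TAGS.get(tag_id, f"UnknownTag{tag_id}")
        print(f"{tag_id:5d}  {name:25s} {value}")
```

Typical output includes entries such as ImageWidth, BitsPerSample and Compression, plus any custom tags the writing application chose to add.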

2) The main reason TIFF is so popular is the range of compression methods it supports. Other formats are designed around a single compression method, whereas a TIFF file can use many, such as JPEG, LZW (Lempel-Ziv-Welch), ZIP/Deflate and a few others. (I have discussed these later.) Apart from saving files with these compression methods, TIFF can also be saved uncompressed. In short, TIFF supports both lossy and lossless compression.
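As a rough illustration of this flexibility, the sketch below (again Python with Pillow, with hypothetical file names) saves the same picture uncompressed, with lossless LZW and Deflate, and with lossy JPEG compression inside the TIFF container:

```python
# Sketch: save one image with several different TIFF compression schemes.
# Assumes Pillow is installed; "photo.jpg" is a hypothetical input file.
from PIL import Image

im = Image.open("photo.jpg")

im.save("uncompressed.tif")                                   # no compression at all
im.save("lzw.tif", compression="tiff_lzw")                    # lossless LZW
im.save("deflate.tif", compression="tiff_deflate")            # lossless ZIP/Deflate
im.save("jpeg_in_tiff.tif", compression="jpeg", quality=90)   # lossy JPEG inside TIFF
```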

3) Another beneficial and powerful TIFF feature is support for a wide range of data types. You can store signed or unsigned integers, floating point values and even complex data in a TIFF file. Combined with the possibility of storing multiple image channels (around 24 channels per image), this makes TIFF a very useful format for storing scientific data.
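For example, a 32-bit floating point image can go straight into a TIFF; the sketch below (Python with NumPy and Pillow, hypothetical file name) stores and reloads a float32 array without losing precision:

```python
# Sketch: store 32-bit floating point data in a TIFF file and read it back.
# Assumes NumPy and Pillow are installed; "depth_map.tif" is a hypothetical file name.
import numpy as np
from PIL import Image

data = np.random.rand(256, 256).astype(np.float32)     # any scientific float data
Image.fromarray(data, mode="F").save("depth_map.tif")  # mode "F" = 32-bit float pixels

restored = np.array(Image.open("depth_map.tif"))
print(restored.dtype)  # float32
```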

4) TIFF supports multiple images in a single file. Such a file is called a 'multi-page' TIFF. This makes the TIFF format very well suited to, for example, storing all the pages of a single fax in one file.
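Here is a small sketch of writing and then reading such a multi-page TIFF with Pillow (the page images are just generated on the fly, and the file name is hypothetical):

```python
# Sketch: write several images into one multi-page TIFF, then iterate over its pages.
# Assumes Pillow is installed.
from PIL import Image, ImageSequence

pages = [Image.new("L", (200, 200), color=gray) for gray in (0, 128, 255)]
pages[0].save("fax.tif", save_all=True, append_images=pages[1:])  # one file, three pages

with Image.open("fax.tif") as doc:
    for i, page in enumerate(ImageSequence.Iterator(doc)):
        print(f"page {i}: size={page.size}, mode={page.mode}")
```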

5) If you are working in Adobe Photoshop, the TIFF file format also saves layers and layer styles.

6) TIFF was developed by the Aldus Corporation in the mid-1980s, and Adobe Systems has held the copyright to TIFF since it acquired Aldus in 1994. The main reason for developing TIFF was to standardize the image output of desktop scanners for desktop publishing. Many scanner vendors and companies agreed on TIFF as a single universal format rather than each producing their own proprietary format.

7) TIFF supports control over pixel order, i.e. how pixels are arranged in the file. It supports both interleaved and per-channel (planar) pixel order, which makes TIFF images easy to handle and manipulate whether you are working on a Mac or a PC.

8) It can include vector-based clipping paths.


9) It supports image processing features like image pyramids (storing multiple resolutions of the same image in one file), which is again a plus point.

10) The TIFF format also saves transparency.

11) Finally, I will discuss a few drawbacks of the TIFF format. The first is that it lacks standardized support for some of the more advanced imaging features developed over the last several years. As we know, Adobe Systems holds the copyright to TIFF, so the compression methods offered by Photoshop are sometimes not supported by other applications.

12) The second drawback is that the TIFF format is not streamable, i.e. it is not well suited for websites.

If you have any questions or want to suggest something about the points discussed, please feel free to contact me.

Take Care and Thanks for reading.

Mohit Sharma
mht.shr@gmail.com
www.facebook.com/mht.shr

Thursday, April 7, 2011

Why are 3D movies in red and cyan, and why are 3D glasses these two different colors?

Okay, a very interesting question was asked by Vikrant Gupta (a member of the Facebook Third Dimension Knowledge group).

He asked, "Why do we use only the two colors red and cyan while converting movies into 3D, or even while watching them?"

Since the question is really interesting and informative, I decided to write a blog post about it.

I am explaining the answer in more depth than was asked, so that others may also benefit.

In the real world we see things in three dimensions. We judge distance using parallax, the difference between what the right eye sees and what the left eye sees, which exists because of the distance between our eyes (approximately 2 inches). Now let's see how we actually see. There are two kinds of photoreceptors in the retina of the eyeball, known as rods and cones. These photoreceptors are the cells in the retina that convert light (electromagnetic radiation) into biological signals our brain can understand. When we look at something, the brain perceives the slight shift between the images from the two eyes, caused by the distance between them, and composites the two images to generate an understandable sense of depth. The photoreceptors are responsible for sending these signals to the brain, and that is how we understand and see the objects and surroundings in front of us.

Rods and Cones in Eye.


Now let's take this principle to the 3D cinema.

The movies we watch in cinemas work by projecting two frames from two different projectors onto the screen in quick succession. One frame matches the perspective of the left eye and the other matches the perspective of the right eye. This two-projector setup creates a slight shift between two COLORED frames. OK, now we come to the real question: why colored?

Avatar Movie screenshot showing a slight shift in colors.


Alright,

We will talk about the cone photoreceptors first. Cones see red easily. Blue light has a wavelength of about 475 nm, which is shorter than that of red light at roughly 650 nm (nm stands for nanometer, in case you are not a physics student). Rods see blue easily and are responsible for our dark-adapted, or scotopic, vision. The rods are incredibly efficient photoreceptors: more than one thousand times as sensitive as the cones, they can reportedly be triggered by individual photons under optimal conditions. Now the details could go on and on... yes, I am a physics student, so I know a lot of this stuff. OK, let's make this simple...

Cone photoreceptors pick up red easily and rod photoreceptors pick up blue/cyan easily; moreover, these two colors were chosen for 3D movies because they sit at opposite ends of the visible spectrum, which makes them easy to separate. That is why the red/cyan color scheme is used in today's 3D movies: partly because of the biology of the eye and partly because of the colors' positions in the spectrum.

Now, the last thing is the colored 3D glasses.

3D Glasses.



The rods and cones in the eyeball that see red and blue pick up the individual tones, and our brain composites them. As I said, red and cyan are at opposite ends of the visible spectrum, so the 3D glasses use two different filters, one red and one cyan. The red filter blocks the incoming cyan frame, and the cyan filter does the same with the red frame. So, in a simple way, we can say the real-world perspective illusion is created by glasses that restrict one frame to the left eye and the other to the right eye. The human brain naturally combines the two 2-dimensional images into a single one, which makes it feel like we are watching the real world and creates the illusion of a three-dimensional scene.
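For anyone curious, here is a rough sketch of how such a red/cyan (anaglyph) frame can be composited digitally, assuming Python with NumPy and Pillow and two hypothetical left-eye and right-eye images:

```python
# Sketch: build a red/cyan anaglyph frame from a left-eye and a right-eye image.
# Assumes NumPy and Pillow are installed; "left.png" and "right.png" are hypothetical inputs
# of the same size.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left.png").convert("RGB"), dtype=np.uint8)
right = np.asarray(Image.open("right.png").convert("RGB"), dtype=np.uint8)

anaglyph = np.zeros_like(left)
anaglyph[..., 0] = left[..., 0]   # red channel comes from the left-eye view
anaglyph[..., 1] = right[..., 1]  # green channel comes from the right-eye view
anaglyph[..., 2] = right[..., 2]  # blue channel comes from the right-eye view

Image.fromarray(anaglyph).save("anaglyph.png")
# Through red/cyan glasses, each eye is shown only its own view again.
```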

These days a newer technique is used in place of the colored 3D glasses. The technology is called polarization, and it is more powerful than what we have discussed here. If you want to know about the polarization technique as well, you can ask questions about it in the group.

I hope this answer satisfies you.
Take Care.

Mohit Sharma
mht.shr@gmail.com
www.facebook.com/mht.shr

Monday, January 31, 2011

History of High Dynamic Range Imaging (HDRI)

One of my friends on Facebook asked a question in my Third Dimension Knowledge group about the history of HDR and about Paul Debevec. So I researched this and decided to write a blog post. Please correct me if I am wrong in some places.

First of all, I want to specify that HDR is not a procedure or a process; it is a CONCEPT: the concept of capturing a wide dynamic range of luminance to acquire HDR images. As we will see, people have developed many different methods and procedures to acquire or generate high dynamic range images. An accurate HDR image captures ALL the radiance and irradiance of a scene.

So the idea of HDR was developed in the 1850s by Gustave Le Gray.

Self Portrait of Gustave Le Gray

He attempted to render seascapes showing both the sky and the sea. Le Gray made this possible by using one negative for the sky and another, with a longer exposure, for the sea, thereby combining the two pictures.

Then, in 1852, multi-negative compositing, also known as combination printing, was introduced by Hippolyte Bayard.

Bayard's Self Portrait as a Drowned Man


Hippolyte publicized the multi-negative method of creating an image. He accomplished this by photographing a scene in two parts and then merging the results during the printing process.

Then, in 1858, a river scene by Camille Silvy was presented.

Camille Silvy
French, negative 1858, print 1860s


Camille implemented and extended this method of using multiple negatives to create an image, again photographing a scene in two parts and merging the results during the printing process. He extended the process by showing how a photographer could take the separate exposures at different times and from different positions. Although this is not technically HDR, but rather compositing, it was the first time anyone implemented this type of development method. It would take more than 100 years before Paul Debevec extended this approach and produced the first photo-based HDR images. (I will discuss Paul Debevec later.)


Then, in 1971, Retinex by Edwin Land was presented.

This was one of the first and most influential papers for virtually every tone mapping operator that followed. Although the paper did not give an exact implementation, several papers and methods were later developed using its concepts. The word "retinex" is formed from "retina" and "cortex", suggesting that both the eye and the brain are involved in the processing. Retinex theory explains why we can perceive an object's true color under a tinted light; this effect is also known as color constancy. I will not go into detail because we have other topics to discuss, but if you want to know more about Retinex, just HOLLA!! at me.

But originally it was Charles Wyckoff (the same man who inspired computational photography) who developed HDRI, in the 1930s-40s. That is how the world came to know about HDRI, and the first HDR image, of a nuclear explosion, appeared in Life magazine in the mid-1950s.

Life Magazine




(I am not entirely sure whether this is the right magazine cover.)

Wyckoff implemented the technique of local neighborhood tone remapping (done with dodging and burning, selectively increasing or decreasing the exposure of parts of the photograph to yield better tonal reproduction) to combine differently exposed film layers into one single image of greater dynamic range.

People in the industry were now quite familiar with HDR, but the limited computer processing power available was still a constraint for HDRI.

The first practical application of HDRI was by the movie industry in the late 1980s; in 1985, Gregory Ward created the Radiance RGBE image file format, which was the first (and is still the most commonly used) HDR imaging file format.
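To give a feel for how the Radiance RGBE format fits a wide dynamic range into just four bytes per pixel, here is a toy sketch in Python of encoding one floating point RGB value with a shared exponent; it is a simplified illustration of the idea, not the official Radiance implementation:

```python
# Toy sketch of RGBE encoding: three 8-bit mantissas share one 8-bit exponent.
# Simplified for illustration; not the official Radiance implementation.
import math

def float_rgb_to_rgbe(r, g, b):
    brightest = max(r, g, b)
    if brightest < 1e-32:                        # effectively black
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(brightest)   # brightest = mantissa * 2**exponent
    scale = mantissa * 256.0 / brightest
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

print(float_rgb_to_rgbe(0.5, 1.0, 0.25))      # an ordinary pixel -> (64, 128, 32, 129)
print(float_rgb_to_rgbe(900.0, 450.0, 30.0))  # a highlight 900x brighter than white still fits in 4 bytes
```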

Then Wyckoff's concept of neighborhood tone mapping was applied to video cameras by a group at the Technion in Israel led by Prof. Y. Y. Zeevi, who filed a patent on this concept in 1988. In 1993 the same group introduced the first commercial medical camera that performed real-time capture of multiple images with different exposures to produce an HDR video image.

And then, finally, in 1997 this technique of combining several differently exposed images to produce a single HDR image was presented to the public by Paul Debevec.
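For anyone curious how this exposure-merging idea looks in code today, here is a minimal sketch using OpenCV's implementation of the Debevec method (assuming opencv-python and NumPy are installed; the file names and exposure times are hypothetical):

```python
# Sketch: merge three bracketed exposures into one HDR image with OpenCV's Debevec method.
# Assumes opencv-python and NumPy are installed; file names and exposure times are hypothetical.
import cv2
import numpy as np

files = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]
images = [cv2.imread(f) for f in files]
times = np.array([1/500.0, 1/60.0, 1/8.0], dtype=np.float32)  # exposure times in seconds

calibrate = cv2.createCalibrateDebevec()      # recover the camera response curve
response = calibrate.process(images, times)

merge = cv2.createMergeDebevec()              # combine the exposures into a radiance map
hdr = merge.process(images, times, response)
cv2.imwrite("merged.hdr", hdr)                # Radiance .hdr keeps the full dynamic range

# Tone-map the result so it can be shown on an ordinary screen:
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("tonemapped.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```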

In 2005, Adobe Systems introduced the Merge to HDR feature in Photoshop CS2. This function could combine 15+ stops of exposure into a single image, producing results that look more painterly than traditional ideas of exposure control ever did.

So, summing up from our industry's point of view, I would say that the use of HDRI in CG was introduced by Gregory Ward in 1985 with his open-source Radiance format, the first file format able to retain a high dynamic range image.

The other topic was Paul Debevec.

Paul Debevec


Paul Debevec is a graphics researcher. He leads the Graphics Laboratory at USC's Institute for Creative Technologies. Now let's get straight to the facts about why he is so well known.
Paul invented several imaging and lighting techniques that are used in computer graphics, such as the techniques behind image-based lighting and light probes. He pioneered the use of HDR images inside 3D applications. HDRI was used for the first time in a feature film on The Matrix, which won the Academy Award for Visual Effects. Debevec led the design of HDR Shop, the first high dynamic range image editing program, and co-authored the 2005 book High Dynamic Range Imaging.

Debevec has led the development of several Light Stage systems that capture and simulate how people and objects appear under real-world illumination. The Light Stages have been used by studios such as Sony Pictures Imageworks, WETA Digital, and Digital Domain to create photoreal digital actors.

Paul has received several awards, including recognition for Creative and Innovative Work in the Field of Image-Based Modeling and Rendering, the Gilbreth Lectureship from the National Academy of Engineering, a Special Award for a Distinguished Professional Career in Animation/VFX from the Mundo Digitales Festival in A Coruña, Spain, and, in 2009, the "Visionary Award for VFX" at the 3rd Annual Awards for the Electronic and Animated Arts.


Thanks for reading.

Mohit Sharma
mht.shr@gmail.com
www.facebook.com/mht.shr

Monday, January 24, 2011

The Father Of Virtual Reality

Morton Heilig (December 22, 1926 – May 14, 1997)

Morton Heilig is called the Father of Virtual Reality in many books and articles. He was one of the great visionaries of our time: a philosopher, inventor and filmmaker, and in general a man who looked toward the future and was far ahead of his time.

In 1956, Morton Heilig began designing the first multisensory virtual experiences. Resembling one of today's arcade machines, the Sensorama combined projected film, audio, vibration, wind, and odors, all designed to make the user feel as if he were actually in the film rather than simply watching it. Patented in 1961, the Sensorama placed the viewer in a one-person theater and, for a quarter, the viewer could experience one of five two-minute 3D full-color films with ancillary sensations of motion, sound, wind in the face and smells. The five "experiences" included a motorcycle ride through New York, a bicycle ride, a ride on a dune buggy, a helicopter ride over Century City in 1960 and a dance by a belly dancer. Since real-time computer graphics were many years away, the entire experience was prerecorded and played back for the user.

Heilig also patented an idea for a device that some consider the first Head-Mounted Display (HMD). He first proposed the idea in 1960 and applied for a patent in 1962. It used wide field of view optics to view 3D photographic slides, and had stereo sound and an "odor generator". He later proposed an idea for an immersive theater that would permit the projection of three dimensional images without requiring the viewer to wear special glasses or other devices. The audience would be seated in tiers and the seats would be connected with the film track to provide not only stereographic sound, but also the sensation of motion. Smells would be created by injecting various odors into the air conditioning system. Unfortunately, Heilig's "full experience" theater was never built.


Sensorama

Heilig's Sensorama
The synaesthetic, immersive machine developed by Heilig in 1962 widened Expanded Cinema into a theater of total illusion, which Heilig called the EXPERIENCE THEATER.

Built as a booth in which visitors could sit on a movable stool, the Sensorama, Heilig imagined, would offer, for the price of 25 cents, multi-sensory impressions of a virtual, ten-minute-long motorcycle ride through New York City.

Apart from seeing the film, the Sensorama user would simultaneously experience the corresponding vibrations, head movements, sounds, and rushes of wind. But Heilig was unable to commercialize his visionary prototype for a cinema of the future. 

In a later interview he stated that the Sensorama may have been too revolutionary for its time. He patented his invention as the Experience Theater with 3-D images in 1969, and in 1971 as the Sensorama Simulator for creating the illusion of reality.

Here I have uploaded some more Images.
Heilig's head mounted display from his patent application

Thanks for reading.

Mohit Sharma
mht.shr@gmail.com