Thursday, June 30, 2011

four

Measuring area from images has numerous applications – medical research (e.g. cancer cells), remote sensing (e.g. land area estimation) and quality control (e.g. solder leads on circuit boards, grains) – and that is exactly what we had to do for this activity.

Although there are many equations available for area computation, these apply only to regular shapes.  As anyone would tell you, the real world isn’t made up wholly of regular shapes.  As humans, we have a tendency to complicate the simple things.  Thus, one way to measure the area of an irregular shape is to take the points bounding the shape and use Green’s Theorem, whose mathematical representation is shown below.

A = (1/2) ∮_C (x dy − y dx)
Figure 1. Mathematical expression for Green’s Theorem

To test the accuracy of this method, I created shapes in MS Paint whose areas can be computed with the appropriate equations and saved these images as .BMP files.  In the process, I also took the pixel coordinates of the corners or edges and computed the area values for these shapes.  I then adapted Green’s Theorem into a Scilab 4.1.2 script making use of the SIP Toolbox.  If I must say so myself, the areas computed from the implementation of the theorem are not far off from the values I obtained manually.

Figure 2. The geometric shapes used to test Green’s Theorem. 
Note that these shapes are contained in individual images and are only presented as a stack.

Figure 3. Adaptation of Green’s Theorem in Scilab 4.1.2
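
In case the screenshot above is hard to read, here is a minimal sketch of how the discrete form of Green’s Theorem, A = (1/2)|Σ(x_i y_(i+1) − x_(i+1) y_i)|, might be implemented in Scilab 4.1.2 with the SIP Toolbox.  The file name, the 0.5 threshold and the use of follow() to get ordered edge pixels are my placeholders, not necessarily what appears in Figure 3.

// Minimal sketch: area of a (white-on-black) binary shape via Green's Theorem
img = im2bw(gray_imread("rectangle.bmp"), 0.5);  // placeholder file; force binary
[x, y] = follow(img);           // SIP contour follower: ordered edge-pixel coordinates
x = x(:); y = y(:);             // make sure these are column vectors
n = length(x);
xs = [x(2:n); x(1)];            // coordinates shifted by one, wrapping around
ys = [y(2:n); y(1)];
A = 0.5 * abs(sum(x.*ys - xs.*y));   // area in square pixels
disp(A);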

Figure 4. Results of the area computation using Green’s Theorem (Scilab)
and manual computation (MS Paint), and the corresponding deviation values

Because the place holds so many good memories for me, I decided to use the Quezon Memorial Circle (QMC) for my attempt at area estimation with Green’s Theorem.  I isolated the said land mass using Google Maps, where I had the option of choosing the satellite or the map version of the image.  If it’s not too clear from the picture below, I used the map version.

Figure 5. Screen grab of the Quezon Memorial Circle from  Google Maps

After making a duplicate of the screen grab for reference purposes, I went on to crop it so that only the QMC would be shown.  Using techniques gained from previous activities (see here), I was able to isolate the shape of the QMC itself and thus find its area using a modified version of the code I presented earlier.

Figure 6. Shape of the Quezon Memorial Circle

The Green’s Theorem algorithm came up with an area value of 50560.  I rejoiced for two seconds thinking I was done, then realized that this value was expressed in square pixels, not in a physically measurable unit.  Using the scale in Figure 6, I determined that a distance of 200 m in the real world equaled 87 pixels in the Google Maps world – at least by my estimation.  Applying unit conversion, the QMC then had a physical-world area of 267195.1 square meters.
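
For completeness, the conversion itself is just a squared scale factor.  A quick sketch of the arithmetic, using the 87 px : 200 m scale mentioned above:

A_px  = 50560;           // area from the Green's Theorem code, in square pixels
scale = 200 / 87;        // meters per pixel, read off the Google Maps scale bar
A_m2  = A_px * scale^2;  // roughly 267195 square meters
disp(A_m2);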

I realized too late that I had forgotten to search for a theoretical value of the area, and when I did, no source could give me an answer.  I then had to adapt the method I used for the geometric shapes.  Once again using unit conversion, I came up with an area value of 269320.5 square meters.

I give myself an 8/10.  Seeing my results and my presentation of the figures, I would have given myself a ten were it not for my rather inaccurate “theoretical value” of the QMC area and my overall disappointment in my performance.  On a lighter note, I’m very pleased to know that I retain information from previous activities and am able to apply it to new ones.  Rest assured, the fresh concepts I came face to face with in this activity are going into my memory bank.

REFERENCES:
Google Maps
Activity 4: Area Estimation for Images with Defined Edges, Applied Physics 186 Manual

Tuesday, June 28, 2011

three

Get your reading glasses on, this is gonna be a long one.

Before I go on and on about what the activity actually made me do, I think it’s only right that I give you a (relatively) brief introduction to the world of digital images – something that I myself went through these past few days.

First off, there are four basic image types – binary, grayscale, truecolor and indexed images. 

A binary image is – as most computer geeks can tell you – made up of 0’s and 1’s.  In other words, only two colors are present – black and white.   Grayscale images, on the other hand, have 256 colors present, all of them various “shades” of gray existing between black and white.  Nevertheless, the two types are not the same, despite what modern photography tells us.  A good example would be the two images that follow which, at first glance, may seem to belong under the same basic image type.  Using the imfinfo command in Scilab (via the SIP Toolbox), we see that the first image contains only 2 colors and the second 256, characteristic of binary and grayscale images, respectively.

Figure 1. (top) A binary image. (bottom) Image Properties Obtained through imfinfo in Scilab (SIP Toolbox)

Figure 2. (top) A grayscale image. (bottom) Image Properties Obtained through imfinfo in Scilab (SIP Toolbox)
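
For anyone who wants to reproduce the check, the call itself is short.  The sketch below uses placeholder file names, not the ones I actually used:

// Sketch: inspecting image properties with SIP's imfinfo
info_bin  = imfinfo("binary_sample.bmp");      // shows 2 colors for the binary image
info_gray = imfinfo("grayscale_sample.bmp");   // shows 256 colors for the grayscale image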

Like the preceding basic image types, truecolor and indexed images seem like they’re one and the same.  By definition, however, truecolor images have three channels or bands, each channel representing the intensity of red, green or blue within each pixel.  Indexed images, on the other hand, basically contain two data sets – the image itself (an array of indices) and the color map.  Differentiating the two is easily done using the same command we employed earlier.

Figure 3. (top) A truecolor image. (bottom) Image Properties Obtained through imfinfo in Scilab (SIP Toolbox)

Figure 4. (top) An indexed image. (bottom) Image Properties Obtained through imfinfo in Scilab (SIP Toolbox)

The world can’t function with just these four basic image types.  Advances in technology brought on a subsequent leap forward in the types of images available to us.  A few samples of these advanced image types are high dynamic range (HDR), multi- or hyperspectral, 3D and temporal images.  HDR images are used to record digital X-rays or bright events, while multispectral images are used for satellite imaging due to the greater number of channels each pixel can hold compared to truecolor images.  3D images are used to store spatial 3D information, and temporal images (otherwise called videos) are, well, images captured and presented together sequentially.

Figure 5. An example of a high dynamic range (HDR) image

Figure 6. An example of a multi or hyperspectral image

Figure 7. An example of a 3D image

Figure 8. An example of temporal images or a video

I am aware the quality of the above video isn’t all that great. I included it here because, well, who doesn’t love the mentos and coke experiment?  But if you, like me, had to squint through most of the video, here’s another science-y video dealing with … uhh.. pendulums. *grin*  It’s longer, yes, but the quality is so much better and the content even more so.
 
Figure 9. A much better example of temporal images or a video best viewed in HD

The truecolor image from Figure 3 was converted into grayscale and binary using the code below, and from the image sizes it can be seen that the red, green and blue color channels characteristic of a truecolor image are no longer present (Figure 10).  Also shown below are the grayscale and binary images that resulted from the conversion.

Figure 10. (top) Scilab code used to convert a truecolor image to grayscale and binary and
(bottom) the image sizes of the respective images.
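
For those who would rather not squint at the screenshot, here is a sketch of the same kind of conversion.  The file name and the 0.5 threshold are stand-ins, not necessarily the values used in Figure 10:

// Sketch: truecolor to grayscale and binary with the SIP Toolbox
rgb  = imread("truecolor_sample.jpg");  // truecolor: size() reports rows x cols x 3
gray = im2gray(rgb);                    // grayscale: size() reports rows x cols only
bw   = im2bw(rgb, 0.5);                 // binary: rows x cols, values 0 or 1
disp(size(rgb)); disp(size(gray)); disp(size(bw));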

Figure 11. (top) Resulting grayscale and (bottom) binary image.

Remember the first activity and the lifesaving scanned image that went with it?  Well, the cropped version of that same image makes a cameo in this activity.  The image – which was originally a truecolor image – was first read as a grayscale image, whose histogram was then taken.  With the help of Mr. Timothy Joseph Abregana, I realized that the proper threshold value for my image was 0.30 on account of its grainy quality.  All things I deemed important to this part of the activity are shown below.

Figure 12. Scilab code used to perform the necessary operations on the scanned image.
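
Again, in case the screenshot is hard to read, here is a rough sketch of the same sequence of operations.  The file name is a placeholder; the 0.3 threshold is the one discussed above:

// Sketch: grayscale read, histogram and thresholding (SIP Toolbox loaded)
img = gray_imread("scanned_graph.jpg");  // grayscale values normalized to [0, 1]
histplot(255, img(:));                   // intensity histogram with 255 bins
bw = im2bw(img, 0.3);                    // threshold picked from the histogram
imshow(bw, 2);                           // display the resulting binary image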

Figure 13. (top) Grayscale image of the graph and (bottom) its histogram with 255 bins.

Figure 14. Obtained Binary image using a threshold value of 0.3

But then image types aren’t the only important things to note when it comes to images.  File formats are important as well.  Different file formats arose from the realization that better cameras meant better pictures with greater resolution and size.  Images can be compressed in either lossy or lossless formats.  Both are appropriately named: lossy compression results in smaller files with certain data missing, while lossless compression preserves every single bit of pixel information – a trait that makes it beneficial for medical imaging, research or professional purposes.  A variety of image file formats are now available for use, each one with its own history and purpose, elaborated on below.

More Common Image File Formats

JPEG (Joint Photographic Experts Group)
This lossy file format saves certain color information – especially detail the human eye cannot readily discern – at a lower resolution, with adjustable compression levels.

TIF (Tagged Image File)
This sometimes-lossy-but-usually-lossless image format stores details of the compression as part of the image file itself.  Because no image detail is lost, files of this format are usually large.


PNG (Portable Network Graphics)
This is exclusively a lossless storage format that looks for patterns it can use to compress an image and decompress it again without loss.

BMP (Bitmap)
This is a lossless file format that was invented by Microsoft.


GIF (Graphics Interchange Format)
This is a file format that is selectively lossless – specifically, for images with at most 256 colors.  If an image has more colors than that, algorithms are used to approximate the others.  For images with a lot of colors, up to 99.998% of the colors could be lost.

Other File Formats


RAW
This is an optional lossless file format used by cameras.  Because of this, different camera manufacturers supply their own software geared towards reading their own RAW file versions.

PSD, PSP, etc.
These are file formats used by image enhancement or modification programs (e.g. Photoshop) and are normally large due to the layers and other elements that these images contain.  Like RAW files, these files have to be opened using particular programs.

At the end of the day, I have to give myself an 11.  My images are of very good quality and I finished the work on time with little help from others.  The 1 extra point that I gave myself was simply because I did look for other file formats and I went to the effort of uploading videos onto my YouTube account to both provide an example of a video file and to entertain.

REFERENCES:
Applied Physics 186 Image Types and Formats 2010 Manual
IMAGES*
Binary | Grayscale | Truecolor | Indexed | HDR | Hyperspectral | 3D | Temporal – taken from personal archives
*Images presented above have been resized for economy of presentation.  Images used in analysis are of the size in their source links.

Tuesday, June 21, 2011

two

Last week, the manual for our Scilab Basics activity was given to us together with our Digital Scanning activity.  As things were - first week of school and all - I had much time to spare, so I was able to work on the activity before the time to formally start it came.  Appropriately titled, the activity introduced us to Scilab, and if you are a person who's previously used MATLAB, Scilab isn't really that hard to navigate.  Installing it, of course, is another matter.

From my Applied Physics 185 class last semester, I had Scilab 5.3.0 installed on my computer.  I downloaded the appropriate SIVP Toolbox but it didn't work.  Sigh.  I decided then to use Scilab 4.1.2 along with the SIP Toolbox.  At first, SIP and Scilab just didn't mesh with each other.  Sigh.  But, with the help of Ma'am Jing's blog post (see it here), I was able to use SIP (finally!).  I actually have to give the most credit to a comment on the same post.  Upon following the instructions there, I could simply click on siptoolbox under Scilab's toolboxes tab to make use of it - no more unnecessary typing on the console!  Of course, that does not excuse me from the few moments of age catching up with me - moments that would result in ultimate panic because my code wasn't working.  What did I do then?  Well, of course, I just forgot to click on siptoolbox under the toolboxes tab!

From there on (excluding the panic attacks), I breezed through the different matrix operations - addition, subtraction, matrix and element per element multiplication - just so I could get a feel of how Scilab really is.  Like I said previously, not that much different from MATLAB.

Included in the manual was a code that, when implemented, would generate a centered circle that could be used to simulate a circular aperture or a pinhole.  Because this isn't a W***P**** (must ... stay ... loyal) blog, I can't actually upload the code itself.  Hence, I provide you a screenshot of my code and the image it generated.

Figure 1. (top) Image of generated Centered Circle Aperture using (bottom) Scilab 4.1.2 Code
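
For readers who cannot make out the screenshot, here is a minimal sketch along the lines of the manual's circular-aperture code; the grid size and radius are the kind of values one would tweak:

// Sketch: centered circular aperture (simulated pinhole)
nx = 100; ny = 100;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
r = sqrt(X.^2 + Y.^2);     // distance of each pixel from the center
A = zeros(nx, ny);
A(find(r < 0.7)) = 1;      // pixels inside the radius become white
imshow(A, 2);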

Unlike the previous results, the ones that follow were generated using codes written by yours truly.  The first was a square aperture.  In truth, I somewhat recycled the code for the circular aperture here, with a few lines modified of course to be able to generate a square instead of a circle.

 Figure 2. (top) Image of generated Square Aperture using (bottom) a Scilab 4.1.2 Code
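
A sketch of the square-aperture version, with the half-width as my own placeholder value:

// Sketch: centered square aperture (a small tweak of the circle code)
nx = 100; ny = 100;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
A = zeros(nx, ny);
A(find(abs(X) < 0.5 & abs(Y) < 0.5)) = 1;  // inside the half-width along both axes
imshow(A, 2);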

Generating the next two results proved to be quite interesting to me as they look very similar in appearance.  Similar, but not the same.  The sinusoid along the x-axis obviously has "gray" areas, as I might call them, due to, well, its being a sinusoid.  Gratings, as I have come to know, do not possess "gray" areas because they have sharp edges.  Needless to say, the "gray" areas depict the decreasing or increasing values of the sinusoid.   Another way of putting it would be that the sinusoid looks much like how a tin roof would if viewed from above, and the grating like a prisoner's uniform.  But instead of boggling your mind with the strange mental pictures I associate with the images I was able to create, let me just show them to you with the codes that go with them.

 Figure 3. (top) Image of generated Sinusoid along the x-direction using (bottom) a Scilab 4.1.2 Code

 Figure 4. (top) Image of generated Grating along the x-direction using (bottom) a Scilab 4.1.2 Code
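
A sketch of how both patterns might be generated - the grating is just the sinusoid pushed to hard black-and-white values (the frequency is a placeholder of mine):

// Sketch: sinusoid along x, and a sharp-edged grating from the same sinusoid
nx = 100; ny = 100;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
f = 4;                      // number of cycles across the image
S = sin(2*%pi*f*X);         // values between -1 and 1: these give the "gray" areas
imshow((S + 1)/2);          // rescaled to [0, 1] for display
G = zeros(nx, ny);
G(find(S > 0)) = 1;         // thresholding removes the "gray": sharp edges
imshow(G, 2);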

I have to admit, I had to run to Google for the next task.  I don't think I'd ever heard of the word "annulus" until now.  Of course, I might have forgotten it since I never was one to hold onto the technical terms of things.  Anyway, generating the annulus was fairly simple, involving only the combination of two circular apertures to form what looks like a donut or a ring - depending on whether you're into food or jewelry.

  Figure 5. (top) Image of generated Annulus using (bottom) a Scilab 4.1.2 Code
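
A sketch of the annulus as the difference of two concentric circles (both radii are placeholders of mine):

// Sketch: annulus - keep only the ring between an inner and an outer radius
nx = 100; ny = 100;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
r = sqrt(X.^2 + Y.^2);
A = zeros(nx, ny);
A(find(r < 0.7 & r > 0.3)) = 1;
imshow(A, 2);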

The final task was what gave me hell.  Up until this morning, I was confused about what a "Circular Aperture with Graded (or Gaussian) Transparency" was.  Not until we had our Physics 166 class right before our Applied Physics 186 class did I realize that it meant I would have to "multiply" a Gaussian filter and a circular aperture (credits to Dr. Wilson O. Garcia for that).  I came up with the results below:

Figure 6. (top) Image of a generated Circular Aperture with a Graded or Gaussian Transparency 
using (bottom) a Scilab 4.1.2 Code
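
A sketch of the "multiplication" idea, with the radius and Gaussian width as my own placeholder values:

// Sketch: circular aperture with graded (Gaussian) transparency,
// i.e. an element-per-element product of a circle and a Gaussian
nx = 100; ny = 100;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
r = sqrt(X.^2 + Y.^2);
C = zeros(nx, ny);
C(find(r < 0.7)) = 1;          // hard-edged circular aperture
sigma = 0.3;                   // width of the Gaussian
G = exp(-r.^2 / (2*sigma^2));  // Gaussian transparency profile
A = C .* G;                    // graded aperture, values in [0, 1]
imshow(A);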

All the snippets of code that I've placed here show the numerical values used to generate the figures above.  You could probably tweak the numbers a bit to see the different results that can be yielded - of course, you'd have to type everything out, so good luck to you on that. *evil smirk*

This is going to sound bad but I wasn't really able to do more than the assigned tasks because it slipped my mind that I had the option of doing so.  So many senior moments for this activity!  I feel like I do have to reward myself for completing the task before the class even began and for that, I give myself a perfect 10.

NOTE TO SELF: Don't just do what needs to be done.  Do what can be done to make your work a step beyond amazing.

Thursday, June 16, 2011

one


Before I begin with the report itself, I would like to thank Tracy Tuballa for being everybody's personal assistant. It was she who photocopied AND scanned the figures for us.  On that note, I would NOT like to thank the College of Science Library for deciding to renovate now. Boo.  
Like most "firsts" of any semester or year, I enjoyed it. Much of that had to do with the figure I'd ended up working with (See graph below).  Unlike my classmates' figures, I didn't have to collect a million data points - I didn't even have to collect ten.  My graph, luckily, already had data point markers, making things so much easier.

 
Original Graph of Time Constant vs. Temperature 
Source: "Relaxation  times and the initial conditions 
of the one-dimensional Fokker-Planck Equation" 
by Josefino Z. Villanueva, September 1973

The objective was to use ratio and proportion to find the numerical values of a digitally scanned hand-drawn plot (Activity 1 - Digital Scanning Manual, Applied Physics 186).  It wasn't a straightforward activity, but with repeated reading of the manual (and, yes, eavesdropping on my classmates' discussions), I was able to sink my teeth into the task.  On that note, I acknowledge Mr. Mar Philip Elaurza, Mr. Timothy Joseph Abregana and Mr. James Christopher Pang for answering the steady stream of questions I had for them before we all attempted the task.  Also, I just have to thank Mr. Kirby Cheng, who walked from the second floor to the oh-so-far fourth floor just so he could download - and share - the activity manual.  It was because of his effort that we were able to start a good 45 minutes before the class started.  Of course, if I have to thank people, I have to thank Ma'am Jing for instructing me how to convert the pixel data to graph data.  Cheers to you all! 

Now, on to what I did!

The first step in all of this was to determine the pixel coordinates of my graph's origin, data point markers and axes points using MS Paint.  Because my computer runs on Windows 7, my Paint program allowed me to overlay grid lines on the image, making it easier to obtain consistent pixel coordinates for the axis intervals.  Also, my graph didn't have proper tick marks along the axes, so I had to put a great deal of estimation into that.

When I plotted the raw pixel coordinates I'd gotten, the result looked like a vertically flipped version of the original graph.  I realized that MS Paint takes the (0, 0) pixel coordinate to be at the top-left corner of an image.  The y-coordinates I used from then on were a converted set of y-pixel coordinates (image height in pixels minus the y-pixel coordinate).

To recreate the graph, I remembered the objective of using ratio and proportion and initially thought it was best to use the average of the pixel distances between the x- and y-axis points.  Through simple ratio and proportion (graph distance / pixel distance), I was able to reproduce a shifted version of the graph.  Upon showing this to Ma'am Jing, I was told this was wrong.  She delivered a one-liner, a paraphrased version of which is shown below, that clung to me throughout the day.
"You're physicists, class.  Using the average simply won't do"
She told me that plotting the axis points' pixel coordinates against their graph values would give me an equation from which, given one value, I could get the corresponding distance.  Yey!


 Plot of the pixel versus graph distance of the major intervals along the (top) x- and 
(bottom) y-axis along with the best fit line used to approximate the final graph
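
I did the actual fitting in a spreadsheet, but the whole calibration boils down to one linear map per axis plus the vertical flip.  Here is a rough Scilab sketch of the idea; every number in it is a made-up placeholder rather than my actual data:

// Sketch: pixel coordinates to graph values via per-axis linear fits
img_height = 600;               // image height in pixels (placeholder)
xp = [120 250 380 510];         // x-pixel coordinates of the data points (placeholders)
yp = [450 300 220 180];         // y-pixel coordinates (placeholders)
yp = img_height - yp;           // flip: MS Paint puts (0, 0) at the top-left corner
mx = 0.02; bx = -1.5;           // placeholder best-fit line for the x-axis (value = m*pixel + b)
my = 0.05; by = -10;            // placeholder best-fit line for the y-axis
x_graph = mx*xp + bx;
y_graph = my*yp + by;
plot2d(x_graph, y_graph, -1);   // reconstructed data points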

Armed with the best-fit equations from the above plots, all that was left to do was input the x- and y-pixel coordinates I'd earlier obtained and voila!  Shown below are the final graphs alongside the original, complete with best-fit lines (power laws) and R^2 values.  Before you get confused, I show you two graphs because OpenOffice.org Calc - the program I'd been using up to this point - does not do the world's most accurate (or is the term precise?) best-line fitting.


Reconstructed plots of the time constant versus Kelvin using (top) Excel and (bottom) OpenOffice.org Calc

Truthfully enough, Ma'am Jing (for it was she who suggested I attempt a best-line fit in Excel) was right.  The equation given by Excel shows more precision than that given by OpenOffice.org Calc.

I have to admit that my results aren't really as perfect as I'd hoped them to be.  In the throes of my celebration due to the presence of data point markers on my graph, I failed to realize that I should have taken more points in between the markers.  I could have then produced a more accurate result.

I guess I should explain why I only reconstructed the experimental plot.  Suffice to say that I was not aware the graph we'd need to use should only have a single plot on it.  I then had to make a decision between the theoretical or the experimental plot.  I don't think there's a need to explicitly say the choice I made (wink).

With all that said and done, I give myself a score of 9/10.  I understood the lesson - both the principle and the technique behind it.  I was also able to finish the task well before the class had to end (something I attribute to the original graph itself).  For technical correctness, that's a 5.  Although I was able to finish, the quality of the plots that I produced wasn't exactly top-notch.  The image of the graph that's been overlaid onto both the Excel and OpenOffice.org Calc results seems too grainy - hence the 4 I graded myself for the Quality of the Presentation.

It might not have been that complicated of an activity, but it was one where I learned a few tips and tricks that might just come in handy in future endeavors - both academic and in research.  I guess I relearned something I was first taught in the second semester of my first year - Simplicity is Key.