Wednesday, September 14, 2016

Activity 4 - Length and Area Approximation

Life is a paradox. It is both complex and simple to the core. Its complexity comes from the various networks and systems we are part of; from the simplest family networks to the very complex nervous system that allows me to align (or misalign) my thinking process to write this post, we are always reminded that we are part of something so well thought out and beautifully complex. But amazingly, in the midst of all the complexities of life, our ideas, thoughts, and even our nature tend to intersect and maybe collapse at a certain singularity.

The harmony of simplicity and complexity allows us to realize that life at times might be a roller coaster ride, but upon changing the reference point (or changing the axes), we can transform something so chaotic into something so simple. One example is getting the area of a highly irregular figure such as the map of Quezon City. By dividing it into rectangles, squares, circles, triangles, and so on, or by harnessing the computational power of computers together with a suitable approximation method, we can lessen the complexity of solving for the area. Here, we used Scilab and ImageJ to approximate the areas and lengths of known figures and locations.

Initially, three basic figures, namely a circle, a square, and a triangle, were made using Microsoft Paint, as shown in Fig. 1 below.

Figure 1. Three synthetic monochromatic bitmap images created using Microsoft Paint (from left to right): a circle, a square, and a triangle.
The three were saved as monochromatic bitmap (.bmp) images. The areas of the images were obtained through three different methods. The first is the (1) analytic equation (theoretical area) method: the lengths are obtained through pixel counting (see Activity 2 for more details), and the area then follows from the analytic equation of the known figure. The second method is (2) pixel counting, where the area is obtained by counting the number of white pixels in the figure. Lastly, we introduce the approximation method of (3) Green's theorem. This method uses the edges and the center of the figure to obtain an approximate area of the image through Green's theorem, given by the equation:

A = \frac{1}{2} \oint_C \left( x\,dy - y\,dx \right)

In this activity, we use the discrete form (discrete pixels) of Green's theorem, given by:

A = \frac{1}{2} \sum_{i=1}^{N_b} \left( x_i y_{i+1} - y_i x_{i+1} \right)

where A is the approximated area, N_b is the number of pixels along the edge of the image, and (x_i, y_i) are the coordinates of the edge pixels taken in order around the boundary.
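As a rough sketch of how this discrete sum might be implemented in Scilab (this is not the code in Fig. 2; the function name, the centroid-angle sorting, and all variable names are my own assumptions):

// Discrete Green's theorem area from a binary edge image (a minimal sketch)
function A = green_area(edges)
    [r, c] = find(edges);                 // row (y) and column (x) indices of the edge pixels
    xc = mean(c); yc = mean(r);           // centroid of the edge pixels
    theta = atan(r - yc, c - xc);         // angle of each edge pixel about the centroid
    [theta, k] = gsort(theta, 'g', 'i');  // sort the pixels counterclockwise
    x = c(k); y = r(k);
    n = length(x);
    idx = [2:n, 1];                       // cyclic shift so the boundary closes on itself
    A = 0.5*abs(sum(x.*y(idx) - x(idx).*y));   // A = (1/2)*sum(x_i*y_(i+1) - x_(i+1)*y_i)
endfunction

Note that sorting by angle about the centroid only traces the boundary correctly for convex or star-shaped figures like the three test shapes; a highly irregular boundary such as that of Quezon City would need a proper contour-following step instead.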

The Scilab code, as seen in Fig. 2 below, was used to obtain the corresponding areas with varying edge detection methods, namely the Sobel, Prewitt, Canny, Log, and FFT Derivation methods.
Figure 2. The Scilab code for obtaining the approximate area of a circle, square, and triangle through pixel counting method and Green's theorem with 5 different edge detection methods (Sobel, Prewitt, Canny, Log, and FFT Derivation).
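As a rough outline of the workflow in Fig. 2 (the filename is hypothetical, the loop assumes the bitmap loads as a matrix of 0s and 1s, and green_area() refers to the sketch above; this is not the exact code I used):

// Sketch of the area-comparison loop (assumes the SIVP toolbox is loaded)
img = imread('circle.bmp');               // hypothetical filename
area_pixel = length(find(img));           // (2) pixel counting: number of white (nonzero) pixels
methods = ['sobel' 'prewitt' 'canny' 'log' 'fftderiv'];
for m = methods
    e = edge(img, m);                     // SIVP edge detection
    mprintf('%s: area = %f pixels\n', m, green_area(e));   // (3) Green's theorem area
end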
For the area approximation methods, the pixel counting method was found to be highly reliable for both the circle and the square, with percent errors relative to the analytical area of only 0.0391% and 0.0000%, respectively. The same method had a higher percent error of 0.3577% for the triangular image, which can be attributed to the graphical errors observed in Fig. 3 upon zooming in on the triangle.

Figure 3. Zoomed-in image of the triangle showing the non-cohesive behavior of the hypotenuse pixels


It was observed that the image created using Paint had non-cohesive diagonal pixels, which decreased the accuracy of the area approximation methods. Upon applying Green's theorem with edge detection for area approximation, varying accuracies were observed for the different edge detection methods. This can be seen qualitatively in the obtained edges for each method, shown in Figs. 4, 5, and 6.

Figure 4. Synthetic images created using the edge detection functions of Scilab for a simple circular image. The (a) original image is shown, followed by the edges obtained with the (b) Canny, (c) FFTDeriv, (d) Log, (e) Prewitt, and (f) Sobel edge detection methods.
Figure 5. Synthetic images created using the edge detection functions of Scilab for a simple square image. The (a) original image is shown, followed by the edges obtained with the (b) Canny, (c) FFTDeriv, (d) Log, (e) Prewitt, and (f) Sobel edge detection methods.
Figure 6. Synthetic images created using the edge detection functions of Scilab for a simple triangular image. The (a) original image is shown, followed by the edges obtained with the (b) Canny, (c) FFTDeriv, (d) Log, (e) Prewitt, and (f) Sobel edge detection methods.
Out of all the detection methods, the most accurate was the Log method (percent deviations ranging from approximately 0.1% to 0.2%), which effectively sandwiches the edge between two lines: one along the inner boundary and one along the outer boundary. The reason for its accuracy can be noticed upon zooming in on the images. Due to the low image resolution, pixels tend to be non-cohesive and do not follow the expected theoretical figure, which in turn results in an image with a higher degree of error. Sandwiching the edge with inner and outer lines tends to decrease the errors brought about by the pixels' non-cohesiveness in low-resolution images by means of error cancellation: the inner line gives a smaller area and the outer line gives a larger area, resulting in a final averaged area with lower error. The Prewitt and FFTDeriv methods were also found to be highly accurate, with equal percent deviations ranging from 0.15% to 0.5%. The Canny method was accurate as well but with higher percent errors ranging from 0.2% to 0.85%, while the Sobel method was found to have accuracy issues, with errors ranging from 3% to 94%. The tabulated area values with their corresponding percent errors can be seen in Table 1 below.

Table 1. Approximated Area values with their corresponding percent deviations from the analytic value for different area approximation and edge detection methods.

For part 2 of the activity, the approximation methods were extended to obtaining the area of a local building in UP Diliman, the CSRC, and the area of one of the largest cities in the country, Quezon City. Google Maps and the Windows Snipping Tool were used, together with the Scilab code and Paint, for distance scaling and area approximation. A snip of the building or city was first obtained from Google Maps, and the distance and area measuring tool of the said application was used to obtain the approximate area of the building (CSRC) or the city (Quezon City). Paint was then used for distance scaling, as done in Activity 2 of this blog, and for monochrome image conversion. The process can be seen in Figs. 7, 8, and 9, wherein the approximate area of the CSRC building was obtained with the methods introduced.

Figure 7. A snip of the CSRC building obtained from Google Maps.
Figure 8. The obtained area and perimeter of the CSRC building using Google Maps' distance calculation function.
Initially, the theoretical value was obtained through the area and length approximation of Google Maps, wherein the edges of the building were traced and scaled by the said application. The theoretical value was observed to be 1137.55 sq. meters, as seen in Fig. 8.
Figure 9. Synthetic images created using the three most efficient edge detection methods of Scilab for the CSRC building, namely the FFT Derivation method (top), the Log method (middle), and the Prewitt method (bottom). Threshold values (right panes) were introduced to get more accurate representations compared to those obtained without thresholding (left panes).
Multiple area values were then obtained through the three most efficient edge detection methods observed earlier, namely the FFT Derivation, Log, and Prewitt methods, coupled with the discrete form of Green's theorem given above. The pixel counting method was also used for its high area approximation efficiency, represented by its small deviation from the theoretical value. The area calculated by the pixel counting method was found to be 1143.4 sq. meters, a percent deviation of 0.5124% with respect to the theoretical value. The edge detection methods were found to be more accurate, having lower deviations of 0.4598%, 0.4734%, and 0.4598% for the FFTDeriv, Log, and Prewitt methods, respectively. Threshold values of 0.75, 0.552, and 0.91 were then applied to the same methods in the same order, which produced more accurate area approximations of 1142.7, 1142.3, and 1142.7 sq. meters, with deviations of 0.4565%, 0.4182%, and 0.4554%, respectively. The Log method was again observed to be the most efficient area approximation method upon application of Green's theorem. The remaining deviations can be accounted for by errors in tracing the theoretical area of the building and by the pixel scaling method used.
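For reference, converting a pixel-counted area into square meters only requires the scale factor obtained from a known Google Maps distance; a minimal sketch, where the scale value is a placeholder and not the one I actually measured:

// Converting a pixel area to a physical area (placeholder scale, not the measured one)
scale = 0.25;                                   // meters per pixel, from a known Google Maps distance
area_px = length(find(img));                    // pixel-counted area of the binary building image
area_m2 = area_px * scale^2;                    // each pixel covers scale x scale square meters
pct_dev = abs(area_m2 - 1137.55)/1137.55*100;   // percent deviation from the Google Maps area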

Similar processes were also done in determining the approximate area of Quezon City, which was compared with the known theoretical value of 165.3 sq. km., as seen in Figs. 10, 11, and 12. The high irregularity of the city outline resulted in a much more complex process of pixel cleaning and scaling.
Figure 10. A snip of the whole Quezon City through Google Maps
The image was cleaned and a monochromatic bitmap image was obtained through Microsoft Paint, as seen in Fig. 11. The approximate areas were then calculated by incorporating Green's theorem with the selected edge detection methods, as seen in Fig. 12. The pixel counting method was found to be reasonably accurate, with only about a 6.57% deviation (area = 176.16 sq. km.) from the expected value. The application of Green's theorem with Scilab edge detection was also found to be comparably accurate, with about 6.72% (area = 176.42 sq. km.), 6.84% (area = 176.60 sq. km.), and 6.73% (area = 176.42 sq. km.) deviations for the FFTDeriv, Log, and Prewitt methods, respectively. Upon varying the threshold magnitudes, the accuracy did not improve much except for the FFT Derivation method. But upon examining the resulting synthetic edge image, it was found that this improved accuracy was just an artifact of the deviation from the expected image. This can be observed in Table 2 and Fig. 12. We suggest that the image obtained from Google Maps might have accuracy issues with respect to the known area of Quezon City; similarly, the scaling factor given by Google Maps might not be as accurate when applied to larger areas.
Figure 11. Pixel cleaning and scaling for the monochromatic bitmap image of the whole Quezon City area

Figure 12. Synthetic images formed through the three most efficient edge detection methods, namely the FFT Derivation (top), Log (middle), and Prewitt (bottom) methods. Threshold values (right panes) were applied to obtain more accurate area approximations compared to the non-thresholded approximations (left panes).
Table 2. The obtained approximate area values for the areas of the CSRC building and Quezon City with corresponding percent deviations.


For the final part of the activity, ImageJ was used to determine the approximate area of a known flat object, in this case a school ID. Here, a scanned personal UP Diliman ID card, seen in Fig. 13, was analyzed in ImageJ for area approximation. The scanned image was first edited and aligned through the Windows 10 Photo Viewer and Editor for higher contrast, which is essential for edge detection (seen in Fig. 14). The image was then saved at varying sizes, and the corresponding areas were obtained and checked against the theoretical value of 8.84 sq. cm (note that the area is not of the whole ID card but of the red box, for ease of use and efficiency).

Figure 13. Scanned personal UP Diliman Identification Card

Figure 14. Edited UP Diliman ID card for alignment, scaling, and better contrast for edge detection. The image was also varied in size to obtain the dependence of the approximated area on the image size.
The results show that ImageJ can be a reliable tool for area and length approximation, with errors varying from about 0.1% to 2%. It was also observed that the deviations are proportional to the image size, as seen in Table 3. I believe, however, that this trend is just an artifact of the resolution and of the accuracy of the rectangular selection tool used in ImageJ.

Table 3. UP Diliman ID card approximate areas obtained through ImageJ with varying image size percentages and corresponding percent deviations. 

I would like to thank the Musni Family for my long weekend Subic trip, which helped me clear my mind and allowed me to be more productive in the succeeding days. I would also like to thank my adviser, Dr. Rene Batac, for understanding and being patient with my sickness, and for the moral support and academic guidance he gives.

Lastly, I would like to rate myself a 12/10 for the very rigorous work done not only in one segment but in all segments of this activity. The complexity of each step allowed me to better appreciate hard work and time management in dealing with academic, family, and extra-curricular work. God bless and see you in my next blog post!

Wednesday, August 31, 2016

Activity 3: Scilab Basics

Life is really complex. You have to balance various factors and take into account multiple priorities and responsibilities while trying to optimize your time and effort in pursuit of a certain goal. May it be a short-term or a long-term goal, you'll end up realizing that life is more complex than you think it is.

Fractals are objects or phenomena that tend to look similar upon closer inspection, as you zoom in and out of the material. Fractality, in its essence, may also apply to life. You can observe that what you struggle with today may also be your main struggle for the year, and so on. Looking at it in a more positive sense, your blessings and realizations of the day can also be applied throughout the year, and even for the rest of your life. Even though this fractality may or may not add to the complexity of life, it is essential to know life's basics first before we go deeper.

Some life basics are self-learnt, while some need proper supervision. One does not need to learn how to eat and ingest food, but one needs proper guidance in excreting stool. Haha. It might be a weird example but yeah, you get my point. But there are just those basics which can never be fully understood. These basics are the life-blood of living and the variable that fits every equation in our universe. These are the ones that we allow to grow, nourish, and embrace throughout the years.

Take love for example. You do not need "proper guidance" for you to love, because essentially, loving should be innate in us. But it is also an established fact that you need a certain amount of guidance in continuing that love, to lessen the hurts and struggles that come with it. It's one basic that you can never fully understand, and that's okay.

Like love, we have lab -- Scilab to be exact. I guess this is one thing that I'll know but I also won't fully understand; and yes, like love, that's okay as long as you're maturing and growing in love (in Scilab).

THE ACTIVITY

What is Scilab?

According to www.scilab.org, Scilab is free and open source software for numerical computation providing a powerful computing environment for engineering and scientific applications. Well, if that sounds like a mouthful, Scilab is basically a free, open-source tool that can be used for video and image processing and analysis.

Also, from the handout given, it is said that Scilab is a free, high-level, scientific programming language which is an acceptable substitute for Matlab. It has many features similar to Matlab, foremost of which is its treatment of variables as arrays and matrices. Matrix math in Scilab or Matlab is very convenient: matrix algebra can be performed in one line of code, as compared to "for" loops in C or Fortran.

In this activity, we were tasked to explore and get comfortable with this software by obtaining various synthetic images and creating our own images through the concept of matrix multiplication and addition.

Initially, I downloaded the latest free version of Scilab, Scilab 5.5.2, and then added the SIVP (Scilab Image and Video Processing) toolbox for ease of use. After that, I ran the sample plot given in the module and observed that the software works fine and is compatible with my laptop. I then proceeded with the practice exercise before the activity (making a synthetic image of a centered circle), as shown in Fig. 1.

Figure 1. Initial steps in creating a centered circle through Scilab. This resulted in a low quality image, though the circle was executed properly.
I then observed that the output image was somehow of low quality. I searched for ways to enhance the image quality in Scilab and it led me to the imshow() command from the SIVP module. I also increased the size and the aperture ratio of the image. This resulted in a much higher quality image, as seen in Fig. 2. I was quite happy with the output and I even customized and personalized my Scilab color scheme. I am now ready to do the activity!
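A minimal sketch of what this centered-circle exercise might look like (the grid size and radius below are placeholder values, not necessarily the ones I used):

// Centered circular aperture (placeholder grid size and radius)
nx = 500; ny = 500;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = meshgrid(x, y);
R = sqrt(X.^2 + Y.^2);        // distance of each pixel from the center
circ = zeros(X);
circ(find(R < 0.7)) = 1;      // white disk of radius 0.7
imshow(circ);                 // SIVP display, sharper than the default plot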

Figure 2. A higher quality version of the centered circle image produced by Scilab
The first task was to make a centered square aperture, which is shown in Fig. 3 together with the code used. It was fairly easy because of the symmetry that the square has. I just introduced the abs() or absolute value function to optimize the code and make use of the square's xy symmetry.

Figure 3. Centered square synthetic image
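A sketch of the idea, reusing the grid from the circle sketch above (the half-width of 0.5 is a placeholder):

// Centered square aperture using the abs() symmetry trick
sq = zeros(X);
sq(find(abs(X) < 0.5 & abs(Y) < 0.5)) = 1;   // |x| and |y| both below the half-width
imshow(sq);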
The second task was to make a sinusoid along the x direction. I initially made a 2D sinusoidal image but realized that it was just a plot, so I edited my code so that the sinusoid would be of a matrix type. I chose a sinusoid of frequency 10 and applied the sine along the appropriate matrix dimension, as seen in Figure 4. I also realized that in Scilab, the axes are somehow tilted or rotated: a variation along the y-axis of the displayed image corresponds to a variation in the x parameter of the code, and vice versa. The sinusoidal propagation along the x-axis can further be seen upon the introduction of the mesh() command for 3D images, as seen in Figure 5.
Figure 4. A sinusoidal image propagating along the x direction
Figure 5. A sinusoidal 3D image that propagates along the x direction
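Something along these lines, reusing X from the circle sketch above (the frequency follows the text; the grid is the placeholder one from before):

// Sinusoid with frequency f = 10 along one matrix dimension
f = 10;
sinx = sin(2*%pi*f*X);
imshow((sinx + 1)/2);         // rescale from [-1, 1] to [0, 1] for display
mesh(sinx);                   // 3D view of the corrugation, as in Fig. 5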
The next task was to obtain a grating in the same orientation as the sinusoidal image. I thought that they were related and searched for a function that would give either 0 or 1 in Scilab. I then encountered the command round(), which returns a rounded-off value. In this case, either I would offset my sinusoid or I would have to introduce the abs() function again to make it all positive. I chose the latter for simplicity, since I could not think of a code that would offset the sinusoid matrix. The results can be seen in Figures 6 and 7, where the 3D plot is introduced for better representation and comparison with the previous item.

Figure 6. A grating synthetic image along the x direction

Figure 7. A 3D plot of a synthetic grating image along the x direction.
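A sketch of that rounding trick, reusing X and f from the sinusoid sketch above:

// Binary grating: round the rectified sinusoid to 0s and 1s
grat = round(abs(sin(2*%pi*f*X)));
imshow(grat);
mesh(grat);                   // 3D view, as in Fig. 7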
After that, an annulus or ring was made by introducing two apertures and modifying the centered circle code to place ones between the two radii, as seen in the code and image found in Fig. 8.
Figure 8. An annulus synthetic image made by the introduction of two apertures
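A sketch of the annulus, reusing R from the circle sketch above (the two radii are placeholders):

// Annulus: ones only between an inner and an outer radius
ann = zeros(X);
ann(find(R > 0.4 & R < 0.7)) = 1;
imshow(ann);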
The next one was the hardest of the images to be obtained, for it introduced a Gaussian transparency distribution. My initial action upon seeing the problem was to search for the equation of the Gaussian distribution, given by:

f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
Here, for simplicity, we chose mu = 0 and a fairly large value of sigma = 2 for a better representation of the Gaussian transparency of the image. The equation was implemented in the code, and a 3D plot using the mesh() command was used for better representation, as seen in Figure 9.

Figure 9. A 2D and 3D plot of a circular aperture with a graded Gaussian transparency.
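One way the graded aperture might be coded, reusing R from the circle sketch above, with mu and sigma as quoted in the text (the aperture radius is a placeholder, and the normalization constant is dropped so the peak transparency is 1):

// Circular aperture with a graded Gaussian transparency (mu = 0, sigma = 2)
mu = 0; sigma = 2;
gauss = exp(-((R - mu).^2) / (2*sigma^2));   // unnormalized Gaussian of the radial distance
gauss(find(R > 0.7)) = 0;                    // zero transparency outside the aperture
imshow(gauss);
mesh(gauss);                                 // 3D view of the graded transparency, as in Fig. 9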
I observed that the last two images (the ellipse and the cross) were just variations of the first two images (the circle and the square). The equation of the ellipse was simply substituted into the R equation of the circle code, while the cross can be seen as either a superposition of two rectangles or of 5 squares placed side by side with varying centers. For the cross, I chose the first method, since the latter used up more lines. These can be seen in Figures 10 and 11.

Figure 10. A 2D synthetic image of an ellipse.
Figure 11. A 2D synthetic image of a cross.
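Sketches of both, again reusing X and Y from the circle sketch above (the semi-axes and bar widths are placeholders):

// Ellipse: scale the x and y terms of the radial equation by the semi-axes a and b
a = 0.8; b = 0.4;
ell = zeros(X);
ell(find((X/a).^2 + (Y/b).^2 < 1)) = 1;
imshow(ell);

// Cross: superposition of two overlapping rectangles
cro = zeros(X);
cro(find(abs(X) < 0.15 & abs(Y) < 0.6)) = 1;   // vertical bar
cro(find(abs(Y) < 0.15 & abs(X) < 0.6)) = 1;   // horizontal bar
imshow(cro);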
Lastly, here comes the (more) fun part! Upon introducing matrix multiplication and addition, you can come up with some quite cool images (a short sketch of these element-wise combinations follows the gallery below). Presenting:

THE EYE
(Made by multiplying the annulus with the ellipse and then adding the Gaussian blur)

LA CROSS
(Made by multiplying the square with the cross and the gradient with the Gaussian blur, before adding the sinusoidal image)
CHEVROLET ROULETTE
(Made by adding the matrix products of the ellipse and the cross, the sinusoid and the square, and the gradient with the Gaussian blur)
THE CASTLE
(Can you guess why this is called, the castle?)
See for yourself....




YES, THE CASTLE!
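As promised, a minimal sketch of how such combinations might be coded, using the placeholder images from the sketches above (the actual images I combined were tuned further, so this only illustrates the idea behind "The Eye"):

// Element-wise combination of synthetic images: annulus x ellipse, plus the Gaussian aperture
eye_img = ann .* ell + gauss;
eye_img = eye_img / max(eye_img);   // rescale to [0, 1] for display
imshow(eye_img);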
I had fun during the activity, especially in editing the 3D mesh for the right colors and better visualization. I would like to give myself a score of 12/10 because I not only did what was asked but also incorporated the SIVP imshow() and mesh() commands, together with the intricate editing of the 3D plots :) See you!

Activity 2: Digital Scanning

In life, it is good to notice that, contrary to the belief in second chances, there are just things that you have to let go of and simply focus on fully moving on from. Well, that was my belief before, and I somehow realized through the years that,
"one should not settle for less as long as you know that you haven't yet done your best."
I realized that there is a certain joy in trying to know the currently unknowable, and a great satisfaction in adhering to the truth. One of my mentors once said that you should not feel sad in the process of failing, because once you prove something or a method to be false, you can conclude that, truly, it is false. But if you observe something to be true, you should also test its integrity under various systems, instances, and limitations. In addition to what I learned from my mentor, I believe that though the outcome of a certain method or process is essential, what matters more for me is the journey.

Here, in this activity, we try to recreate something that was made in the past. I observed that even with all our modern knowledge and sophisticated equipment, we can never fully recreate an image that was hand-made before. The variations and differences may decrease, but you can never fully recreate the unique material. Does this mean that we fail in doing so? The answer is NO! The journey, together with the fact that you reached a certain similarity and precision, is already a form of both internal and external intellectual glory.

THE ACTIVITY

Our Applied Physics 186 class was tasked to recreate a hand-written plot from the old College of Science Archives through basic image processing software like Paint, GIMP, Photoshop, or ImageJ. Personally, I used the most basic of them all -- the old-time favorite: Paint. My obtained figure is shown below.

Figure 1. The initial hand-written plot scanned from the College of Science Journal Archives

The initial scanned image was not aligned well, so I had to align it using Windows Photo Viewer. It was observed to be skewed by about 1 degree counterclockwise. After aligning the image, it was then cropped to the axes for ease of access (haha) and conversion, as shown in Fig. 2 below.

Figure 2.  The cropped and aligned image for processing and digital scanning
After the alignment and cropping, the figure was ready for digital scanning! Using the basic Paint software, the axes were scanned to relate pixel values with the physical values presented in the plot. This was done by reading off the approximate pixel coordinates of points on the plot, as shown in Fig. 3 below. These pixel points were then analyzed through Kingsoft Spreadsheets (Figure 4a), and the corresponding conversion factors were obtained to be approximately 0.048 and 0.0024 per pixel for the y and x axes, respectively, as seen in Figure 4b.

Figure 3. Digital scanning through the use of Paint. The red brush button was used to check the corresponding pixel locations of the tick marks of both X and Y axes.

 
Figure 4. The tabulation of pixel point locations used in obtaining the conversion for each axis. It was observed that 1 pixel is equivalent to about 0.048 in moisture content percentage (y-axis) and 0.0024 in relative humidity (x-axis). The x and y axes were also found to have offset values of 1 and -679, respectively.
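This is not the exact spreadsheet computation I used, but a minimal Scilab sketch of how the tick-mark pixel positions can be turned into a linear pixel-to-physical conversion (all the tick positions and values below are placeholders):

// Linear pixel-to-physical conversion from the axis tick marks (placeholder numbers)
px   = [120 245 370 495];            // x-pixel positions of known tick marks
valx = [0.2 0.4 0.6 0.8];            // physical values at those ticks (relative humidity)
cx = [px' ones(px')] \ valx';        // least-squares fit: value = cx(1)*pixel + cx(2)
xp = 300;                            // pixel position of a scanned data point
humidity = cx(1)*xp + cx(2);         // its converted physical value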

After obtaining the conversion values, the data points were then scanned, converted, and plotted. The initial plot, as seen in Fig. 5a, was observed to be highly similar to the original plot. It is good to note that I also considered the skewness of the original plot arising from depth variation during scanning: the photocopied plot was slightly damaged by rain and was not perfectly flat upon scanning, and because of this limitation, higher xy values had a greater offset than lower xy values. Noticing this limitation allowed me to recreate a much more precise plot. Fig. 5b shows the original plot overlaid in the background through the background edit option of the Spreadsheet. Figs. 5c and 5d add colors, axis labels, and a legend for better comparison. Lastly, Fig. 5e shows the recreated plot (colored) with the original image overlaid in the background. I included polynomial fitting for better observation of the plot trends. It can be observed that the original image aligns quite well with the recreated plot, showing the high precision and accuracy of the pixel-to-point conversion.

Figure 5. The recreated plots showing the sequence of progress and additions. The sequence starts (a) with a recreated grayscale plot of the original using Spreadsheets. (b) The original image was then overlaid in the background. (c) Plot colors, (d) axis labels, and a legend were then added for better comparison with the original image. Lastly, (e) polynomial fitting and final aesthetic edits of the recreated plot were added for better trend observation and comparison.
I had fun and was quite entertained by how a very simple tool like Paint can be used as a good image processing tool for basic digital scanning. Because of that, I not only did and understood the parts of the activity, but also tried to go beyond what was needed by introducing an overlaid background image of the original plot, adding more colors for comparison, and applying polynomial fitting for better trend observation. Therefore, I rate myself a 12/10 for this activity. Enjoy and hope to see you soon!