Showing posts with label Matlab. Show all posts

Wednesday, February 17, 2010

Look Ma, I Created an Anaglyph!

I just stumbled across a pretty sweet Matlab-based tool called Mirone that lets you run all sorts of analyses on various grid/image formats.


If you do not have Matlab, there is a standalone version that runs on Windows. For Mac folks, it will run under Snow Leopard if you have Matlab 2009, but sadly does not seem to work in Matlab 2007 on 10.5.


I am currently running it on my Windows desktop at school using Matlab 2009b and I am pretty impressed. It has a bunch of image processing/geophysical tools at its disposal. In just a few minutes of playing, I was able to open a GMT grid and plot plate boundaries and earthquake epicenters on it.


What mainly got me excited about it was that I was also able to generate a pretty sweet anaglyph. If you have a pair of red-blue glasses, give it a looksie! The area is the Carlsberg Ridge in the Indian Ocean. I recommend double-clicking the image to open it in full-size.


Monday, August 24, 2009

GSF-reader now working!!

Okay, so I am really excited because my GSF-reader that I coded up in Matlab is now working!! It is not fully-functional yet (I still have to add some sonar-specific readers, and some other optional record types) but it works for one of the sample CARIS-generated GSF files I have.

One of the things holding me up was the fact that I did not realize records were padded with extra bytes to ensure the record size in bytes was divisible by 4. I am not sure why it matters if it is divisible by 4, but apparently it does to GSF.
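Presumably the multiple-of-4 rule keeps records word-aligned for the C structs the library reads them into. The size calculation is easy to get wrong, so here it is as a tiny Python helper (my own helper, not part of any GSF library):

```python
def padded_size(record_bytes):
    # GSF pads each record with extra bytes so its total on-disk size
    # is a multiple of 4 (simple word alignment); round up to the next
    # multiple of 4
    return (record_bytes + 3) // 4 * 4
```

So a 10-byte record occupies 12 bytes on disk, while an 8-byte record needs no padding.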

My next issue to tackle is a Reson sonar-specific quality flag indicator that is written in bits, not bytes, and uses all kinds of bit shifts and masks (joy!). This record is no longer used, but some older GSF versions will include them. Once that is tackled, some of the other GSF files should start working as well.
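For when I get to it: bit-packed flags like these are unpacked with shifts and masks. A minimal Python sketch with a made-up field layout (the actual Reson quality-flag layout will differ):

```python
def unpack_flags(word):
    # Illustrative only: this field layout is hypothetical, not the
    # actual Reson GSF quality-flag layout.
    brightness  = word & 0x01         # bit 0: mask off the lowest bit
    colinearity = (word >> 1) & 0x01  # bit 1: shift right, then mask
    quality     = (word >> 2) & 0x03  # bits 2-3: a two-bit value
    return brightness, colinearity, quality
```

For example, `unpack_flags(0b1101)` returns `(1, 0, 3)`.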

Also, if I want others to be able to benefit from this work, I should probably eventually convert it over to Python. Having a Python-based GSF reader would be pretty sweet.

Of course, C-code already exists that does all this, but I want something that generates separate records for the data so that I can play with the ping depths, for example, or the backscatter intensities, in a familiar environment. The C-code is really written to be incorporated into other programs. By writing a reader myself, I not only gain a better understanding of the data and how GSF stores them, but I also read them into a program where I can readily perform mathematical analysis on them.

Tuesday, August 18, 2009

Writing a GSF-reader in Matlab

I recently ran across a couple of issues with some GSF (Generic Sensor Format) files I have, so I decided to attempt to write a GSF reader in order to see what, exactly, I was dealing with. I have never written any kind of binary format reader before (though I did tinker around and add features to one written in Python), so I sort of jumped in the deep end with this. I decided to use Matlab, since I am most comfortable with its scripting language at this point in time.

Here is a valuable lesson I just learned: Never take the binary specification file at face value. Typos happen, and they certainly happened here. I have the GSF v. 3.01 specification file available from the SAIC website, and have been following it to write my reader. Everything has been going fine for the most part until I reached the attitude record.

According to the specification, the attitude record should look like this:
Field Name          Description                                      Field Type   Count
NUM_MEASUREMENTS    Number of attitude measurements in this record   I            2
ATTITUDE_TIME       Array of attitude measurement times              I            N*2*4
PITCH               Array of pitch measurements                      I            N*4
ROLL                Array of roll measurements                       I            N*4
HEAVE               Array of heave measurements                      T            N*4
HEADING             Array of heading measurements                    T            N*4

Attitude Record Size: Variable


Should be pretty straightforward, so in my Matlab code I wrote the following function:
function [Num_meas, ATT_Time, ATT_Pitch, ATT_Roll, ATT_Heave, ATT_Heading] = readATTrecord(fid)
%% This function reads the Attitude record in a GSF file.

%% Element          Bytes   Type   Description
% Num_Measurements    2      int    number of attitude measurements (N)
% Attitude_Time       N*2*4  int    array of attitude meas. times
% Pitch               N*4    int    array of pitch measurements (hundredths of deg)
% Roll                N*4    int    array of roll measurements (hundredths of deg)
% Heave               N*4    char   array of heave measurements (hundredths of deg)
% Heading             N*4    char   array of heading measurements (hundredths of deg)

Num_meas = fread(fid,1,'int16');

for i = 1:Num_meas
    ATT_Time(i,:) = fread(fid,2,'int32');
end
for i = 1:Num_meas
    ATT_Pitch(i,:) = fread(fid,1,'int32');
end
for i = 1:Num_meas
    ATT_Roll(i,:) = fread(fid,1,'int32');
end
for i = 1:Num_meas
    ATT_Heave(i,:) = fread(fid,4,'char');
end
for i = 1:Num_meas
    ATT_Heading(i,:) = fread(fid,4,'char');
end

This returned all sorts of funky data that made no sense, most notably the negative times and pitch angles of 120 degrees (that would be worse than the perfect storm!). I decided to look at the GSF library also available on the SAIC website in order to check out the source code. In the library directory, there is a file called GSF_dec.c, a C-code source file for decoding the GSF binary format. I looked up how the Attitude record was decoded in C, and saw this:
gsfDecodeAttitude(gsfAttitude *attitude, GSF_FILE_TABLE *ft, unsigned char *sptr)
{
unsigned char  *p = sptr;
gsfuLong        ltemp;
gsfuShort       stemp;
gsfsShort       signed_short;
int             i;
struct timespec basetime;
double          time_offset;

/* First four byte integer contains the observation time seconds */
memcpy(&ltemp, p, 4);
p += 4;
basetime.tv_sec = ntohl(ltemp);

/* Next four byte integer contains the observation time nanoseconds */
memcpy(&ltemp, p, 4);
p += 4;
basetime.tv_nsec = ntohl(ltemp);

/* Next two byte integer contains the number of measurements in the record */
memcpy(&stemp, p, 2);
p += 2;
attitude->num_measurements = ntohs(stemp);



And a little farther down I saw:

/* Now loop to decode the attitude measurements */
for (i = 0; i < attitude->num_measurements; i++)
{
/* Next two byte integer contains the time offset */
memcpy(&stemp, p, 2);
time_offset = ((double) ntohs (stemp)) / 1000.0;
p += 2;

LocalAddTimes (&basetime, time_offset, &attitude->attitude_time[i]);

/* Next two byte integer contains the pitch */
memcpy(&stemp, p, 2);
signed_short = (signed) ntohs(stemp);
attitude->pitch[i] = ((double) signed_short) / 100.0;
p += 2;

/* Next two byte integer contains the roll */
memcpy(&stemp, p, 2);
signed_short = (signed) ntohs(stemp);
attitude->roll[i] = ((double) signed_short) / 100.0;
p += 2;

/* Next two byte integer contains the heave */
memcpy(&stemp, p, 2);
signed_short = (signed) ntohs(stemp);
attitude->heave[i] = ((double) signed_short) / 100.0;
p += 2;

/* Next two byte integer contains the heading */
memcpy(&stemp, p, 2);
attitude->heading[i] = ((double) ntohs(stemp)) / 100.0;
p += 2;
}
The C-code shows that the specification table was completely wrong! There is only one reference time for the attitude record; the rest of the times are simply offsets past that reference time at which the measurements take place. Furthermore, the specification stated that the Heave and Heading fields were text (something I thought was weird anyway), while in the C-code we clearly see they are integers. The specification also stated everything was 4 bytes, but the per-measurement fields are actually only 2 bytes (16 bits). It is very frustrating that the specification and the C-code do not match, especially since they both come from the same source.

At any rate, I am now able to write a working function that can be called in my main code:

function [ATT_Time, Num_meas, ATT_Offset, ATT_Pitch, ATT_Roll, ATT_Heave, ATT_Heading] = readATTrecord(fid)
%% This function reads the Attitude record in a GSF file.
%% Note: the file must be opened big-endian (e.g. fopen(fname,'r','ieee-be')),
%% since the ntohs/ntohl calls in the C-code show GSF uses network byte order.

%% Element          Bytes   Type   Description
% ATT_Time            2*4    int    base time of the attitude record
% Num_Measurements    2      int    number of attitude measurements (N)
% Time_Offset         N*2    int    time offset (milliseconds past base time)
% Pitch               N*2    int    pitch measurements (hundredths of deg)
% Roll                N*2    int    roll measurements (hundredths of deg)
% Heave               N*2    int    heave measurements (hundredths of m)
% Heading             N*2    uint   heading measurements (hundredths of deg)

ATT_Time = fread(fid,2,'int32');
Num_meas = fread(fid,1,'int16');

% The C-code reads all five fields for each measurement inside one loop,
% so the fields are interleaved on disk, not stored as whole arrays:
for i = 1:Num_meas
    ATT_Offset(i,1)  = fread(fid,1,'int16');
    ATT_Pitch(i,1)   = fread(fid,1,'int16');
    ATT_Roll(i,1)    = fread(fid,1,'int16');
    ATT_Heave(i,1)   = fread(fid,1,'int16');
    ATT_Heading(i,1) = fread(fid,1,'uint16');
end



Thursday, July 16, 2009

Image Processing in Matlab: Update 1: Filters

Okay, so I have been continuing to play around, trying to get my feet wet with image processing. I sort of jumped ahead here and began fiddling with filters. Since I have the nadir artifact in my mosaic, I need to see if I can easily remove it.
I decided to use an adaptive filter, since the noise I am most concerned about is repetitive through the image (though not truly periodic, as survey line spacing varies), so I do not need the same level of filtering over the whole image. I have been reading through the Matlab Image Processing documentation (pdf), and followed their directions for using "wiener2", which adaptively applies a Wiener linear filter to the image.

From the description under "help wiener2":

    WIENER2 lowpass filters an intensity image that has been degraded by
    constant power additive noise. WIENER2 uses a pixel-wise adaptive Wiener
    method based on statistics estimated from a local neighborhood of each
    pixel.


I used a 10x10 pixel neighborhood for filtering. Then, I ran the Sobel edge detect algorithm as described in my previous post. The results are below:

On the left: original image; on the right: wiener2-filtered image. Click to see the full-size image so you can see the differences.
The nadir artifacts are clearly less noticeable in the filtered image, and indeed the edge detect algorithm is now starting to pick up on the actual different patches of seafloor. The outline of the bright patch in the middle can now be seen.

I then increased the size of my filter to 20x20 pixels. Check out how well the edge detect performs now.
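Under the hood, wiener2 computes a local mean and variance in each neighborhood and pulls each pixel toward its local mean wherever the local variance is close to the estimated noise power. Here is a rough Python re-implementation of that idea (my own sketch, not the actual wiener2 source):

```python
import numpy as np

def wiener2_like(img, size=10):
    # Pixel-wise adaptive Wiener filter, roughly following the wiener2 idea.
    img = img.astype(float)
    pad = size // 2
    padded = np.pad(img, pad, mode='reflect')
    # every size-by-size neighborhood as a 4-D view, aligned with input pixels
    win = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    win = win[:img.shape[0], :img.shape[1]]
    local_mean = win.mean(axis=(-2, -1))
    local_var = win.var(axis=(-2, -1))
    # estimate the noise power as the average of all local variances
    noise = local_var.mean()
    # gain -> 0 where local variance ~ noise (smooth hard), -> 1 where the
    # local variance is large (preserve detail)
    gain = np.zeros_like(local_var)
    mask = local_var > noise
    gain[mask] = (local_var[mask] - noise) / local_var[mask]
    return local_mean + gain * (img - local_mean)
```

This makes it clear why a bigger neighborhood (20x20 instead of 10x10) smooths the repetitive nadir stripes more aggressively: the local statistics are averaged over a larger area.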

Beginning Image Processing in Matlab

I am starting to explore image processing in Matlab, and so far I am impressed with how easy it seems. I began with a 2000 x 2000 pixel gray scale image of a backscatter mosaic from EM3002 multibeam sonar data. I saved this image as a .tif and loaded it into Matlab using the 'imread' function. Once loaded, using "imshow" will display the image.

One of the first things I tried was the histogram function "imhist", to see how spread out my pixel values were between 0 and 255. Clearly my image does not make use of the full range of intensities. I then used "histeq" to stretch the intensity values over the full range. Below are the resulting histogram and new, higher-contrast image.

You can clearly see the artifacts from the nadir beam of the sonar, where you get a brighter return. I figured this would be a good test of one of the edge detect algorithms to see if it picks them all out. I used the "edge" command and specified the Sobel method, leaving the threshold value blank. You can also specify a direction; however, I left it as the default, which looks for both horizontal and vertical edges. Here are the results:
Pretty cool. It almost looks like a trackline map of the survey!
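For reference, histeq-style contrast stretching boils down to remapping intensities through the image's normalized cumulative histogram. A quick Python sketch of that idea (plain CDF equalization, not Matlab's actual histeq implementation, which matches a target histogram):

```python
import numpy as np

def hist_equalize(img):
    # img: uint8 grayscale, assumed not constant; remap intensities
    # through the normalized cumulative histogram (CDF)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first nonzero CDF value
    # build a 256-entry lookup table stretching the CDF to 0..255
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

After remapping, the darkest occupied bin lands at 0 and the brightest at 255, which is exactly the stretched histogram effect seen above.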

Wednesday, April 8, 2009

Modeling Grating Lobes in Matlab

For a project for my underwater acoustics class, I am modeling acoustic pressure amplitudes and beam patterns in Matlab. The project involves a continuous line array that we are discretizing in order to calculate the contributing pressure of each point on the array at some arbitrary field point at range r. Basically, we are breaking the line source up into n elements. When you do this, the spacing of your elements becomes an important factor: if your spacing is equal to, or greater than, one wavelength, you end up with grating lobes. Grating lobes have the same pressure amplitude as your main lobe, but they occur off-nadir and get progressively wider. Each grating lobe has its own associated side lobes.

Below are two examples of what I am talking about. In the first figure, I spaced my elements at a distance equal to 1 wavelength of the transducer. In the second figure, the spacing is 2 wavelengths. In both figures, the main lobe is centered on zero degrees, and you can see that the first side lobe occurs at -13.3 dB. In the first figure the grating lobes start at 90 degrees, and in the second they start at 30. In both cases, the side lobes of the grating lobes also start at -13.3 dB.
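The grating-lobe positions in those figures follow directly from the array relation sin(theta_g) = sin(theta_steer) + m*lambda/d, with m = 1 for the first grating lobe. A small Python check (the function name is mine):

```python
import numpy as np

def first_grating_lobe_deg(d_over_lambda, steer_deg=0.0):
    # element spacing d expressed in wavelengths; first grating lobe is m = 1
    s = np.sin(np.radians(steer_deg)) + 1.0 / d_over_lambda
    if abs(s) > 1.0:
        return None  # no grating lobe in visible space (spacing < 1 wavelength)
    return float(np.degrees(np.arcsin(s)))
```

With an unsteered beam, one-wavelength spacing puts the first grating lobe at 90 degrees and two-wavelength spacing puts it at 30 degrees, matching the two figures.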


Sunday, March 8, 2009

Modeling a Lidar System: Part 1

Now that I am really starting to get into my research with lidar systems, I feel that it is important to understand as many of the intricacies and nuances of the system as possible, and how they interact with each other. Although I am certainly learning a lot from reading countless articles and texts, and having great directed study sessions with my chairs, I feel one of the best ways to learn is through a more experiential approach. Since I do not have a full-fledged lidar system at my disposal to fly around with and image things (anyone offering?), I figure the next best thing I can do is theoretically model the system.

To get myself started, I found a paper by H. Michael Tulldahl and K. Ove Steinvall entitled: "Analytical waveform generation from small objects in lidar bathymetry," (App. Optics, v.38 n.6, 1999). The authors present a model to simulate received lidar waveforms in order to observe the influence of variously-shaped objects on the seabed. I am tweaking the modeled system parameters to match those of the systems I am working with, as well as the water-dependent parameters to match different water types. I am also not dealing with any objects on the seabed, and for now am assuming a flat bottom. I have only just started, so my model is nowhere near complete. Although this model is not the primary focus of my research, I see it as a way to help me understand not only what I am seeing in the data, but also predict features in the data that I might look for. Eventually I would like to have a user interface where I could simply select the lidar system and approximate water type (most likely based off Jerlov), and perhaps bed type and approximate roughness, and just hit "go!"

The model is currently in Matlab, though as it develops I may switch to a more object-oriented scripting language such as Python (I can hear Kurt applauding from here). The point is, models (and more specifically, the development of models) can be a powerful learning tool. I wish that modeling itself (or at least an introduction to modeling) were taught as more of a core research tool, as opposed to a specialized one.

Below is a snippet of some of the model output as it now stands. The top graph shows the volume backscattered power reaching the receiver. The middle graph shows the amount of power incident on the seabed (one-way travel), and the bottom graph shows the percentage of transmitted power returned to the receiver, all as a function of depth. In this case, the off-nadir angle of the laser is 0 degrees and I am looking only at the nadir beam. I am assuming pure seawater (just to make my initial attempts a little easier) and treating the water surface return of the green wavelength and the atmospheric loss as negligible.
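The shape of those power-versus-depth curves is dominated by exponential attenuation in the water column. As a deliberately crude stand-in for the full Tulldahl and Steinvall model, here is a Beer-Lambert sketch of the bottom return (the attenuation coefficient and bottom reflectance values are made-up placeholders, not fitted parameters):

```python
import numpy as np

def bottom_return_fraction(k, depth, bottom_reflectance=0.1):
    # Crude sketch: fraction of transmitted power returned from a flat
    # bottom at `depth`, using two-way Beer-Lambert attenuation with
    # diffuse attenuation coefficient k [1/m] and a nominal (hypothetical)
    # bottom reflectance. Surface and atmospheric losses are ignored,
    # as in the nadir-beam case described above.
    return bottom_reflectance * np.exp(-2.0 * k * depth)
```

Even this toy version reproduces the qualitative behavior in the bottom graph: the returned fraction falls off exponentially with depth, twice as fast as the one-way incident power.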

Sunday, June 29, 2008

Perls, Pythons, and Rs...

I am not a coder, plain and simple. I may work in Matlab some, but even that involves a lot of trial and error for me. What would take some of my friends 5 minutes can take me an hour or more as I muddle my way through function calls and syntax. I am definitely keen to learn though. In my geodesy class we had a semester-long project in which we became a virtual GPS receiver and had to solve for different position fixes (basically using multiple iterations of a least-squares analysis). Most people pick Excel to do it in, including the prof. All that copy and paste and only one line in which to type your equations just gets me, and I cannot do it. I decided on Matlab. I definitely struggled with it a lot, but it was relatively easy to debug when there were issues, and in the end it actually worked.

Now one of my other profs is trying to sway me toward R instead of Matlab. The draw of R is that it is free, highly supported by its user community, and completely object-oriented. Apparently it is rapidly becoming the main language of statisticians. The drawback, for me obviously, is switching over and starting from scratch all over again. Other profs tell me to stick with Matlab; it is more mainstream and therefore more widely accepted. I wonder, though, which is better? Perhaps R is more intuitive, which would certainly be nice...

Perl versus Python is another decision point for me. I recently worked with some Python code involving the same tide data from the blog below. Python seems relatively straightforward on first glance and is pretty powerful. It certainly ran through an entire month of 12-second tide data and converted all the raw temperatures, water levels, and time stamps in mere seconds. Mightily impressive. Then someone told me about Perl. It is also a great language and can do the same things as Python. If you look it up online, people rant and rave about both. A lot of folks seem to think that while Python is somewhat easier to script in than Perl, Perl is still the better choice. Python appears to be more user-friendly, but Perl supposedly works with other programming languages (such as C++) better than Python does. Also, it seems that while Perl can do everything Python can, the reverse is not necessarily true. My question is: for someone who really has no experience with either, but wants to learn one to help with data processing, which one is better? Should I go for the one that is easier to use, or the one that has more compatibility? Does it even really matter for what I will be doing? How much coding will analyzing full-waveform lidar data even require?

Blah, why are there so many options?