SIAM Undergraduate Research Online

Volume 5

Tracking the movement of eigenvalues via a corresponding Evans function

Published electronically March 27, 2012
DOI: 10.1137/11S01089X

Author: Benjamin Lewis (Calvin College)
Sponsor: Todd Kapitula (Calvin College)

Abstract: In this paper we define the Evans function for Sturm-Liouville problems. We show that the Evans function is analytic in the spectral parameter, that its zeros are in one-to-one correspondence with the eigenvalues, and that, under certain conditions, it is what we call conjugate symmetric. We conclude by showing that the Evans function can be used to track the movement of the eigenvalues as the coefficients in the Sturm-Liouville problem are perturbed.
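
To make the zero-eigenvalue correspondence concrete, here is a hedged numerical sketch (illustrative only, not the paper's construction): for the model Dirichlet problem -u'' = λu on [0, π], a shooting function E(λ) = u(π; λ), with u solving the initial value problem u(0) = 0, u'(0) = 1, is analytic in λ and vanishes exactly at the eigenvalues λ = n².

```python
# Illustrative sketch (not the paper's code): for -u'' = lambda*u on
# [0, pi] with Dirichlet conditions, an Evans-type function is built by
# shooting: solve the IVP u(0)=0, u'(0)=1 and set E(lambda) = u(pi).
# Its zeros are the eigenvalues lambda_n = n^2.
import numpy as np
from scipy.integrate import solve_ivp

def evans(lam):
    rhs = lambda x, y: [y[1], -lam * y[0]]    # y = (u, u')
    sol = solve_ivp(rhs, (0.0, np.pi), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]                       # u(pi; lambda)

lams = np.linspace(0.5, 10.0, 200)
signs = np.sign([evans(l) for l in lams])
# sign changes bracket the zeros of E, i.e. the eigenvalues 1, 4, 9, ...
crossings = lams[:-1][signs[:-1] * signs[1:] < 0]
print(crossings)
```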

Non-autonomous Logistic Equations and Optimization of Renewable Resources Management

Published electronically March 28, 2012
DOI: 10.1137/11S011262

Author: Mingli Zhong (University of Rochester)
Sponsor: Nsoki Mavinga (Swarthmore College)

Abstract: This paper concerns applications of the non-autonomous logistic equation to bioeconomic fishery models. We develop a generalized model which satisfies both biological and economic principles of optimization without overexploitation. We also provide conditions to optimize the revenue and to maximize the sustainable yield.
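
For intuition, a minimal simulation sketch (an assumed seasonal logistic model with proportional harvesting, dN/dt = r(t)N(1 - N/K) - EN, not the paper's generalized model): the long-run average harvest EN is maximized at an intermediate effort, roughly half the mean intrinsic growth rate.

```python
# Minimal sketch (assumed model, not the paper's exact one): a
# non-autonomous logistic fishery with seasonal growth rate and
# proportional harvesting, dN/dt = r(t) N (1 - N/K) - E N.
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0
r = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t)   # seasonal intrinsic rate

def average_yield(E, T=200.0):
    rhs = lambda t, N: r(t) * N * (1 - N / K) - E * N
    sol = solve_ivp(rhs, (0.0, T), [0.5], dense_output=True, max_step=0.05)
    t = np.linspace(T / 2, T, 2000)               # discard the transient
    return np.mean(E * sol.sol(t)[0])             # long-run harvest rate

efforts = np.linspace(0.05, 0.95, 19)
yields = [average_yield(E) for E in efforts]
print(efforts[int(np.argmax(yields))])            # effort near r_bar/2 maximizes yield
```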

What Moves You: Using Legs for Vehicular Transportation

Published electronically April 24, 2012
DOI: 10.1137/10S010715

Authors: Jonathan Graf (Towson University) and Olga Stulov (State University of New York at New Paltz)
Sponsor: James Sochacki (James Madison University)

Abstract: Most vehicles are transported by the rotation of wheels. The Department of Mathematics and Statistics and the Department of Engineering at James Madison University are interested in developing vehicles that will be driven by the motion of legs rather than wheels. In this paper we discuss the motion of five different legs: first, we derive the equations of motion for each leg; second, we calculate the equations for velocity, acceleration, energy and power; third, we optimize the motion by minimizing energies and forces. In order to obtain these results, we developed a differential equation, solved it using the Parker-Sochacki Method, and reached the optimal solution using Maple's minimization package.
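
The Parker-Sochacki Method generates Maclaurin-series coefficients for systems rewritten in polynomial form. Here is a minimal sketch on a simple pendulum (an assumed stand-in; the paper's leg models are not reproduced): θ'' = -sin θ becomes polynomial via the substitutions y₃ = sin θ, y₄ = cos θ.

```python
# Sketch of the Parker-Sochacki idea on a simple pendulum (assumed
# example): theta'' = -sin(theta) is made polynomial via y1 = theta,
# y2 = theta', y3 = sin(theta), y4 = cos(theta), and Maclaurin
# coefficients are generated term by term using Cauchy products.
import numpy as np

def ps_step(y0, order=20):
    a = np.zeros((4, order + 1))
    a[:, 0] = y0
    for n in range(order):
        c32 = sum(a[2, j] * a[1, n - j] for j in range(n + 1))  # (y3*y2)_n
        c42 = sum(a[3, j] * a[1, n - j] for j in range(n + 1))  # (y4*y2)_n
        a[0, n + 1] = a[1, n] / (n + 1)      # y1' = y2
        a[1, n + 1] = -a[2, n] / (n + 1)     # y2' = -y3
        a[2, n + 1] = c42 / (n + 1)          # y3' = y4*y2
        a[3, n + 1] = -c32 / (n + 1)         # y4' = -y3*y2
    return a

theta0, omega0 = 1.0, 0.0
a = ps_step([theta0, omega0, np.sin(theta0), np.cos(theta0)])
h = 0.1                                       # evaluate the series at t = h
theta_h = sum(a[0, k] * h**k for k in range(a.shape[1]))
print(theta_h)                                # theta(0.1) to series accuracy
```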

Instability of Gravity Driven Flow of Liquid Crystal Films

Published electronically May 25, 2012
DOI: 10.1137/12S011519

Authors: Sean P. Naughton, Namrata K. Patel, and Ivana Seric (New Jersey Institute of Technology)
Sponsors: L. Kondic, T.-S. Lin, and L. J. Cummings (New Jersey Institute of Technology)

Abstract: This paper discusses modeling of spreading nematic liquid crystal films. We concentrate on gravity driven spreading and consider various instabilities which occur during the spreading. We find that the nematic character of the spreading film leads to stronger instabilities of the film fronts, and that it also leads to surface instabilities. We also present results of physical experiments involving spreading nematic films and find good agreement with the theoretical and computational predictions.

A Maple Application for Testing Self-Adjointness on Quantum Graphs

Published electronically June 12, 2012
DOI: 10.1137/12S011490

Authors: Steven Coulter (The Pennsylvania State University) and Helene Dallmann (University of Göttingen)
Sponsors: Thomas Krainer and Michael Weiner (The Pennsylvania State University)  

Abstract: In this paper we consider linear ordinary elliptic differential operators with smooth coefficients on finite quantum graphs. We discuss criteria for the operator to be self-adjoint. This involves conditions on matrices representative of the boundary conditions at each vertex. The main point is the development of a Maple application to test these conditions.
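
For the special case of the Laplacian, one classical criterion of this type (due to Kostrykin and Schrader; the paper's more general conditions for elliptic operators are not reproduced here) is easy to test numerically: boundary conditions Au + Bu' = 0 on the n edge endpoints define a self-adjoint operator iff the block matrix (A B) has maximal rank and AB* is Hermitian.

```python
# Illustrative numpy check (assumed, Laplacian-only form): boundary
# conditions A*u + B*u' = 0 give a self-adjoint Laplacian on a quantum
# graph iff rank([A B]) = n and A B^* is Hermitian (Kostrykin-Schrader).
import numpy as np

def is_self_adjoint(A, B, tol=1e-10):
    n = A.shape[0]
    full_rank = np.linalg.matrix_rank(np.hstack([A, B]), tol=tol) == n
    M = A @ B.conj().T
    hermitian = np.allclose(M, M.conj().T, atol=tol)
    return full_rank and hermitian

# Kirchhoff (standard) conditions at a single vertex of degree 3:
# continuity u1 = u2 = u3 and current conservation u1' + u2' + u3' = 0.
A = np.array([[1, -1, 0], [0, 1, -1], [0, 0, 0]], dtype=complex)
B = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 1]], dtype=complex)
print(is_self_adjoint(A, B))   # True
```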

Modeling and Numerical Simulation of the Nonlinear Dynamics of the Parametrically Forced String Pendulum

Published electronically June 14, 2012
DOI: 10.1137/11S011444

Author: Veronica Ciocanel (Duke University)
Sponsor: Thomas Witelski (Duke University)

Abstract: The string pendulum consists of a mass attached to the end of an inextensible string which is fastened to a support. Applying an external forcing to the pendulum's support is motivated by understanding the behavior of suspension bridges or of tethered structures during earthquakes. The forced string pendulum can go from taut to slack states and vice versa, and is capable of exhibiting interesting periodic and chaotic dynamics. The inextensibility of the string and its capacity to go slack make simulation and analysis of the system complicated. The string pendulum system is thus formulated here as a piecewise-smooth dynamical system, using the method of Lagrange multipliers to obtain a system of differential algebraic equations (DAE) for the taut state. In order to develop a formulation for the forced string pendulum system, we first turn to similar but simpler pendulum systems, such as the classic rigid pendulum, the elastic spring pendulum, and the elastic spring pendulum with piecewise constant stiffness. We perform a perturbation analysis for both the unforced and forced cases of the spring pendulum approximation, which shows that, for large stiffness, this is a reasonable model of the system. We also show that the spring pendulum with piecewise constant stiffness can be a good approximation of the string pendulum in the limit of a large extension constant and a low compression constant. We indicate the behavior and stability of this simplified model using numerical computations of the system's Lyapunov exponents. Finally, we compare the spring pendulum with piecewise constant stiffness to the taut-slack pendulum formulation, which uses the DAE for the taut states together with derived switching conditions to the slack state.
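
A minimal sketch of the piecewise-stiffness approximation (with assumed parameters; the paper's DAE formulation and Lyapunov-exponent computations are not reproduced): the spring is very stiff in extension and nearly free in compression, mimicking a string that can go slack.

```python
# Minimal sketch (assumed parameters): a planar spring pendulum whose
# stiffness switches between a large "taut" value for extension and a
# small "slack" value for compression, mimicking a string.
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0
k_taut, k_slack = 1e4, 1.0      # stiff in extension, nearly free when slack

def rhs(t, s):
    x, y, vx, vy = s
    r = np.hypot(x, y)
    k = k_taut if r > L else k_slack
    f = -k * (r - L) / r        # radial spring force per unit mass
    return [vx, vy, f * x, f * y - g]

# released from rest above the horizontal, so the string goes slack and
# the mass free-falls until the string snaps taut again
s0 = [np.sin(2.0), -np.cos(2.0), 0.0, 0.0]
sol = solve_ivp(rhs, (0, 10), s0, max_step=1e-3)
r = np.hypot(sol.y[0], sol.y[1])
print(r.min(), r.max())         # r dips below L while slack, stays near L when taut
```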

Statistical Modeling through Analytical and Monte Carlo Methods of the Fat Fraction in Magnetic Resonance Imaging (MRI)

Published electronically June 21, 2012
DOI: 10.1137/11S010931

Authors: Anne M. Calder, Eden A. Ellis, Li-Hsuan Huang and Kevin Park (California State University, Fullerton)
Sponsor: Angel Pineda (California State University, Fullerton)  

Abstract: Our project studies the quantification of the uncertainty in fat-fraction estimates using Magnetic Resonance Imaging (MRI). The measured fat fraction is |F| / (|F| + |W|), where F is the fat signal and W is the water signal obtained using MRI. The fat and water signal magnitudes have a Rician distribution. However, the fat fraction has an unknown probability distribution. Knowing the fat-fraction probability distribution will provide us with a better understanding of the uncertainty of fat-fraction estimates used for the diagnosis of liver disease. Our current research focuses on finding the analytic distribution of the fat fraction and numerical simulation using Monte Carlo methods. In the analytic approach, we derived the probability density function of the fat fraction where the fat and water magnitudes follow a normal distribution (restricted to non-negative values) because the normal distribution approximates a Rician distribution for large signal-to-noise ratio (SNR). In the numerical approach, we applied Monte Carlo methods to optimize the fat-fraction estimation, compared analytic with numerical results, and found cases where current estimates of the fat fraction are inaccurate for low SNR.
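
A hedged Monte Carlo sketch of the sampling step (with assumed signal and noise values): Rician magnitudes arise as magnitudes of complex Gaussians with nonzero mean, and the fat fraction is formed sample by sample.

```python
# Monte Carlo sketch (assumed signal values, not the paper's data): fat
# and water magnitudes are Rician -- magnitudes of complex Gaussians with
# nonzero mean -- and the fat fraction |F|/(|F|+|W|) is sampled directly.
import numpy as np

rng = np.random.default_rng(0)
F0, W0, sigma, N = 20.0, 80.0, 5.0, 100_000   # true signals, noise level

def rician(mean, sigma, n):
    return np.abs(mean + sigma * (rng.standard_normal(n)
                                  + 1j * rng.standard_normal(n)))

F = rician(F0, sigma, N)
W = rician(W0, sigma, N)
ff = F / (F + W)
# compare to the true fraction 0.2; at low SNR the Rician noise floor
# inflates the smaller signal more, biasing the estimate
print(ff.mean(), ff.std())
```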

Solving a Non-Linear Partial Differential Equation for the Simulation of Tumour Oxygenation

Published electronically June 21, 2012
DOI: 10.1137/11S011365

Authors: Julian Köllermeier, Lisa Kusch, and Thorsten Lajewski (RWTH Aachen University)
Sponsor: Martin Frank (RWTH Aachen University)  

Abstract: This paper describes a novel approach to simulating tumor oxygenation, which is important because of its relevance for radiotherapy: oxygenation is considered one of the most important factors determining the failure of radiation treatment. Because earlier simulations could not handle sufficiently large areas of tissue, we developed and investigated new methods to enable use in real applications. The main work of this project is the development of new software to simulate tumor oxygenation. We describe new methods to generate a realistic distribution of blood vessels inside the tumor tissue, and we present different discretization approaches for the vessels' boundaries. Finally, we overcome the problem of excessive memory consumption by developing a new dedicated data format. Owing to the reduced memory consumption, we were able to simulate much larger domains with a finite difference method than with commercial software; it is now possible to calculate the oxygenation of tissues measuring several square centimeters in area.
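
A toy sketch of the underlying finite-difference computation (greatly simplified, with assumed parameters; the paper's vessel-generation methods and memory-saving data format are not reproduced): steady oxygen diffusion with uniform consumption and vessels held at a fixed concentration.

```python
# Toy finite-difference sketch (assumed setup): steady oxygen diffusion
# with uniform consumption, D * lap(c) = M, with c pinned to c0 on
# randomly placed vessel pixels, relaxed by Jacobi iteration.
import numpy as np

rng = np.random.default_rng(1)
n, c0, M_over_D, h = 64, 1.0, 0.002, 1.0
vessel = rng.random((n, n)) < 0.02           # ~2% of pixels are vessels
c = np.zeros((n, n))

for _ in range(5000):
    c_new = c.copy()
    # Jacobi update for lap(c) = M/D on the interior
    c_new[1:-1, 1:-1] = 0.25 * (c[2:, 1:-1] + c[:-2, 1:-1]
                                + c[1:-1, 2:] + c[1:-1, :-2]
                                - h * h * M_over_D)
    c_new[vessel] = c0                        # vessels fix the concentration
    c = np.clip(c_new, 0.0, None)             # oxygen cannot go negative
print(c.mean())                               # mean tissue oxygenation
```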

A Quantile Regression Study of Climate Change in Chicago, 1960-2010

Published electronically July 11, 2012
DOI: 10.1137/12S01174X

Author: Julien Leider (University of Illinois at Chicago)
Sponsor: Jing Wang (University of Illinois at Chicago)

Abstract: This study uses quantile regression combined with time series methods to analyze changes in temperatures in Chicago during the period 1960-2010. It builds on previous work by Timofeev and Sterin (2010) applying quantile regression methods to climate data, and on work by the Chicago Climate Task Force analyzing climate change in Chicago. Data from the Chicago O’Hare Airport weather station archived by the National Climatic Data Center are used to examine changes in weekly average temperatures. The method described by Xiao et al. (2003) is used to remove autocorrelation in the data, together with the rank-score method under an IID assumption to calculate confidence intervals and nonparametric local linear quantile regression to estimate temperature trends. The results of this analysis indicate that the decade 1960-1969 was significantly colder than later decades around the middle of the yearly seasonal cycle, at both the median and the 95th percentile of the temperature distribution. This analysis does not find a statistically significant trend over the later decades, 1970-2010.
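
A hedged sketch of the basic quantile-regression step (synthetic data with an assumed trend; the paper's autocorrelation removal, rank-score confidence intervals, and local linear estimation are omitted):

```python
# Sketch using statsmodels' QuantReg on synthetic weekly temperatures
# (not the O'Hare series): fit a linear time trend at the median and
# the 95th percentile of the temperature distribution.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
t = np.arange(52 * 50) / 52.0                 # 50 years of weekly data
season = 12 * np.sin(2 * np.pi * t)
temp = 10 + 0.02 * t + season + rng.normal(0, 3, t.size)

X = sm.add_constant(t)
for q in (0.5, 0.95):
    res = sm.QuantReg(temp, X).fit(q=q)
    # slope estimate (degrees/year) and its confidence interval
    print(q, res.params[1], res.conf_int()[1])
```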

Moody's Mega Math Challenge 2012 Champion Paper - All Aboard: Can High Speed Rail Get Back on Track?

Published electronically July 19, 2012
DOI: 10.1137/12S011799

Authors: Stephen Guo, Vineel Chakradhar, Daniel Takash, Angela Zhou, and Kevin Zhou (High Technology High School, Lincroft, NJ)
Sponsor: Ellen LeBlanc (High Technology High School, Lincroft, NJ)

Summary: High speed rail across the country was expected to usher in economic prosperity, increased interconnectivity, and energy efficiency. Supporters maintain dreamy visions of stepping onto gleaming trains downtown, and stepping out mere hours later in another downtown – a few states and a few hundred miles over. However, others decry the necessary costs of building the required infrastructure. Who’s right? Is high speed rail worth it?

Our consulting firm was first tasked with projecting the number of passengers travelling on a series of potential high speed rail systems. To begin, we simplified the problem at hand and analyzed the existing rail infrastructure of the most populous metropolitan areas to choose pairs of cities to model. The choice to model city pairs allowed us to consider a more fundamental model, although it introduced potential issues when comparing our proposal to the original, more extensive HSIPR plan. We then projected the population growth of metropolitan areas, and calculated the proportion of travelers choosing between high speed rail, cars, and planes using a transportation demand (multinomial logit) model. We analyzed the only existing high speed rail in the United States, the Acela Express, to determine key modal choice factors (e.g., expected fare rates). Our consumer choice model was stable and relatively insensitive; small percent changes in inputs led to proportionally smaller changes in output consumer choice.
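
A sketch of the multinomial logit step described above (with assumed, uncalibrated utility coefficients, not the team's fitted values): each traveler chooses rail, car, or plane with probability proportional to the exponential of that mode's utility.

```python
# Multinomial logit mode-choice sketch (assumed taste parameters):
# P_i = exp(V_i) / sum_j exp(V_j), with utilities V_i built from
# per-trip cost and travel time.
import numpy as np

beta_cost, beta_time = -0.01, -0.02           # assumed taste parameters
modes = ["hsr", "car", "plane"]
cost = np.array([90.0, 60.0, 150.0])          # dollars per trip (assumed)
time = np.array([180.0, 300.0, 120.0])        # minutes door to door (assumed)

V = beta_cost * cost + beta_time * time       # systematic utilities
P = np.exp(V) / np.exp(V).sum()               # logit market shares
for m, p in zip(modes, P):
    print(f"{m}: {p:.1%}")
```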

The cost of building high speed rail involves significant initial costs: land, raw materials, and construction, as well as annual variable costs of maintenance, labor, and power consumption. Each component of cost was independently determined, and the final cost was a function of the length of each proposed railway route and the projected mean travel speed.

To examine the claim that HSR conserves energy consumption, we analyzed two scenarios for projected energy consumption. In the first, HSRs were constructed and took market share from cars and planes. In the second, passengers chose only between travelling by car or plane. Our projections indicate that due to the significant energy cost of implementing high speed rail, we actually expect our energy consumption to increase by 600 million gallons of gasoline over 20 years, weakening the case for high speed rail.

We performed a cost-benefit analysis to generate recommendations for high speed rail with regard to each hypothetical pair of cities. Cost consisted of both the initial costs and 20-year operating costs, while the benefit consisted of the total expected revenue. Our analysis indicated that for eight of our chosen pairs, constructing high speed rail would result in significant losses. The exceptions are the Boston to New York and New York to Washington, DC lines. Incidentally, these are the pairs that compose the existing Acela Express. We finally ranked the lines that would run at a deficit based on their potential to reduce local traffic congestion, if for political reasons a line must be constructed and operated at a loss.

We have identified key metropolitan areas of interest consistent with future HSIPR plans, and have proposed a refocused railway system. Our analysis indicates that high speed railways beyond the Acela Express will not be profitable. Since the implementation of high speed rail will also increase energy consumption, we do not foresee a quantifiable benefit of high speed rail. We therefore discourage future funding towards the further development of high speed rail.

Finding Eigenvalues for Matrices Acting on Subspaces

Published electronically July 30, 2012
DOI: 10.1137/11S01092X

Author: Jakeniah Christiansen (Calvin College)
Sponsor: Todd Kapitula (Calvin College)

Abstract: Consider the eigenvalue problem QAQx⃗ = λx⃗, where A is an n × n matrix and Q is the projection matrix onto a subspace S of dimension n − k. In this paper we construct a meromorphic function whose zeros correspond to the (generically) n − k nonzero eigenvalues of QAQ. The construction of this function requires only that we know A and a basis for S⊥, the orthogonal complement of S. The formulation of the function is assisted through the use of the Frobenius inner product; furthermore, this inner product allows us to directly compute the eigenvalue when k = 1 and n = 2. When n = 3 and k = 1 we carefully study four canonical cases, as well as one more general case.
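
A quick numerical check of the setup (illustrative only; the paper's meromorphic function is not constructed here): build the projection Q from a basis of S⊥ and inspect the spectrum of QAQ directly.

```python
# Numerical check: Q = I - P, where P projects onto S-perp, so QAQ has
# (at least) k zero eigenvalues and generically n - k nonzero ones.
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 2
A = rng.standard_normal((n, n))
Bperp = rng.standard_normal((n, k))           # basis for S-perp (n x k)
Qperp, _ = np.linalg.qr(Bperp)                # orthonormalize the basis
Q = np.eye(n) - Qperp @ Qperp.T               # projection onto S

eig = np.linalg.eigvals(Q @ A @ Q)
print(np.sort_complex(eig))                   # k eigenvalues are ~0; the
                                              # other n - k are the targets
```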

The Alignment of Arbitrary Contours Using Area Difference Distance Measurement

Published electronically August 13, 2012
DOI: 10.1137/12S011817

Authors: Jessica De Silva and Karen Murata (California State University, Stanislaus)
Sponsor: Jung-Ha An (California State University, Stanislaus)   

Abstract: Advancements in medical imaging have allowed for analyzing the anatomy of the human body without the risks of surgery. One can use the methods in this paper to assess the health of a patient's brain or heart by comparing the anatomical contour to its ideal shape. Registration is an approach to imaging which determines an optimal alignment between multiple images. In particular, the Area Difference Distance measurement [4] is applied to image registration to numerically determine an optimal alignment between arbitrary contours. The purpose of this paper is to prove that the Area Difference Distance measurement [4] is a metric, to illustrate optimal alignment using the Area Difference Procrustes method [2], and to show numerical simulations. The distance function takes as input two sets of data points representing the respective contours. The Area Difference Procrustes method aligns an arbitrary contour to a fixed contour; the optimal alignment minimizes the distance function over rotations, scalings, and translations of the arbitrary contour. The proof presented validates that the values which optimize the distance between two contours can be found from the sets of data points representing each contour. Once the contours are aligned, the optimal Area Difference Distance between them can be calculated. Synthetic data is used in MATLAB to test the effectiveness of the optimization of the Area Difference Distance measurement.
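
For orientation, a sketch of classical least-squares Procrustes alignment (a standard stand-in; the paper's Area Difference variant minimizes a different distance): find the rotation, scale, and translation mapping one point set onto another.

```python
# Classical Procrustes sketch (standard algorithm, not the paper's Area
# Difference Procrustes): align contour points X to Y by least squares.
import numpy as np

def procrustes(X, Y):
    muX, muY = X.mean(0), Y.mean(0)
    X0, Y0 = X - muX, Y - muY
    U, S, Vt = np.linalg.svd(Y0.T @ X0)
    R = U @ Vt                                # optimal orthogonal transform
    s = S.sum() / (X0 ** 2).sum()             # optimal scale
    t = muY - s * muX @ R.T                   # optimal translation
    return s, R, t

theta = 0.7
Rtrue = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
X = np.random.default_rng(4).standard_normal((40, 2))
Y = 2.0 * X @ Rtrue.T + np.array([1.0, -3.0])
s, R, t = procrustes(X, Y)
print(np.allclose(s * X @ R.T + t, Y))        # True: alignment recovered
```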

A Multi-Numeric Method for Parabolic Problems Using an Adaptive Region-Swapping Approach

Published electronically August 30, 2012
DOI: 10.1137/12S011714

Author: Joseph Huchette (Rice University)
Sponsor: Beatrice Riviere (Rice University)

Abstract: The goal of solving parabolic convection-diffusion partial differential equations accurately at minimal computational cost motivates the investigation of a coupled multi-numeric method that takes advantage of an adaptive domain-partitioning approach. In this work, the finite volume method (a low-cost, low-accuracy method) is coupled with the discontinuous Galerkin method (a high-cost, high-accuracy method). For a fixed grid, the subsets of the domain on which each method is applied change at each time step, with the intention of applying the more accurate method where necessary and the less costly method everywhere else. Implementing this method for convection-dominated problems yields results that are qualitatively similar to those obtained by applying the more accurate method alone and that preserve the expected numerical convergence rates.

Geographic Profiling Through Six-Dimensional Nonparametric Density Estimation

Published electronically November 5, 2012
DOI: 10.1137/11S011274

Author: Austin Curtis Alleman (Santa Clara University)
Sponsor: George Mohler (Santa Clara University)

Abstract: Geographic profiling is the problem of identifying the location of the anchor point (residence, place of work, etc.) of an offender in a linked crime series, using the spatial coordinates of the crimes or other information. A standard approach to the problem is 2D kernel density estimation, which relies on the assumption that the anchor point is located in close proximity to the crimes. Recently introduced Bayesian methods allow for a wider range of criminal behaviors, as well as the incorporation of geographic and demographic information. The complexity of these methods, however, makes them computationally expensive when implemented. We have developed a nonparametric method for geographic profiling that allows for more complex criminal behaviors than 2D kernel density estimation, but is fast and easy to implement. For this purpose, the crime locations and anchor point of a series are considered as one data point in the space of all crime series. Dimension reduction is then used to construct a 6D probability density estimate of offender behavior from historical solved crime series data, from which an anchor point density corresponding to an unsolved series can be computed. We discuss the advantages and disadvantages of the method, as well as possible real-world implementation.
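
A toy sketch of the conditional-density idea on synthetic data (assumed Gaussian kernels, crimes treated as an ordered pair; the paper's dimension reduction is omitted): each solved series is one 6D point, and an unsolved series' crimes weight the historical anchors.

```python
# Toy sketch: each solved series is a 6D point (anchor, crime1, crime2);
# a new series' crimes weight the historical series, and the weighted
# anchors form a kernel density over candidate anchor points.
import numpy as np

rng = np.random.default_rng(5)
n = 500
anchors = rng.uniform(0, 10, (n, 2))
crimes = anchors[:, None, :] + rng.normal(0, 1.0, (n, 2, 2))  # 2 crimes per anchor
data6 = np.hstack([anchors, crimes.reshape(n, 4)])            # 6D training points

def anchor_density(new_crimes, grid, h=0.8):
    # weight each historical series by kernel similarity of its crimes
    w = np.exp(-np.sum((data6[:, 2:] - new_crimes.ravel())**2, 1) / (2 * h * h))
    # anchor density = weighted sum of Gaussians at historical anchors
    d2 = np.sum((grid[:, None, :] - anchors[None, :, :])**2, 2)
    return np.exp(-d2 / (2 * h * h)) @ w

new_crimes = np.array([[4.0, 5.0], [5.5, 4.5]])
xs = np.linspace(0, 10, 50)
grid = np.array([[x, y] for x in xs for y in xs])
dens = anchor_density(new_crimes, grid)
print(grid[dens.argmax()])          # most likely anchor, near the crime cluster
```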

An Approach to Identify the Number of Clusters

Published electronically December 17, 2012
DOI: 10.1137/11S011419

Authors: Katelyn Gao (Massachusetts Institute of Technology), Heather Hardeman (University of Montevallo), Edward Lim (Johns Hopkins University), and Cristian Potter (East Carolina University)
Sponsors: Carl Meyer (North Carolina State University) and Ralph Abbey (North Carolina State University)

Abstract: In this technological age, vast amounts of data are generated. Various statistical methods are used to find patterns in data, including clustering. Many common methods for cluster analysis, such as k-means and Nonnegative Matrix Factorization, require input of the number of clusters in the data. However, usually that number is unknown. There exists a method that uses eigenvalues to compute the number of clusters, but sometimes it underestimates that number. In this paper, we propose a complementary method to identify the number of clusters. This method is used to analyze three data sets and gives fairly accurate estimates of the number of clusters.
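
For comparison, the eigenvalue-based estimate that the paper complements is often implemented as the eigengap heuristic; here is a sketch on synthetic data (assumed Gaussian affinities, not the paper's data sets):

```python
# Eigengap heuristic sketch: the number of eigenvalues of the normalized
# affinity matrix near 1 (located by the largest gap) estimates the
# number of clusters.
import numpy as np

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ((0, 0), (4, 0), (2, 3))])

d2 = ((X[:, None, :] - X[None, :, :])**2).sum(2)
W = np.exp(-d2 / (2 * 0.5**2))                 # Gaussian affinity matrix
D = W.sum(1)
L_sym = W / np.sqrt(np.outer(D, D))            # normalized affinity
lam = np.sort(np.linalg.eigvalsh(L_sym))[::-1]
gaps = lam[:-1] - lam[1:]
print(1 + gaps[:10].argmax())                  # estimated number of clusters: 3
```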

Study of Free Alternative Numerical Computation Packages

Published electronically December 18, 2012
DOI: 10.1137/12S011787

Author: Matthew Brewster (University of Maryland, Baltimore County)
Sponsor: Matthias Gobbert (University of Maryland, Baltimore County)

Abstract: Matlab is the most popular commercial package for numerical computations in mathematics, statistics, the sciences, engineering, and other fields. Octave, FreeMat, and Scilab are free numerical computation packages that have many of the same features as Matlab, and all are available for download on the Linux, Windows, and Mac OS X operating systems. We investigate whether these packages are viable alternatives to Matlab for use in teaching and research. We compare the packages under Linux on one compute node with two quad-core Intel Nehalem processors (2.66 GHz, 8 MB cache) and 24 GB of memory that is part of an 86-node distributed-memory cluster. After performing both usability and performance tests on Matlab, Octave, FreeMat, and Scilab, we conclude that Octave is the most usable and most powerful freely available numerical computation package: both FreeMat and Scilab exhibited some incompatibility with Matlab and some performance problems in our tests, while Octave was not only fully compatible with Matlab but also exhibited the best performance. Octave is therefore the best viable alternative to Matlab. This paper reports on work done at the REU Site: Interdisciplinary Program in High Performance Computing at the University of Maryland, Baltimore County.

Spectral Clustering: An Empirical Study of Approximation Algorithms and Its Application to the Attrition Problem

Published electronically December 21, 2012
DOI: 10.1137/12S012094

Authors: A. Thompson, B. Cung, T. Jin, and J. Ramirez (University of Nebraska – Lincoln)
Sponsors: D. Needell and C. Boutsidis (Claremont McKenna College)

Abstract: Clustering is the problem of separating a set of objects into groups (called clusters) so that objects within the same cluster are more similar to each other than to those in different clusters. Spectral clustering is a now well-known clustering method which utilizes the spectrum of the data similarity matrix to perform this separation. Since the method relies on solving an eigenvector problem, it is computationally expensive for large datasets. To overcome this constraint, approximation methods have been developed which aim to reduce running time while maintaining accurate classification. In this article, we summarize and experimentally evaluate several approximation methods for spectral clustering. From an applications standpoint, we employ spectral clustering to solve the so-called attrition problem, where one aims to distinguish, from a set of employees, those who are likely to voluntarily leave the company from those who are not. Our study sheds light on the empirical performance of existing approximate spectral clustering methods and shows the applicability of these methods to an important business optimization problem.
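
A minimal sketch of the baseline (non-approximate) spectral clustering pipeline on synthetic data (the paper's approximation methods and attrition data are not reproduced):

```python
# Standard spectral clustering: Gaussian similarity matrix, top-k
# eigenvectors of the normalized affinity, row normalization, k-means.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(c, 0.4, (60, 2)) for c in ((0, 0), (5, 1), (2, 4))])
k = 3

d2 = ((X[:, None, :] - X[None, :, :])**2).sum(2)
W = np.exp(-d2 / (2 * 1.0**2))                  # data similarity matrix
D = W.sum(1)
L_sym = W / np.sqrt(np.outer(D, D))             # normalized affinity
vals, vecs = np.linalg.eigh(L_sym)
U = vecs[:, -k:]                                # top-k eigenvectors
U /= np.linalg.norm(U, axis=1, keepdims=True)   # row-normalize (Ng-Jordan-Weiss)
_, labels = kmeans2(U, k, minit="++", seed=8)
print(np.bincount(labels))                      # three groups of ~60 points
```

The eigenvector computation on the full similarity matrix is the expensive step; the approximation methods the paper evaluates aim to avoid exactly this cost.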