Published electronically January 13, 2019. DOI: 10.1137/18S016801
Authors: Rachel Han (University of British Columbia) and Chingyi Tsoi (Hong Kong Baptist University). Sponsor: Colin Macdonald (University of British Columbia)
Abstract: We demonstrate an application of the closest point method to numerically computing the truncated spectrum of the Laplace-Beltrami operator. This is known as the “Shape DNA” and it can be used to identify objects in various applications. We prove a result about the null-eigenvectors of the numerical discretization. We also investigate the effectiveness of the method with respect to invariants of the Shape DNA. Finally, we experiment with clustering similar objects via a multi-dimensional scaling algorithm.
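The spectral signature itself is easy to illustrate. Below is a minimal sketch, not the paper's closest point discretization: a periodic finite-difference Laplacian on a 1D grid stands in for the discretized Laplace-Beltrami operator, and its smallest eigenvalues form the truncated spectrum. The constant vector is a null eigenvector here, echoing the abstract's remark about null-eigenvectors.

```python
import numpy as np

def shape_dna(L, k=6):
    """Return the k smallest eigenvalues of a symmetric discrete
    Laplacian L -- the truncated spectrum used as a shape signature."""
    vals = np.linalg.eigvalsh(L)   # ascending eigenvalues
    return vals[:k]

# Stand-in operator: periodic second-difference Laplacian on n points.
n = 100
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = -1.0   # periodic wrap: constant vector is a null eigenvector
dna = shape_dna(L, k=4)
# dna[0] is ~0 (the null eigenvector); the next eigenvalues are 2 - 2*cos(2*pi*j/n).
```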
Published electronically February 12, 2019. DOI: 10.1137/18S017314
Author: Kaitlyn Eekhoff (Calvin College). Sponsor: Todd Kapitula (Calvin College)
Abstract: Mean-field type ODE models for opinion dynamics often assume that the entire population is composed of congregators, who are agreeable. On the other hand, a contrarian opinion dynamics ODE model assumes the population has two personality types: congregators, and contrarians, who are disagreeable. In this paper we broadly study how contrarians influence the ability of the population to form a fixed and stable opinion. In particular, we re-examine the dynamics associated with the model introduced by Tanabe and Masuda by looking at how the parameters affect the formation of stable periodic solutions (whose existence implies there is no fixed consensus opinion). Afterwards, we refine and analyze the model under two new hypotheses: (a) the contrarians bow to peer pressure and change their personality type to congregators if a large enough proportion of the entire population agrees on an opinion, and (b) there are zealots associated with one of the opinions. We conclude with a brief discussion on possible extensions of this work.
Published electronically February 18, 2019. DOI: 10.1137/17S016166
Author: Brittany Alexander (Texas Tech University). Sponsor: Leif Ellingson (Texas Tech University)
Abstract: Using a combination of polling data and previous election results, FiveThirtyEight successfully predicted the Electoral College distribution in the presidential election in 2008 with 98% accuracy and in 2012 with 100% accuracy. This study applies a Bayesian analysis of polls, assuming a normal distribution of poll results with a normal conjugate prior. The data were taken from the Huffington Post's Pollster. States were divided into categories based on past results and current demographics. Each category used a different poll source for the prior. This model was originally used to predict the 2016 election, but later it was applied to the poll data for 2008 and 2012. For 2016, the model had 88% accuracy for the 50 states. For 2008 and 2012, the model had the same Electoral College prediction as FiveThirtyEight. The method of using state and national polls as a prior in election prediction seems promising, and further study is warranted.
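The normal conjugate update used here has a closed form: the posterior precision is the sum of the prior and data precisions, and the posterior mean is the precision-weighted average. A minimal sketch with hypothetical numbers (the vote shares and variances below are illustrative, not from the paper's data):

```python
def posterior_normal(prior_mean, prior_var, poll_means, poll_var):
    """Conjugate normal update: combine a prior on a candidate's vote
    share with polls assumed drawn from N(theta, poll_var)."""
    n = len(poll_means)
    post_prec = 1.0 / prior_var + n / poll_var          # precisions add
    post_mean = (prior_mean / prior_var + sum(poll_means) / poll_var) / post_prec
    return post_mean, 1.0 / post_prec

# Hypothetical inputs: a prior centered at 0.50 from past results,
# and three state polls averaging slightly higher.
m, v = posterior_normal(0.50, 0.02**2, [0.52, 0.51, 0.53], 0.03**2)
# The posterior mean lands between the prior and the poll average,
# and the posterior variance shrinks below the prior variance.
```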
Published electronically March 7, 2019. DOI: 10.1137/18S017077
Author: Shengding Sun (The University of North Carolina at Chapel Hill). Sponsor: Nancy Rodriguez (The University of North Carolina at Chapel Hill)
Abstract: We study the dynamics of smoking behavior of agents with a stochastic lattice-based model, assuming that each agent occupies a node and is influenced by its neighbors. This mechanism is adapted from the PSQ smoking model, which is based on a system of ordinary differential equations. The difference in this model is that, more realistically, potential smokers are only influenced by nearby current smokers, instead of all smokers. In addition, the stochasticity of this model also accounts better for the randomness in real-world smoking behavior. It is shown here that the quantitative estimates of this new lattice model are significantly different from the previous numerical results obtained in other works using the ODE model. This suggests that taking locality into account affects the model behavior. The critical exponents of this new lattice smoking model under the von Neumann neighborhood condition are calculated and verified to be the same as those of the classic SIRS epidemic model, which places this model in the directed percolation class. We also consider the model in the continuum setting, and solve the system numerically using a particular convolution kernel. To the author's knowledge, this is the first time that this widely used and discussed PSQ smoking model has been incorporated into a lattice-based setting, and our results show that this changes the quantitative behavior of the PSQ model significantly.
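A toy illustration of the lattice mechanism (not the paper's calibrated PSQ model): agents on a periodic grid with von Neumann neighbors, where potential smokers are converted only by adjacent smokers and smokers quit at a fixed rate. The states and rates below are assumptions for illustration only.

```python
import random

def step(grid, beta=0.5, gamma=0.1):
    """One synchronous update of a toy P/S/Q lattice model:
    a potential smoker (P) becomes a smoker (S) with probability
    beta per smoking von Neumann neighbor; a smoker quits (Q)
    with probability gamma per step."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == "P":
                nbrs = [grid[(i + di) % n][(j + dj) % n]
                        for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
                k = nbrs.count("S")
                if k and random.random() < 1 - (1 - beta) ** k:
                    new[i][j] = "S"
            elif grid[i][j] == "S" and random.random() < gamma:
                new[i][j] = "Q"
    return new

random.seed(0)
n = 20
grid = [["S" if random.random() < 0.1 else "P" for _ in range(n)] for _ in range(n)]
for _ in range(50):
    grid = step(grid)
```

Locality enters through the neighbor count `k`: unlike the mean-field ODE, a potential smoker surrounded only by non-smokers can never convert.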
Published electronically March 7, 2019. DOI: 10.1137/18S017260
Authors: Jun Hee Kim, Eun Kyung Kwon, and Qian Sha (Carnegie Mellon University). Sponsor: Brian Junker (Carnegie Mellon University)
Published electronically April 25, 2019. DOI: 10.1137/18S017363
Authors: Mallory Gaspard, Peter Craig, and Erik Bergland (Rensselaer Polytechnic Institute). Sponsor: Peter Kramer (Rensselaer Polytechnic Institute)
Abstract: We study the language shift and competition among the twelve most prominent world languages while accounting for factors affecting these trends, such as governmental influences, migration between nations, and the interaction between competing languages. To model these effects, we propose an integro-differential equation, a type of partial differential equation (PDE), that takes the aforementioned factors into account and predicts the fate of these languages over time and geography. We also carry out a stability analysis of our proposed model under certain circumstances.
In the first part of the investigation, following the establishment of our integro-differential equation model, we also construct a weighted digraph in Python using the United Nations Migrant Data from 1990-2017 to identify the geographic locations and languages that act as keystones in the global language network. In addition, we execute a numerical simulation of our PDE model in Python to project future language shifts over time, and compare the results from our model to the centrality calculations carried out on our digraph. From the numerical simulations, we predict that the number of monolingual Hindustani speakers will show the greatest growth. Also, in terms of the number of first-language speakers, English will pass Spanish and Russian will pass Bengali. Furthermore, our model estimates that over the next fifty years the number of English speakers will rise, remaining a clear second behind Mandarin. We can also expect to see a decrease in the number of Bengali speakers.
Published electronically May 13, 2019. DOI: 10.1137/18S017557
Author: Linjun Huang (University of California, Davis). Sponsor: Qingtian Zhang (University of California, Davis)
Abstract: We construct a global conservative weak solution to the Cauchy problem for the nonlinear variational wave equation v_{tt} - c(v)(c(v)v_x)_x + (1/2)g(v) = 0, where g(v) is defined in (2.5) and c(·) is any smooth function that is uniformly positive and bounded. This wave equation is derived from a wave system modelling nematic liquid crystals in a constant electric field.
Published electronically June 3, 2019. DOI: 10.1137/18S017545
Author: Emily MacIndoe (University of Mary Washington). Sponsor: Leo Lee (University of Mary Washington)
Abstract: The Susceptible-Infected-Virus (SIV) model is a compartmental model to describe within-host dynamics of a viral infection. We apply the SIV model to the human immunodeficiency virus (HIV); in particular, we present analytical solutions to two versions of the model. The first version includes only terms related to the susceptible cell-virus particle interaction and virus production, while the second includes those terms in addition to the infected cell death rate. An analytical solution, although more challenging and time-consuming than numerical methods, has the advantage of giving exact, rather than approximate, results. These results contribute to our understanding of virus dynamics and could be used to develop better treatment options. The approach used to solve each model involved first isolating one of the dependent variables, that is, deriving an equation that involves only one of the variables and its derivatives. Next, various substitutions were used to bring the equation to a more easily solvable form. For the first model, an exact solution is obtained in the form of an implicit equation. For the second model, we give an analytical solution generated by an iterative method.
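For intuition, the standard SIV system can be integrated numerically, which is the kind of approximation the paper's analytical solutions offer an alternative to. The parameter values and initial conditions below are hypothetical, chosen only to exercise the equations:

```python
def siv_step(S, I, V, beta, delta, p, c, dt):
    """One forward-Euler step of a standard within-host SIV model:
    S' = -beta*S*V,  I' = beta*S*V - delta*I,  V' = p*I - c*V."""
    dS = -beta * S * V
    dI = beta * S * V - delta * I
    dV = p * I - c * V
    return S + dt * dS, I + dt * dI, V + dt * dV

# Hypothetical parameters: infection rate beta, infected-cell death rate
# delta, virion production rate p, virion clearance rate c.
S, I, V = 1000.0, 0.0, 1.0
beta, delta, p, c = 1e-4, 0.5, 10.0, 3.0
dt, T = 0.001, 10.0
for _ in range(int(T / dt)):
    S, I, V = siv_step(S, I, V, beta, delta, p, c, dt)
```

Setting `delta = 0` recovers the structure of the abstract's first model version (no infected cell death); the second version keeps `delta > 0`.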
Published electronically June 7, 2019. DOI: 10.1137/19S1259870. M3 Challenge Introduction
Authors: Eric Chai, Gustav Hansen, Emily Jiang, Kylie Lui, and Jason Yan (High Technology High School, Lincroft, NJ). Sponsor: Raymond Eng (High Technology High School, Lincroft, NJ)
Abstract: In recent years, substance abuse has intensified to an alarming degree in the United States. In particular, the rise of vaping, a new form of nicotine consumption, is dangerously exposing a new generation to drug abuse. With the need to understand how substance use spreads and impacts individuals differently, our team seeks to provide a report with mathematically founded insights on this prevalent issue.
The repercussions of substance abuse are far-reaching and remain with an individual for life. However, drugs not only severely affect the user but also cause extensive societal harm. Increased understanding of the projected spread and impact of substance abuse, as well as of the underlying factors that lead to poor judgement, is needed to optimize measures to restrict consumption. Ultimately, we believe that our models provide novel insight into the nationwide issue of substance use and abuse.
Published electronically June 27, 2019. DOI: 10.1137/18S01757
Author: Ruben Ascoli (Thomas Jefferson High School for Science and Technology). Sponsor: Tyrus Berry (George Mason University)
Abstract: This paper develops the process of using Richardson Extrapolation to improve the Kernel Density Estimation method, resulting in a more accurate (lower Mean Squared Error) estimate of a probability density function for a distribution of data in R^d given a set of data from the distribution. The method of Richardson Extrapolation is explained, showing how to fix conditioning issues that arise with higher-order extrapolations. Then, it is shown why higher-order estimators do not always provide the best estimate, and it is discussed how to choose the optimal order of the estimate. It is shown that given n one-dimensional data points, it is possible to estimate the probability density function with a mean squared error value on the order of only n^{-1} ln(n). Finally, this paper introduces a possible direction of future research that could further minimize the mean squared error.
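The bias-cancellation idea can be sketched in a few lines: a kernel density estimate has leading bias of order h^2 in the bandwidth h, so combining estimates at bandwidths h and h/2 as (4*f_{h/2} - f_h)/3 cancels that term, exactly as in Richardson extrapolation for second-order methods. This sketch assumes a Gaussian kernel and the classical second-order weights; it is an illustration, not the paper's d-dimensional, higher-order scheme.

```python
import math
import random

def kde(x, data, h):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    n = len(data)
    norm = n * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / norm

def kde_richardson(x, data, h):
    """Richardson-extrapolated KDE: the O(h^2) bias terms of the
    estimates at bandwidths h and h/2 cancel in (4*f_{h/2} - f_h)/3."""
    return (4.0 * kde(x, data, h / 2) - kde(x, data, h)) / 3.0

# Synthetic standard-normal data; the true density at 0 is
# 1/sqrt(2*pi) ~ 0.3989.
random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
est = kde_richardson(0.0, data, h=0.5)
```

The trade-off the abstract points to is visible here: extrapolation reduces bias but amplifies variance, which is why ever-higher orders do not always help.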
Published electronically July 8, 2019. DOI: 10.1137/19S019085
Authors: Theren Williams, Zachary Smith, and Drew Seewald (University of Michigan, Dearborn). Sponsors: Dr. Kevsha P. Pokhrel and Dr. Taysseer Sharaf (University of Michigan, Dearborn)
Abstract: With cancer as a leading cause of death in the United States, the study of its related data is imperative due to the potential patient benefits. This paper examines the Surveillance, Epidemiology, and End Results program (SEER) research data of reported cancer diagnoses from 1973-2014 for the incidence of leukemia in young (0-19 years) patients in the United States. The aim is to identify variables, such as prior cancers and treatment, with a unique impact on survival time and five-year survival probabilities using visualizations and different machine learning techniques. This goal culminated in building multiple models to predict the patient's hazard. The two most insightful models constructed were both neural networks. One network used discrete survival time as a covariate to predict one conditional hazard per patient, with a prediction rate of nearly 95% on testing datasets. The other network built hazards for discrete time intervals without survival time as a covariate; it predicted with lower accuracy but better captured the variable effects observed in initial testing.
Published electronically July 9, 2019. DOI: 10.1137/17S016518
Authors: Louisa Lee and Siyu Zhang (Northwestern University). Sponsor: Vicky Chuqiao Yang (Northwestern University)
Published electronically July 18, 2019. DOI: 10.1137/18S017430
Author: Theodore Weinberg (University of Maryland, Baltimore County). Sponsor: Bedrich Sousedik (University of Maryland, Baltimore County)
Abstract: We develop a fast implementation of the mixed finite element method for Darcy's problem discretized by lowest-order Raviart-Thomas finite elements using Matlab. The implementation is based on the so-called vectorized approach applied to the computation of the finite element matrices and assembly of the global finite element matrix. The code supports both 2D and 3D domains, and the finite elements can be triangular, rectangular, tetrahedral or hexahedral. The code can also be easily modified to import user-provided meshes. We comment on our freely available code and present a performance comparison with the standard approach.
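The vectorized-assembly idea translates naturally to other environments. Here is a sketch in Python (the paper's code is Matlab) using 1D linear elements as a stand-in: every element's local matrix entries are stacked into flat row/column/value arrays, and one sparse-constructor call sums duplicate entries into the global matrix, avoiding an entry-by-entry assembly loop.

```python
import numpy as np
from scipy.sparse import coo_matrix

# 1D Laplace stiffness on [0,1] with ne linear elements; each element
# contributes the 2x2 local matrix (1/h)*[[1,-1],[-1,1]].
ne = 10
h = 1.0 / ne
conn = np.column_stack([np.arange(ne), np.arange(ne) + 1])  # element -> node ids
local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])

rows = np.repeat(conn, 2, axis=1).ravel()   # [i,i,j,j] per element
cols = np.tile(conn, (1, 2)).ravel()        # [i,j,i,j] per element
vals = np.tile(local.ravel(), ne)           # matching local entries

# coo_matrix sums duplicate (row, col) pairs -- this single call
# performs the whole global assembly.
K = coo_matrix((vals, (rows, cols)), shape=(ne + 1, ne + 1)).tocsr()
```

The same pattern extends to the Raviart-Thomas setting: only the connectivity array and the per-element local matrices change.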
Published electronically August 14, 2019. DOI: 10.1137/19S019115
Authors: Jenna Guenther and Morgan Wolf (James Madison University). Sponsor: Dr. Paul Warne (James Madison University)
Abstract: The Parker-Sochacki Method (PSM) allows the numerical approximation of solutions to a polynomial initial value ordinary differential equation or system (IVODE) using an algebraic power series method. PSM is equivalent to a modified Picard iteration and provides an efficient, recursive computation of the coefficients of the Taylor polynomial at each step. To date, PSM has largely concentrated on fixed-step methods. We develop and test an adaptive stepping scheme that, for many IVODEs, enhances the accuracy and efficiency of PSM. PSM Adaptive (PSMA) is compared to its fixed-step counterpart and to standard Runge-Kutta (RK) algorithms using three example IVODEs. In comparison, PSMA is shown to be competitive, often outperforming these methods in terms of accuracy, number of steps, and execution time. A library of functions is also presented that allows access to PSM techniques for many non-polynomial IVODEs without having to first rewrite these in the necessary polynomial form, making PSM a more practical tool.
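The recursion is concrete for a polynomial IVODE such as y' = y^2, y(0) = 1, whose exact solution is 1/(1-t): the coefficient of t^k in y^2 is a Cauchy product of already-known Taylor coefficients, which yields the next coefficient directly. Below is a single-step, fixed-order sketch of this mechanism, not the paper's adaptive PSMA scheme.

```python
def psm_coeffs(c0, order):
    """Taylor coefficients at t=0 for y' = y^2 via the PSM/Picard
    recursion: c[k+1] = (1/(k+1)) * (Cauchy product of c with itself)."""
    c = [c0]
    for k in range(order):
        cauchy = sum(c[j] * c[k - j] for j in range(k + 1))  # coeff of t^k in y^2
        c.append(cauchy / (k + 1))
    return c

coeffs = psm_coeffs(1.0, 8)
# With y(0) = 1 the series is 1 + t + t^2 + ..., so every coefficient is 1.
y = sum(ck * 0.5 ** k for k, ck in enumerate(coeffs))  # approximates y(0.5) = 2
```

An adaptive variant would monitor the size of the trailing coefficients to choose the step length before re-expanding at the new point.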