Friday, Sept. 14 2007
2:10pm, 100 Neville Hall
Prof. Sat Gupta, Department of Mathematics and Statistics, University of North Carolina at Greensboro
Estimation of the Mean and Variance of a Population under Simple Random Sampling
It may appear at first sight that there is nothing to talk about when it comes to estimating the mean and the variance of a population. That is true to some extent. If no other information is available, then the ordinary sample mean and sample variance provide good estimators for the population mean and variance, respectively. However, when auxiliary information is available, there are many clever ways of improving the estimation by exploiting that information. Ratio estimation, regression estimation, difference estimation, and product estimation are some of the techniques that may improve the quality of estimation. In this talk, we will discuss several such estimation techniques and will present some recent and forthcoming results.
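As a rough illustration of the idea, the sketch below compares the ordinary sample mean with a classical ratio estimator that exploits a known population mean of an auxiliary variable. All data and parameter values are invented for illustration; they are not from the talk.

```python
import random

def srs_estimates(y, x, X_bar):
    """Ordinary sample mean vs. a ratio estimator that exploits the
    known population mean X_bar of an auxiliary variable x."""
    n = len(y)
    y_bar = sum(y) / n
    x_bar = sum(x) / n
    # Ratio estimator: rescale y_bar by the known auxiliary mean over
    # the observed auxiliary sample mean.
    return y_bar, (y_bar / x_bar) * X_bar

# Toy population in which y is roughly proportional to x; when y and x
# are strongly correlated, the ratio estimator tends to beat the plain
# sample mean.  (All numbers here are invented for illustration.)
random.seed(1)
pop_x = [random.uniform(10.0, 50.0) for _ in range(1000)]
pop_y = [2.0 * x + random.gauss(0.0, 3.0) for x in pop_x]
X_bar = sum(pop_x) / len(pop_x)                    # known auxiliary mean

idx = random.sample(range(1000), 50)               # simple random sample
y_bar, y_ratio = srs_estimates([pop_y[i] for i in idx],
                               [pop_x[i] for i in idx], X_bar)
```

Because the auxiliary variable absorbs most of the sampling variation in x, the ratio estimate typically lands much closer to the true population mean than the plain sample mean does.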

Friday, Oct. 19 2007
3:10pm, 140 Bennett Hall
A. Arneodo, Laboratoire Joliot Curie and Laboratoire de Physique, Ecole Normale Supérieure de Lyon, France
Low frequency rhythms in human DNA sequences: from genome-wide sequence analysis to the modeling of replication in higher eukaryotes
Understanding how chromatin is spatially and dynamically organized in the nucleus of eukaryotic cells, and how this affects genome functions, is one of the main challenges of cell biology. Recent technical progress in live cell imaging has confirmed that the structure and dynamics of chromatin play an essential role in regulating many biological processes, such as gene activity, DNA replication, recombination, and DNA damage repair. The emerging view is that mammalian genomes are compartmentalized, at the level of chromosome territories in the nucleus, into subchromosomal structural domains that are likely to be fundamental functional units coordinating the spatial organization and timing of replication and transcription. To what extent one may learn about the higher-order structure and dynamics of chromatin directly from the primary DNA sequence and its functional landmarks is a question of fundamental and practical importance.
In this talk, we explore the large-scale compositional heterogeneity of human autosomal chromosomes through the optics of the wavelet transform (WT) microscope. We show that the GC content displays relaxational nonlinear oscillations with two main frequencies, corresponding to 100 kb and 400 kb, which are well-recognized characteristic sizes of the chromatin loops and loop domains involved in the hierarchical folding of the chromatin fiber. These frequencies are also remarkably similar to the size of mammalian replicons. When further investigating deviations from intrastrand equimolarities between A and T and between G and C, we corroborate the existence of these two fundamental frequencies as footprints of replication and/or transcription mutation bias, and we show that the observed nonlinear oscillations reveal a remarkable cooperative organization of gene location and orientation. When further investigating the intergenic and transcribed regions flanking experimentally identified human replication origins and the corresponding mouse and dog homologous regions, we find that for 7 of the 9 known origins, the (TA+GC) skew displays rather sharp upward jumps, with a linearly decreasing profile between two successive jumps. We present a model of replication with well-positioned replication origins and random terminations that accounts for the observed characteristic serrated skew profiles. We further use the singularity tracking ability of the WT to develop a methodology for detecting origins of replication, and we report the discovery of 1024 putative replication origins in the human genome. The statistical analysis of the distribution of sense and antisense genes around these origins strongly suggests that replication origins play a fundamental role in the organization of mammalian genomes. Taken together, these analyses show that replication and gene expression are likely to be regulated by the structure and dynamics of the chromatin fiber.
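As a toy illustration of the serrated shape described above (not the authors' actual model), the sketch below generates a "factory-roof" skew profile: the skew jumps upward at each hypothetical origin position and decreases linearly to the next. Origin positions and the jump size are made up.

```python
import bisect

def skew_profile(x, origins, jump=0.2):
    """Toy 'factory-roof' skew: S jumps up by `jump` at each origin and
    decreases linearly to the next origin (illustrative parameters only)."""
    origins = sorted(origins)
    i = bisect.bisect_right(origins, x) - 1
    if i < 0 or i >= len(origins) - 1:
        return 0.0                       # outside the modeled region
    left, right = origins[i], origins[i + 1]
    t = (x - left) / (right - left)      # fraction of the replicon traversed
    return jump / 2.0 - jump * t         # +jump/2 at the origin, -jump/2 at the end

origins = [0.0, 400.0, 900.0]            # hypothetical origin positions (kb)
profile = [skew_profile(x, origins) for x in range(0, 900, 10)]
```

Plotting `profile` against position reproduces the serrated pattern: an upward jump at each origin followed by a linear decay between successive origins.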


Thursday, Oct. 25 2007
3:10pm, 100 Neville Hall
Dr. Cheng Peng, Department of Mathematics and Statistics, University of Southern Maine
Theory and Applications of Quantile-based Process Capability Indices
Unlike mean-variance based process capability indices (mvPCIs) such as Cp and Cpk (Cpl and Cpu for one-sided specifications), which implicitly assume normality/symmetry of the underlying process, quantile-based process capability indices (qPCIs) use process quantiles as the key characteristics in their definition, avoiding this unrealistic assumption. Introducing process quantiles into the definition of a PCI substantially increases its complexity in both theory and computation, since both quantile estimation and density estimation are needed to study the asymptotic properties of qPCIs, which also makes the analysis more computationally intensive. Theoretical research on qPCIs is very limited in the existing literature. In this talk, we will focus on the theory and practical implementation of qPCIs using Vännman's (1995) superstructures.
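To make the contrast concrete, here is a minimal sketch (not Vännman's superstructure itself) of a quantile analogue of Cp: the 6σ denominator is replaced by the empirical interquantile range covering the same probability mass that a 6σ interval covers under normality. The quantile levels and specification limits below are illustrative assumptions.

```python
import random
import statistics

def cp_mv(data, lsl, usl):
    """Classical mean-variance Cp = (USL - LSL) / (6 * sigma)."""
    return (usl - lsl) / (6.0 * statistics.stdev(data))

def cp_quantile(data, lsl, usl, lo=0.00135, hi=0.99865):
    """Quantile analogue: the denominator is the empirical interquantile
    range spanning the same probability mass as a 6-sigma interval under
    normality, so no normality assumption is required."""
    xs = sorted(data)
    n = len(xs)
    def q(p):                            # crude empirical quantile
        return xs[min(n - 1, int(p * n))]
    return (usl - lsl) / (q(hi) - q(lo))

# Under (simulated) normality the two indices should roughly agree;
# for skewed processes they can diverge substantially.
random.seed(0)
data = [random.gauss(10.0, 1.0) for _ in range(100000)]
c_mv, c_q = cp_mv(data, 7.0, 13.0), cp_quantile(data, 7.0, 13.0)
```

For a skewed process the quantile version reflects the actual spread of the distribution's tails, which is exactly where the normality assumption of the mvPCI breaks down.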


Thursday, Feb. 7 2008
2:00pm, 105 DPC
Dr. Natasha Speer, Division of Science and Mathematics Education, Michigan State University
Shedding light on some complexities of teaching college mathematics: Influences of knowledge and beliefs on a differential equations teacher’s practices
The more that educational researchers learn about teaching and the factors that shape what teachers do in their classrooms, the more complex the processes appear. Although teachers of college mathematics typically do not participate in extensive professional development related to instruction, they still acquire the knowledge and skills to plan lessons and create learning opportunities for students. One currently under-examined question is: What resources (for example, knowledge and beliefs) do teachers of college mathematics have at their disposal as they teach, what resources are necessary, and how are these resources acquired during their careers? College mathematics teachers typically possess extremely strong and deep knowledge of mathematics content. This fact makes the study of such teachers a potentially very fruitful avenue toward understanding the other factors that influence teaching practices. I will present findings from some of my current work that is focused on identifying and analyzing the resources that shaped the instructional practices of a college mathematics teacher as he taught a new (to him) version of an undergraduate differential equations course. In particular, I will present findings from an investigation of the knowledge of student thinking (one element of pedagogical content knowledge), mathematical knowledge for teaching, and beliefs about teaching that appeared to shape the learning opportunities he created for the students in his class. I will focus in particular on whole class discussions and how his knowledge and beliefs interacted, leading to more and less mathematically productive discussions.


Thursday, Feb. 14 2008
3:15pm, 107 DPC
Dr. Jennifer Noll, Dept. of Mathematics and Statistics, Portland State University
Graduate Teaching Assistants’ Statistical Knowledge for Teaching
Research in statistics education has blossomed over the past two decades, yet there is relatively little research investigating what knowledge is necessary and sufficient to teach statistics well. In addition, despite the fact that TAs’ role in undergraduate statistics education is integral, the research community knows very little about their knowledge of statistics and of teaching statistics. This study explores graduate teaching assistants’ (TAs’) statistical knowledge for teaching. Through a task-based web survey and a series of task-based interviews, I investigated the ways in which TAs reason about sampling tasks, and how they think about teaching and student learning in relation to sampling ideas. Specifically, I discuss: (1) tensions TAs appeared to experience between their knowledge of theoretical probability models and their expectations of experimental data; (2) a spectrum of reasoning about statistical inference that ranged from no conception of repeated sampling to strong conceptions of repeated sampling; and, (3) a model for what statistical knowledge for teaching sampling concepts might look like. I discuss the implications of research on TAs’ statistical knowledge for teaching on graduate and undergraduate education and directions for future research.


Monday, Feb. 18 2008
3:10pm, 117 DPC
Dr. Marius Ionescu, Dept. of Mathematics, Cornell University
Fractals and Markov Operators in C*-Algebras
Topological dynamical systems have traditionally been a rich source of problems in functional analysis in general, and in operator algebras in particular. In my talk I will present the interplay between a class of irreversible dynamical systems acting on fractals, the so-called iterated function systems, and operator algebras. I will emphasize the Markov operator point of view for these iterated function systems; that is, I will study the operator that an iterated function system determines on the space of continuous functions on the underlying fractal. More generally, I will argue that such continuous Markov operators determine a large class of C*-algebras studied by many people over the last decade.


Thursday, Feb. 21 2008
3:10pm, 119 Barrows
Dr. Ralf Schiffler, Dept. of Mathematics and Statistics, University of Massachusetts at Amherst
Quiver Representations: Basic Facts and Some Recent Developments
This talk is an introduction to quiver representations. A quiver (or oriented graph) Q=(Q_0,Q_1) is a set of vertices Q_0 and a set of arrows Q_1 such that each arrow a in Q_1 starts at some vertex s(a) in Q_0 and ends at some vertex t(a) in Q_0. A representation of the quiver Q consists of a vector space V_i for each vertex i in Q_0 and a linear map f_a from V_{s(a)} to V_{t(a)} for each arrow a in Q_1. We will introduce the category of representations of such a quiver Q and consider an equivalent realization as the module category over the path algebra of Q. We will also illustrate some recent developments that connect quiver representations to cluster algebras.
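As a concrete example, the data in the definition above can be written down directly; the quiver, dimensions, and matrices below are a hypothetical choice for the quiver 1 → 2 → 3.

```python
# The quiver 1 --a--> 2 --b--> 3: vertices Q_0 and arrows Q_1 with
# source s(a) and target t(a).
Q0 = [1, 2, 3]
Q1 = {"a": (1, 2), "b": (2, 3)}       # arrow name -> (s(a), t(a))

# A representation: a vector space V_i for each vertex (recorded by its
# dimension) and a matrix f_a : V_{s(a)} -> V_{t(a)} for each arrow.
dims = {1: 1, 2: 2, 3: 1}
maps = {"a": [[1.0], [0.0]],           # f_a : V_1 -> V_2, a 2x1 matrix
        "b": [[0.0, 1.0]]}             # f_b : V_2 -> V_3, a 1x2 matrix

def matmul(A, B):
    """Plain matrix product, so the example needs no external libraries."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Each f_a must have shape dims[t(a)] x dims[s(a)].
for name, (s, t) in Q1.items():
    assert (len(maps[name]), len(maps[name][0])) == (dims[t], dims[s])

# The path b*a in the path algebra acts as the composite map V_1 -> V_3.
f_ba = matmul(maps["b"], maps["a"])
```

Here the composite f_b ∘ f_a is the zero map V_1 → V_3, even though neither f_a nor f_b is zero: relations like this are exactly what the path-algebra point of view keeps track of.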


Thursday, March 20 2008
3:10pm, 119 Barrows
Dr. Reinier Broker, Microsoft Corporation
Constructing Elliptic Curves of Prescribed Order
Elliptic curves have become increasingly important during the last 20 years. They play a key role in Wiles’ proof of Fermat’s last theorem, and they are one of the foundations of modern cryptography: every cell phone contains an elliptic curve nowadays.
There are various efficient algorithms to count the number of points of a given elliptic curve over a finite field. In this talk I will consider the inverse problem of constructing elliptic curves of prescribed order. I’ll present a solution that easily handles the sizes occurring in cryptographic practice. Many examples will be given.
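To make the forward problem concrete, here is a naive point-counting sketch, workable only for tiny fields (the efficient algorithms mentioned above are Schoof-type methods). The curve parameters are an arbitrary small example.

```python
def count_points(a, b, p):
    """Brute-force count of points on y^2 = x^3 + a*x + b over F_p,
    including the point at infinity (fine for tiny p only)."""
    squares = {}
    for y in range(p):                       # tabulate square roots mod p
        squares.setdefault((y * y) % p, []).append(y)
    count = 1                                # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += len(squares.get(rhs, []))   # number of y with y^2 = rhs
    return count

n = count_points(1, 1, 7)                    # a tiny example curve over F_7
```

The Hasse bound guarantees |N - (p + 1)| ≤ 2√p, so the answer always lies in a narrow window around p + 1; the inverse problem of the talk asks for a curve whose N hits a prescribed value.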


Thursday, April 3 2008
2:10pm, 421 Neville Hall
Na Wang, Dept. of Mathematics and Statistics, University of Maine
Estimation of Extra Risk and Benchmark Dose in Dose Response Models
Thesis defense; Advisor: Dr. Ramesh C. Gupta
An important goal in the quantitative risk/safety analysis of chemical toxins or pharmaceutical agents is determining the toxic risk posed by exposure to the agent. For the purpose of assessing exposure risks, the extra risk function is defined as the risk above the background level, corrected for non-response in the unexposed population. Our interest, in this thesis, is in statistical methods for obtaining upper confidence limits on the extra risk and lower confidence limits on the dose level at which a certain benchmark risk is achieved. An existing method for this problem is examined, and several examples from the literature are provided. In addition, a new method of obtaining the desired confidence intervals is investigated, and the results are compared with those obtained by the existing method.
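As a sketch of the quantities involved, the code below computes the extra risk for a hypothetical logistic dose-response model and finds the benchmark dose by bisection. The model and all parameter values are invented for illustration; they are not from the thesis, which concerns confidence limits on these quantities rather than their point estimates.

```python
import math

def extra_risk(R, d):
    """Extra risk: the risk above background, rescaled for the fraction
    of the unexposed population that does not respond."""
    return (R(d) - R(0.0)) / (1.0 - R(0.0))

def benchmark_dose(R, bmr, lo=0.0, hi=100.0, tol=1e-8):
    """Dose at which the extra risk reaches the benchmark risk `bmr`,
    found by bisection (assumes extra risk increases with dose)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if extra_risk(R, mid) < bmr:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical logistic dose-response model, purely for illustration.
R = lambda d: 1.0 / (1.0 + math.exp(2.0 - 0.1 * d))
bmd = benchmark_dose(R, 0.10)            # dose at which extra risk hits 10%
```

A lower confidence limit on this benchmark dose, accounting for the uncertainty in the fitted dose-response curve, is the quantity of regulatory interest discussed in the talk.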


Thursday, April 10 2008
2:10pm, 421 Neville Hall
Wes Viles, Dept. of Mathematics and Statistics, University of Maine
Reliability Functions of an Extended Generalized Inverse Gaussian Distribution
Recently, A. Al-Zamel, I. Ali, and S. Kalla (2002) considered a probability density function involving the product of two confluent hypergeometric functions. One of its special cases is an extended generalized inverse Gaussian distribution. The extended generalized inverse Gaussian distribution is a family of distributions involving four parameters. Its particular cases include the inverse Gaussian, generalized inverse Gaussian, and Gamma distributions.
It is well known that these distributions have been extensively employed in analyzing life-testing data. The failure rate and the mean residual life functions are, in general, non-monotonic.
In this talk we will investigate the monotonicity of the failure rate and the mean residual life functions. Existing results from the literature will be employed wherever applicable; see Glaser (1980) and Gupta and Warren (2001). Results for multiple turning points of the mean residual life function will also be investigated.


Thursday, May 1 2008
8:00 and 9:00am, 421 Neville Hall
Jeremy Grant, UMaine Mathematics Master’s Candidate. Advisor: Andre Khalil.
Thesis defense: Wavelet-Based Characterization of Mouse Chromosome Territories
Part 1: Radiation exposure is an occupational hazard for military personnel, some health care professionals, airport security screeners, and medical patients, with some individuals at risk for acute, high-dose exposures. Therefore, the biological effects of radiation, especially the potential for chromosome damage, are major occupational and health concerns. However, the biophysical mechanisms of chromosome instability subsequent to radiation induced DNA damage are poorly understood. It is clear that interphase chromosomes occupy discrete structural and functional subnuclear domains, termed chromosome territories (CT), which may be organized into “neighborhoods” comprising groups of specific CTs. We directly evaluated the relationship between chromosome positioning, neighborhood composition, and translocation partner choice in primary lymphocytes, using a cell-based system in which we could induce multiple, concentrated DNA breaks via high-dose irradiation. We show that CT neighborhoods comprise heterologous chromosomes, within which inter-CT distances directly relate to translocation partner choice.
Part 2: While chromosome size and gene density appear to influence positioning, the biophysical mechanisms behind CT localization, especially the relationship between morphology and positioning, remain obscure. One reason for this has been the difficulty in imaging, segmenting, and analyzing structures with variable or imprecise boundaries. This prompted us to develop a novel approach, based on the two-dimensional (2D) wavelet-transform modulus maxima (WTMM) method, adapted to perform objective and rigorous CT segmentation from nuclear background. Using the WTMM method in combination with numerical simulation models, we show that CTs have a highly nonspherical 3D morphology, and that CT positioning is nonrandom and favors heterologous CT groupings.


Friday, May 2 2008
2:10pm, 108 Neville Hall
Chenglu Dai, Dept. of Mathematics and Statistics, University of Maine
UMaine Master’s Candidate. Advisor: Bill Hatleman.
Thesis defense: Application of Lagrange Multiplier Method Numerically to Finding Profile Likelihood Intervals for Abundance
In many ecological research studies, abundance data are skewed and contain more zeros than might be expected. It has therefore recently been advocated that such studies use profile likelihood confidence intervals, which are generally not symmetric, together with a conditional model that allows one to separately model presence, and abundance given presence. Furthermore, David Fletcher and Malcolm Faddy (2007) show how to calculate these profile likelihood confidence intervals numerically, using the Lagrange multiplier method.
In this talk, we will start by introducing the concepts of the likelihood function, confidence intervals, profile likelihood, and the Lagrange multiplier method. Then, the connection between this method and profile likelihood will be explained in detail. Specifically, we will investigate how to use the Lagrange multiplier method numerically, and under what conditions this numerical method is theoretically correct. Examples will be given throughout to aid understanding. This talk features a combination of statistics, numerical analysis, calculus, and topology.
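As a minimal illustration of profile likelihood (using plain bisection rather than the Lagrange multiplier machinery of the talk, and a normal sample rather than a zero-inflated abundance model), the sketch below profiles out the variance and inverts the likelihood-ratio statistic to get a 95% interval for the mean.

```python
import math
import random

def profile_loglik(mu, xs):
    """Profile log-likelihood for a normal mean: the nuisance parameter
    sigma^2 is replaced by its MLE given mu."""
    n = len(xs)
    s2 = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2.0 * math.pi * s2) + 1.0)

def profile_ci(xs, chi2_crit=3.841):        # 95% chi-square(1) cutoff
    """All mu whose profile log-likelihood is within chi2_crit/2 of the
    maximum; the two endpoints are found by sign-based bisection."""
    n = len(xs)
    mu_hat = sum(xs) / n
    target = profile_loglik(mu_hat, xs) - chi2_crit / 2.0
    def root(lo, hi):                       # loglik values straddle target
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if (profile_loglik(mid, xs) - target) * \
               (profile_loglik(lo, xs) - target) <= 0.0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0
    # Bracket each endpoint well outside the interval (about 10 SEs).
    spread = 10.0 * math.sqrt(sum((x - mu_hat) ** 2 for x in xs) / n ** 2) + 1.0
    return root(mu_hat - spread, mu_hat), root(mu_hat + spread, mu_hat)

random.seed(2)
xs = [random.gauss(5.0, 2.0) for _ in range(30)]
lo_end, hi_end = profile_ci(xs)
```

The resulting interval need not be symmetric about the estimate, which is exactly the property that makes profile likelihood intervals attractive for skewed abundance data.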


Wednesday, May 14 2008
2:10pm, 421 Neville Hall
Wes Viles, Dept. of Mathematics and Statistics, University of Maine
Rollercoaster Failure Rates and Mean Residual Life Functions
Thesis defense; Advisor: Dr. Ramesh C. Gupta
The investigation in this thesis was motivated by an extended generalized inverse Gaussian (EGIG) distribution whose failure rate has more than one turning point for certain values of the parameters. We present some general results for studying the relationship between the change points of Glaser’s eta function, the failure rate, and the mean residual life function (MRLF). We also establish an ordering between the number of change points of Glaser’s eta function, the hazard rate, and the MRLF. These results are used to investigate, in detail, the monotonicity of the three functions in the case of the EGIG.
The EGIG model has one more parameter than the three-parameter generalized inverse Gaussian (GIG) model; see Jorgensen (1982). For the EGIG model, maximum likelihood estimation of the four parameters is discussed, and a score test is developed for testing the importance of the additional parameter. An example is provided to illustrate that the EGIG model fits the data better than the GIG model of Jorgensen (1982).
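For intuition about Glaser's eta function (using the Weibull distribution as a simple stand-in, not the EGIG itself, whose analysis is the substance of the thesis), the sketch below evaluates both η(x) = -f'(x)/f(x) and the failure rate in closed form; for shape k > 1 both are increasing, consistent with Glaser's (1980) result that the shape of η governs the monotonicity of the failure rate.

```python
def weibull_eta(x, k, lam):
    """Glaser's eta(x) = -f'(x)/f(x) for the Weibull density,
    computed by differentiating log f(x) analytically."""
    return (1.0 - k) / x + (k / lam) * (x / lam) ** (k - 1)

def weibull_hazard(x, k, lam):
    """Closed-form Weibull failure rate h(x) = f(x) / (1 - F(x))."""
    return (k / lam) * (x / lam) ** (k - 1)

# For shape k > 1, eta is increasing, and Glaser's result predicts an
# increasing failure rate (IFR) -- which the closed form confirms here.
grid = [0.5 + 0.1 * i for i in range(50)]
etas = [weibull_eta(x, 2.0, 1.0) for x in grid]
hazards = [weibull_hazard(x, 2.0, 1.0) for x in grid]
```

The Weibull case is monotone; the point of the EGIG results above is precisely that η, the hazard rate, and the MRLF can each have several change points, with the counts ordered as the thesis describes.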