In this thesis, we elaborate on Bayesian changepoint analysis, focusing on three major topics: approximate sampling via MCMC, exact inference, and uncertainty quantification. Modeling considerations are discussed throughout. Our findings are illustrated by several changepoint examples, with a focus on a well-log drilling dataset.
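As a toy illustration of the Bayesian changepoint setting (a minimal sketch under assumptions of my own — a single changepoint, a Gaussian mean-shift model with known noise level, and segment means profiled out by plugging in the segment averages — not the models or samplers of the thesis), the posterior over the changepoint location can be computed exactly:

```python
import math

def changepoint_posterior(y, sigma=1.0):
    """Posterior over a single changepoint location tau (split y[:tau] / y[tau:])
    in a Gaussian mean-shift model with known noise sd, a uniform prior on tau,
    and the segment means profiled out by plugging in the segment averages."""
    n = len(y)
    logliks = []
    for tau in range(1, n):
        ll = 0.0
        for seg in (y[:tau], y[tau:]):
            m = sum(seg) / len(seg)
            ll += sum(-0.5 * ((v - m) / sigma) ** 2 for v in seg)
        logliks.append(ll)
    mx = max(logliks)                       # subtract the max for stability
    w = [math.exp(l - mx) for l in logliks]
    z = sum(w)
    return {tau: wi / z for tau, wi in zip(range(1, n), w)}

data = [0.1, -0.2, 0.0, 0.1, 5.1, 4.9, 5.2, 5.0]
post = changepoint_posterior(data)
print(max(post, key=post.get))  # -> 4, the index where the mean jumps
```

With more changepoints or unknown variances this enumeration becomes infeasible, which is where MCMC sampling and dedicated exact-inference recursions come in.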
This thesis is situated in the field of nonparametric statistics and deals with the application of ordinal methods to time series and image data. The foundation is formed by so-called ordinal patterns in one and two dimensions, respectively. The first main part of the thesis surveys the broad range of applications of ordinal patterns in time series analysis. They are used to estimate the Hurst exponent of simulated fractional Brownian motions and to solve a classification task on EEG data. Furthermore, permutation entropy, which is based on the distribution of ordinal patterns, is employed to detect head movements of subjects in resting-state magnetic resonance imaging (MRI) data. The second main part of the thesis is concerned with the extension of ordinal patterns to two dimensions in order to make them applicable to image data. After some considerations on fractal surfaces, the focus is on an automated and robust assessment of the quality of structural MRI data.
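As an illustration of the ordinal-pattern machinery described above, a minimal permutation entropy function can be sketched (assuming the standard Bandt-Pompe definition; the function name and the normalization by log(order!) are my choices):

```python
import math
from collections import Counter

def permutation_entropy(series, order=3):
    """Shannon entropy of the distribution of ordinal patterns of the
    given order, normalized by log(order!) so the result lies in [0, 1]."""
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # the ordinal pattern: indices of the window sorted by value
        patterns[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

print(permutation_entropy([1, 2, 3, 4, 5, 6]))       # monotone: minimal entropy
print(permutation_entropy([4, 7, 9, 10, 6, 11, 3]))  # ~0.589
```

Because only the relative order of values within each window matters, the measure is invariant under monotone transformations of the data, which is what makes it attractive for noisy EEG and MRI time series.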
Approaches to the Analysis of Proteomics and Transcriptomics Data based on Statistical Methodology
(2014)
Recent developments in genomics and molecular biology have led to the generation of enormous amounts of complex data of different origins. This is illustrated by the number of published microarray experiment results in Gene Expression Omnibus, which has grown at an exponential pace over the last decade. The challenge of interpreting these vast amounts of data from different technologies has driven the development of new methods in computational biology and bioinformatics. Researchers often want to represent biological phenomena in the most detailed and comprehensive way possible. However, due to technological limitations and other factors such as limited resources, this is not always feasible. On one hand, more detailed and comprehensive research generates data of high complexity that is often difficult to approach analytically, but which gives bioinformatics a chance to draw more precise and deeper conclusions. On the other hand, for low-complexity tasks the data distribution is known and a mathematical model can be fitted; researchers can then draw inferences from this model using well-known, standard methodologies. The price of using standard methodologies is that the biological questions being answered may not unveil the full complexity of the biological meaning. Nowadays it is standard for a biological study to generate large amounts of data that must be analyzed with statistical inference. Sometimes the data present researchers with a low-complexity task that can be handled with standard, popular methodologies, as in "Proteomic analysis of mouse oocytes reveals 28 candidate factors of the reprogrammome". There, we established a protocol for proteomics data that involves preprocessing the raw data and conducting a Gene Ontology overrepresentation analysis based on the hypergeometric distribution.
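The hypergeometric overrepresentation test mentioned at the end can be sketched as follows (a minimal illustration; the function name and the toy numbers are assumptions, not the published protocol):

```python
from math import comb

def hypergeom_overrep_p(k, n, K, N):
    """P(X >= k) for X ~ Hypergeom(N, K, n): the chance of drawing at
    least k annotated genes in a selection of n genes when K of the N
    background genes carry the GO term annotation."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# 12 of 50 selected genes hit a term covering 100 of 10000 background genes,
# while only 50 * 100 / 10000 = 0.5 hits would be expected by chance
p = hypergeom_overrep_p(12, 50, 100, 10000)
print(p < 1e-6)  # a strong overrepresentation signal
```

In practice such p-values are computed for every GO term and then corrected for multiple testing before any term is reported as overrepresented.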
In cases where the data complexity is high and there is no published framework a researcher could follow, randomization can be a useful approach. In the two studies "The mouse oocyte proteome escapes maternal aging" and "CellFateScout - a bioinformatics tool for elucidating small molecule signaling pathways that drive cells in a specific direction" we showed how randomization can be applied to distinct complex tasks. In "The mouse oocyte proteome escapes maternal aging" we constructed a random sample of semantic similarity scores between the oocyte transcriptome and random transcriptome subsets of the size of the oocyte proteome. This allowed us to assess whether the proteome is representative of the transcriptome. Further, we established a novel framework for Gene Ontology overrepresentation analysis that relies on randomization testing: for every Gene Ontology term, we test whether randomly reassigning the labels that mark genes as belonging or not belonging to the term decreases the overall expression level within the term. In the CellFateScout study we validated CellFateScout against other well-known bioinformatics tools, asking whether our plugin can predict small-molecule effects better in terms of expression signatures. For this, we constructed a protocol based on randomization testing: we assess whether the small-molecule effect, described as a set of active signaling pathways detected by our plugin or by other bioinformatics tools, is significantly closer to known small-molecule targets than a random path.
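The label-reassignment idea behind the Gene Ontology randomization framework can be sketched generically (an illustrative permutation test; the function name, the add-one correction, and the toy data are my assumptions, not the papers' exact procedures):

```python
import random

def label_permutation_p(expr, in_term, n_perm=2000, seed=0):
    """One-sided permutation p-value: is the mean expression of genes
    labeled as belonging to a term higher than under random reassignment
    of that label?  expr: expression values; in_term: parallel booleans."""
    rng = random.Random(seed)
    k = sum(in_term)
    observed = sum(e for e, m in zip(expr, in_term) if m) / k
    hits = 0
    for _ in range(n_perm):
        relabeled = rng.sample(expr, k)  # randomly reassign the k term labels
        if sum(relabeled) / k >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# toy data: the term's genes are clearly higher expressed than the rest
expr = [5.1, 4.8, 5.3, 0.9, 1.1, 1.0, 1.2, 0.8]
in_term = [True, True, True, False, False, False, False, False]
p = label_permutation_p(expr, in_term)
print(p < 0.05)
```

The appeal of this scheme is that it makes no distributional assumption about the expression values: the null distribution is generated directly from the data by relabeling.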
We introduce a multi-step machine learning approach and use it to classify data from EEG-based brain-computer interfaces. The approach works very well for high-dimensional EEG data. First, all features are divided into subgroups and linear discriminant analysis is used to obtain a score for each subgroup. The same procedure is then applied to subgroups of the resulting scores, and this is iterated until only one score remains, which is used for classification. In this way we avoid estimating the high-dimensional covariance matrix of all features. We investigate the classification performance with special attention to the small-sample-size case. For the normal model, we study the asymptotic error rate as the dimension p and the sample size n tend to infinity; this indicates how to define the sizes of the subgroups at each step. In addition, we present a theoretical error bound for the spatio-temporal normal model with separable covariance matrix, which leads to a recommendation on how subgroups should be formed for this kind of data. Finally, techniques such as wavelets and independent component analysis are used to extract features from EEG-based brain-computer interface data.
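The multi-step procedure described above — partition the features into subgroups, score each subgroup with LDA, then iterate on the scores — can be sketched as follows (a minimal pure-Python illustration with Fisher's two-class LDA, a small ridge term for numerical stability, and toy data; all names and parameter choices are mine, not the thesis's recommended subgroup sizes):

```python
import random

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lda_scores(X, y, idx):
    """Fisher LDA on the feature subset idx: project each sample onto
    S_pooled^-1 (mu1 - mu0) and return one score per sample."""
    sub = [[row[j] for j in idx] for row in X]
    g0 = [s for s, lab in zip(sub, y) if lab == 0]
    g1 = [s for s, lab in zip(sub, y) if lab == 1]
    d = len(idx)
    m0 = [sum(c) / len(g0) for c in zip(*g0)]
    m1 = [sum(c) / len(g1) for c in zip(*g1)]
    S = [[0.0] * d for _ in range(d)]
    for grp, m in ((g0, m0), (g1, m1)):
        for s in grp:
            for i in range(d):
                for j in range(d):
                    S[i][j] += (s[i] - m[i]) * (s[j] - m[j])
    S = [[v / (len(sub) - 2) + (1e-6 if i == j else 0.0)  # small ridge
          for j, v in enumerate(row)] for i, row in enumerate(S)]
    w = solve(S, [a - b for a, b in zip(m1, m0)])
    return [sum(wi * si for wi, si in zip(w, s)) for s in sub]

def multistep_lda(X, y, group_size=2):
    """Replace each feature subgroup by its LDA score; iterate until a
    single score per sample remains."""
    feats = X
    while len(feats[0]) > 1:
        p = len(feats[0])
        groups = [list(range(i, min(i + group_size, p)))
                  for i in range(0, p, group_size)]
        cols = [lda_scores(feats, y, g) for g in groups]
        feats = [list(c) for c in zip(*cols)]
    return [f[0] for f in feats]

# toy demo: 4 features, 20 samples per class, class 1 shifted by 1.5
rng = random.Random(1)
X, y = [], []
for lab in (0, 1):
    for _ in range(20):
        X.append([rng.gauss(lab * 1.5, 1.0) for _ in range(4)])
        y.append(lab)
scores = multistep_lda(X, y)
m0 = sum(s for s, lab in zip(scores, y) if lab == 0) / 20
m1 = sum(s for s, lab in zip(scores, y) if lab == 1) / 20
print(m1 > m0)  # the final score separates the class means
```

The key point is that each LDA only ever inverts a group_size-by-group_size covariance matrix, so the full p-by-p covariance of all features is never estimated.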
Parsimonious Histograms
(2010)
The dissertation is concerned with the construction of data-driven histograms. Histograms are the most elementary density estimators of all, yet they require the specification of the number and width of their bins. This thesis provides two new construction methods that deliver adaptive histograms in which the required parameters are determined automatically. Both methods follow the principle of parsimony, i.e. the histograms are solutions of predetermined optimization problems; in both cases, though under different aspects, the number of bins is minimized. The dissertation presents the algorithms that solve these optimization problems and illustrates them with a number of numerical experiments. Important properties of the estimators are established. Finally, the newly developed methods are compared with standard methods in an extensive simulation study: using synthetic samples of different sizes and distributions, the histograms are evaluated by dedicated performance criteria. As one main result, the proposed methods yield histograms with considerably fewer bins and an excellent ability to detect peaks.
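The abstract does not spell out the two optimization problems, so no attempt is made to reproduce them here. As a generic illustration of a data-driven histogram whose bin count is chosen automatically, a penalized-likelihood rule with a Birge-Rozenholc-type penalty can be sketched (function name, penalty choice, and toy data are assumptions, not the thesis's methods):

```python
import math
import random

def penalized_loglik_bins(data, max_bins=50):
    """Choose the number of equal-width bins by maximizing the histogram
    log-likelihood minus a Birge-Rozenholc-type penalty D - 1 + (log D)**2.5.
    A generic data-driven rule, shown for illustration only."""
    n = len(data)
    lo, hi = min(data), max(data)
    best_D, best_score = 1, -math.inf
    for D in range(1, max_bins + 1):
        width = (hi - lo) / D
        counts = [0] * D
        for x in data:
            counts[min(int((x - lo) / width), D - 1)] += 1
        # log-likelihood of the histogram density with heights c / (n * width)
        loglik = sum(c * math.log(c / (n * width)) for c in counts if c)
        score = loglik - (D - 1 + math.log(D) ** 2.5)
        if score > best_score:
            best_D, best_score = D, score
    return best_D

rng = random.Random(0)
sample = [rng.gauss(0.0, 1.0) for _ in range(500)]
print(penalized_loglik_bins(sample))  # selected bin count
```

The penalty term embodies the parsimony principle in miniature: each extra bin must buy enough likelihood to pay for the added complexity.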