Doctoral Thesis
Year of publication
- 2014
Document Type
- Doctoral Thesis
Language
- English
Has Fulltext
- yes
Is part of the Bibliography
- no
Keywords
- aging
- algebra
- bioinformatics
- biological sources
- computational methods
- conditional random field (CRF)
- drug
- expression data
- functional analysis
- high-throughput
- interaction surface
- labeling problem
- microarray
- mRNA
- oocyte
- prediction
- probabilistic interface labeling problem
- protein
- protein interaction site
- proteome
- randomization
- reprogramming
- small molecule
- statistics
Institute
- Institut für Mathematik und Informatik
This PhD thesis presents a conditional random field (CRF) approach and its implementation for predicting the interaction sites of protein homo- and heterodimers from the spatial structure of one protein partner of a complex. The method includes a substantially simplified edge feature model. A novel node feature class, called "change in free energy", is introduced. The Online Large-Margin algorithm is adapted to train the model parameters on a classified reference set of proteins. Combining the new node feature class with the standard node feature class, relative accessible surface area, yields a significantly higher prediction accuracy. The quality of the predictions is measured by the area under the receiver operating characteristic curve (AUC).
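The evaluation measure named in the abstract, the area under the ROC curve, can be sketched in a few lines. The function below is an illustrative stand-in, not the thesis's implementation; it uses the rank-based identity that the AUC equals the fraction of (positive, negative) pairs the classifier orders correctly, with ties counted as half.

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) identity.

    labels: 1 for interface residues, 0 for non-interface residues.
    scores: per-residue scores (higher = more likely interface).
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count (positive, negative) pairs ranked correctly; ties count half.
    correct = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (correct + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: a perfectly separating score ordering yields AUC = 1.0
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))
```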
The construction of Lévy processes from convolution semigroups and that of product systems from subproduct systems are formally quite similar. Since quantum stochastics offers many more comparable situations, we formulate a general categorical concept (comonoidal systems), construct the corresponding inductive systems, and establish, under suitable assumptions, general properties of the corresponding inductive limits. Comonoidal systems in different tensor categories play a role in all chapters of the thesis. Additive deformations are certain comonoidal systems of algebras, obtained by deforming the algebra structure of a bialgebra. If the bialgebra is even a Hopf algebra, compatibility with the antipode follows automatically; this remains true for braided Hopf algebras as well. Subproduct systems are comonoidal systems of Hilbert spaces. The thesis addresses the question of which dimensions are possible for finite-dimensional subproduct systems. In discrete time, this reduces to the combinatorial problem of determining the complexities of factorial languages; the rational and continuous time cases are also discussed. A further source of comonoidal systems are universal products, which are used in quantum probability to model independence. For the (r,s)-products, recently introduced by S. Lachs, we determine the corresponding product of representations by means of a generalized GNS construction.
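To illustrate the combinatorial object mentioned above: a factorial language is a set of words closed under taking factors (subwords), and its complexity c(n) counts the words of length n it contains. The brute-force sketch below, which is an illustration and not taken from the thesis, computes c(n) for a language given by a list of forbidden factors.

```python
from itertools import product

def complexity(alphabet, forbidden, n_max):
    """Complexity sequence c(0)..c(n_max) of the factorial language over
    `alphabet` consisting of all words avoiding every factor in `forbidden`."""
    def allowed(word):
        return not any(f in word for f in forbidden)
    return [
        sum(1 for w in map("".join, product(alphabet, repeat=n)) if allowed(w))
        for n in range(n_max + 1)
    ]

# Binary words avoiding the factor "11": the complexities follow
# the Fibonacci numbers.
print(complexity("01", ["11"], 6))  # [1, 2, 3, 5, 8, 13, 21]
```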
High-throughput expression data have become the norm in molecular biology research. However, the analysis of expression data is statistically and computationally challenging and has not kept up with their generation, which has resulted in large amounts of unexplored data in public repositories. After pre-processing and quality control, the typical gene expression analysis workflow follows two main steps. First, the complexity of the data is reduced with a feature selection method that removes genes that are redundant or irrelevant for the biological question motivating the experiment. Second, the relevant genes are investigated to extract biological information that can aid the interpretation of the results; methods such as functional annotation, clustering, network analysis, and combinations thereof are useful for this purpose. Here, I investigated and present solutions to three problems encountered in the expression data analysis workflow. First, I reduced the complexity of high-throughput expression data by selecting relevant genes in the context of the sample classification problem, which aims to assign unknown samples to one of the known classes, such as healthy and diseased. For this purpose, I developed the relative signal-to-noise ratio (rSNR), a novel feature selection method that was shown to perform significantly better than other methods with similar objectives. Second, to better understand complex phenotypes using high-throughput expression data, I developed a pipeline to identify the underlying biological units as well as their interactions. These biological units were assumed to be represented by groups of genes working in synchronization to perform a given function or to participate in common biological processes or pathways.
Thus, to identify biological units, those genes that had been identified as relevant to the phenotype under consideration through feature selection methods were clustered based on both their functional annotations and expression profiles. Relationships between the associated biological functions, processes, and/or pathways were investigated by means of a co-expression network. The developed pipeline provides a new perspective to the analysis of high-throughput expression data by investigating interactions between biological units. Finally, I contributed to a project where a network describing pluripotency in mouse was used to infer the corresponding network in human. Biological networks are context-specific. Combining network information with high-throughput expression data can explain the control mechanisms underlying changes and maintenance of complex phenotypes. The human network was constructed on the basis of orthology between mouse and human genes and proteins. It was validated with available data in the literature. The methods and strategies proposed here were mainly trained and tested on microarray expression data. However, they can be easily adapted to next-generation sequencing and proteomics data.
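The abstract does not reproduce the exact rSNR formula, so as a hedged illustration the sketch below ranks genes by the classical signal-to-noise ratio |μ₀ − μ₁| / (σ₀ + σ₁), the kind of two-class score on which such feature selection methods build; the function name and toy data are invented for the example.

```python
import numpy as np

def snr_ranking(X, y, k):
    """Rank genes by the classical signal-to-noise ratio
    |mean_0 - mean_1| / (sd_0 + sd_1) and return the top-k gene indices.

    X: samples x genes expression matrix; y: binary class labels.
    (Illustrative stand-in only: the rSNR's exact formula is a relative
    variant not given in the abstract.)
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    a, b = X[y == 0], X[y == 1]
    snr = np.abs(a.mean(axis=0) - b.mean(axis=0)) / (
        a.std(axis=0, ddof=1) + b.std(axis=0, ddof=1)
    )
    return np.argsort(snr)[::-1][:k]

# Toy data: gene 0 separates the classes, genes 1 and 2 do not.
X = [[10, 1, 5], [9, 2, 6], [1, 1, 5], [2, 2, 6]]
y = [0, 0, 1, 1]
print(snr_ranking(X, y, 1))  # selects gene 0
```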
Approaches to the Analysis of Proteomics and Transcriptomics Data based on Statistical Methodology (2014)
Recent developments in genomics and molecular biology have led to the generation of enormous amounts of complex data of different origin. This is demonstrated by the number of published microarray experiments in the Gene Expression Omnibus, which has grown at an exponential pace over the last decade. The challenge of interpreting these vast amounts of data from different technologies has driven the development of new methods in computational biology and bioinformatics. Researchers often want to represent biological phenomena in the most detailed and comprehensive way. However, due to technological limitations and other factors, such as limited resources, this is not always possible. On the one hand, more detailed and comprehensive research generates data of high complexity that is often difficult to approach analytically, although it gives bioinformatics a chance to draw more precise and deeper conclusions. On the other hand, for low-complexity tasks the data distribution is known and a mathematical model can be fitted; inference from such a model can then rely on well-known, standard methodologies. The price of using standard methodologies is that the biological questions answered may not unveil the full complexity of the biological meaning. Nowadays it is standard for a biological study to generate large amounts of data that need to be analyzed with statistical inference. Sometimes the data present researchers with a low-complexity task that can be handled with standard, popular methodologies, as in "Proteomic analysis of mouse oocytes reveals 28 candidate factors of the 'reprogrammome'". There, we established a protocol for proteomics data that involves preprocessing the raw data and conducting a Gene Ontology overrepresentation analysis based on the hypergeometric distribution.
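The hypergeometric overrepresentation test mentioned above can be sketched in a few lines. This is a generic illustration of the test, not the published protocol, and the function name is invented for the example: given a study set of selected genes, it computes the one-sided probability of seeing at least as many genes annotated to a Gene Ontology term as were observed.

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """One-sided overrepresentation p-value P(X >= k) for a GO term.

    N: genes in the background, K: background genes annotated to the term,
    n: genes in the study set, k: study genes annotated to the term.
    """
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Toy example: 5 of 10 selected genes carry a term annotating only
# 20 of 1000 background genes -- a strong overrepresentation.
p = hypergeom_pvalue(1000, 20, 10, 5)
print(f"{p:.2e}")
```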
In cases where the data complexity is high and there is no published framework a researcher could follow, randomization can be a useful approach. In two studies, "The mouse oocyte proteome escapes maternal aging" and "CellFateScout - a bioinformatics tool for elucidating small molecule signaling pathways that drive cells in a specific direction", we showed how randomization can be performed for distinct complex tasks. In the first study, we constructed a random sample of semantic similarity scores between the oocyte transcriptome and random transcriptome subsets of the size of the oocyte proteome; from this we could assess whether the proteome is representative of the transcriptome. Furthermore, we established a novel framework for Gene Ontology overrepresentation analysis based on randomization testing: for every Gene Ontology term, we test whether randomly reassigning the gene labels of belonging or not belonging to that term decreases the overall expression level in the term. In the CellFateScout study, we validated CellFateScout against other well-known bioinformatics tools, asking whether our plugin predicts small-molecule effects better in terms of expression signatures. For this, we constructed a protocol that uses randomization testing: we assess whether the small-molecule effect, described as a set of active signaling pathways detected by our plugin or by other bioinformatics tools, is significantly closer to the known small-molecule targets than a random path.
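The randomization idea described above, reassigning term-membership labels and comparing the observed statistic to the resulting null distribution, can be sketched as a generic permutation test. The code below is an illustration under these assumptions, not the published implementation; names and toy data are invented for the example.

```python
import random

def permutation_pvalue(values, in_term, n_perm=10000, seed=0):
    """Empirical p-value that the mean expression of the genes in a term
    is at least as high as under random reassignment of the
    in-term / not-in-term labels.

    values: expression level per gene; in_term: boolean term membership.
    """
    rng = random.Random(seed)
    members = [v for v, m in zip(values, in_term) if m]
    k = len(members)
    observed = sum(members) / k
    hits = sum(
        1 for _ in range(n_perm)
        if sum(rng.sample(values, k)) / k >= observed  # random relabeling
    )
    return (hits + 1) / (n_perm + 1)  # add-one keeps p strictly positive

# Toy term: the three highly expressed genes among ten form the term.
expr = [5, 5, 5, 1, 1, 1, 1, 1, 1, 1]
member = [True, True, True] + [False] * 7
print(permutation_pvalue(expr, member, n_perm=2000, seed=1))
```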