A lot of research data has become available since the outbreak of the COVID-19
pandemic in 2019. Connecting this data is essential for the understanding of the
SARS-CoV-2 virus and the fight against the pandemic.
Amongst biological and biomedical research data, computational models targeting
COVID-19 have been emerging and their number is growing constantly. They are a
central part of the field of Systems Biology, which aims to understand the mechanisms
and behaviour of biological systems. Model predictions help to understand the
mechanisms of the novel coronavirus and the life-threatening disease it is causing.
Both biomedical research data and modelling data regarding COVID-19 have
previously been stored in separate domain-specific graph databases. MaSyMoS,
short for Management System for Models and Simulations, is a graph database for
storing simulation studies of biological and biochemical systems. The CovidGraph
project integrates research data regarding COVID-19 and the coronavirus family
from various data resources in a knowledge graph.
In this thesis, we integrate simulation models from MaSyMoS, including models
targeting COVID-19, into the CovidGraph. To this end, we present a concept for
the integration of simulation studies into the CovidGraph and for their linkage
through ontology terms and reference publications. Ultimately, we connect data
from the field of systems biology with biomedical research data in a graph database.
This thesis revolves around a new concept of independence of algebras. This independence fits nicely into the framework of universal products, which were introduced to classify independence relations in quantum probability theory; the associated product is called the (r,s)-product and depends on two complex parameters r and s. Based on this product, we develop a theory which works without involutive algebras or states. The following aspects are considered:

1. Classification: Universal products are defined on the free product of algebras (the coproduct in the category of algebras) and model notions of independence in quantum probability theory. We distinguish universal products according to their behaviour on elements of length two, calling them (r,s)-universal products with complex parameters r and s. In the case r = s = 1, Muraki was able to show that there exist exactly five universal products (Muraki's five). For r = s nonzero we get five one-parameter families (q-Muraki's five). We prove that in the case r ≠ s the (r,s)-product, a two-parameter deformation of the Boolean product, is the only universal product satisfying our set of axioms. The corresponding independence is called (r,s)-independence.

2. Dual pairs and GNS construction: By means of the GNS construction, one can associate a product of representations with every positive universal product. Since the (r,s)-product does not preserve positivity, we need a substitute for the usual GNS construction for states on involutive algebras. In joint work with M. Gerhold, the product of representations associated with the (r,s)-product was determined, where we considered representations on dual pairs instead of Hilbert spaces. As we show, this product of representations is essentially different from the Boolean product.

3. Reduction and quantum Lévy processes: U. Franz introduced a category-theoretical concept which allows a reduction of Boolean, monotone and antimonotone independence to tensor independence. This existing reduction can be modified so as to apply to (r,s)-independence. Quantum Lévy processes with (r,s)-independent increments can, in analogy with the tensor case, be realized as solutions of quantum stochastic differential equations. To prove this theorem, we use the aforementioned reduction principle in the sense of U. Franz and a generalization of M. Schürmann's theory for symmetric Fock spaces over dual pairs. As the main result, we obtain the realization of every (r,s)-Lévy process as the solution of a quantum stochastic differential equation. When one, more generally, defines Lévy processes categorically using U. Franz's definition of independence for tensor categories with inclusions, compatibility of the inclusions with the tensor category structure plays an important role. For this thesis, such a compatibility condition was formulated and proved to be equivalent to the characterization proposed by M. Gerhold.

4. Limit distributions: We work with so-called dual semigroups in the sense of D. V. Voiculescu (comonoids in the tensor category of algebras with the free product). The polynomial algebra with primitive comultiplication is an example of such a dual semigroup. We use a "weakened" reduction, which we call reduction of convolution and which essentially consists of a cotensor functor constructed from the symmetric tensor algebra. It turns dual semigroups into commutative bialgebras and also translates the convolution exponentials.
This method, which can be described nicely in categorical language, allows us to formulate central limit theorems for (r,s)-independence and to calculate the corresponding limit distributions (convergence in moments). We calculate the moments appearing in the central limit theorem for the (r,s)-product: the even moments are homogeneous polynomials in r and s with the Eulerian numbers as coefficients; the odd moments vanish. The moment sequence that we obtain from the central limit theorem for an arbitrary universal product is the moment sequence of a probability measure on the real line if and only if r = s ≥ 1. In this case we present an explicit formula for the probability measure.
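For orientation, the Eulerian numbers mentioned above can be tabulated with their standard recursion A(n, k) = (k + 1) A(n-1, k) + (n - k) A(n-1, k-1). The following is only an illustrative sketch of these coefficients, not a formula or code from the thesis.

```python
# Illustrative sketch (not from the thesis): tabulate the Eulerian numbers
# A(n, k), which appear as coefficients of the even limit moments above.
# Standard recursion: A(n, k) = (k + 1)*A(n-1, k) + (n - k)*A(n-1, k-1), A(0, 0) = 1.

def eulerian_triangle(n_max):
    """Return rows [A(n, 0), ..., A(n, n-1)] for n = 1, ..., n_max."""
    rows = {0: [1]}
    for n in range(1, n_max + 1):
        prev = rows[n - 1]
        row = []
        for k in range(n):
            left = (k + 1) * (prev[k] if k < len(prev) else 0)
            right = (n - k) * (prev[k - 1] if 0 <= k - 1 < len(prev) else 0)
            row.append(left + right)
        rows[n] = row
    return [rows[n] for n in range(1, n_max + 1)]

if __name__ == "__main__":
    for n, row in enumerate(eulerian_triangle(5), start=1):
        print(n, row)   # 1 [1], 2 [1, 1], 3 [1, 4, 1], 4 [1, 11, 11, 1], ...
```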
A New Kind of Permutation Entropy Used to Classify Sleep Stages from Invisible EEG Microstructure
(2017)
We consider Iterated Function Systems (IFS) on the real line and on the complex plane. Every IFS defines a self-similar measure supported on a self-similar set. We study the transfer operator (which acts on the space of continuous functions on the self-similar set) and the Hutchinson operator (which acts on the space of Borel regular measures on the self-similar set). We show that the transfer operator has a countably infinite set of polynomial eigenfunctions. These eigenfunctions can be regarded as generalized Bernoulli polynomials. The polynomial eigenfunctions define a polynomial approximation of the self-similar measure. We also study the moments of the self-similar measure and give recursions for computing them. Further, we develop a numerical method based on Markov chains to study the spectrum of the Hutchinson and transfer operators. This method provides numerical approximations of the invariant measure, for which we give error bounds in terms of the Wasserstein distance. The standard example in this thesis is the parametric family of Bernoulli convolutions.
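As a simple illustration of the standard example, a Bernoulli convolution can be approximated by random iteration of its defining IFS {x ↦ ax − 1, x ↦ ax + 1} with equal probabilities. The sketch below is only the classical chaos game, not the Markov-chain method developed in the thesis; the parameter value and sample sizes are arbitrary.

```python
# Illustrative sketch (not the thesis's Markov-chain method): approximate the
# self-similar measure of the IFS {x -> a*x - 1, x -> a*x + 1}, each map chosen
# with probability 1/2 -- the Bernoulli convolution with parameter a -- by
# random iteration ("chaos game").
import random

def bernoulli_convolution_samples(a=0.6, n_samples=100_000, burn_in=50):
    samples = []
    for _ in range(n_samples):
        x = 0.0
        # After burn_in random iterations, x is (numerically) on the attractor.
        for _ in range(burn_in):
            sign = 1.0 if random.random() < 0.5 else -1.0
            x = a * x + sign
        samples.append(x)
    return samples

if __name__ == "__main__":
    pts = bernoulli_convolution_samples()
    # Empirical moments of the approximated self-similar measure.
    m1 = sum(pts) / len(pts)
    m2 = sum(p * p for p in pts) / len(pts)
    print("first moment ~", round(m1, 3), " second moment ~", round(m2, 3))
    # Exact values for comparison: m1 = 0 and m2 = 1 / (1 - a**2).
```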
Self-affine tiles and fractals are known as examples in analysis and topology, as models of quasicrystals and biological growth, as unit intervals of generalized number systems, and as attractors of dynamical systems. The author has implemented software which can find new examples and handle large databases of self-affine fractals. This thesis establishes the algebraic foundation of the algorithms of the IFStile package. Lifting and projection of algebraic and rational iterated function systems and many properties of the resulting attractors are discussed.
Anaplasma phagocytophilum and Anaplasma ovis–Emerging Pathogens in the German Sheep Population
(2021)
Knowledge on the occurrence of the pathogenic tick-borne bacteria Anaplasma phagocytophilum and Anaplasma ovis is scarce in sheep from Germany. In 2020, owners of five flocks reported ill-thrift in lambs and ewes together with tick infestation. Out of 67 affected sheep, 55 animals were clinically examined, and hematological values, blood chemistry and fecal examinations were performed to investigate the underlying disease causes. Serological tests (cELISA, IFAT) and qPCR were applied to all affected sheep to rule out A. phagocytophilum and A. ovis as a differential diagnosis. Ticks were collected from selected pastures and tested by qPCR. Most animals (n = 43) suffered from selenium deficiency, and endoparasites were detected in each flock. Anaplasma spp. antibodies were detected in 59% of the examined sheep. Seventeen animals from all flocks tested positive for A. phagocytophilum by qPCR, and A. phagocytophilum was also detected in eight pools of Ixodes ricinus. Anaplasma phagocytophilum isolates from sheep and ticks were genotyped using three genes (16S rRNA, msp4 and groEL). Anaplasma ovis DNA was identified in six animals from one flock. Clinical, hematological and biochemical changes were not significantly associated with Anaplasma spp. infection. The 16S rRNA analysis revealed known variants of A. phagocytophilum, whereas the msp4 and groEL analyses showed new genotypes. Further investigations are necessary to evaluate the dissemination and health impact of both pathogens in the German sheep population, particularly in the case of comorbidities.
In phylogenetics, evolutionary relationships of different species are represented by phylogenetic trees.
In this thesis, we are mainly concerned with the reconstruction of ancestral sequences and the accuracy of this reconstruction given a rooted binary phylogenetic tree.
For example, we wish to estimate the DNA sequences of the ancestors given the observed DNA sequences of the species living today.
In particular, we are interested in reconstructing the DNA sequence of the last common ancestor of all species under consideration. Note that this last common ancestor corresponds to the root of the tree.
There exist various methods for the reconstruction of ancestral sequences.
A widely used principle for ancestral sequence reconstruction is the principle of parsimony (Maximum Parsimony).
This principle states that the simplest explanation is the best.
Applied to the reconstruction of ancestral sequences this means that a sequence which requires the fewest evolutionary changes along the tree is reconstructed.
Thus, the number of changes is minimized, which explains the name of Maximum Parsimony.
Instead of estimating a whole DNA sequence, Maximum Parsimony considers each position in the sequence separately. Thus, in the following, each sequence position is regarded separately, and we call a single position in a sequence a state.
It can happen that the state of the last common ancestor is reconstructed unambiguously, for example as A. On the other hand, Maximum Parsimony might be indecisive between two DNA nucleotides, say for example A and C.
In this case, the last common ancestor will be reconstructed as {A,C}.
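For illustration, such state sets can be computed with the bottom-up phase of Fitch's algorithm: the set at an internal vertex is the intersection of its children's sets if this intersection is non-empty, and their union otherwise. The following is a minimal sketch, not code from the thesis.

```python
# Minimal sketch (not from the thesis): bottom-up phase of Fitch's algorithm,
# which yields the Maximum Parsimony state set at the root of a rooted binary tree.

def fitch_root_set(tree):
    """Return the Fitch state set at the root.

    `tree` is either a leaf state (e.g. "A") or a pair (left_subtree, right_subtree).
    """
    if not isinstance(tree, tuple):              # leaf: its state set is {state}
        return {tree}
    left, right = (fitch_root_set(child) for child in tree)
    common = left & right
    # Intersection if non-empty (no additional change needed), union otherwise.
    return common if common else left | right

if __name__ == "__main__":
    print(fitch_root_set((("A", "A"), ("A", "C"))))   # {'A'}: unambiguous
    print(fitch_root_set((("A", "C"), ("A", "C"))))   # {'A', 'C'}: ambiguous
```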
Therefore we consider, after an introduction and some preliminary definitions, the following question in Section 3: how many present-day species need to be in a certain state, for example A, such that the Maximum Parsimony estimate of the last common ancestor is also {A}?
The answer to this question depends on the tree topology as well as on the number of different states.
In Section 4, we provide a sufficient condition for Maximum Parsimony to recover the ancestral state at the root correctly from the observed states at the leaves.
The so-called reconstruction accuracy for the reconstruction of ancestral states is introduced in Section 5. The reconstruction accuracy is the probability that the true root state is indeed reconstructed and always takes two processes into account: on the one hand the approach to reconstruct ancestral states, and on the other hand the way how the states evolve along the edges of the tree. The latter is given by an evolutionary model.
In the present thesis, we focus on a simple symmetric model, the Neyman model.
The symmetry of the model means, for example, that a change from A to C is as likely as a change from C to A.
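The reconstruction accuracy can be illustrated by simulation: states evolve from the root down to the leaves under a symmetric two-state model, and Maximum Parsimony is applied to the resulting leaf states. The sketch below is only illustrative and not code from the thesis; in particular, scoring an ambiguous root estimate with 1 divided by the size of the estimated set is one common convention, not necessarily the one used here.

```python
# Hedged sketch (not from the thesis): estimate the reconstruction accuracy of
# Maximum Parsimony under a symmetric two-state model. On every edge the state
# changes with the same probability p.
import random

def evolve(tree, state, p):
    """Evolve `state` down `tree`; return the tree of leaf states."""
    if not isinstance(tree, tuple):                  # leaf reached
        return state
    children = []
    for child in tree:
        child_state = 1 - state if random.random() < p else state
        children.append(evolve(child, child_state, p))
    return tuple(children)

def fitch_root_set(tree):
    if not isinstance(tree, tuple):
        return {tree}
    left, right = (fitch_root_set(c) for c in tree)
    common = left & right
    return common if common else left | right

def mp_accuracy(tree, p, n_runs=100_000):
    score = 0.0
    for _ in range(n_runs):
        root_state = random.randint(0, 1)            # uniform root state
        leaf_states = evolve(tree, root_state, p)
        estimate = fitch_root_set(leaf_states)
        if root_state in estimate:
            score += 1.0 / len(estimate)             # partial credit for ties
    return score / n_runs

if __name__ == "__main__":
    # Fully balanced ultrametric tree with four leaves, change probability 0.1:
    tree = ((None, None), (None, None))              # None marks a leaf
    print(round(mp_accuracy(tree, p=0.1), 3))
```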
Intuitively, one could expect that the reconstruction accuracy is highest when all present-day species are taken into account. However, it has long been known that the reconstruction accuracy can improve when some taxa are disregarded in the estimation.
This is bad news for Maximum Parsimony as a criterion for ancestral state reconstruction, and it raises the question of whether there is at least a lower bound for the reconstruction accuracy, i.e. whether considering all present-day species is at least as good as considering just a single one.
In Section 5, we start with considering ultrametric trees, which are trees where the expected number of substitutions from the root to each leaf is the same.
For such trees, we investigate a lower bound for the reconstruction accuracy when the number of different states at the leaves of the tree is 3 or 4.
Subsequently in Section 6, in order to generalize this result, we introduce a new method for ancestral state reconstruction: the coin-toss method.
We obtain new results for the reconstruction accuracy of Maximum Parsimony by relating Maximum Parsimony to the coin-toss method.
Some of these results do not require the underlying tree to be ultrametric.
Then, in Section 7 we investigate the influence of specific tree topologies on the reconstruction accuracy of Maximum Parsimony. In particular, we consider balanced and imbalanced trees as the balance of a tree may have an influence on the reconstruction accuracy.
We end by introducing the Colless index in Section 8, an index which measures the degree of balance of a rooted binary tree, and we analyze its extremal properties.
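For reference, the Colless index sums, over all internal vertices of a rooted binary tree, the absolute difference of the numbers of leaves in the two pending subtrees; the maximally imbalanced caterpillar tree attains the largest possible value. A minimal sketch (not code from the thesis):

```python
# Minimal sketch (not from the thesis): the Colless index of a rooted binary
# tree given as nested tuples, where any non-tuple entry is a leaf.

def colless(tree):
    """Return (number of leaves, Colless index) of `tree`."""
    if not isinstance(tree, tuple):          # a leaf contributes nothing
        return 1, 0
    (n_left, c_left), (n_right, c_right) = colless(tree[0]), colless(tree[1])
    return n_left + n_right, c_left + c_right + abs(n_left - n_right)

if __name__ == "__main__":
    caterpillar = ((((1, 2), 3), 4), 5)      # maximally imbalanced, 5 leaves
    balanced = ((1, 2), (3, 4))              # fully balanced, 4 leaves
    print(colless(caterpillar))              # (5, 6)
    print(colless(balanced))                 # (4, 0)
```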
Influenza A Virus (IAV) infection followed by bacterial pneumonia often leads to hospitalization and death in individuals from high-risk groups. Following infection, IAV triggers viral RNA replication, which in turn disrupts the healthy gut microbial community, while the gut microbiota plays an instrumental role in protecting the host by conferring colonization resistance. Although the underlying mechanisms of IAV infection have been unraveled, the complex mechanisms by which the gut microbiota induces a host immune response following IAV infection remain elusive. In this work, we developed a novel Maximal-Clique based Community Detection algorithm for Weighted undirected Networks (MCCD-WN) and compared its performance with other existing algorithms using three sets of benchmark networks. Moreover, we applied our algorithm to gut microbiome data derived from fecal samples of both healthy and IAV-infected pigs over a series of time points. The results we obtained from the real-life IAV dataset unveil the role of the microbial families Ruminococcaceae, Lachnospiraceae, Spirochaetaceae and Prevotellaceae in the gut microbiome of the IAV-infected cohort. Furthermore, the additional integration of metaproteomic data enabled not only the identification of microbial biomarkers, but also the elucidation of their functional roles in protecting the host following IAV infection. Our network analysis reveals a fast recovery of the infected cohort after the second IAV infection and provides insights into the crucial roles of the Desulfovibrionaceae and Lactobacillaceae families in combating Influenza A Virus infection. The source code of the community detection algorithm can be downloaded from https://github.com/AniBhar84/MCCD-WN.
Approaches to the Analysis of Proteomics and Transcriptomics Data based on Statistical Methodology
(2014)
Recent developments in genomics and molecular biology have led to the generation of an enormous amount of complex data of different origin. This is demonstrated by the number of published microarray experiments in the Gene Expression Omnibus, which has grown at an exponential pace over the last decade. The challenge of interpreting these vast amounts of data from different technologies led to the development of new methods in the fields of computational biology and bioinformatics.

Researchers often want to represent biological phenomena in the most detailed and comprehensive way. However, due to technological limitations and other factors, such as limited resources, this is not always possible. On the one hand, more detailed and comprehensive research generates data of high complexity that is often difficult to approach analytically, but which gives bioinformatics a chance to draw more precise and deeper conclusions. On the other hand, for low-complexity tasks the data distribution is known and we can fit a mathematical model; to draw inferences from this model, researchers can use well-known standard methodologies. The price of using standard methodologies is that the biological questions we answer may not unveil the whole complexity of the biological meaning.

Nowadays it is standard that a biological study involves the generation of large amounts of data that need to be analyzed with statistical inference. Sometimes the data confront researchers with a low-complexity task that can be handled with standard and popular methodologies, as in "Proteomic analysis of mouse oocytes reveals 28 candidate factors of the 'reprogrammome'". There, we established a protocol for proteomics data that involves preprocessing of the raw data and conducting a Gene Ontology overrepresentation analysis utilizing the hypergeometric distribution. In cases where the data complexity is high and there is no published framework a researcher could follow, randomization can be an approach to exploit. In the two studies "The mouse oocyte proteome escapes maternal aging" and "CellFateScout - a bioinformatics tool for elucidating small molecule signaling pathways that drive cells in a specific direction", we showed how randomization can be performed for distinct complex tasks. In "The mouse oocyte proteome escapes maternal aging" we constructed a random sample of semantic similarity scores between the oocyte transcriptome and random transcriptome subsets of the size of the oocyte proteome. In this way, we could assess whether the proteome is representative of the transcriptome. Furthermore, we established a novel framework for Gene Ontology overrepresentation analysis that involves randomization testing: every Gene Ontology term is tested as to whether randomly reassigning all gene labels of belonging or not belonging to this term decreases the overall expression level in this term. In "CellFateScout - a bioinformatics tool for elucidating small molecule signaling pathways that drive cells in a specific direction" we validated CellFateScout against other well-known bioinformatics tools. We asked whether our plugin is able to predict small molecule effects better in terms of expression signatures. For this, we constructed a protocol that uses randomization testing: we assess whether the small molecule effect, described as a set of active signaling pathways as detected by our plugin or other bioinformatics tools, is significantly closer to known small molecule targets than a random path.
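As an illustration of the two kinds of tests mentioned above, here is a minimal, hedged sketch: a hypergeometric overrepresentation p-value for a Gene Ontology term, and a simple label-randomization p-value for the expression level within a term. This is not the published protocol; all numbers and variable names are hypothetical.

```python
# Hedged sketch, not the published protocols: (1) hypergeometric test for
# overrepresentation of a GO term in a gene list, (2) randomization test that
# reassigns term-membership labels and compares the mean expression in the term.
import random
from scipy.stats import hypergeom

def go_overrepresentation_pvalue(n_background, n_term, n_selected, n_overlap):
    """P(X >= n_overlap) when n_selected genes are drawn from n_background
    genes, of which n_term carry the GO term annotation."""
    return hypergeom.sf(n_overlap - 1, n_background, n_term, n_selected)

def randomization_pvalue(expression, in_term, n_perm=10_000, seed=1):
    """Fraction of random reassignments of the term labels for which the mean
    expression inside the term is at least as high as the observed one."""
    rng = random.Random(seed)
    n_in = sum(in_term)
    observed = sum(e for e, t in zip(expression, in_term) if t) / n_in
    labels = list(in_term)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        permuted = sum(e for e, t in zip(expression, labels) if t) / n_in
        if permuted >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)          # add-one correction

if __name__ == "__main__":
    # Hypothetical numbers: 12,000 background genes, 300 annotated with the term,
    # 500 genes of interest, 25 of them annotated.
    print(go_overrepresentation_pvalue(12_000, 300, 500, 25))
    expr = [random.gauss(0, 1) for _ in range(200)]
    member = [i < 20 for i in range(200)]     # first 20 genes belong to the term
    print(randomization_pvalue(expr, member))
```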