A wealth of research data has become available since the outbreak of the COVID-19 pandemic in 2019. Connecting this data is essential for understanding the SARS-CoV-2 virus and for the fight against the pandemic.
Amongst biological and biomedical research data, computational models targeting
COVID-19 have been emerging and their number is growing constantly. They are a
central part of the field of Systems Biology, which aims to understand the mechanisms
and behaviour of biological systems. Model predictions help to understand the
mechanisms of the novel coronavirus and the life-threatening disease it is causing.
Both biomedical research data and modelling data regarding COVID-19 have previously been stored in separate domain-specific graph databases. MaSyMoS,
short for Management System for Models and Simulations, is a graph database for
storing simulation studies of biological and biochemical systems. The CovidGraph
project integrates research data regarding COVID-19 and the coronavirus family
from various data resources in a knowledge graph.
In this thesis, we integrate simulation models from MaSyMoS, including models targeting COVID-19, into the CovidGraph. To this end, we present a concept for integrating simulation studies and linking them through ontology terms and reference publications in the CovidGraph. Ultimately, we connect data from the field
of systems biology and biomedical research data in a graph database.
This thesis revolves around a new concept of independence of algebras. The independence fits naturally into the framework of universal products, which were introduced to classify independence relations in quantum probability theory; the associated product is called the (r,s)-product and depends on two complex parameters r and s. Based on this product, we develop a theory which works without using involutive algebras or states. The following aspects are considered:

1. Classification: Universal products are defined on the free product of algebras (the coproduct in the category of algebras) and model notions of independence in quantum probability theory. We distinguish universal products according to their behaviour on elements of length two, calling them (r,s)-universal products with complex parameters r and s. In the case r = s = 1, Muraki showed that there exist exactly five universal products (Muraki's five). For r = s nonzero we obtain five one-parameter families (q-Muraki's five). We prove that in the case r ≠ s the (r,s)-product, a two-parameter deformation of the Boolean product, is the only universal product satisfying our set of axioms. The corresponding independence is called (r,s)-independence.

2. Dual pairs and GNS construction: By means of the GNS construction, one can associate a product of representations with every positive universal product. Since the (r,s)-product does not preserve positivity, we need a substitute for the usual GNS construction for states on involutive algebras. In joint work with M. Gerhold, the product of representations associated with the (r,s)-product was determined, whereby we considered representations on dual pairs instead of Hilbert spaces. This product of representations is, as we could show, essentially different from the Boolean product.

3. Reduction and quantum Lévy processes: U. Franz introduced a category-theoretical concept which allows a reduction of Boolean, monotone and antimonotone independence to tensor independence. This existing reduction could be modified to apply to (r,s)-independence. Quantum Lévy processes with (r,s)-independent increments can, in analogy with the tensor case, be realized as solutions of quantum stochastic differential equations. To prove this theorem, the previously mentioned reduction principle in the sense of U. Franz and a generalization of M. Schürmann's theory for symmetric Fock spaces over dual pairs are used. As the main result, we obtain the realization of every (r,s)-Lévy process as the solution of a quantum stochastic differential equation. When one, more generally, defines Lévy processes in a categorical way using U. Franz's definition of independence for tensor categories with inclusions, the compatibility of the inclusions with the tensor category structure plays an important role. For this thesis such a compatibility condition was formulated and proved to be equivalent to the characterization proposed by M. Gerhold.

4. Limit distributions: We work with so-called dual semigroups in the sense of D. V. Voiculescu (comonoids in the tensor category of algebras with free product). The polynomial algebra with primitive comultiplication is an example of such a dual semigroup. We use a "weakened" reduction, which we call reduction of convolution, and which essentially consists of a cotensor functor constructed from the symmetric tensor algebra. It turns dual semigroups into commutative bialgebras and also translates the convolution exponentials. This method, which can be described nicely in categorical language, allows us to formulate central limit theorems for (r,s)-independence and to calculate the corresponding limit distributions (convergence in moments). We calculate the moments appearing in the central limit theorem for the (r,s)-product: the even moments are homogeneous polynomials in r and s with the Eulerian numbers as coefficients; the odd moments vanish. The moment sequence obtained from the central limit theorem for an arbitrary universal product is the moment sequence of a probability measure on the real line if and only if r = s ≥ 1. In this case we present an explicit formula for the probability measure.
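As a small, self-contained illustration of the coefficients mentioned above, the following Python sketch computes the Eulerian numbers via their standard recurrence; the exact form of the moment polynomials in r and s is given in the thesis and not reproduced here.

```python
# Eulerian numbers A(n, k): the number of permutations of {1..n} with k
# descents.  They appear as coefficients of the even moments in the (r,s)
# central limit theorem described above.

def eulerian(n: int, k: int) -> int:
    """Eulerian number A(n, k) via the standard recurrence."""
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k >= n:
        return 0
    # A(n, k) = (k + 1) A(n-1, k) + (n - k) A(n-1, k-1)
    return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

for n in range(1, 5):
    print([eulerian(n, k) for k in range(n)])
# rows of the Eulerian triangle: 1; 1 1; 1 4 1; 1 11 11 1
```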
A New Kind of Permutation Entropy Used to Classify Sleep Stages from Invisible EEG Microstructure
(2017)
We consider Iterated Function Systems (IFS) on the real line and in the complex plane. Every IFS defines a self-similar measure supported on a self-similar set. We study the transfer operator (which acts on the space of continuous functions on the self-similar set) and the Hutchinson operator (which acts on the space of Borel regular measures on the self-similar set). We show that the transfer operator has a countably infinite set of polynomial eigenfunctions. These eigenfunctions can be regarded as generalized Bernoulli polynomials. The polynomial eigenfunctions define a polynomial approximation of the self-similar measure. We also study the moments of the self-similar measure and give recursions for computing them. Further, we develop a numerical method based on Markov chains to study the spectrum of the Hutchinson and transfer operators. This method provides numerical approximations of the invariant measure, for which we give error bounds in terms of the Wasserstein distance. The standard example in this thesis is the parametric family of Bernoulli convolutions.
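To illustrate the kind of moment recursion meant here, the following sketch computes the moments of a Bernoulli convolution directly from its self-similarity. The recursion below follows from the defining maps x ↦ λx ± 1 with equal weights and is the standard elementary case, not the thesis's more general treatment.

```python
import math

def bernoulli_convolution_moments(lam: float, n_max: int) -> list:
    """Moments M_n of the Bernoulli convolution with parameter lam, i.e. the
    self-similar measure for x -> lam*x + 1 and x -> lam*x - 1, weights 1/2.
    Self-similarity gives
        M_n * (1 - lam**n) = sum_{j < n, n-j even} C(n, j) * lam**j * M_j,
    and the odd moments vanish by symmetry."""
    M = [1.0] + [0.0] * n_max
    for n in range(2, n_max + 1, 2):
        s = sum(math.comb(n, j) * lam**j * M[j] for j in range(0, n, 2))
        M[n] = s / (1.0 - lam**n)
    return M

# Sanity check: for lam = 1/2 the measure is uniform on [-2, 2],
# so M_2 = 4/3 and M_4 = 16/5.
print(bernoulli_convolution_moments(0.5, 6))
```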
Self-affine tiles and fractals are known as examples in analysis and topology, as models of quasicrystals and biological growth, as unit intervals of generalized number systems, and as attractors of dynamical systems. The author has implemented software that can find new examples and handle large databases of self-affine fractals. This thesis establishes the algebraic foundation of the algorithms of the IFStile package. Lifting and projection of algebraic and rational iterated function systems and many properties of the resulting attractors are discussed.
Anaplasma phagocytophilum and Anaplasma ovis–Emerging Pathogens in the German Sheep Population
(2021)
Knowledge on the occurrence of the pathogenic tick-borne bacteria Anaplasma phagocytophilum and Anaplasma ovis is scarce in sheep from Germany. In 2020, owners from five flocks reported ill-thrift lambs and ewes with tick infestation. Out of 67 affected sheep, 55 animals were clinically examined, and hematological values, blood chemistry and fecal examinations were performed to investigate the underlying disease causes. Serological tests (cELISA, IFAT) and qPCR were applied to all affected sheep to rule out A. phagocytophilum and A. ovis as a differential diagnosis. Ticks were collected from selected pastures and tested by qPCR. Most animals (n = 43) suffered from selenium deficiency, and endoparasites were detected in each flock. Anaplasma spp. antibodies were determined in 59% of the examined sheep. Seventeen animals from all flocks tested positive for A. phagocytophilum by qPCR, and A. phagocytophilum was also detected in eight pools of Ixodes ricinus. Anaplasma phagocytophilum isolates from sheep and ticks were genotyped using three genes (16S rRNA, msp4 and groEL). Anaplasma ovis DNA was identified in six animals from one flock. Clinical, hematological and biochemical changes were not significantly associated with Anaplasma spp. infection. The 16S rRNA analysis revealed known variants of A. phagocytophilum, whereas the msp4 and groEL analyses revealed new genotypes. Further investigations are necessary to evaluate the dissemination and health impact of both pathogens in the German sheep population, particularly in cases of comorbidities.
In phylogenetics, evolutionary relationships of different species are represented by phylogenetic trees.
In this thesis, we are mainly concerned with the reconstruction of ancestral sequences and the accuracy of this reconstruction given a rooted binary phylogenetic tree.
For example, we wish to estimate the DNA sequences of the ancestors given the observed DNA sequences of present-day species.
In particular, we are interested in reconstructing the DNA sequence of the last common ancestor of all species under consideration. Note that this last common ancestor corresponds to the root of the tree.
There exist various methods for the reconstruction of ancestral sequences.
A widely used principle for ancestral sequence reconstruction is the principle of parsimony (Maximum Parsimony).
This principle means that the simplest explanation is the best.
Applied to the reconstruction of ancestral sequences this means that a sequence which requires the fewest evolutionary changes along the tree is reconstructed.
Thus, the number of changes is minimized, which explains the name Maximum Parsimony.
Instead of estimating a whole DNA sequence, Maximum Parsimony considers each position in the sequence separately. Thus, in the following, each sequence position is regarded separately, and we call a single position in a sequence a state.
It can happen that the state of the last common ancestor is reconstructed unambiguously, for example as A. On the other hand, Maximum Parsimony might be indecisive between two DNA nucleotides, say for example A and C.
In this case, the last common ancestor will be reconstructed as {A,C}.
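For a single character, this intersection/union behaviour is exactly what Fitch's classical bottom-up algorithm for Maximum Parsimony computes. A minimal sketch (the standard algorithm, not code from the thesis):

```python
def fitch_root_set(children, leaf_state, node='root'):
    """Bottom-up pass of Fitch's algorithm for Maximum Parsimony on a rooted
    binary tree.  `children` maps internal nodes to their two children;
    anything not in `children` is a leaf with a state in `leaf_state`.
    Returns (state set of `node`, parsimony score of its subtree)."""
    if node not in children:
        return {leaf_state[node]}, 0
    left, right = children[node]
    sl, cl = fitch_root_set(children, leaf_state, left)
    sr, cr = fitch_root_set(children, leaf_state, right)
    if sl & sr:                        # intersection if non-empty
        return sl & sr, cl + cr
    return sl | sr, cl + cr + 1        # otherwise union, one extra change

# Two cherries with states A,A and C,C: the root is ambiguous, {A, C}.
children = {'root': ('x', 'y'), 'x': ('a', 'b'), 'y': ('c', 'd')}
states = {'a': 'A', 'b': 'A', 'c': 'C', 'd': 'C'}
print(fitch_root_set(children, states))   # ({'A', 'C'}, 1)
```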
Therefore we consider, after an introduction and some preliminary definitions, the following question in Section 3: how many present-day species need to be in a certain state, for example A, such that the Maximum Parsimony estimate of the last common ancestor is also {A}?
The answer to this question depends on the tree topology as well as on the number of different states.
In Section 4, we provide a sufficient condition for Maximum Parsimony to recover the ancestral state at the root correctly from the observed states at the leaves.
The so-called reconstruction accuracy for the reconstruction of ancestral states is introduced in Section 5. The reconstruction accuracy is the probability that the true root state is indeed reconstructed, and it always takes two processes into account: on the one hand, the approach used to reconstruct ancestral states, and on the other hand, the way the states evolve along the edges of the tree. The latter is given by an evolutionary model.
In the present thesis, we focus on a simple symmetric model, the Neyman model.
The symmetry of the model means, for example, that a change from A to C is as likely as a change from C to A.
Intuitively, one might expect the reconstruction accuracy to be highest when all present-day species are taken into account. However, it has long been known that the reconstruction accuracy can improve when some taxa are disregarded in the estimation.
This is bad news for Maximum Parsimony as a criterion for ancestral state reconstruction, and it raises the question of whether there is at least a lower bound for the reconstruction accuracy, i.e. whether it is always better to consider all present-day species rather than just one for the reconstruction.
In Section 5, we start by considering ultrametric trees, which are trees where the expected number of substitutions from the root to each leaf is the same.
For such trees, we investigate a lower bound for the reconstruction accuracy, when the number of different states at the leaves of the tree is 3 or 4.
Subsequently in Section 6, in order to generalize this result, we introduce a new method for ancestral state reconstruction: the coin-toss method.
We obtain new results for the reconstruction accuracy of Maximum Parsimony by relating Maximum Parsimony to the coin-toss method.
Some of these results do not require the underlying tree to be ultrametric.
Then, in Section 7 we investigate the influence of specific tree topologies on the reconstruction accuracy of Maximum Parsimony. In particular, we consider balanced and imbalanced trees as the balance of a tree may have an influence on the reconstruction accuracy.
We end by introducing the Colless index in Section 8, an index which measures the degree of balance of a rooted binary tree, and analyze its extremal properties.
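The Colless index itself is easy to compute; a minimal sketch of the standard definition (the sum, over all internal nodes, of the absolute difference of the leaf counts of the two subtrees):

```python
def colless(tree, node='root'):
    """Colless index of a rooted binary tree.  `tree` maps internal nodes to
    their pairs of children; anything not in `tree` is a leaf.
    Returns (index of the subtree, number of leaves in the subtree)."""
    if node not in tree:
        return 0, 1
    left, right = tree[node]
    il, nl = colless(tree, left)
    ir, nr = colless(tree, right)
    return il + ir + abs(nl - nr), nl + nr

# Caterpillar tree on 4 leaves (maximally imbalanced): index = 2 + 1 = 3.
caterpillar = {'root': ('v', 'd'), 'v': ('w', 'c'), 'w': ('a', 'b')}
print(colless(caterpillar)[0])   # 3
# Fully balanced tree on 4 leaves: index = 0.
balanced = {'root': ('x', 'y'), 'x': ('a', 'b'), 'y': ('c', 'd')}
print(colless(balanced)[0])      # 0
```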
Influenza A Virus (IAV) infection followed by bacterial pneumonia often leads to hospitalization and death in individuals from high-risk groups. Following infection, IAV triggers the process of viral RNA replication, which in turn disrupts the healthy gut microbial community, while the gut microbiota plays an instrumental role in protecting the host by evolving colonization resistance. Although the underlying mechanisms of IAV infection have been unraveled, the complex mechanisms by which the gut microbiota induces host immune responses following IAV infection remain elusive. In this work, we developed a novel Maximal-Clique based Community Detection algorithm for Weighted undirected Networks (MCCD-WN) and compared its performance with other existing algorithms using three sets of benchmark networks. Moreover, we applied our algorithm to gut microbiome data derived from fecal samples of both healthy and IAV-infected pigs over a sequence of time-points. The results we obtained from the real-life IAV dataset unveil the role of the microbial families Ruminococcaceae, Lachnospiraceae, Spirochaetaceae and Prevotellaceae in the gut microbiome of the IAV-infected cohort. Furthermore, the additional integration of metaproteomic data enabled not only the identification of microbial biomarkers, but also the elucidation of their functional roles in protecting the host following IAV infection. Our network analysis reveals a fast recovery of the infected cohort after the second IAV infection and provides insights into the crucial roles of the Desulfovibrionaceae and Lactobacillaceae families in combating Influenza A Virus infection. Source code of the community detection algorithm can be downloaded from https://github.com/AniBhar84/MCCD-WN.
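As a toy illustration of the maximal-clique starting point of such methods (this is not the MCCD-WN algorithm itself, whose implementation is in the linked repository), one can enumerate the maximal cliques of a weighted network and rank them by internal weight:

```python
# Enumerating maximal cliques of a weighted undirected network with networkx;
# node names and weights are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ('Ruminococcaceae', 'Lachnospiraceae', 0.9),
    ('Ruminococcaceae', 'Prevotellaceae', 0.7),
    ('Lachnospiraceae', 'Prevotellaceae', 0.8),
    ('Prevotellaceae', 'Spirochaetaceae', 0.3),
    ('Spirochaetaceae', 'Desulfovibrionaceae', 0.6),
])

def clique_weight(G, clique):
    """Total weight of all edges inside a clique."""
    return sum(G[u][v]['weight'] for i, u in enumerate(clique)
               for v in clique[i + 1:])

for clique in nx.find_cliques(G):
    print(sorted(clique), clique_weight(G, clique))
```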
Approaches to the Analysis of Proteomics and Transcriptomics Data based on Statistical Methodology
(2014)
Recent developments in genomics and molecular biology have led to the generation of an enormous amount of complex data of different origin. This is demonstrated by the number of published results from microarray experiments in Gene Expression Omnibus, which has grown at an exponential pace over the last decade. The challenge of interpreting these vast amounts of data from different technologies has led to the development of new methods in the fields of computational biology and bioinformatics. Researchers often want to represent biological phenomena in the most detailed and comprehensive way. However, due to technological limitations and other factors, such as limited resources, this is not always possible. On the one hand, more detailed and comprehensive research generates data of high complexity that is often difficult to approach analytically, but that gives bioinformatics a chance to draw more precise and deeper conclusions. On the other hand, for low-complexity tasks the data distribution is known and a mathematical model can be fitted; to infer from such a model, researchers can use well-known, standard methodologies. In return for using standard methodologies, the biological questions being answered might not unveil the whole complexity of the biological meaning. Nowadays it is standard that a biological study involves the generation of large amounts of data that need to be analyzed with statistical inference. Sometimes the data present researchers with a low-complexity task that can be performed with standard and popular methodologies, as in "Proteomic analysis of mouse oocytes reveals 28 candidate factors of the 'reprogrammome'". There, we established a protocol for proteomics data that involves preprocessing of the raw data and conducting Gene Ontology overrepresentation analysis utilizing the hypergeometric distribution. In cases where the data complexity is high and there is no published framework a researcher could follow, randomization can be an approach to exploit. In the two studies "The mouse oocyte proteome escapes maternal aging" and "CellFateScout - a bioinformatics tool for elucidating small molecule signaling pathways that drive cells in a specific direction" we showed how randomization can be performed for distinct complex tasks. In "The mouse oocyte proteome escapes maternal aging" we constructed a random sample of semantic similarity scores between the oocyte transcriptome and random transcriptome subsets of oocyte proteome size. Thereby, we could calculate whether the proteome is representative of the transcriptome. Further, we established a novel framework for Gene Ontology overrepresentation analysis that involves randomization testing: every Gene Ontology term is tested for whether randomly reassigning the gene labels of belonging or not belonging to this term decreases the overall expression level in this term. In "CellFateScout - a bioinformatics tool for elucidating small molecule signaling pathways that drive cells in a specific direction" we validated CellFateScout against other well-known bioinformatics tools, asking whether our plugin is able to predict small molecule effects better in terms of expression signatures. For this, we constructed a protocol that uses randomization testing: we assess whether the small molecule effect, described as a set of active signaling pathways as detected by our plugin or other bioinformatics tools, is significantly closer to known small molecule targets than a random path.
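A minimal sketch of the hypergeometric overrepresentation test mentioned in the protocol, with invented counts:

```python
# Gene Ontology overrepresentation test based on the hypergeometric
# distribution; all numbers below are made up for illustration.
from scipy.stats import hypergeom

M = 20000   # genes in the background population
n = 150     # background genes annotated with the GO term of interest
N = 300     # genes in the study set (e.g. detected candidate factors)
k = 12      # study-set genes annotated with the term

# P(X >= k) under sampling without replacement:
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"overrepresentation p-value: {p_value:.3g}")
```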
In this thesis, we elaborate upon Bayesian changepoint analysis, with a focus on three big topics: approximate sampling via MCMC, exact inference, and uncertainty quantification. Besides, modeling matters are discussed throughout. Our findings are underpinned by several changepoint examples, with a focus on well-log drilling data.
A slice is an intersection of a hyperplane with a self-similar set. The main purpose of this work is the mathematical description of slices. A suitable tool for describing slices is provided by branching dynamical systems. Such systems are a generalisation of ordinary discrete dynamical systems to multivalued maps. Simple examples are systems arising from Bernoulli convolutions and beta-representations. The connection between orbits of branching dynamical systems and slices is demonstrated, and conditions are derived under which the geometry of a slice can be computed. A number of interesting 2-d and 3-d slices through 3-d and 4-d fractals are discussed.
Abstract
Cellular stress has been associated with inflammation, yet precise underlying mechanisms remain elusive. In this study, various unrelated stress inducers were employed to screen for sensors linking altered cellular homeostasis and inflammation. We identified the intracellular pattern recognition receptors NOD1/2, which sense bacterial peptidoglycans, as general stress sensors detecting perturbations of cellular homeostasis. NOD1/2 activation upon such perturbations required generation of the endogenous metabolite sphingosine‐1‐phosphate (S1P). Unlike peptidoglycan sensing via the leucine‐rich repeats domain, cytosolic S1P directly bound to the nucleotide binding domains of NOD1/2, triggering NF‐κB activation and inflammatory responses. In sum, we unveiled a hitherto unknown role of NOD1/2 in surveillance of cellular homeostasis through sensing of the cytosolic metabolite S1P. We propose S1P, an endogenous metabolite, as a novel NOD1/2 activator and NOD1/2 as molecular hubs integrating bacterial and metabolic cues.
We present classical and hybrid modeling approaches for genetic regulatory networks, focusing on promoter analysis for negatively and positively autoregulated networks. The main aim of this thesis is to introduce an alternative mathematical approach to modeling gene regulatory networks based on piecewise deterministic Markov processes (PDMPs). During somitogenesis, a process describing the early segmentation in vertebrates, molecular oscillators play a crucial role as part of a segmentation clock. In mice, these oscillators are called Hes1 and Hes7 and are commonly modeled by a system of two delay differential equations including a Hill function, which describes gene repression by their own gene products. The Hill coefficient, which is a measure of the nonlinearity of the binding processes in the promoter, is assumed to be equal to two, based on the fact that Hes1 and Hes7 form dimers. However, by standard arguments applied to binding analysis, we show that a higher Hill coefficient is reasonable. This leads to results different from those in the literature, which requires a more sophisticated model. For the Hes7 oscillator we present a system of ordinary differential equations including a Michaelis-Menten term describing a nonlinear degradation of the proteins by the ubiquitin pathway. As demonstrated by the Hes1 and Hes7 oscillators, promoter behavior can have a strong influence on the dynamical behavior of genetic networks. Since purely deterministic systems cannot reveal phenomena caused by the inherent random fluctuations, we propose a novel approach based on PDMPs. Such models allow the binding of transcription factors to binding sites in a promoter to be modeled as a random process, while all other processes, like synthesis, degradation or dimerization of the gene products, are modeled in a deterministic manner. We present and discuss a simulation algorithm for PDMPs and apply it to three types of genetic networks: an unregulated gene, a toggle switch, and a positively autoregulated network. The different regulation characteristics are analyzed and compared by numerical means. Furthermore, we determine analytical solutions for the stationary distributions of one negatively and three positively autoregulated networks. Based on these results, we analyze the attenuation of noise in a negative feedback loop, and the question of graded or binary response in autocatalytic networks.
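A minimal sketch of the PDMP idea for the simplest case, an unregulated gene: the promoter jumps between 'on' and 'off' at random exponential times, while the protein level follows a deterministic ODE between jumps. All parameter values are illustrative, not taken from the thesis.

```python
# Toy PDMP for an unregulated gene: the promoter switches at exponential
# rates, and between switches the protein level follows x' = s*[on] - d*x,
# which has a closed-form exponential solution.
import math, random

k_on, k_off = 0.5, 0.3   # promoter switching rates (illustrative)
s, d = 2.0, 0.1          # synthesis and degradation rates (illustrative)

def simulate(T=200.0, seed=1):
    rng = random.Random(seed)
    t, x, on, path = 0.0, 0.0, False, []
    while t < T:
        rate = k_off if on else k_on
        tau = min(rng.expovariate(rate), T - t)   # time to next promoter jump
        # deterministic ODE flow over [t, t + tau]
        if on:
            x = x * math.exp(-d * tau) + (s / d) * (1 - math.exp(-d * tau))
        else:
            x = x * math.exp(-d * tau)
        t += tau
        on = not on
        path.append((t, x, on))
    return path

print(simulate()[-1])   # (time, protein level, promoter state) at the end
```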
The goal of this doctoral thesis is to create and implement methods for fully automatic segmentation applications in magnetic resonance images and datasets. The work introduces the technical and physical background of magnetic resonance imaging (MRI) and summarizes essential segmentation challenges in MRI data, including technical malfunctions and the ill-posedness of inverse segmentation problems. The theoretical background of all methods that are adapted, extended, and combined for problem-specific segmentation applications is explained in detail. The first application of the implemented solutions in this work deals with two-dimensional tissue segmentation of atherosclerotic plaques in cardiological MRI data. The main part of the segmentation solutions is designed for fully automatic liver and kidney parenchyma segmentation in three-dimensional MRI datasets, to enable computer-assisted organ volumetry in epidemiological studies. The results for every application are listed, described and discussed before important conclusions are drawn. Among the several applied methods, the level set method is the main focus of this work and is used as the central segmentation concept in most applications. Thus, its possibilities and limitations for MRI data segmentation are analyzed. The level set method is extended by several new ideas to overcome possible limitations, and it is combined as an important part of modularized frameworks. Additionally, a new approach for probability map generation is presented in this thesis, which reduces the data dimensionality of multiple MR-weightings and incorporates organ position probabilities in a probabilistic framework. It is shown that essential organ features (i.e. MR-intensity distributions, locations) can be well represented in the calculated probability maps. Since MRI data are produced using multiple MR-weightings, the dimensionality reduction technique used is very helpful for generating a single probability map, which can be used for further segmentation steps in a modularized framework.
Simple Summary
Monitoring animal behavior provides an indicator of their health and welfare. For this purpose, video surveillance is an important method to get an unbiased insight into behavior, as animals often show different behavior in the presence of humans. However, manual analysis of video data is costly and time-consuming. For this reason, we present a method for automated analysis using computer vision—a method for teaching the computer to see like a human. In this study, we use computer vision to detect red foxes and their body posture (lying, sitting, or standing). With this data we are able to monitor the animals, determine their activity, and identify their behavior.
Abstract
The behavior of animals is related to their health and welfare status. The latter plays a particular role in animal experiments, where continuous monitoring is essential for animal welfare. In this study, we focus on red foxes in an experimental setting and study their behavior. Although animal behavior is a complex concept, it can be described as a combination of body posture and activity. To measure body posture and activity, video monitoring can be used as a non-invasive and cost-efficient tool. While it is possible to analyze the video data resulting from the experiment manually, this method is time consuming and costly. We therefore use computer vision to detect and track the animals over several days. The detector is based on a neural network architecture. It is trained to detect red foxes and their body postures, i.e., ‘lying’, ‘sitting’, and ‘standing’. The trained algorithm has a mean average precision of 99.91%. The combination of activity and posture results in nearly continuous monitoring of animal behavior. Furthermore, the detector is suitable for real-time evaluation. In conclusion, evaluating the behavior of foxes in an experimental setting using computer vision is a powerful tool for cost-efficient real-time monitoring.
We apply the charge simulation method (CSM) in order to compute the logarithmic capacity of compact sets consisting of (infinitely) many “small” components. This application makes it possible to use just a single charge point for each component. The resulting method is therefore significantly more efficient than methods based on discretizations of the boundaries (for example, our own method presented in Liesen et al. (Comput. Methods Funct. Theory 17, 689–713, 2017)), while maintaining a very high level of accuracy. We study properties of the linear algebraic systems that arise in the CSM, and show how these systems can be solved efficiently using preconditioned iterative methods, where the matrix-vector products are computed using the fast multipole method. We illustrate the use of the method on generalized Cantor sets and the Cantor dust.
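A minimal sketch of the CSM setup on a toy geometry of disjoint disks, with one charge point per component as described above (plain least squares instead of the preconditioned iterative solvers and fast multipole method used in the paper):

```python
import numpy as np

def csm_capacity(centers, radii, n_coll=64):
    """Approximate the logarithmic capacity of a union of disjoint disks by
    placing one charge q_j at each center z_j and enforcing
        sum_j q_j * log|t_i - z_j| + V = 0   at collocation points t_i,
        sum_j q_j = 1,
    in the least-squares sense; V approximates the Robin constant and
    cap = exp(-V)."""
    centers, radii = np.asarray(centers), np.asarray(radii)
    m = len(centers)
    angles = 2 * np.pi * np.arange(n_coll) / n_coll
    # collocation points on all boundary circles
    t = np.concatenate([c + r * np.exp(1j * angles)
                        for c, r in zip(centers, radii)])
    A = np.zeros((len(t) + 1, m + 1))
    A[:-1, :m] = np.log(np.abs(t[:, None] - centers[None, :]))
    A[:-1, m] = 1.0                      # unknown Robin constant V
    A[-1, :m] = 1.0                      # total charge = 1
    b = np.zeros(len(t) + 1)
    b[-1] = 1.0
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.exp(-sol[m])

print(csm_capacity([0.0], [0.5]))             # single disk: exact value 0.5
print(csm_capacity([-1.0, 1.0], [0.3, 0.3]))  # two small disks
```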
Phylogenetic (i.e., leaf-labeled) trees play a fundamental role in evolutionary research. A typical problem is to reconstruct such trees from data like DNA alignments (whose columns are often referred to as characters), and a simple optimization criterion for such reconstructions is maximum parsimony. It is generally assumed that this criterion works well for data in which state changes are rare. In the present manuscript, we prove that each binary phylogenetic tree T with n ≥ 20k leaves is uniquely defined by the set Ak (T), which consists of all characters with parsimony score k on T. This can be considered as a promising first step toward showing that maximum parsimony as a tree reconstruction criterion is justified when the number of changes in the data is relatively small.
Simple Summary
Paratuberculosis is a disease which affects ruminants worldwide. Many countries have implemented certification and monitoring systems to control the disease, particularly in dairy herds. Monitoring herds certified as paratuberculosis non-suspect is an important component of paratuberculosis herd certification programs. The challenge is to detect the introduction or reintroduction of the infectious agent as early as possible with reasonable effort but high certainty. In our study, we evaluated different low-cost testing schemes in herds where the share of infected animals was low, resulting in a low within-herd prevalence of animals shedding the bacterium that causes paratuberculosis in their feces. The test methods used were repeated pooled milk samples and fecal samples from the barn environment. Our study showed that numerous repetitions of different samples are necessary to monitor such herds with sufficiently high certainty. In the case of herds with a very low prevalence, our study showed that a combination of different sampling approaches is required.
Abstract
An easy-to-use and affordable surveillance system is crucial for paratuberculosis control. The use of environmental samples and milk pools has been proven to be effective for the detection of Mycobacterium avium subsp. paratuberculosis (MAP)-infected herds, but not for monitoring dairy herds certified as MAP non-suspect. We aimed to evaluate methods for the repeated testing of large dairy herds with a very low prevalence of MAP shedders, using different sets of environmental samples or pooled milk samples, collected monthly over a period of one year in 36 herds with known MAP shedder prevalence. Environmental samples were analyzed by bacterial culture and fecal PCR, and pools of 25 and 50 individual milk samples were analyzed by ELISA for MAP-specific antibodies. We estimated the cumulative sensitivity and specificity for up to twelve sampling events by adapting a Bayesian latent class model and taking into account the between- and within-test correlation. Our study revealed that at least seven repeated samplings of feces from the barn environment are necessary to achieve a sensitivity of 95% in herds with a within-herd shedder prevalence of at least 2%. The detection of herds with a prevalence of less than 2% is more challenging and, in addition to numerous repetitions, requires a combination of different samples.
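A back-of-the-envelope calculation, assuming independent repetitions (unlike the Bayesian latent class model above, which explicitly models between-sampling correlation), shows why several repetitions are needed; the per-sampling sensitivity value is illustrative only:

```python
# With per-sampling herd-level sensitivity se, n independent samplings detect
# an infected herd with probability 1 - (1 - se)**n.
se = 0.35                      # illustrative value, not from the study
for n in (1, 3, 7, 12):
    print(n, round(1 - (1 - se) ** n, 3))
# Even se = 0.35 needs about 7 repetitions to exceed 0.95.
```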
Discovering Latent Structure in High-Dimensional Healthcare Data: Toward Improved Interpretability
(2022)
This cumulative thesis describes contributions to the field of interpretable machine learning in the healthcare domain. Three research articles are presented that lie at the intersection of biomedical and machine learning research. They illustrate how incorporating latent structure can provide a valuable compression of the information hidden in complex healthcare data.
Methodologically, this thesis gives an overview of interpretable machine learning and the discovery of latent structure, including clusters, latent factors, graph structure, and hierarchical structure. Different workflows are developed and applied to two main types of complex healthcare data (cohort study data and time-resolved molecular data). The core result builds on Bayesian networks, a type of probabilistic graphical model. On the application side, we provide accurate predictive or discriminative models focusing on relevant medical conditions, related biomarkers, and their interactions.
Self-similar sets with the open set condition, the linear objects of fractal geometry, have been considered mainly for crystallographic data. Here we introduce new symmetry classes in the plane, based on rotation by irrational angles. Examples without characteristic directions, with strong connectedness and small complexity, were found in a computer-assisted search. They are surprising since the rotations are given by rational matrices, and the proof of the open set condition usually requires integer data. We develop a classification of self-similar sets by symmetry class and algebraic numbers. Examples are given for various quadratic number fields.
Entropy Ratio and Entropy Concentration Coefficient, with Application to the COVID-19 Pandemic
(2020)
Background: Computational tools for the investigation of transcriptional regulation, in particular of transcription factor binding sites (TFBS), in an evolutionary context are developed. Existing sequence-based tools predicting such binding sites do not consider their actual functionality, although it is known that besides the base sequence many other aspects are relevant for binding and for the effects of that binding. In particular, in eukaryotes a perfectly matching sequence motif is neither necessary nor sufficient for a functional transcription factor binding site. Published work in the field of transcriptional regulation frequently focuses on the prediction of putative transcription factor binding sites based on sequence similarity to known binding sites. Furthermore, among the related software, only a small number implements visualization of the evolution of transcription factor binding sites or the integration of other regulation-related data. The interface of many tools is made for computer scientists, although the actual interpretation of their output needs profound biological background knowledge.

Results and Discussion: The tool presented in this thesis, "ReXSpecies", is a web application. It is therefore ready to use for the end user without installation and provides a graphical user interface. Besides extensive automation of analyses of transcriptional regulation (the only necessary input is the genomic coordinates of a regulatory region), new techniques to visualize the evolution of transcription factor binding sites were developed. Furthermore, an interface to genome browsers was implemented to enable scientists to comprehensively analyze their regulatory regions with respect to other regulation-relevant data. ReXSpecies contains a novel algorithm that searches for evolutionarily conserved patterns of transcription factor binding sites, which could imply functionality. Such patterns were verified using some known transcription factor binding sites of genes involved in pluripotency. In the appendix, the efficiency and correctness of the algorithm used are discussed. Furthermore, a novel algorithm to color phylogenetic trees intuitively is presented. In the thesis, new possibilities to render evolutionarily conserved sets of transcription factor binding sites are developed. The thesis also discusses the evolutionary conservation of regulation and its context dependency. An important source of errors in the analysis of regulatory regions using comparative genetics is probably finding and aligning homologous regulatory regions. Some alternatives to using sequence similarity alone are discussed.

Outlook: Other possibilities to find (functional) homologous regulatory regions (besides the whole-genome alignments currently used) are BLAST searches, local alignments, homology databases, and alignment-free approaches. Using one or more of these alternatives could reduce the number of artifacts by reducing the number of regions that are erroneously declared homologous. To achieve more robust predictions of transcription, the author suggests using other regulation-related data besides sequence data alone. To this end, the use and extension of existing tools, in particular from systems biology, is proposed.
Background
The Earth Biogenome Project has rapidly increased the number of available eukaryotic genomes, but most released genomes continue to lack annotation of protein-coding genes. In addition, no transcriptome data is available for some genomes.
Results
Various gene annotation tools have been developed but each has its limitations. Here, we introduce GALBA, a fully automated pipeline that utilizes miniprot, a rapid protein-to-genome aligner, in combination with AUGUSTUS to predict genes with high accuracy. Accuracy results indicate that GALBA is particularly strong in the annotation of large vertebrate genomes. We also present use cases in insects, vertebrates, and a land plant. GALBA is fully open source and available as a docker image for easy execution with Singularity in high-performance computing environments.
Conclusions
Our pipeline addresses the critical need for accurate gene annotation in newly sequenced genomes, and we believe that GALBA will greatly facilitate genome annotation for diverse organisms.
Twisted topological K-theory is a twisted version of topological K-theory in the sense of twisted generalized cohomology theories. It was pioneered by Donovan and Karoubi in 1970, who used bundles of central simple graded algebras to model twists of K-theory. By the end of the last century, physicists realised that D-brane charges in string theory may be studied in terms of twisted K-theory. This rekindled interest in the topic and led to a wave of new models for the twists and new ways to realize the respective twisted K-theory groups. The state-of-the-art models today use bundles of projective unitary operators on separable Hilbert spaces as twists, and K-groups are modeled by homotopy classes of sections of certain bundles of Fredholm operators. From a physics perspective these treatments are not yet optimal: they are intrinsically infinite-dimensional, and these models do not immediately allow the inclusion of differential data like forms and connections.
In this thesis we introduce the 2-stack of k-algebra gerbes. Objects, 1-morphisms and 2-morphisms consist of finite-dimensional geometric data simultaneously generalizing bundle gerbes and bundles of central simple graded k-algebras, for k either the field of real numbers or the field of complex numbers. We construct an explicit isomorphism from equivalence classes of k-algebra gerbes over a space X to the full set of twists of real K-theory and complex K-theory respectively. Further, we model relative twisted K-groups for compact spaces X and closed subspaces Y, twisted by algebra gerbes. These groups are modeled directly in terms of 1-morphisms and 2-morphisms of algebra gerbes over X. We exhibit a relation to the K-groups introduced by Donovan and Karoubi, and we translate their fundamental isomorphism -- an isomorphism relating K-groups over Thom spaces with K-groups twisted by Clifford algebra bundles -- to the new setting. With the help of this fundamental isomorphism we construct an explicit Thom isomorphism and explicit pushforward homomorphisms for smooth maps between compact manifolds, without requiring these maps to be K-oriented. Further -- in order to treat K-groups for non-torsion twists -- we implement a geometric cocycle model, inspired by a related geometric cycle model developed by Baum and Douglas for K-homology in 1982, and construct an assembly map for this model.
Geometric T-Duality
(2022)
From a physicist's point of view, T-duality is a relation connecting string theories on different spacetimes. Mathematically speaking, T-duality should be a symmetric relation on the space of toroidal string backgrounds. Such a background consists of: a smooth manifold M; a torus bundle E over M - the total space modelling spacetime; a Riemannian metric g on E - modelling the field of gravity; and a U(1)-bundle gerbe G with connection over E - modelling the Kalb-Ramond field.
As of now, no complete model for T-duality exists. The three most notable approaches are the differential approaches of Buscher, in the form of the Buscher rules, and of Bouwknegt, Evslin and Mathai, in the form of T-duality with H-flux, on the one hand, and the topological approach of Bunke, Rumpf and Schick, known as topological T-duality, on the other. In this thesis we combine these different approaches to form the first model for T-duality over complete geometric toroidal string backgrounds, and we introduce an example of this geometric T-duality inspired by the Hopf bundle.
The classical Buscher rules describe T-duality for metrics and B-fields in a topologically trivial setting. On the other hand, topological T-duality addresses aspects of non-trivial topology while neglecting metrics and B-fields. In this article, we develop a new unifying framework for both aspects.
Background
An important initial phase of arguably most homology search and alignment methods, such as those required for genome alignments, is seed finding. The seed finding step is crucial to curb the runtime, as potential alignments are restricted to and anchored at the sequence position pairs that constitute the seeds. To identify seeds, it is good practice to use sets of spaced seed patterns, a method that locally compares two sequences and requires exact matches at certain positions only.
Results
We introduce a new method for filtering alignment seeds that we call geometric hashing. Geometric hashing achieves a high specificity by combining non-local information from different seeds using a simple hash function that only requires a constant and small amount of additional time per spaced seed. Geometric hashing was tested on the task of finding homologous positions in the coding regions of human and mouse genome sequences. Thereby, the number of false positives was decreased about million-fold over sets of spaced seeds while maintaining a very high sensitivity.
Conclusions
An additional geometric hashing filtering phase could improve the run-time, accuracy or both of programs for various homology-search-and-align tasks.
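A toy sketch of the spaced-seed matching described in the Background (the geometric hashing filter itself is not reproduced here):

```python
# Spaced-seed matching: only the positions marked '1' in the pattern must
# match exactly.  Matching keys between two sequences yield candidate seeds.
PATTERN = "110101"

def seed_keys(seq, pattern=PATTERN):
    """Map each spaced-seed key to the positions where it occurs in seq."""
    keys = {}
    for i in range(len(seq) - len(pattern) + 1):
        key = ''.join(seq[i + j] for j, c in enumerate(pattern) if c == '1')
        keys.setdefault(key, []).append(i)
    return keys

a, b = "ACGTACGGTAC", "TTACGTACGAA"
ka, kb = seed_keys(a), seed_keys(b)
matches = [(i, j) for k in ka.keys() & kb.keys() for i in ka[k] for j in kb[k]]
print(matches)   # candidate anchor pairs (position in a, position in b)
```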
A common task in natural sciences is to describe, characterize, and infer relations between discrete objects. A set of relations E on a set of objects V can naturally be expressed as a graph G = (V, E). It is therefore often convenient to formalize problems in natural sciences as graph theoretical problems.
In this thesis we will examine a number of problems found in life sciences in particular, and show how to use graph theoretical concepts to formalize and solve the presented problems. The content of the thesis is a collection of papers all solving separate problems that are relevant to biology or biochemistry.
The first paper examines problems found in self-assembling protein design. Designing polypeptides, composed of concatenated coiled coil units, to fold into polyhedra turns out to be intimately related to the concept of 1-face embeddings in graph topology. We show that 1-face embeddings can be canonicalized in linear time and present algorithms to enumerate pairwise non-isomorphic 1-face embeddings in orientable surfaces.
The second and third papers examine problems found in evolutionary biology. In particular, they focus on inferring gene and species trees directly from sequence data without any a priori knowledge of the trees' topology. The second paper characterizes when gene trees can be inferred from estimates of orthology, paralogy and xenology relations when only partial information is available. Using this characterization, an algorithm is presented that constructs a gene tree consistent with the estimates in polynomial time, if one exists. This algorithm is used to show experimentally that gene trees can be accurately inferred even in the case that only 20% of the relations are known. The third paper explores how to reconcile a gene tree with a species tree in a biologically feasible way, when the events of the gene tree are known. Biologically feasible reconciliations are characterized using only the topology of the gene and species trees. Using this characterization, an algorithm is presented that constructs a biologically feasible reconciliation in polynomial time, if one exists.
The fourth and fifth papers are concerned with the analysis of automatically generated reaction networks. The fourth paper introduces an algorithm to predict thermodynamic properties of compounds in a chemistry. The algorithm is based on the well-known group contribution methods and automatically infers functional groups based on common structural motifs found in a set of sampled compounds. It is shown experimentally that the algorithm can be used to accurately predict a variety of molecular properties, such as normal boiling point, Gibbs free energy, and the minimum free energy of RNA secondary structures. The fifth and final paper presents a framework to track atoms through reaction networks generated by a graph grammar. Using concepts found in semigroup theory, the paper defines the characteristic monoid of a reaction network. It goes on to show how natural subsystems of a reaction network organically emerge from the right Cayley graph of said monoid. The applicability of the framework is proven by applying it to the design of isotopic labeling experiments as well as to the analysis of the TCA cycle.
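A minimal sketch of the group contribution idea underlying the fourth paper: a molecular property is modelled as a linear combination of functional-group counts, with contributions fitted by least squares. All numbers are invented for illustration.

```python
import numpy as np

# rows = molecules, columns = counts of functional groups (e.g. -CH3, -OH, ...)
counts = np.array([[2, 0, 1],
                   [1, 1, 0],
                   [3, 1, 2],
                   [0, 2, 1]], dtype=float)
boiling_points = np.array([341.0, 355.0, 420.0, 390.0])  # invented targets

# fit one additive contribution per group, then predict
contrib, *_ = np.linalg.lstsq(counts, boiling_points, rcond=None)
print("fitted group contributions:", contrib.round(1))
print("predicted:", (counts @ contrib).round(1))
```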
In this PhD thesis, a conditional random field approach and its implementation are presented to predict the interaction sites of protein homo- and heterodimers using the spatial structure of one protein partner from a complex. The method includes a notably simple edge feature model. A novel node feature class is introduced, called "change in free energy". The Online Large-Margin algorithm is adapted in order to train the model parameters given a classified reference set of proteins. A significantly higher prediction accuracy is achieved by combining our new node feature class with the standard node feature class "relative accessible surface area". The quality of the predictions is measured by computing the area under the receiver operating characteristic curve.
Trade of cattle between farms forms a complex trade network. We investigate partitions of this network for cattle trade in Germany. These partitions are groups of farms with similar properties, and they are inferred directly from the trade patterns between farms. We make use of a relatively new method known as stochastic block modeling (SBM) in order to divide the network into smaller units. SBM turns out to outperform the more established community detection methods in the context of disease control via trade restrictions. Moreover, SBM is also superior to geographically based trade restrictions and could be a promising approach for disease control.
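For illustration, the generative form of an SBM: each farm belongs to a block, and trade links appear independently with probabilities depending only on the two blocks. The study fits such a model to the observed trade network; the sketch below only samples from one, with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
membership = rng.integers(0, 2, size=40)          # two blocks of farms
P = np.array([[0.30, 0.02],                       # dense within blocks,
              [0.02, 0.25]])                      # sparse between blocks

# edge probability for every ordered pair of farms, then sample links
p_edge = P[membership[:, None], membership[None, :]]
A = (rng.random((40, 40)) < p_edge).astype(int)   # directed trade links
np.fill_diagonal(A, 0)
print(A.sum(), "trade links among", len(membership), "farms")
```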
Tuberculosis (TB) has tremendous public health relevance. It most frequently affects the lung and is characterized by the development of unique tissue lesions, termed granulomas. These lesions encompass various immune populations, with macrophages being most extensively investigated. Myeloid derived suppressor cells (MDSCs) have been recently identified in TB patients, both in the circulation and at the site of infection, however their interactions with Mycobacterium tuberculosis (Mtb) and their impact on granulomas remain undefined. We generated human monocytic MDSCs and observed that their suppressive capacities are retained upon Mtb infection. We employed an in vitro granuloma model, which mimics human TB lesions to some extent, with the aim of analyzing the roles of MDSCs within granulomas. MDSCs altered the structure of and affected bacterial containment within granuloma-like structures. These effects were partly controlled through highly abundant secreted IL-10. Compared to macrophages, MDSCs activated primarily the NF-κB and MAPK pathways and the latter largely contributed to the release of IL-10 and replication of bacteria within in vitro generated granulomas. Moreover, MDSCs upregulated PD-L1 and suppressed proliferation of lymphocytes, albeit with negligible effects on Mtb replication. Further comprehensive characterization of MDSCs in TB will contribute to a better understanding of disease pathogenesis and facilitate the design of novel immune-based interventions for this deadly infection.
Liver diseases are important causes of morbidity and mortality worldwide. The aim of
this study was to identify differentially expressed microRNAs (miRNAs), target genes, and key
pathways as innovative diagnostic biomarkers in liver patients with different pathology and functional
state. We determined, using RT-qPCR, the expression of 472 miRNAs in 125 explanted livers from
subjects with six different liver pathologies and from control livers. ANOVA was employed to
obtain differentially expressed miRNAs (DEMs), and miRDB (MicroRNA target prediction database)
was used to predict target genes. A miRNA–gene differential regulatory (MGDR) network was
constructed for each condition. Key miRNAs were detected using topological analysis. Enrichment
analysis for DEMs was performed using the Database for Annotation, Visualization, and Integrated
Discovery (DAVID). We identified important DEMs common and specific to the different patient
groups and disease progression stages. hsa-miR-1275 was universally downregulated regardless
of the disease etiology and stage, while hsa-let-7a*, hsa-miR-195, hsa-miR-374, and hsa-miR-378 were
deregulated. The most significantly enriched pathways of target genes controlled by these miRNAs
comprise p53 tumor suppressor protein (TP53)-regulated metabolic genes, and those involved in
regulation of methyl-CpG-binding protein 2 (MECP2) expression, phosphatase and tensin homolog
(PTEN) messenger RNA (mRNA) translation and copper homeostasis. Our findings show a novel
panel of deregulated miRNAs in the liver tissue from patients with different liver pathologies. These
miRNAs hold potential as biomarkers for diagnosis and staging of liver diseases.
Convolutional Neural Network-based image classification models are the current state-of-the-art for solving image classification problems. However, obtaining and using such a model to solve a specific image classification problem presents several challenges in practice. To train the model, we need to find good hyperparameter values for training, such as initial model weights or learning rate. However, finding these values is usually a non-trivial process. Another problem is that the training data used for model training is often class-imbalanced in practice. This usually has a negative impact on model training. However, not only is it challenging to obtain a Convolutional Neural Network-based model, but also to use the model after model training. After training, the model might be applied to images that were drawn from a data distribution that is different from the data distribution the training data was drawn from. These images are typically referred to as out-of-distribution samples. Unfortunately, Convolutional Neural Network-based image classification models typically fail to predict the correct class for out-of-distribution samples without warning, which is problematic when such a model is used for safety-critical applications. In my work, I examined whether information from the layers of a Convolutional Neural Network-based image classification model (pixels and activations) can be used to address all of these issues. As a result, I suggest a method for initializing the model weights based on image patches, a method for balancing a class-imbalanced dataset based on layer activations, and a method for detecting out-of-distribution samples, which is also based on layer activations. To test the proposed methods, I conducted extensive experiments using different datasets. My experiments showed that layer information (pixels and activations) can indeed be used to address all of the aforementioned challenges when training and using Convolutional Neural Network-based image classification models.
Independence is a basic concept of probability theory and statistics. In many fields of science, the dependency of different variables has gained a lot of attention from scientists. A measure, named information dependency, is proposed to express the dependency of a group of random variables. This measure is defined as the Kullback-Leibler divergence of a joint distribution with respect to the product-marginal distribution of these random variables. In the bivariate case, this measure is known as the mutual information of two random variables; thus, the information dependency has a strong relationship with information theory. The thesis aims to give a thorough study of the information dependency from both mathematical and practical viewpoints. Concretely, we investigate the following three problems: 1. Proving that the information dependency is a useful tool to express the dependency of a group of random variables, by comparing it with other measures of dependency. 2. Studying methods to estimate the information dependency based on samples of a group of random variables. 3. Investigating how the Independent Component Analysis problem, an interesting problem in statistics, can be solved using the information dependency.
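A minimal sketch of the information dependency for discrete random variables, computed directly from its definition as a Kullback-Leibler divergence:

```python
# Information dependency: KL divergence between the joint distribution and
# the product of its marginals.  For two variables this is the mutual
# information.
import numpy as np

def information_dependency(joint):
    """`joint` is an n-dimensional array of probabilities summing to 1."""
    joint = np.asarray(joint, dtype=float)
    ndim = joint.ndim
    # product of the marginal distributions, broadcast to the joint's shape
    product = np.ones_like(joint)
    for axis in range(ndim):
        marginal = joint.sum(axis=tuple(a for a in range(ndim) if a != axis))
        shape = [1] * ndim
        shape[axis] = -1
        product = product * marginal.reshape(shape)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / product[mask])))

# Independent coins -> 0; perfectly correlated coins -> log 2.
print(information_dependency(np.full((2, 2), 0.25)))        # 0.0
print(information_dependency([[0.5, 0.0], [0.0, 0.5]]))     # 0.693...
```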
High-throughput expression data have become the norm in molecular biology research. However, the analysis of expression data is statistically and computationally challenging and has not kept up with their generation. This has resulted in large amounts of unexplored data in public repositories. After pre-processing and quality control, the typical gene expression analysis workflow follows two main steps. First, the complexity of the data is reduced by removing the genes that are redundant or irrelevant for the biological question that motivated the experiment, using a feature selection method. Second, relevant genes are investigated to extract biological information that could aid in the interpretation of the results. Different methods, such as functional annotation, clustering, network analysis, and/or combinations thereof are useful for the latter purpose. Here, I investigated and presented solutions to three problems encountered in the expression data analysis workflow. First, I worked on reducing complexity of high-throughput expression data by selecting relevant genes in the context of the sample classification problem. The sample classification problem aims to assign unknown samples into one of the known classes, such as healthy and diseased. For this purpose, I developed the relative signal-to-noise ratio (rSNR), a novel feature selection method which was shown to perform significantly better than other methods with similar objectives. Second, to better understand complex phenotypes using high-throughput expression data, I developed a pipeline to identify the underlying biological units, as well as their interactions. These biological units were assumed to be represented by groups of genes working in synchronization to perform a given function or participate in common biological processes or pathways. Thus, to identify biological units, those genes that had been identified as relevant to the phenotype under consideration through feature selection methods were clustered based on both their functional annotations and expression profiles. Relationships between the associated biological functions, processes, and/or pathways were investigated by means of a co-expression network. The developed pipeline provides a new perspective to the analysis of high-throughput expression data by investigating interactions between biological units. Finally, I contributed to a project where a network describing pluripotency in mouse was used to infer the corresponding network in human. Biological networks are context-specific. Combining network information with high-throughput expression data can explain the control mechanisms underlying changes and maintenance of complex phenotypes. The human network was constructed on the basis of orthology between mouse and human genes and proteins. It was validated with available data in the literature. The methods and strategies proposed here were mainly trained and tested on microarray expression data. However, they can be easily adapted to next-generation sequencing and proteomics data.
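For context, a sketch of classic signal-to-noise-ratio feature scoring for two-class expression data; the rSNR developed in the thesis is a novel relative variant whose exact definition is not reproduced here.

```python
# Classic SNR gene score (Golub et al. style): absolute difference of class
# means divided by the sum of class standard deviations; data are simulated.
import numpy as np

def snr_scores(X, y):
    """X: samples x genes expression matrix; y: binary class labels."""
    X, y = np.asarray(X, float), np.asarray(y)
    a, b = X[y == 0], X[y == 1]
    return np.abs(a.mean(0) - b.mean(0)) / (a.std(0) + b.std(0) + 1e-12)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
y = np.repeat([0, 1], 15)
X[y == 1, 0] += 2.0                 # make gene 0 differentially expressed
print(snr_scores(X, y).round(2))    # gene 0 gets the highest score
```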
Interactive Visualization for the Exploration of Aligned Biological Networks and Their Evolution
(2011)
Network visualization is a widely used tool in biology. Biological networks, such as protein-interaction networks, are important for many aspects of life. Today, biologists use the comparison of networks of different species (network alignment) to understand the networks in more detail and to understand the underlying evolution. The goal of this work is to develop visualization software that is able to visualize network alignments as well as their evolution. The presented software is the first for such visualization tasks. It uses 3D graphics and animations for the dynamic visualization of evolution. This work consists of a review of the Related Work, a chapter about our Graph-based Approach for Interactive Visualization of Evolving Network Alignments, an explanation of the Graph Layout Algorithm, and some hints for the Software System.
The history of mathematics has been led in part by the desire for generalization: once an object was given and had been understood, there was the desire to find a more general version of it, to fit it into a broader framework. Noncommutative mathematics fits into this description, as its objects of interest are analogous to vector spaces, or probability spaces, etc., but without the commonsense interpretation that those latter objects possess. Indeed, a space can be described by its points, but also, and equivalently, by the set of functions on this space. This set is actually a commutative algebra, sometimes equipped with some more structure: *-algebra, C*-algebra, von Neumann algebra, Hopf algebra, etc. The idea that lies at the basis of noncommutative mathematics is to replace such algebras by algebras that are not necessarily commutative any more and to interpret them as "algebras of functions on noncommutative spaces". Of course, these spaces do not exist independently of their defining algebras, but it turns out that many of the results holding in (classical) probability or (classical) group theory can be extended to their noncommutative counterparts, or find therein powerful analogues. The extension of group theory into the realm of noncommutative mathematics has long been studied and has yielded the various quantum groups. The simplest version, compact quantum groups, consists of C*-algebras equipped with a *-homomorphism Δ with values in the tensor product of the algebra with itself, verifying a coassociativity condition. It is also required that a compact quantum group verifies what is known as the quantum cancellation property. It can be shown that (classical) compact groups are indeed a particular case of compact quantum groups. The area of compact quantum groups, and of quantum groups at large, is a fruitful area of research. Nevertheless, another generalization of group theory can be envisioned, namely by taking a comultiplication Δ with values not in the tensor product but rather in the free product (in the category of unital *-algebras). This leads to the theory of dual groups in the sense of Voiculescu, also called H-algebras by Zhang. These objects have not been studied as thoroughly as their quantum counterparts. They are admittedly less flexible, which is why few examples of them are known and why one can show that certain relations cannot exist in the dual group case, because they do not pass to the coproduct. Nevertheless, I have been interested during a great part of my PhD work in these objects, and I have made some progress towards their understanding, especially regarding quantum Lévy processes defined on them and Haar states.
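For orientation, the coassociativity condition mentioned above reads, for a comultiplication Δ: A → A ⊗ A,

\[ (\Delta \otimes \mathrm{id}_A)\circ\Delta \;=\; (\mathrm{id}_A \otimes \Delta)\circ\Delta, \]

and for a dual group the same identity is imposed with the tensor product replaced by the free product of unital *-algebras (with its canonical inclusions).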
Background
The alignment of large numbers of protein sequences is a challenging task and its importance grows rapidly along with the size of biological datasets. State-of-the-art algorithms have a tendency to produce less accurate alignments with an increasing number of sequences. This is a fundamental problem since many downstream tasks rely on accurate alignments.
Results
We present learnMSA, a novel statistical learning approach of profile hidden Markov models (pHMMs) based on batch gradient descent. Fundamentally different from popular aligners, we fit a custom recurrent neural network architecture for (p)HMMs to potentially millions of sequences with respect to a maximum a posteriori objective and decode an alignment. We rely on automatic differentiation of the log-likelihood, and thus, our approach is different from existing HMM training algorithms like Baum–Welch. Our method does not involve progressive, regressive, or divide-and-conquer heuristics. We use uniform batch sampling to adapt to large datasets in linear time without the requirement of a tree. When tested on ultra-large protein families with up to 3.5 million sequences, learnMSA is both more accurate and faster than state-of-the-art tools. On the established benchmarks HomFam and BaliFam with smaller sequence sets, it matches state-of-the-art performance. All experiments were done on a standard workstation with a GPU.
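To make the training principle concrete, the following toy sketch fits a plain HMM by automatic differentiation of the forward-algorithm log-likelihood with uniform batch sampling (PyTorch, my own illustration; learnMSA's actual pHMM architecture, MAP objective and decoding are considerably more elaborate):

    import torch

    S, A = 8, 25            # hidden states, alphabet size (toy values)
    init  = torch.zeros(S, requires_grad=True)      # unnormalized log initial distribution
    trans = torch.zeros(S, S, requires_grad=True)   # unnormalized log transition matrix
    emit  = torch.zeros(S, A, requires_grad=True)   # unnormalized log emission matrix
    opt = torch.optim.Adam([init, trans, emit], lr=0.1)

    def log_likelihood(batch):                      # batch: (B, L) integer-encoded sequences
        log_pi = torch.log_softmax(init, dim=0)
        log_T  = torch.log_softmax(trans, dim=1)
        log_E  = torch.log_softmax(emit, dim=1)
        alpha = log_pi + log_E[:, batch[:, 0]].T    # (B, S) forward variables in log space
        for t in range(1, batch.shape[1]):
            alpha = torch.logsumexp(alpha.unsqueeze(2) + log_T, dim=1) + log_E[:, batch[:, t]].T
        return torch.logsumexp(alpha, dim=1).mean() # mean per-sequence log-likelihood

    sequences = torch.randint(0, A, (10_000, 40))   # placeholder for millions of real sequences
    for step in range(200):
        idx = torch.randint(0, sequences.shape[0], (64,))  # uniform batch sampling
        loss = -log_likelihood(sequences[idx])
        opt.zero_grad()
        loss.backward()                             # gradients via autodiff, not Baum-Welch
        opt.step()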
Conclusions
Our results show that learnMSA does not share the counterintuitive drawback of many popular heuristic aligners, which can substantially lose accuracy when many additional homologs are input. LearnMSA is a future-proof framework for large alignments with many opportunities for further improvements.
Gram-negative bacteria secrete lipopolysaccharides (LPS), leading to a host immune response in which proinflammatory cytokines are secreted. Among these proinflammatory cytokines are TNF-α and IFN-γ, which induce the production of indoleamine 2,3-dioxygenase (IDO). IDO production is increased during severe sepsis and septic shock, and high IDO levels are associated with increased mortality. This enzyme catalyzes the degradation of tryptophan (TRP) to kynurenine (KYN) along the kynurenine pathway (KP). KYN is further degraded to kynurenic acid (KYNA). Increased IDO levels are accompanied by increased levels of KYNA, which is associated with immunoparalysis.
Due to its central role, the KP is a potential target of therapeutic intervention. The degradation of TRP to KYN by IDO was targeted with 1-methyltryptophan (1-MT), which is assumed to inhibit IDO. Administering 1-MT increased the survival of 1-MT-treated mice suffering from sepsis compared to untreated mice. The levels of downstream metabolites such as KYN and KYNA were expected to decrease. Surprisingly, in healthy mice and pigs, an increase in KYNA after 1-MT administration was reported. These unexpected metabolite alterations after 1-MT administration, and the mode of action behind them, have not been the focus of recent research. Hence, there is no explanation for the KYNA increase while KYN remains unchanged.
This thesis aims to postulate a possible degradation pathway of 1-MT along the KP with the help of ordinary differential equation (ODE) systems. Moreover, the developed ODE models were used to determine the ability of 1-MT to inhibit IDO in vivo. To this end, a number of ODE models were developed, including a model of the KP, an extension by LPS administration, and a further extension by 1-MT administration. In addition, seven ODE models were developed, each considering a possible degradation pathway of 1-MT. The most likely degradation pathway was combined with the ODE model of LPS administration, including the inhibitory effects of 1-MT.
These models consist of several coupled equations describing the dynamics of the KP. For each component of the KP, one equation describes its alterations over time. Equations for TRP, KYN, KYNA, and quinolinic acid (QUIN) were developed, and the alterations of serotonin (SER) were also included. All of these belong to the TRP metabolism: TRP is degraded to SER and to KYN, and KYN is further degraded to KYNA and QUIN. Every degradation step is catalyzed by an enzyme. Therefore, Michaelis-Menten (MM) kinetics were used, employing the substrate constant Km and the maximal degradation velocity Vmax. To reduce the complexity of parameter estimation, the Km values of the different enzymes were fixed to literature values.
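A single MM-type degradation step contributes a term of the familiar form (a generic illustration; in the thesis's models each metabolite couples several such production and degradation terms):

\[ \frac{d[S]}{dt} \;=\; -\,\frac{V_{\max}\,[S]}{K_m + [S]}, \]

so that, for instance, TRP loses mass through one such term towards SER and one towards KYN, while KYN gains the corresponding TRP term and loses mass towards KYNA and QUIN.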
The remaining parameters of the equations were determined so that the trajectories of the calculated metabolite levels correspond to the data; this was done for the different models. To propose a degradation pathway of 1-MT leading to increased KYNA levels, seven models were developed and compared. The most likely model was then extended to test whether the inhibitory effects of 1-MT on IDO can be determined.
Three different approaches were used to determine the ODE model parameters for the different hypotheses of 1-MT degradation. In the first approach, ODE model parameters were fixed to values fitted to an independent data set. In the second approach, parameters were fitted to a subset of the data set that was also used for simulating the different hypotheses. The third approach calculated the ODE model parameters 100 times without fixing any parameters; the parameter set whose trajectories of the TRP metabolites had the smallest distance to the data was assumed to be the most likely. The ODE model parameters were fitted to data measured in pigs, delivered by two different experimental models. The first experimental model activates IDO by LPS administration in pigs; the second combines IDO activation by LPS with the administration of 1-MT in pigs.
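The fitting scheme can be illustrated with a toy two-step chain TRP → KYN → KYNA and made-up numbers (a sketch only; the thesis's models, data and parameter values are not reproduced here):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    KM = np.array([50.0, 20.0])  # Km values fixed to (hypothetical) literature values

    def kp_rhs(t, x, vmax):
        trp, kyn, kyna = x
        v1 = vmax[0] * trp / (KM[0] + trp)  # TRP -> KYN
        v2 = vmax[1] * kyn / (KM[1] + kyn)  # KYN -> KYNA
        return [-v1, v1 - v2, v2]

    def residuals(vmax, t_data, y_data, y0):
        sol = solve_ivp(kp_rhs, (t_data[0], t_data[-1]), y0, t_eval=t_data, args=(vmax,))
        return (sol.y - y_data).ravel()  # distance between trajectories and data

    t_data = np.linspace(0.0, 24.0, 9)  # hours
    y0 = [60.0, 2.0, 0.1]               # initial metabolite levels
    y_data = solve_ivp(kp_rhs, (0, 24), y0, t_eval=t_data, args=([4.0, 1.5],)).y
    y_data = y_data + 0.5 * np.random.default_rng(1).normal(size=y_data.shape)

    fit = least_squares(residuals, x0=[1.0, 1.0], args=(t_data, y_data, y0),
                        bounds=(0.0, np.inf))
    print("estimated Vmax:", fit.x)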
According to approach 1, the most likely hypothesis was the degradation of 1-MT to KYNA and TRP; for the second data set, the most likely one was the direct degradation of 1-MT to KYNA. With approach 2, the most likely degradation pathways were the combination of all degradation pathways and the degradation of 1-MT to TRP with TRP further degraded to KYNA. With approach 3, the most likely route of KYNA increase was the direct degradation of 1-MT to KYNA. In summary, the three approaches most frequently supported hypothesis 2, the direct degradation of 1-MT to KYNA. A cell-free assay validated this result. In this experiment, 1-MT or TRP was combined with or without the enzyme kynurenine aminotransferase (KAT), which had already been shown to degrade TRP directly to KYNA, and the levels of TRP, KYN and KYNA were measured. The highest KYNA levels were obtained in the assay combining KAT with 1-MT, corresponding to hypothesis 2. The models describing the inhibitory effects of 1-MT revealed that, for all three approaches, the model without inhibitory effects of 1-MT on IDO was more likely.
The correctness of hypothesis 2 has to be confirmed by further in vitro experiments, and it remains to be investigated which reactions promote the degradation of 1-MT to KYNA. The missing inhibitory properties of 1-MT on IDO, as determined by the in silico ODE models, align with previous research, which showed that the saturation of 1-MT, e.g. in pigs, was too low to inhibit IDO efficiently. In this study, the first possible degradation pathway of 1-MT along the KP is proposed. The reliability of the results depends on the quality of the experimental data and on the season in which the data were measured. Moreover, the results vary between the different approaches to parameter fitting; different fitting approaches therefore have to be included in the analysis to gain more evidence for the correctness of the results.
Spatial variation in survival has individual fitness consequences and influences population dynamics. It proximately and ultimately impacts space use, including migratory connectivity. Therefore, knowing spatial patterns of survival is crucial for understanding the demography of migrating animals. Extracting information on survival and space use from observation data, in particular dead recovery data, requires explicitly identifying the observation process. The main aim of this work is to establish a modeling framework which allows estimating spatial variation in survival, migratory connectivity and observation probability using dead recovery data. We provide some biological background on survival and migration and a short methodological overview of how similar situations are modeled in the literature.
Afterwards, we provide REML-like estimators for discrete space and show the identifiability of all three parameters using the characteristics of the multinomial distribution. Moreover, we formulate a model in continuous space using mixed binomial point processes. The continuous model assumes a constant recovery probability over space. To drop this strict assumption, we develop an optimization procedure combining the discrete and continuous space models, using penalized M-splines. In simulation studies we demonstrate the performance of the estimators for all three model approaches. Furthermore, we apply the models to real-world data sets of European robins (Erithacus rubecula) and ospreys (Pandion haliaetus).
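In discrete space, one plausible way to write the multinomial cell probabilities of such a dead-recovery model (my notation, not necessarily the thesis's) is

\[ p_{ij} \;=\; m_{ij}\,(1 - s_j)\,r_j, \]

where m_{ij} is the migratory connectivity from departure area i to destination area j, s_j the survival probability and r_j the recovery probability in j; the remaining probability mass 1 - \sum_j p_{ij} collects surviving and unrecovered individuals.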
We discuss how this study can be embedded in the framework of animal movement and the capture-mark-recapture/recovery methodology. It can be seen as a contribution to and an extension of distance sampling, locally stationary everyday movement and dispersal. We emphasize the importance of a mathematically clearly formulated modeling framework for applied methods. Moreover, we comment on model assumptions and their limits. In the future, it would be appealing to extend this framework to the full annual cycle and to carry-over effects.
We introduce a multi-step machine learning approach and use it to classify data from EEG-based brain-computer interfaces. This approach works very well for high-dimensional EEG data. First, all features are divided into subgroups and linear discriminant analysis is used to obtain a score for each subgroup. The procedure is then applied to subgroups of the resulting scores and iterated until only one score remains, which is used for classification. In this way we avoid estimating the high-dimensional covariance matrix of all features. We investigate the classification performance with special attention to the small sample size case. For the normal model, we study the asymptotic error rate when the dimension p and the sample size n tend to infinity. This indicates how to define the sizes of the subgroups at each step. In addition, we present a theoretical error bound for the spatio-temporal normal model with separable covariance matrix, which results in a recommendation on how subgroups should be formed for this kind of data. Finally, some techniques, for example wavelets and independent component analysis, are used to extract features from EEG-based brain-computer interface data.
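One hedged reading of the procedure, sketched in Python with scikit-learn (an in-sample toy illustration; subgroup sizes, the recursion and the data are placeholders, not the paper's setup):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def multistep_lda_scores(X, y, group_size=8):
        """Split features into subgroups, reduce each subgroup to one LDA score,
        then recurse on the scores until a single score per sample remains."""
        while X.shape[1] > 1:
            scores = []
            for start in range(0, X.shape[1], group_size):
                block = X[:, start:start + group_size]
                lda = LinearDiscriminantAnalysis().fit(block, y)
                scores.append(lda.decision_function(block))  # one score per sample
            X = np.column_stack(scores)          # scores become the next level's features
        return X.ravel()

    # toy usage: 40 samples, 64-dimensional features, two classes
    rng = np.random.default_rng(0)
    y = np.repeat([0, 1], 20)
    X = rng.normal(size=(40, 64)) + 0.3 * y[:, None]
    predictions = (multistep_lda_scores(X, y) > 0).astype(int)

Each per-subgroup LDA only needs the small covariance matrix of its block, which is the point of the construction.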
Neutrophils in Tuberculosis: Cell Biology, Cellular Networking and Multitasking in Host Defense
(2021)
Neutrophils readily infiltrate infection foci, phagocytose and usually destroy microbes. In tuberculosis (TB), a chronic pulmonary infection caused by Mycobacterium tuberculosis (Mtb), neutrophils harbor bacilli, are abundant in tissue lesions, and their abundance in blood correlates with poor disease outcomes in patients. The biology of these innate immune cells in TB is complex. Neutrophils have been assigned host-beneficial as well as deleterious roles. The short lifespan of neutrophils purified from blood poses challenges to cell biology studies, leaving intracellular biological processes and the precise consequences of Mtb–neutrophil interactions ill-defined. The phenotypic heterogeneity of neutrophils, and their propensity to engage in cellular cross-talk and to exert various functions during homeostasis and disease, have recently been reported, and such observations are newly emerging in TB. Here, we review the interactions of neutrophils with Mtb, including subcellular events and cell fate upon infection, and summarize the cross-talk between neutrophils and lung-residing and -recruited cells. We highlight the roles of neutrophils in TB pathophysiology, discussing recent findings from distinct models of pulmonary TB, and emphasize technical advances that could facilitate the discovery of novel neutrophil-related disease mechanisms and enrich our knowledge of TB pathogenesis.
The innate immune system relies on families of pattern recognition receptors (PRRs) that detect distinct conserved molecular motifs from microbes to initiate antimicrobial responses. Activation of PRRs triggers a series of signaling cascades, leading to the release of pro-inflammatory cytokines, chemokines and antimicrobials, thereby contributing to the early host defense against microbes and regulating adaptive immunity. Additionally, PRRs can detect perturbation of cellular homeostasis caused by pathogens and fine-tune the immune responses. Among PRRs, nucleotide-binding oligomerization domain (NOD)-like receptors (NLRs) have attracted particular interest in the context of cellular stress-induced inflammation during infection. Recently, mechanistic insights into the monitoring of cellular homeostasis perturbation by NLRs have been provided. We summarize the current knowledge about the disruption of cellular homeostasis by pathogens and focus on NLRs as innate immune sensors for its detection. We highlight the mechanisms employed by various pathogens to elicit cytoskeleton disruption, organelle stress as well as protein translation block, point out exemplary NLRs that guard cellular homeostasis during infection and introduce the concept of stress-associated molecular patterns (SAMPs). We postulate that the integration of information about microbial patterns, danger signals, and SAMPs equips the innate immune system with adequate plasticity and precision in elaborating responses to microbes of variable virulence.
Mathematical phylogenetics provides the theoretical framework for the reconstruction and analysis of phylogenetic trees and networks. The underlying theory is based on various mathematical disciplines, ranging from graph theory to probability theory.
In this thesis, we take a mostly combinatorial and graph-theoretical position and study different problems concerning phylogenetic trees and networks.
We start by considering phylogenetic diversity indices that rank species for conservation. Two such indices for rooted trees are the Fair Proportion index and the Equal Splits index, and we analyze how different they can be from each other and under which circumstances they coincide. Moreover, we define and investigate analogues of these indices for unrooted trees.
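For orientation, the Fair Proportion index of a leaf x in a rooted tree T with root ρ is commonly defined as

\[ \mathrm{FP}(x) \;=\; \sum_{e \in P(\rho, x)} \frac{\lambda_e}{n_e}, \]

where the sum runs over the edges on the path from ρ to x, λ_e is the length of edge e and n_e the number of leaves descending from e; the Equal Splits index instead divides each edge length equally among the subtrees at every subsequent branching on the way down to the leaves.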
Subsequently, we study the Shapley value of unrooted trees, another popular phylogenetic diversity index. We show that it may fail as a prioritization criterion in biodiversity conservation and is outcompeted by an existing greedy approach. Afterwards, we leave the biodiversity setting and consider the Shapley value as a tree reconstruction tool. Here, we show that non-isomorphic trees may have permutation-equivalent Shapley transformation matrices and identical Shapley values, implying that the Shapley value cannot reliably be employed in tree reconstruction.
In addition to phylogenetic diversity indices, another class of indices frequently discussed in mathematical phylogenetics is the class of balance indices. In this thesis, we study one of the oldest and most popular of them, namely the Colless index for rooted binary trees. We focus on its extremal values and analyze both its maximum and minimum values as well as the trees that achieve them.
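For reference, the Colless index of a rooted binary tree T is usually given as

\[ C(T) \;=\; \sum_{v} \big| n_L(v) - n_R(v) \big|, \]

where the sum runs over all internal vertices v and n_L(v), n_R(v) denote the numbers of leaves in the two maximal pending subtrees rooted at the children of v.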
Having analyzed various questions regarding phylogenetic trees, we finally turn to phylogenetic networks. We focus on a certain class of phylogenetic networks, namely tree-based networks, and consider this class both in a rooted and in an unrooted setting.
First, we prove the existence of a rooted non-binary universal tree-based network with n leaves for all positive integers n; that is, we show that there exists a rooted non-binary tree-based network with n leaves that has every non-binary phylogenetic tree on the same leaf set as a base tree.
Finally, we study unrooted tree-based networks and introduce a class of networks that are necessarily tree-based, namely edge-based networks. We show that edge-based networks are closely related to a family of graphs in classical graph theory, so-called generalized series-parallel graphs, and explore this relationship in full detail.
In summary, we add new insights into existing concepts in mathematical phylogenetics, answer open questions in the literature, and introduce new concepts and approaches. In doing so, we make a small but relevant contribution to current research in mathematical phylogenetics.
The geometric arena here is a smooth manifold of dimension n equipped with a Riemannian or pseudo-Riemannian metric and an affine connection. Field theories following from a variational principle are considered on this basis. In this context, all invariants which are quadratic in the curvature are determined. The work derives several manifestly covariant formulas for the Euler-Lagrange derivatives, i.e. the field equations. Some of these field theories can be interpreted as gravitational theories alternative to Einstein's general relativity. The work also touches on the difficult problem of defining and calculating the energy and momentum of a gravitational field.
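In the purely metric case the quadratic curvature invariants are, schematically,

\[ R^2, \qquad R_{\mu\nu}R^{\mu\nu}, \qquad R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}, \]

giving a general quadratic Lagrangian a R^2 + b R_{\mu\nu}R^{\mu\nu} + c R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}; for a general affine connection the list is longer, since the Ricci tensor need not be symmetric and further independent contractions of the curvature exist.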
The constructions of Lévy processes from convolution semigroups and of product systems from subproduct systems, respectively, are formally quite similar. Since there are many more comparable situations in quantum stochastics, we formulate a general categorial concept (comonoidal systems), construct corresponding inductive systems and show, under suitable assumptions, general properties of the corresponding inductive limits. Comonoidal systems in different tensor categories play a role in all chapters of the thesis. Additive deformations are certain comonoidal systems of algebras. These are obtained by deforming the algebra structure of a bialgebra. If the bialgebra is even a Hopf algebra, then compatibility with the antipode follows automatically. This remains true in the case of braided Hopf algebras. Subproduct systems are comonoidal systems of Hilbert spaces. In the thesis we deal with the question of which dimensions finite-dimensional subproduct systems can have. In discrete time, this can be reduced to the combinatorial problem of determining the complexities of factorial languages. We also discuss the rational and continuous time cases. A further source of comonoidal systems are universal products, which are used in quantum probability to model independence. For the (r,s)-products, which were recently introduced by S. Lachs, we determine the corresponding product of representations by means of a generalized GNS construction.
This thesis deals with the thickness optimization of shells. The overall task is to find an optimal thickness distribution in order to minimize the deformation of a loaded shell with prescribed volume. In addition, lower and upper bounds for the thickness are given. The shell is made of elastic, isotropic, homogeneous material. The deformation is modeled using equations from linear elasticity; here, a basic shell model based on the Reissner-Mindlin assumption is used. Both the stationary and the dynamic case are considered. The continuity and the Gâteaux-differentiability of the control-to-state operator are investigated. These results are applied to the reduced objective with the help of adjoint theory. In addition, techniques from shape optimization are compared to the optimal control approach. Subsequently, the theoretical results are applied to cylindrical shells and an efficient numerical implementation is presented. Finally, numerical results are shown and analyzed for different examples.
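Schematically, and suppressing the precise function spaces, the problem has the structure of a PDE-constrained optimal control problem (my formulation, not the thesis's notation):

\[ \min_{t}\; J\big(u(t)\big) \quad \text{s.t.} \quad a_t\big(u(t), v\big) = \ell(v) \;\; \forall v, \qquad \int_{\omega} t \, \mathrm{d}\omega = V_0, \qquad t_{\min} \le t \le t_{\max}, \]

where the thickness t is the control, u(t) is the displacement solving the Reissner-Mindlin shell equations in weak form, and J measures the deformation (e.g. the compliance).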