The study of sow reproduction traits is important in livestock science and production to increase animal survival and economic efficiency. This work deals with the detection of different effects on the within-litter variance of birth weight by applying statistical models with different distributional assumptions. The piglets within one litter were separated by sex, and the sow trait was formed from the sample variances of birth weights within litter, separated by sex, to account for the sex effect on mean birth weight. A linear mixed model (LMM) was fitted to the logarithmized sample variance and to the sample standard deviation. A generalized linear mixed model with gamma-distributed residuals and log-link function was applied to the untransformed sample variance. Appropriate weights were constructed to account for individual litter sizes. The models were compared by analysing data from Landrace and Large White. The estimates of heritability for the different traits ranged from 6% to 14%. The LMM for the weighted standard deviation of birth weights was identified as most suitable in terms of residual normality. Furthermore, the impact of piglets' sex on birth weight variability was tested, but it was confirmed for only one practical dataset. Additionally, we analysed the influence of including or excluding birth weights of stillborn piglets on the estimates of variance components of birth weight variability. With stillborns omitted, the estimates of heritability were about 2% higher than in investigations of all born piglets. We were also interested in the presence of a random boar effect on birth weight variability. The corresponding variance component was tested via the restricted likelihood ratio test. Among other approaches, the null distribution of the test statistic was approximated by parametric bootstrap simulations, which were computationally intensive.
We picked up a two-parameter approach from the literature and proposed a three-parameter approach to approximate the null distribution of the test statistic. We analysed correlated data in balanced (simulated data) and unbalanced (empirical data) designs. The two-parameter approach, which uses a scaled mixture of chi-square distributions, and the three-parameter approach, which uses a mixture of a point mass at zero and a gamma distribution, behaved most robustly in all investigations and were the most powerful in the simulation study.
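The two null-distribution approximations can be sketched as follows. The function and parameter names are illustrative assumptions; in practice the mixture weight, scale, and shape parameters would be fitted to parametric bootstrap samples rather than supplied by hand:

```python
import numpy as np
from scipy import stats

def rlrt_pvalue_two_param(rlrt_obs, mix_weight, scale, df=1.0):
    """P-value under a two-parameter null approximation: a mixture of a
    point mass at zero (weight 1 - mix_weight) and a scaled chi-square."""
    if rlrt_obs <= 0.0:
        return 1.0
    return mix_weight * stats.chi2.sf(rlrt_obs / scale, df)

def rlrt_pvalue_three_param(rlrt_obs, mix_weight, shape, rate):
    """P-value under a three-parameter null approximation: a mixture of a
    point mass at zero and a gamma distribution with given shape and rate."""
    if rlrt_obs <= 0.0:
        return 1.0
    return mix_weight * stats.gamma.sf(rlrt_obs, shape, scale=1.0 / rate)
```

With mixture weight 0.5, scale 1 and one degree of freedom, the two-parameter form reduces to the classical 0.5:0.5 mixture of a point mass at zero and a chi-square distribution.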

We present classical and hybrid modeling approaches for genetic regulatory networks, focusing on promoter analysis for negatively and positively autoregulated networks. The main aim of this thesis is to introduce an alternative mathematical approach to modeling gene regulatory networks based on piecewise deterministic Markov processes (PDMPs). During somitogenesis, the process of early segmentation in vertebrates, molecular oscillators play a crucial role as part of a segmentation clock. In mice, these oscillators are called Hes1 and Hes7 and are commonly modeled by a system of two delay differential equations including a Hill function, which describes the repression of the genes by their own gene products. The Hill coefficient, a measure of the nonlinearity of the binding processes in the promoter, is usually assumed to be equal to two, based on the fact that Hes1 and Hes7 form dimers. However, by standard arguments applied to binding analysis, we show that a higher Hill coefficient is reasonable. This leads to results different from those in the literature and requires a more sophisticated model. For the Hes7 oscillator we present a system of ordinary differential equations including a Michaelis-Menten term describing the nonlinear degradation of the proteins by the ubiquitin pathway. As demonstrated by the Hes1 and Hes7 oscillators, promoter behavior can have a strong influence on the dynamical behavior of genetic networks. Since purely deterministic systems cannot reveal phenomena caused by the inherent random fluctuations, we propose a novel approach based on PDMPs. Such models describe the binding of transcription factors to binding sites in a promoter as random processes, while all other processes, such as synthesis, degradation or dimerization of the gene products, are modeled in a deterministic manner.
We present and discuss a simulation algorithm for PDMPs and apply it to three types of genetic networks: an unregulated gene, a toggle switch, and a positively autoregulated network. The different regulation characteristics are analyzed and compared by numerical means. Furthermore, we determine analytical solutions of the stationary distributions of one negatively, and three positively autoregulated networks. Based on these results, we analyze attenuation of noise in a negative feedback loop, and the question of graded or binary response in autocatalytic networks.
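The PDMP idea can be illustrated with a minimal sketch for the unregulated gene: the promoter state jumps between OFF and ON at exponentially distributed times, while between jumps the protein concentration follows a linear ODE that can be solved exactly. Rate names and parameter values are illustrative assumptions, not the simulation algorithm of the thesis:

```python
import numpy as np

def simulate_pdmp(k_on, k_off, synth, deg, t_end, x0=0.0, seed=0):
    """Piecewise deterministic Markov process for an unregulated gene:
    the promoter jumps between OFF (0) and ON (1) at exponential waiting
    times; protein x follows dx/dt = synth*state - deg*x between jumps."""
    rng = np.random.default_rng(seed)
    t, x, state = 0.0, x0, 0
    times, traj = [t], [x]
    while t < t_end:
        rate = k_on if state == 0 else k_off
        tau = min(rng.exponential(1.0 / rate), t_end - t)
        # exact solution of the linear ODE on [t, t + tau]
        x_inf = synth * state / deg
        x = x_inf + (x - x_inf) * np.exp(-deg * tau)
        t += tau
        state = 1 - state  # promoter switch (the random jump of the PDMP)
        times.append(t)
        traj.append(x)
    return np.array(times), np.array(traj)
```

Starting from zero protein, the trajectory stays between 0 and the deterministic maximum synth/deg, relaxing toward one of these two levels depending on the current promoter state.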

Parsimonious Histograms
(2010)

The dissertation is concerned with the construction of data-driven histograms. Histograms are the most elementary density estimators of all; however, they require the specification of the number and width of their bins. This thesis provides two new construction methods delivering adaptive histograms in which the required parameters are determined automatically. Both methods follow the principle of parsimony, i.e. the histograms are solutions of predetermined optimization problems. In both cases, though under different aspects, the number of bins is minimized. The dissertation presents the algorithms that solve the optimization problems and illustrates them with a number of numerical experiments. Important properties of the estimators are shown. Finally, the newly developed methods are compared with standard methods in an extensive simulation study: using synthetic samples of different sizes and distributions, the histograms are evaluated by special performance criteria. As one main result, the proposed methods yield histograms with considerably fewer bins and an excellent ability to detect peaks.
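The principle of parsimony can be illustrated with a deliberately simple stand-in: choose the number of equal-width bins by maximizing a penalized log-likelihood in which each additional bin costs a fixed penalty. This is only a sketch of the trade-off between fit and bin count, not one of the thesis's two construction methods:

```python
import numpy as np

def penalized_histogram(sample, max_bins=50):
    """Pick the number of equal-width bins by maximizing the histogram
    log-likelihood minus a penalty of one unit per bin (parsimony)."""
    n = len(sample)
    best_k, best_score = 1, -np.inf
    for k in range(1, max_bins + 1):
        counts, edges = np.histogram(sample, bins=k)
        width = edges[1] - edges[0]
        nz = counts[counts > 0]
        loglik = np.sum(nz * np.log(nz / (n * width)))
        score = loglik - k  # each bin costs one unit
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```

On a clearly bimodal sample the penalized criterion buys enough bins to resolve both peaks, while on featureless data it keeps the histogram coarse.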

Self-similar sets are a class of fractals which can be rigorously defined and treated by mathematical methods. Their theory has been developed in n-dimensional space, but there are only a few good examples of self-similar sets in three-dimensional space. This thesis has two aims: first, to extend fractal constructions from two-dimensional to three-dimensional space; second, to study properties of these fractals such as finite type, disk-likeness, ball-likeness, and the Hausdorff dimension of their boundaries. We use the neighbor graph as a tool for creating new fractals and for studying their properties.
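A standard example of a self-similar set in three-dimensional space is the Sierpinski tetrahedron, the attractor of four contractions with ratio 1/2 toward the vertices of a regular tetrahedron. Its points can be generated by the chaos game; the vertex coordinates below are one common choice:

```python
import numpy as np

def chaos_game_3d(n_points=10000, seed=0):
    """Generate points of the Sierpinski tetrahedron via the chaos game:
    repeatedly move halfway toward a randomly chosen vertex of a
    regular tetrahedron with unit edge length."""
    vertices = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.5, np.sqrt(3) / 2, 0.0],
                         [0.5, np.sqrt(3) / 6, np.sqrt(6) / 3]])
    rng = np.random.default_rng(seed)
    p = np.zeros(3)
    pts = np.empty((n_points, 3))
    for i in range(n_points):
        p = (p + vertices[rng.integers(4)]) / 2.0  # contraction ratio 1/2
        pts[i] = p
    return pts
```

Four maps with ratio 1/2 satisfying the open set condition give Hausdorff dimension log 4 / log 2 = 2, one of the classical facts the neighbor graph technique generalizes.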

Background: Computational tools for the investigation of transcriptional regulation, in particular of transcription factor binding sites (TFBS), in an evolutionary context are developed. Existing sequence-based tools predicting such binding sites do not consider their actual functionality, although it is known that, besides the base sequence, many other aspects are relevant for binding and for the effects of that binding. In particular, in eukaryotes a perfectly matching sequence motif is neither necessary nor sufficient for a functional transcription factor binding site. Published work in the field of transcriptional regulation frequently focuses on the prediction of putative transcription factor binding sites based on sequence similarity to known binding sites. Furthermore, among the related software, only a small number of tools implements visualization of the evolution of transcription factor binding sites or the integration of other regulation-related data. The interface of many tools is made for computer scientists, although the actual interpretation of their output needs profound biological background knowledge. Results and Discussion: The tool presented in this thesis, "ReXSpecies", is a web application; it is therefore ready to use for the end user without installation and provides a graphical user interface. Besides extensive automation of analyses of transcriptional regulation (the only necessary input is the genomic coordinates of a regulatory region), new techniques to visualize the evolution of transcription factor binding sites were developed. Furthermore, an interface to genome browsers was implemented to enable scientists to comprehensively analyze their regulatory regions with respect to other regulation-relevant data. ReXSpecies contains a novel algorithm that searches for evolutionarily conserved patterns of transcription factor binding sites, which could imply functionality.
Such patterns were verified using some known transcription factor binding sites of genes involved in pluripotency. In the appendix, the efficiency and correctness of the used algorithm are discussed. Furthermore, a novel algorithm to color phylogenetic trees intuitively is presented. In the thesis, new possibilities to render evolutionarily conserved sets of transcription factor binding sites are developed. The thesis also discusses the evolutionary conservation of regulation and its context dependency. An important source of errors in the analysis of regulatory regions using comparative genetics probably lies in finding and aligning homologous regulatory regions; some alternatives to using sequence similarity alone are discussed. Outlook: Other possibilities to find (functionally) homologous regulatory regions, besides the whole-genome alignments currently used, are BLAST searches, local alignments, homology databases and alignment-free approaches. Using one or more of these alternatives could reduce the number of artifacts by reducing the number of regions that are erroneously declared homologous. To achieve more robust predictions of transcription, the author suggests using other regulation-related data besides sequence data alone. Therefore, the use and extension of existing tools, in particular from systems biology, is proposed.
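For context, the sequence-similarity baseline that the text argues is insufficient on its own is typically a position weight matrix (PWM) scan over the regulatory region. A minimal sketch, with an illustrative matrix layout (rows A, C, G, T; one column per motif position) and threshold:

```python
import numpy as np

def pwm_scan(sequence, pwm, threshold):
    """Scan a DNA sequence with a position weight matrix of per-base,
    per-position scores and report windows scoring above the threshold --
    the standard sequence-similarity approach to TFBS prediction."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    width = pwm.shape[1]
    hits = []
    for start in range(len(sequence) - width + 1):
        window = sequence[start:start + width]
        score = sum(pwm[idx[base], pos] for pos, base in enumerate(window))
        if score >= threshold:
            hits.append((start, score))
    return hits
```

A hit in such a scan is only a candidate site; as the abstract notes, a matching motif is neither necessary nor sufficient for functionality, which is why ReXSpecies adds evolutionary pattern conservation on top.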

Interactive Visualization for the Exploration of Aligned Biological Networks and Their Evolution
(2011)

Network visualization is a widely used tool in biology. Biological networks, such as protein interaction networks, are important for many aspects of life. Today, biologists compare the networks of different species (network alignment) to understand the networks in more detail and to understand the underlying evolution. The goal of this work is to develop visualization software that can display network alignments as well as their evolution. The presented software is the first for such visualization tasks. It uses 3D graphics and animations for the dynamic visualization of evolution. This work consists of a review of the related work, a chapter about our graph-based approach for the interactive visualization of evolving network alignments, an explanation of the graph layout algorithm, and some hints on the software system.

The goal of this doctoral thesis is to create and implement methods for fully automatic segmentation in magnetic resonance images and datasets. The work introduces the technical and physical background of magnetic resonance imaging (MRI) and summarizes essential segmentation challenges in MRI data, including technical malfunctions and the ill-posedness of inverse segmentation problems. The theoretical background of all the methods that are adapted, extended and combined for problem-specific segmentation applications is explained in detail. The first application of the implemented solutions deals with two-dimensional tissue segmentation of atherosclerotic plaques in cardiological MRI data. The main part of the segmentation solutions is designed for fully automatic liver and kidney parenchyma segmentation in three-dimensional MRI datasets, to enable computer-assisted organ volumetry in epidemiological studies. The results for every application are listed, described and discussed before important conclusions are drawn. Among the several applied methods, the level set method is the main focus of this work and is used as the central segmentation concept in most applications; its possibilities and limitations for MRI data segmentation are therefore analyzed. The level set method is extended by several new ideas to overcome possible limitations, and it is combined as an important part of modularized frameworks. Additionally, a new approach for probability map generation is presented, which reduces the data dimensionality of multiple MR weightings and incorporates organ position probabilities in a probabilistic framework. It is shown that essential organ features (i.e. MR intensity distributions, locations) can be well represented in the calculated probability maps.
Since MRI data are produced using multiple MR weightings, the dimensionality reduction technique is very helpful for generating a single probability map, which can be used for further segmentation steps in a modularized framework.
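The fusion of several MR weightings into one probability map can be sketched, under the simplifying assumption of Gaussian per-weighting intensity models, as a voxel-wise product of intensity likelihoods and a spatial position prior. This is a toy stand-in for the idea, not the dimensionality-reduction technique of the thesis:

```python
import numpy as np

def probability_map(weightings, means, stds, position_prior):
    """Fuse several MR weightings into one organ probability map:
    multiply per-voxel Gaussian intensity likelihoods (one Gaussian
    model per weighting) with a spatial position prior, then rescale
    to a maximum of one."""
    prob = np.array(position_prior, dtype=float)
    for img, mu, sigma in zip(weightings, means, stds):
        lik = np.exp(-0.5 * ((np.asarray(img, float) - mu) / sigma) ** 2)
        prob *= lik
    peak = prob.max()
    return prob / peak if peak > 0 else prob
```

The resulting single map combines intensity and location evidence from all weightings and could feed a downstream step such as a level set initialization.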

The geometric arena here is a smooth manifold of dimension n equipped with a Riemannian or pseudo-Riemannian metric and an affine connection. Field theories following from a variational principle are considered on this basis. In this context, all invariants which are quadratic in the curvature are determined. The work derives several manifestly covariant formulas for the Euler-Lagrange derivatives, i.e. the field equations. Some of these field theories can be interpreted as gravitational theories alternative to Einstein's general relativity. The work also touches on the difficult problem of defining and calculating the energy and momentum of a gravitational field.
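In the purely metric case the invariants quadratic in the curvature are spanned, in standard notation, by three terms built from the scalar curvature, the Ricci tensor and the Riemann tensor:

```latex
\mathcal{L}_{\mathrm{quad}}
  = a\, R^{2}
  + b\, R_{\mu\nu} R^{\mu\nu}
  + c\, R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma},
\qquad a, b, c \in \mathbb{R}.
```

With an independent affine connection further quadratic invariants arise, since the Ricci tensor need not be symmetric and additional curvature traces contribute; the complete list in that setting is the subject of the work itself.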

Independence is a basic concept of probability theory and statistics. In many fields of science, the dependency of different variables has gained much attention from scientists. A measure, named information dependency, is proposed to express the dependency of a group of random variables. This measure is defined as the Kullback-Leibler divergence of the joint distribution of these random variables with respect to the product of their marginal distributions. In the bivariate case, this measure is known as the mutual information of two random variables; thus, information dependency has a strong relationship with information theory. The thesis aims to give a thorough study of information dependency from both mathematical and practical viewpoints. Concretely, we investigate the following three problems: 1. Proving that information dependency is a useful tool to express the dependency of a group of random variables, by comparing it with other measures of dependency. 2. Studying methods to estimate information dependency based on samples of a group of random variables. 3. Investigating how the Independent Component Analysis problem, an interesting problem in statistics, can be solved using information dependency.
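In the discrete case the definition translates directly into a plug-in estimator: estimate the joint and marginal distributions by empirical frequencies and evaluate the Kullback-Leibler divergence. A sketch (function and variable names are illustrative):

```python
import numpy as np
from collections import Counter

def information_dependency(samples):
    """Plug-in estimate of the information dependency of a group of
    discrete random variables: the Kullback-Leibler divergence of the
    empirical joint distribution from the product of its empirical
    marginals. For two variables this is the empirical mutual
    information."""
    n = len(samples)
    dim = len(samples[0])
    joint = Counter(map(tuple, samples))
    marginals = [Counter(row[i] for row in samples) for i in range(dim)]
    dep = 0.0
    for outcome, count in joint.items():
        p_joint = count / n
        p_prod = np.prod([marginals[i][outcome[i]] / n for i in range(dim)])
        dep += p_joint * np.log(p_joint / p_prod)
    return dep
```

Two identical fair binary variables give log 2, while independent variables give 0, matching the behavior of mutual information in the bivariate case.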

We introduce a multi-step machine learning approach and use it to classify data from EEG-based brain computer interfaces. This approach works very well for high-dimensional EEG data. First, all features are divided into subgroups, and linear discriminant analysis is used to obtain a score for each subgroup. The same procedure is then applied to subgroups of the resulting scores, and iterated until only one score remains, which is used for classification. In this way we avoid estimating the high-dimensional covariance matrix of all features. We investigate the classification performance with special attention to the small-sample-size case. For the normal model, we study the asymptotic error rate when the dimension p and the sample size n tend to infinity; this indicates how to define the sizes of the subgroups at each step. In addition, we present a theoretical error bound for the spatio-temporal normal model with separable covariance matrix, which results in a recommendation on how subgroups should be formed for this kind of data. Finally, some techniques, for example wavelets and independent component analysis, are used to extract features from EEG-based brain computer interface data.
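The multi-step scheme can be sketched for the two-class case as follows. The fixed group size, the small ridge term stabilizing the pooled scatter matrix, and the function names are illustrative assumptions; the point is that each step only ever inverts small per-subgroup matrices instead of the full covariance matrix:

```python
import numpy as np

def lda_score(x_group, y):
    """One-dimensional Fisher discriminant score for one feature subgroup
    (two classes, labels 0 and 1)."""
    x0, x1 = x_group[y == 0], x_group[y == 1]
    sw = np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False)
    sw = np.atleast_2d(sw) + 1e-6 * np.eye(x_group.shape[1])  # small ridge
    w = np.linalg.solve(sw, x1.mean(axis=0) - x0.mean(axis=0))
    return x_group @ w

def multi_step_lda(features, y, group_size=4):
    """Split features into small subgroups, reduce each subgroup to one
    LDA score, and repeat on the scores until a single score remains."""
    scores = features
    while scores.shape[1] > 1:
        cols = [scores[:, i:i + group_size]
                for i in range(0, scores.shape[1], group_size)]
        scores = np.column_stack([lda_score(c, y) for c in cols])
    return scores[:, 0]
```

Each level only estimates group_size x group_size covariance blocks, which is the reason the procedure remains stable when the number of features is large relative to the sample size.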