### Refine

#### Language

- English (22)

#### Keywords

- Statistik (3)
- Algebra (2)
- Bioinformatik (2)
- Fraktal (2)
- Funktionalanalysis (2)
- Selbstähnlichkeit (2)
- fractal (2)
- self-similarity (2)
- (generalized) linear mixed model (1)
- (verallgemeinertes) lineares gemischtes Modell (1)

#### Institute

- Institut für Mathematik und Informatik (22)

Self-affine tiles and fractals are known as examples in analysis and topology, as models of quasicrystals and biological growth, as unit intervals of generalized number systems, and as attractors of dynamical systems. The author has implemented software that can find new examples and handle large databases of self-affine fractals. This thesis establishes the algebraic foundation of the algorithms in the IFStile package. The lifting and projection of algebraic and rational iterated function systems, and many properties of the resulting attractors, are discussed.
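Software of this kind typically renders attractors by random iteration. As a generic illustration (not the IFStile algorithm itself, and with all names chosen for this sketch), the classical chaos game approximates the attractor of an affine IFS:

```python
import random

def chaos_game(maps, n_points=20000, n_burn=100):
    """Approximate the attractor of an IFS by the chaos game.

    maps: list of affine contractions given as (a, b, c, d, e, f),
          acting as (x, y) -> (a*x + b*y + e, c*x + d*y + f).
    """
    x, y = 0.0, 0.0
    points = []
    for i in range(n_burn + n_points):
        a, b, c, d, e, f = random.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i >= n_burn:          # discard the transient iterates
            points.append((x, y))
    return points

# Sierpinski triangle: three contractions with ratio 1/2
sierpinski = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]
pts = chaos_game(sierpinski)
```

Plotting `pts` reveals the attractor; the same driver works for any contracting affine family.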

As the tree of life is populated with sequenced genomes ever more densely, the new challenge is the accurate and consistent annotation of entire clades of genomes. In my dissertation, I address this problem with a new approach to comparative gene finding that takes a multiple genome alignment of closely related species and simultaneously predicts the location and structure of protein-coding genes in all input genomes, thereby exploiting negative selection and sequence conservation. The model prefers potential gene structures in the different genomes that are in agreement with each other or, if not, where the exon gains and losses are plausible given the species tree. The multi-species gene finding problem is formulated as a binary labeling problem on a graph. The resulting optimization problem is NP-hard, but can be efficiently approximated using a subgradient-based dual decomposition approach.
I tested the novel approach on whole-genome alignments of 12 vertebrate and 12 Drosophila species. The accuracy was evaluated for human, mouse and Drosophila melanogaster and compared to competing methods. The results suggest that the new method is well-suited for annotating a large number of genomes of closely related species within a clade, in particular when RNA-Seq data are available for many of the genomes. The transfer of existing annotations from one genome to another via the genome alignment is more accurate than previous approaches based on protein spliced alignments when the genomes are at close to medium distances. The method is implemented in C++ as part of the gene finder AUGUSTUS.
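The dual decomposition idea can be illustrated on a toy consensus problem: two "genomes" must agree on one binary label vector, each subproblem is solved independently, and Lagrange multipliers are adjusted by a subgradient step until the solutions agree. This is a minimal sketch of the general technique under simplifying assumptions, not the actual model used in AUGUSTUS:

```python
def dual_decompose(f, g, max_iter=100):
    """Toy dual decomposition: minimise f.x + g.z over binary x, z
    subject to x == z, by maximising the Lagrangian dual with a
    subgradient method (diminishing step size 1/t)."""
    n = len(f)
    lam = [0.0] * n
    for t in range(1, max_iter + 1):
        # each relaxed subproblem decouples per coordinate: pick 1 iff score < 0
        x = [1 if f[i] + lam[i] < 0 else 0 for i in range(n)]
        z = [1 if g[i] - lam[i] < 0 else 0 for i in range(n)]
        if x == z:                           # subproblems agree: certificate of optimality
            return x
        step = 1.0 / t
        for i in range(n):
            lam[i] += step * (x[i] - z[i])   # subgradient of the dual function
    return x   # may still disagree; a heuristic rounding step would follow

labels = dual_decompose([-2.0, 1.0, -1.0], [1.0, -3.0, 2.0])
```

On this instance the multipliers drive both subproblems to the common optimum after a few iterations.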

We consider Iterated Function Systems (IFS) on the real line and on the complex plane. Every IFS defines a self-similar measure supported on a self-similar set. We study the transfer operator (which acts on the space of continuous functions on the self-similar set) and the Hutchinson operator (which acts on the space of Borel regular measures on the self-similar set). We show that the transfer operator has a countably infinite set of polynomial eigenfunctions. These eigenfunctions can be regarded as generalized Bernoulli polynomials. The polynomial eigenfunctions define a polynomial approximation of the self-similar measure. We also study the moments of the self-similar measure and give recursions for computing them. Further, we develop a numerical method based on Markov chains to study the spectrum of the Hutchinson and transfer operators. This method provides numerical approximations of the invariant measure, for which we give error bounds in terms of the Wasserstein distance. The standard example in this thesis is the parametric family of Bernoulli convolutions.
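As an illustration of such moment recursions (a standard derivation from the self-similarity relation X = ε + λX with a fair coin ε = ±1, not necessarily the exact recursion of the thesis), the moments of a Bernoulli convolution can be computed exactly:

```python
from fractions import Fraction
from math import comb

def bernoulli_moments(lam, n_max):
    """Moments M_n of the Bernoulli convolution X = sum_{k>=0} e_k lam^k,
    with e_k = +/-1 fair coins.  The self-similarity X = e + lam*X gives
        M_n * (1 - lam^n) = sum_{k<n, n-k even} C(n,k) lam^k M_k,
    and the odd moments vanish by symmetry of the measure."""
    M = [Fraction(0)] * (n_max + 1)
    M[0] = Fraction(1)
    for n in range(1, n_max + 1):
        if n % 2 == 1:
            continue                     # symmetric measure: odd moments are 0
        s = sum(Fraction(comb(n, k)) * lam**k * M[k]
                for k in range(0, n) if (n - k) % 2 == 0)
        M[n] = s / (1 - lam**n)
    return M

# lam = 1/2 gives the uniform distribution on [-2, 2]
M = bernoulli_moments(Fraction(1, 2), 4)
# M[2] = 4/3 and M[4] = 16/5, matching the uniform law on [-2, 2]
```

Exact rational arithmetic makes the recursion numerically robust even for many moments.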

The history of Mathematics has been led in part by the desire for generalization: once an object was given and had been understood, there was the desire to find a more general version of it, to fit it into a broader framework. Noncommutative Mathematics fits this description, as its objects of interest are analogous to vector spaces, probability spaces, etc., but without the commonsense interpretation that those latter objects possess. Indeed, a space can be described by its points, but also, and equivalently, by the set of functions on this space. This set is actually a commutative algebra, sometimes equipped with some more structure: *-algebras, C*-algebras, von Neumann algebras, Hopf algebras, etc. The idea that lies at the basis of noncommutative Mathematics is to replace such algebras by algebras that are not necessarily commutative any more and to interpret them as "algebras of functions on noncommutative spaces". Of course, these spaces do not exist independently of their defining algebras, but facts show that many of the results holding in (classical) probability or (classical) group theory can be extended to their noncommutative counterparts, or find powerful analogues therein. The extension of group theory into the realm of noncommutative Mathematics has long been studied and has yielded the various quantum groups. The easiest version of them, the compact quantum groups, consists of C*-algebras equipped with a *-homomorphism Δ with values in the tensor product of the algebra with itself, satisfying a coassociativity condition. It is also required that a compact quantum group satisfy what is known as the quantum cancellation property. It can be shown that (classical) compact groups are indeed a particular case of compact quantum groups. The area of compact quantum groups, and of quantum groups at large, is a fruitful area of research.
Nevertheless, another generalization of group theory can be envisioned, namely by taking a comultiplication Δ with values not in the tensor product but rather in the free product (in the category of unital *-algebras). This leads to the theory of dual groups in the sense of Voiculescu, also called H-algebras by Zhang. These objects have not been studied as thoroughly as their quantum counterparts. It is true that they are less flexible: we therefore do not know many examples of them, and one can show that certain relations cannot exist in the dual group case because they do not pass to the coproduct. Nevertheless, I have been interested during a great part of my PhD work in these objects, and I have made some progress towards their understanding, especially regarding quantum Lévy processes defined on them and Haar states.
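The two comultiplications can be written side by side; the following is a minimal sketch of the defining conditions in standard notation, not specific to this thesis:

```latex
% Compact quantum group: a unital C*-algebra A with a *-homomorphism
%   \Delta : A \to A \otimes A   (minimal tensor product)
% satisfying coassociativity
(\Delta \otimes \mathrm{id}) \circ \Delta \;=\; (\mathrm{id} \otimes \Delta) \circ \Delta ,
% together with the quantum cancellation properties: the subspaces
% \Delta(A)(1 \otimes A) and \Delta(A)(A \otimes 1) are dense in A \otimes A.
%
% Dual group in the sense of Voiculescu: a unital *-algebra A with
%   \Delta : A \to A \sqcup A   (free product of unital *-algebras)
% satisfying the same coassociativity identity, with \otimes replaced by \sqcup.
```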

This thesis deals with thickness optimization of shells. The overall task is to find an optimal thickness distribution in order to minimize the deformation of a loaded shell with prescribed volume. In addition, lower and upper bounds for the thickness are given. The shell is made of elastic, isotropic, homogeneous material. The deformation is modeled using equations from Linear Elasticity. Here, a basic shell model based on the Reissner-Mindlin assumption is used. Both the stationary and the dynamic case are considered. The continuity and the Gâteaux-differentiability of the control-to-state operator are investigated. These results are applied to the reduced objective with the help of adjoint theory. In addition, techniques from shape optimization are compared to the optimal control approach. In the following, the theoretical results are applied to cylindrical shells and an efficient numerical implementation is presented. Finally, numerical results are shown and analyzed for different examples.
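The adjoint approach to the reduced objective can be sketched on a drastically simplified model, in which the "shell" is replaced by decoupled springs whose stiffness scales with thickness; the adjoint gradient can then be checked against finite differences. This is a toy sketch of the general adjoint technique, not the Reissner-Mindlin model of the thesis:

```python
def solve(thickness, load, c=1.0):
    """Toy state equation K(t) u = f with diagonal stiffness K_ii = c * t_i."""
    return [f_i / (c * t_i) for t_i, f_i in zip(thickness, load)]

def objective(thickness, load):
    """Reduced objective J(t) = |u(t)|^2, the squared deformation."""
    u = solve(thickness, load)
    return sum(u_i * u_i for u_i in u)

def adjoint_gradient(thickness, load, c=1.0):
    """dJ/dt_i = -p_i * (dK/dt_i u)_i, where the adjoint p solves K p = 2u."""
    u = solve(thickness, load, c)
    p = [2 * u_i / (c * t_i) for u_i, t_i in zip(u, thickness)]  # adjoint solve
    return [-c * p_i * u_i for p_i, u_i in zip(p, u)]
```

One adjoint solve yields the full gradient, regardless of the number of thickness parameters; this is the design motivation for the adjoint approach in the thesis setting as well.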

Today, improving technology and software make it possible to create, save and explore massive data sets in little time. "Big Data" are everywhere, for instance in social networks, meteorology, customers' behaviour, and in biology. The Omics research field, which stands for organism-wide data exploration and analysis, is an example of biological research that has to deal with "Big Data" challenges. Possible challenges are, for instance, efficient storage and cataloguing of the data sets and finally the qualitative analysis and exploration of the information. In the last decade, large-scale genome-wide association studies and high-throughput techniques became more efficient, more profitable and less expensive. As a consequence of this rapid development, it is easier to gather massive amounts of genomic and proteomic data. However, these data need to be evaluated, analysed and explored. Typical questions that arise in this context include: which genes are active under several physical states, which proteins and metabolites are available, and which organisms or cell types are similar or different in their enzymes' or genes' behaviour. For this reason, and because a scientist in any "Big Data" research field wants to see the data, there is an increasing need for clear, intuitively understandable and recognizable visualizations to explore the data and confirm hypotheses. One way to get an overview of the data sets is to cluster them. Taxonomic trees and functional classification schemes are hierarchical structures used by biologists to organize the available biological knowledge in a systematic and computer-readable way (such as KEGG, GO and FUNCAT). For example, proteins and genes can be clustered according to their function in an organism. These hierarchies tend to be rather complex, and many comprise thousands of biological entities. One approach for a space-filling visualization of such hierarchically structured data sets is the treemap.
Existing algorithms for producing treemaps struggle with large data sets and have several other problems. This thesis addresses some of these problems and is structured as follows. After a short review of the basic concepts from graph theory, some commonly used types of treemaps and a classification of treemaps according to information-visualization aspects are presented in the first chapter of this thesis. The second chapter provides several methods to improve treemap constructions. In certain applications the researcher wants to know how the entities in a hierarchical structure are related to each other (such as enzymes in a metabolic pathway). Therefore, in the third chapter the focus is on the construction of a suitable layout overlaying an existing treemap. This gives rise to optimization problems on geometric graphs. In addition, from a practical point of view, options for enhancing the display of the computed layout are explored to help the user perform typical tasks in this context more efficiently. One important aspect of the problems on geometric graphs considered in the third chapter is that crossings of edges in a network structure are to be minimized while certain other properties such as connectedness are maintained. Motivated by this, in the fourth chapter related combinatorial and computational problems are explored from a more theoretical point of view. In particular, some light is shed on properties of crossing-free spanning trees in geometric graphs.
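One of the oldest treemap constructions, slice-and-dice, can be sketched in a few lines: a rectangle is split among the children of a node proportionally to their weights, alternating the splitting direction at each level. This is a minimal sketch of the classical layout, independent of the algorithms developed in the thesis:

```python
def weight(node):
    """A node is a numeric leaf weight or a list of child nodes."""
    return node if isinstance(node, (int, float)) else sum(map(weight, node))

def treemap(node, rect=(0.0, 0.0, 1.0, 1.0), horizontal=True):
    """Slice-and-dice layout: return one (x, y, w, h) rectangle per leaf,
    with areas proportional to the leaf weights."""
    if isinstance(node, (int, float)):
        return [rect]
    x, y, w, h = rect
    total = float(weight(node))
    leaves, offset = [], 0.0
    for child in node:
        frac = weight(child) / total
        sub = ((x + offset * w, y, frac * w, h) if horizontal
               else (x, y + offset * h, w, frac * h))
        leaves += treemap(child, sub, not horizontal)   # flip the direction
        offset += frac
    return leaves

leaves = treemap([1, [1, 1], 2])   # three top-level nodes, one with two children
```

The alternating split directions produce the characteristic elongated rectangles that later treemap algorithms (e.g. squarified layouts) try to avoid.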

A slice is an intersection of a hyperplane and a self-similar set. The main purpose of this work is the mathematical description of slices. A suitable tool to describe slices are branching dynamical systems. Such systems are a generalisation of ordinary discrete dynamical systems for multivalued maps. Simple examples are systems arising from Bernoulli convolutions and beta-representations. The connection between orbits of branching dynamical systems and slices is demonstrated, and conditions are derived under which the geometry of a slice can be computed. A number of interesting 2-d and 3-d slices through 3-d and 4-d fractals are discussed.
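A minimal example of a branching dynamical system is the multivalued beta-expansion map T_d(x) = βx − d with digits d ∈ {0, 1}: a branch survives only if the orbit stays in the admissible interval. The following sketch, with the golden ratio as base and parameters chosen purely for illustration, counts the admissible branches up to a given depth:

```python
PHI = (1 + 5 ** 0.5) / 2          # golden ratio, used here as the base beta
UB = 1 / (PHI - 1)                # upper end of the admissible interval [0, UB]

def branches(x, depth, beta=PHI, ub=UB, eps=1e-12):
    """Number of admissible digit strings of the given length in the
    branching system T_d(x) = beta*x - d, d in {0, 1}."""
    if depth == 0:
        return 1
    count = 0
    for d in (0, 1):
        y = beta * x - d
        if -eps <= y <= ub + eps:       # branch only if the orbit survives
            count += branches(y, depth - 1, beta, ub, eps)
    return count
```

For the golden base at least one digit is always admissible, so the branch counts are nondecreasing in the depth; the tree of surviving orbits is what encodes the geometry of a slice.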

In this PhD thesis, a conditional random field approach and its implementation are presented to predict the interaction sites of protein homo- and heterodimers using the spatial structure of one protein partner from a complex. The method includes a substantially simplified edge feature model. A novel node feature class is introduced, called "change in free energy". The online large-margin algorithm is adapted in order to train the model parameters given a classified reference set of proteins. A significantly higher prediction accuracy is achieved by combining the new node feature class with the standard node feature class, relative accessible surface area. The quality of the predictions is measured by computing the area under the receiver operating characteristic curve.
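The evaluation measure can be made concrete: the area under the ROC curve equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one (the Mann-Whitney identity). A minimal sketch, independent of the thesis implementation:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0])
# 3 of the 4 positive/negative pairs are ranked correctly: AUC = 0.75
```

An AUC of 0.5 corresponds to random scoring and 1.0 to a perfect ranking of interface over non-interface residues.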

The constructions of Lévy processes from convolution semigroups and of product systems from subproduct systems are formally quite similar. Since there are many more comparable situations in quantum stochastics, we formulate a general categorial concept (comonoidal systems), construct corresponding inductive systems and show, under suitable assumptions, general properties of the corresponding inductive limits. Comonoidal systems in different tensor categories play a role in all chapters of the thesis. Additive deformations are certain comonoidal systems of algebras. These are obtained by deforming the algebra structure of a bialgebra. If the bialgebra is even a Hopf algebra, then compatibility with the antipode follows automatically. This remains true in the case of braided Hopf algebras. Subproduct systems are comonoidal systems of Hilbert spaces. In the thesis we address the question of which dimensions finite-dimensional subproduct systems can have. In discrete time, this can be reduced to the combinatorial problem of determining the complexities of factorial languages. We also discuss the rational and continuous time cases. A further source of comonoidal systems are universal products, which are used in quantum probability to model independence. For the (r,s)-products, which were recently introduced by S. Lachs, we determine the corresponding product of representations by use of a generalized GNS construction.
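The combinatorial side can be made concrete: the complexity function of a factorial language counts its distinct words of each length. A brute-force sketch for languages given by a set of forbidden factors (for the golden-mean language avoiding "11", the counts are Fibonacci numbers); this is an illustration of the combinatorial notion, not the thesis's classification:

```python
from itertools import product

def factor_complexity(n, forbidden=("11",)):
    """Number of length-n words over {0, 1} that contain no forbidden factor.
    For the factorial language avoiding '11', the complexity p(n) is the
    Fibonacci number F(n+2): 2, 3, 5, 8, ..."""
    count = 0
    for bits in product("01", repeat=n):
        word = "".join(bits)
        if not any(f in word for f in forbidden):
            count += 1
    return count
```

The growth rate of p(n) is exactly the kind of quantity that bounds the possible dimension sequences of discrete-time subproduct systems.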

This thesis revolves around a new concept of independence of algebras. The independence fits nicely into the framework of universal products, which have been introduced to classify independence relations in quantum probability theory; the associated product is called the (r,s)-product and depends on two complex parameters r and s. Based on this product, we develop a theory which works without using involutive algebras or states. The following aspects are considered: 1. Classification: Universal products are defined on the free product of algebras (the coproduct in the category of algebras) and model notions of independence in quantum probability theory. We distinguish universal products according to their behaviour on elements of length two, calling them (r,s)-universal products with complex parameters r and s. In the case where r and s both equal 1, Muraki was able to show that there exist exactly five universal products (Muraki's five). For r equal to s and nonzero we get five one-parameter families (q-Muraki's five). We prove that in the case r not equal to s, the (r,s)-product, a two-parameter deformation of the Boolean product, is the only universal product satisfying our set of axioms. The corresponding independence is called (r,s)-independence. 2. Dual pairs and GNS construction: By use of the GNS construction, one can associate a product of representations with every positive universal product. Since the (r,s)-product does not preserve positivity, we need a substitute for the usual GNS construction for states on involutive algebras. In joint work with M. Gerhold, the product of representations associated with the (r,s)-product was determined, whereby we considered representations on dual pairs instead of Hilbert spaces. This product of representations is, as we could show, essentially different from the Boolean product. 3. Reduction and quantum Lévy processes: U.
Franz introduced a category-theoretical concept which allows a reduction of the Boolean, monotone and antimonotone independences to tensor independence. This existing reduction could be modified so as to apply to the (r,s)-independence. Quantum Lévy processes with (r,s)-independent increments can, in analogy with the tensor case, be realized as solutions of quantum stochastic differential equations. To prove this theorem, the previously mentioned reduction principle in the sense of U. Franz and a generalization of M. Schürmann's theory for symmetric Fock spaces over dual pairs are used. As the main result, we obtain the realization of every (r,s)-Lévy process as the solution of a quantum stochastic differential equation. When one, more generally, defines Lévy processes in a categorial way using U. Franz's definition of independence for tensor categories with inclusions, compatibility of the inclusions with the tensor category structure plays an important role. For this thesis such a compatibility condition was formulated and proved to be equivalent to the characterization proposed by M. Gerhold. 4. Limit distributions: We work with so-called dual semigroups in the sense of D. V. Voiculescu (comonoids in the tensor category of algebras with the free product). The polynomial algebra with primitive comultiplication is an example of such a dual semigroup. We use a "weakened" reduction, which we call reduction of convolution and which essentially consists of a cotensor functor constructed from the symmetric tensor algebra. It turns dual semigroups into commutative bialgebras and also translates the convolution exponentials. This method, which can be described nicely in categorial language, allows us to formulate central limit theorems for the (r,s)-independence and to calculate the corresponding limit distributions (convergence in moments).
We calculate the moments appearing in the central limit theorem for the (r,s)-product: the even moments are homogeneous polynomials in r and s with the Eulerian numbers as coefficients; the odd moments vanish. The moment sequence obtained from the central limit theorem for an arbitrary universal product is the moment sequence of a probability measure on the real line if and only if r equals s and is greater than or equal to 1. In this case we present an explicit formula for the probability measure.
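The Eulerian numbers mentioned here count permutations by descents and satisfy a simple recurrence; a short sketch to generate them (the precise moment formula of the thesis is not reproduced here):

```python
def eulerian(n):
    """Row n of the Eulerian triangle A(n, k): the number of permutations
    of {1, ..., n} with exactly k descents, via the standard recurrence
    A(n, k) = (k+1) A(n-1, k) + (n-k) A(n-1, k-1)."""
    row = [1]                                  # row for n = 1
    for m in range(2, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0)
               for k in range(m)]
    return row

# eulerian(4) is [1, 11, 11, 1]; each row sums to n!
```

These are the coefficient arrays that would appear in the even moments as homogeneous polynomials in r and s.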