Upcoming events:
Past events:
Friday, 17.02.2017
10:15

11:15
Using the LP formulation for GI, we describe the class of compact graphs,
proposed by Tinhofer. Isomorphism testing for such graphs can be done
efficiently, using the linear program for GI. The recognition problem for
such graphs remains open. Towards a partial understanding of this class,
we describe some of its connections with Color Refinement and
Individualization techniques.
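As an illustration of the Color Refinement procedure mentioned above, here is a minimal Python sketch; the graph encoding and function names are my own choices, not code from the talk:

```python
def color_refinement(adj):
    """Refine vertex colors until the partition stabilizes (1-dim. Weisfeiler-Leman)."""
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    while True:
        # signature of v: its old color plus the multiset of its neighbors' colors
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        # rename signatures canonically to small integers
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new_colors = {v: palette[sigs[v]] for v in adj}
        if new_colors == colors:  # partition is stable: done
            return colors
        colors = new_colors

def cr_distinguishes(adj1, adj2):
    """Sound but incomplete isomorphism test: refine the disjoint union of the
    two graphs and compare the color histograms of the two sides."""
    union = {('a', v): [('a', u) for u in adj1[v]] for v in adj1}
    union.update({('b', v): [('b', u) for u in adj2[v]] for v in adj2})
    colors = color_refinement(union)
    hist = lambda tag: sorted(c for (t, _), c in colors.items() if t == tag)
    return hist('a') != hist('b')
```

Color Refinement separates a triangle from a path on three vertices, but it cannot separate a 6-cycle from two disjoint triangles, one of the classical examples motivating individualization.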
Friday, 03.02.2017
10:15

11:15
Seminarraum i7
The individualization and refinement paradigm provides a strong toolbox for testing isomorphism of two graphs; indeed, it provides the only methods for isomorphism testing that are also viable in practice. From the theoretical point of view, no meaningful lower bounds on the worst-case complexity of these tools are known. In fact, it is an open question whether these simple methods might provide upper bounds similar to those of existing algorithms combining complex group-theoretic and combinatorial tools.
In this work we give a negative answer to this question and construct a family of graphs that provides an exponential lower bound for methods based on the individualization and refinement paradigm. Unlike previous constructions such as the Miyazaki graphs, our construction is immune to changing the cell selector or the invariants used within the algorithm. Furthermore, our graphs also provide exponential lower bounds in the case where k-dimensional Weisfeiler-Leman is used for refinement instead of color refinement (i.e. 1-dimensional Weisfeiler-Leman), and the arguments even work when the automorphism group is initially known to the algorithm. In particular, under some moderate assumptions, our lower bounds are independent of which heuristics are used for cell selection, node invariants and automorphism detection.
Friday, 27.01.2017
10:15

11:15
Seminarraum i7
An automaton model for words over the alphabet of the natural numbers is presented: the so-called N-memory automaton.
We explain its close connection to MSO logic and discuss its expressive power and closure properties.
Additionally, we show, among other decidability results, that the non-emptiness problem is solvable.
Friday, 20.01.2017
10:15

11:15
Seminarraum i7
Lars Beckers
Friday, 13.01.2017
14:15

15:15
Propositional proof systems such as Resolution or the Polynomial Calculus have been studied as approaches for solving computationally difficult problems, e.g. Graph Isomorphism. As this problem requires proofs of exponential size, this motivates the question of which problems have only polynomial proof complexity. We address this by analysing the expressive power of logics incorporating suitably restricted versions of Resolution and monomial-PC (a weaker variant of the Polynomial Calculus) which are guaranteed to produce polynomial-size proofs. It turns out that the power of these restricted proof systems can be described in terms of other well-known logics; namely, we obtain equivalences with least fixed-point logic (LFP), existential least fixed-point logic (EFP) and fixed-point logic with counting (FPC).
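For readers unfamiliar with the Resolution rule the abstract builds on, here is a minimal sketch of a single resolution step; the encoding of clauses as sets of signed integers is an illustrative choice of mine:

```python
def resolve(c1, c2, var):
    """One Resolution step: from a clause containing the literal var and a
    clause containing its negation -var, derive the resolvent in which both
    clashing literals are removed. A clause is a frozenset of signed ints."""
    assert var in c1 and -var in c2, "clauses must clash on var"
    return (c1 - {var}) | (c2 - {-var})
```

Deriving the empty clause this way is what constitutes a Resolution refutation of an unsatisfiable formula.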
Friday, 13.01.2017
10:15

11:15
Seminarraum i7
Andreas Klinger
Friday, 16.12.2016
10:15

11:15
Seminarraum i7
Erik van Loo
Friday, 02.12.2016
10:15

11:15
Thursday, 17.11.2016
10:00

11:00
Marc Meskes
Friday, 11.11.2016
10:15

11:15
Seminarraum i7
Friday, 04.11.2016
10:15

11:15
Seminarraum i7
Friday, 28.10.2016
11:00

12:00
Kaspar Scharf
The graph isomorphism problem is the problem of deciding whether two given graphs are isomorphic, that is, whether they have the same structure. For each natural number k, there exists an algorithm, the k-dimensional Weisfeiler-Leman algorithm, that can decide the graph isomorphism problem on a particular class of graphs. The lower the dimension k, the lower the worst-case runtime of the algorithm. Closely related to this algorithm is the bijective k-pebble game played on pairs of graphs, where k is the number of pebble pairs used. The player Spoiler has a winning strategy for the game with k pebble pairs on two given graphs exactly if the Weisfeiler-Leman algorithm of dimension k-1 detects non-isomorphism of these two graphs.
We prove that Spoiler wins the bijective 3-pebble game on all pairs of non-isomorphic outerplanar graphs, which implies that the 2-dimensional Weisfeiler-Leman algorithm decides the graph isomorphism problem on outerplanar graphs. To this end, we show that if Spoiler wins the game on the pairs of non-isomorphic 2-connected components of a pair of given non-isomorphic graphs, then he also wins on these graphs.
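The 2-dimensional Weisfeiler-Leman refinement discussed above can be sketched directly. The following Python sketch (encodings and names are my own) refines colors on ordered vertex pairs and compares two graphs via a joint run on their disjoint union:

```python
from itertools import product

def wl2(vertices, edges):
    """2-dimensional Weisfeiler-Leman: refine colors on ordered vertex pairs
    until the coloring stabilizes."""
    eset = {frozenset(e) for e in edges}
    def atomic(u, v):  # initial color: loop / edge / non-edge
        return 0 if u == v else (1 if frozenset((u, v)) in eset else 2)
    colors = {(u, v): atomic(u, v) for u, v in product(vertices, repeat=2)}
    while True:
        # signature of (u, v): its color plus, over all w, the multiset of
        # color pairs (color(u, w), color(w, v))
        sigs = {(u, v): (colors[(u, v)],
                         tuple(sorted((colors[(u, w)], colors[(w, v)])
                                      for w in vertices)))
                for (u, v) in colors}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new = {p: palette[sigs[p]] for p in colors}
        if new == colors:
            return colors
        colors = new

def wl2_distinguishes(v1, e1, v2, e2):
    """Run 2-WL jointly on the disjoint union and compare, per graph, the
    histogram of stable colors on its own vertex pairs."""
    verts = [('a', x) for x in v1] + [('b', x) for x in v2]
    edges = ([(('a', x), ('a', y)) for x, y in e1]
             + [(('b', x), ('b', y)) for x, y in e2])
    colors = wl2(verts, edges)
    hist = lambda t: sorted(c for ((g1, _), (g2, _)), c in colors.items()
                            if g1 == t and g2 == t)
    return hist('a') != hist('b')
```

Unlike Color Refinement, 2-WL sees triangles, so it separates the 6-cycle from two disjoint triangles.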
Monday, 24.10.2016
14:30

15:30
Sebastian Schaub
The problem of encoding a set in a data structure is a common problem in computer science with well-known, efficient solutions. In a streaming situation with low memory and unknown set size, dynamic filters provide approximate representations. They admit a small fraction of false positives in exchange for good space efficiency. The Bloom filter is the de facto standard for filtering, but newer algorithms are known as well. This bachelor thesis proposes a new filter, the linear probing filter. It is based on an intuitive approach which resembles a dictionary and uses constant-length signatures as representatives for keys. 5-independent hash functions have to be used to ensure that the running time is independent of the set size, although in practice this is only relevant for very large sets. The experimental results of this thesis validate the theoretical analysis of the algorithm and show the strengths and weaknesses of the linear probing filter. Its strengths are constant, fast lookup and insertion times independent of the false positive rate and the set size. Its main weakness is poor space efficiency in comparison to the other filters. The linear probing filter could thus potentially be a competitor for large set sizes.
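To illustrate the general idea of a dictionary-like filter with constant-length signatures and linear probing, here is a hedged Python sketch. It is not the thesis's construction; in particular, the hash function used here is an ordinary cryptographic hash, not a 5-independent one:

```python
import hashlib

class LinearProbingFilter:
    """Sketch: store a short fixed-length signature per key in an open-addressing
    table probed linearly. Without deletions there are no false negatives;
    false positives occur when keys collide on probe region and signature."""
    def __init__(self, slots, sig_bits=8):
        self.slots, self.mask = slots, (1 << sig_bits) - 1
        self.table = [None] * slots

    def _slot_sig(self, key):
        h = int.from_bytes(hashlib.blake2b(str(key).encode()).digest()[:8], 'big')
        return h % self.slots, (h >> 48) & self.mask  # home slot, short signature

    def add(self, key):
        slot, sig = self._slot_sig(key)
        for i in range(self.slots):                   # linear probing
            j = (slot + i) % self.slots
            if self.table[j] is None or self.table[j] == sig:
                self.table[j] = sig
                return
        raise RuntimeError("filter is full")

    def might_contain(self, key):
        slot, sig = self._slot_sig(key)
        for i in range(self.slots):
            j = (slot + i) % self.slots
            if self.table[j] is None:                 # empty slot ends the run
                return False
            if self.table[j] == sig:
                return True                           # signature match: probably present
        return False
```

Because insertion stops at the first empty or matching slot along the probe sequence and entries are never removed, a lookup following the same sequence always finds an inserted key's signature before any empty slot.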
Wednesday, 19.10.2016
11:00

12:00
Christopher Spinrath
Friday, 30.09.2016
13:00

13:45
Marcel Faßbender
Tuesday, 27.09.2016
16:00

17:00
Seminarraum i7
Daniel Wiebking
Monday, 26.09.2016
12:15

13:15
Seminar room i1
Oliver Major
In recent decades, the fields of dynamic complexity and its logical counterpart, dynamic definability, have been consolidated as interesting fields of study with connections to abstract database models and their implementation. As a standard model of this field, DynFO was shown to have desirable properties such as low update complexity and the ability to express transitive closures.
On the other hand, Choiceless Polynomial Time, a contender for a logic for PTIME, was recently shown to be equivalent to the comparatively young polynomial-time interpretation logic (PIL). Although PIL is a logic with semantics in the classical sense, it uses an iteration system which resembles the one DynFO uses.
In this work, we attempt to develop a dynamic framework similar to DynFO which enables us to transfer our knowledge about the PIL iteration mechanism to the new framework. One major limitation of DynFO is that it does not allow changes of the underlying universe. We subsequently introduce the constant additions DynFO framework (CADF) which allows for new elements to be added to a structure in each update step. We show that alternative frameworks with up to a polynomial number of added elements turn out to have the same expressive power as CADF.
In a final part, we explore the expressive power of deterministic variants of CADF and compare them to DynFO, where determinism can be imposed as a true restriction. Surprisingly, certain deterministic restrictions of CADF turn out not to be stronger than DynFO.
Friday, 23.09.2016
13:00

14:00
Room 9222 (E3)
Model-theoretic interpretations are an important tool of algorithmic model theory. Their applications range from reductions between logical theories to the construction of efficient algorithms for hard problems on restricted classes of structures, such as graphs of bounded treewidth. We investigate this tool in three different areas of algorithmic model theory: automata-based decision procedures for logical theories, algorithmic metatheorems, and descriptive complexity.
One of the main focus points of this dissertation is automata-based presentations of infinite objects, which are closely related to monadic second-order interpretations over set variables. We consider advice automatic presentations for several automata models and develop algebraic and combinatorial tools which enable us to prove that certain structures cannot have an omega-automatic presentation. The main result in this direction is that the field of reals is not omega-automatic with any advice. This answers a weakened version of a question of Rabin, who asked whether the field of reals is interpretable in the infinite binary tree; even this weak version had been considered an open problem since the introduction of omega-automatic presentations.
Further, we consider uniformly automatic classes, that is, classes generated by a fixed presentation and a set of advices. Prototypic examples are graphs of bounded treewidth, the torsion-free abelian groups of rank 1, and the class of all countable linear orders. Uniformly automatic classes are also found at the core of several algorithmic metatheorems. We investigate the efficiency of this approach by analysing the runtime of the generic automata-based model-checking algorithm in terms of the complexity of the presentation. We show that the runtime on a presentation of the direct product closure is only one exponential higher than the runtime on the primal class.
We apply these findings to show that first-order model checking is fixed-parameter tractable on the class of all finite Boolean algebras and the class of all finite abelian groups. In both cases the parameter dependence of the runtime is elementary. The runtime we achieve on these classes is either provably optimal or outperforms other known approaches. Further, we show that the runtime of the generic automata-based algorithm for MSO model checking on graphs of tree-depth at most d has a (d+1)-fold exponential parameter dependence, which matches the runtime of the best known algorithm for model checking on these classes.
In the last part of this dissertation we turn our attention to logics with a built-in interpretation mechanism. Polynomial-time Interpretation Logic (PIL) is an alternative characterisation of Choiceless Polynomial Time (CPT). The logic CPT is currently considered the most promising candidate for a logic capturing PTIME. We contribute to the exploration of the expressive power of CPT by showing that there is a CPT-definable canonisation procedure on classes of structures with bounded abelian colours. A structure has bounded abelian colours if it is of bounded colour class size and the automorphism group on every colour class is abelian. Examples of such classes emerge from the classical examples that separate Fixed-Point Logic with Counting from PTIME: the CFI graphs of Cai, Fürer, and Immerman, as well as the Multipedes of Blass, Gurevich, and Shelah, have bounded abelian colours. Consequently, the isomorphism problems for these classes are solvable in CPT. For Multipedes this was an open question; in fact, Blass, Gurevich, and Shelah conjectured that the isomorphism problem for Multipedes might not be solvable by a CPT procedure.
Friday, 23.09.2016
11:30

12:15
Erik van Loo
Friday, 23.09.2016
10:45

11:30
Friday, 23.09.2016
10:00

10:45
Thursday, 22.09.2016
10:45

11:45
Many learning problems can be formalized by parameterized formulas of first-order logic or monadic second-order logic. We evaluate the algorithmic complexity of learning such parameters in Valiant's probably approximately correct (PAC) learning model. The underlying structures in this thesis are restricted finite graphs such as trees, graphs of bounded degree and graphs from a nowhere dense graph class.
For the evaluation of the complexity of the learning algorithms we consider two variants of the learning problem. In the first variant, the structure over which the formulas are evaluated is part of the input. The second variant contains only the size of the universe of the structure as part of the input. This way, the size of the input in the first variant is dominated by the structure, whereas in the second variant the size of the input is logarithmic in the structure. The learning problems in the first variant are learning monadic second-order formulas over trees and graphs of bounded clique-width, and first-order formulas over graphs from a nowhere dense graph class. In the second setting, the learning problems for quantifier-free formulas over words with a linear order and first-order formulas over graphs of bounded degree are considered. These problems are evaluated using parameterized complexity, and an XP algorithm is presented for each of the cases. When restricting the input formulas such that an example consists of a single node, and thus subsets of the structure are learned, all of the above problems are fixed-parameter tractable (FPT) with a linear dependence on the number of examples.
Wednesday, 21.09.2016
11:00

12:00
Oliver Scheel
Nowadays, the focus in the automobile industry has shifted. Whereas in the past cars were a product of classical mechanical engineering, today hardly any function works without the help of electronics. Cars are controlled by complicated software systems, and many innovations are made in this field. Naturally, this increased amount of software comes with increased complexity, challenging car manufacturers to become experts in, amongst others, software engineering. New methods and tools are needed to handle this complexity and reduce the number of errors in the software to a minimum, as errors can not only be dangerous to traffic participants, but are also expensive to fix. One such practice is continuous integration, in which continuous testing ensures an always executable build. To reduce costs and allow more testing in the automobile industry, much of the testing is done on test benches, which can simulate whole cars.
In this work a scheduling algorithm is developed for scheduling tests on such test benches. The scheduling of jobs to machines alone is already a very complicated task, and here additional constraints further complicate it. These consist of certain characteristics of the tests, like deadlines and specific requirements which cause setup times and costs, but also limitations of the scenario in general, like the available times of the test benches. This results in a strongly NP-complete problem. Thus no optimal solutions can be expected, and the main goal of this work is to experimentally test and compare different approximate solution approaches. Methods known from the literature are adapted and tested for the specific problem, and novel approaches are developed, since this problem in its full form has not been examined yet. The examined algorithms include greedy heuristics, genetic algorithms, simulated annealing, machine learning approaches and mathematical programming.
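As a toy illustration of the greedy-heuristic family mentioned in the abstract, here is a minimal earliest-deadline-first scheduler. The job model is drastically simplified and of my own making; setup times, test requirements and bench availability windows are all omitted:

```python
def greedy_schedule(jobs, n_machines):
    """Greedy earliest-deadline-first heuristic.
    jobs: list of (name, duration, deadline) triples.
    Returns per-machine schedules of (name, start, end) and the list of
    jobs that miss their deadline under this heuristic."""
    free_at = [0] * n_machines                # next free time of each machine
    schedule = [[] for _ in range(n_machines)]
    late = []
    for name, duration, deadline in sorted(jobs, key=lambda j: j[2]):
        m = min(range(n_machines), key=lambda i: free_at[i])  # least-loaded bench
        start = free_at[m]
        free_at[m] = start + duration
        schedule[m].append((name, start, start + duration))
        if start + duration > deadline:
            late.append(name)
    return schedule, late
```

Heuristics like this give no optimality guarantee, which is why the thesis compares them experimentally against metaheuristics and mathematical programming.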
Friday, 16.09.2016
11:00

12:00
Vinzent Skawran
Monday, 29.08.2016
11:00

12:00
Sonja Zimmermann
Friday, 29.07.2016
11:00

12:00
Jochen Schmücking
Tuesday, 26.07.2016
10:00

11:00
Andreas Tollkötter
Wednesday, 20.07.2016
11:00

12:00
Synthesis, the automatic construction of objects related to hardware and software, is one of the great challenges of computer science. Although synthesis problems are impossible to solve in general, learning-based approaches, in which the synthesis of an object is based on learning from examples, have recently been used to build elegant and extremely effective solutions for a large number of difficult problems. Examples include automatically fixing bugs, translating programs from one language into another, program verification, as well as the generation of high-level code from given specifications.
This talk gives an introduction to learning-based synthesis. First, we develop a generic view on learning-based synthesis, called abstract learning frameworks for synthesis, which introduces a common terminology to compare and contrast learning-based synthesis techniques found in the literature. Then, we present a learning-based program verifier, which can prove the correctness of numeric programs (mainly) automatically, and show how this technique can be modeled as an abstract learning framework for synthesis. Throughout the talk, we present various examples that highlight the power of learning-based synthesis.
Friday, 15.07.2016
12:15

13:15
work by Samir Datta, Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, Thomas Zeume
Friday, 08.07.2016
12:15

13:15
work by Denis Kuperberg, Michał Skrzypczak
Friday, 01.07.2016
12:15

13:15
work by Mikołaj Bojanczyk
Friday, 24.06.2016
12:15

13:15
work by Mikołaj Bojanczyk, Michał Pilipczuk
Wednesday, 22.06.2016
13:30

14:30
Friday, 17.06.2016
12:15

13:15
We show that on graphs with n vertices the 2-dimensional Weisfeiler-Leman algorithm requires at most O(n^2/log(n)) iterations to reach stabilization. This in particular shows that the previously best, trivial upper bound of O(n^2) is asymptotically not tight. In the logic setting this translates to the statement that if two graphs of size n can be distinguished by a formula in first-order logic with counting with 3 variables (i.e., in C3), then they can also be distinguished by a C3 formula that has quantifier depth at most O(n^2/log(n)).
To prove the result we define a game between two players that enables us to decouple the causal dependencies between the processes happening simultaneously over several iterations of the algorithm. This allows us to treat large color classes and small color classes separately. As part of our proof we show that for graphs with bounded color class size, the number of iterations until stabilization is at most linear in the number of vertices. This also yields a corresponding statement in first-order logic with counting.
Similar results can be obtained for the respective logic without counting quantifiers, i.e., for the logic L3.
This is joint work with Pascal Schweitzer.
Wednesday, 15.06.2016
11:00

12:00
Graphs have become ubiquitous for representing structured data in many application domains like computer vision, social network analysis, or chem- and bioinformatics. Techniques for graph comparison are a prerequisite for the application of a variety of data mining algorithms in these domains. This talk is dedicated to two different strategies for comparing graphs: maximum common subgraph problems and graph kernels.
Maximum common subgraph problems are based on classical graph-theoretical concepts for graph comparison and are NP-hard in the general case. We consider variants of the maximum common subgraph problem in restricted graph classes, which are highly relevant for applications in cheminformatics. We develop a polynomial-time algorithm which computes a maximum common subgraph under so-called block-and-bridge-preserving isomorphism in series-parallel graphs. To solve the problem for this class of graphs, the separators represented by standard graph decompositions are insufficient. We introduce the concept of potential separators and use them algorithmically to overcome this issue. Finally, we discuss polynomial-time computable graph distance metrics in restricted graph classes.
In the second part of the talk we consider graph kernels, which have their origin in specific data mining algorithms, so-called kernel methods. Graph kernels are positive semidefinite functions which measure the similarity between pairs of graphs. A leading design paradigm for graph kernels is the convolution kernel, which decomposes graphs into their parts and sums over all pairs of parts. Assignment kernels, in contrast, are obtained from an optimal bijection between parts. In general, however, optimal assignments (maximum weight bipartite matchings) yield indefinite functions, which complicates their use in kernel methods. We characterize a class of base kernels used to compare parts that guarantees positive semidefinite optimal assignment kernels. These base kernels give rise to hierarchies from which the optimal assignment kernels are computed in linear time by histogram intersection. We apply these results by developing the Weisfeiler-Lehman optimal assignment kernel for graphs. It provides high classification accuracy on widely used benchmark data sets, improving over the original Weisfeiler-Lehman kernel.
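The histogram-intersection computation mentioned above is simple to state. The following Python sketch combines it with Weisfeiler-Lehman vertex colors; this illustrates the underlying idea only, not the paper's optimal-assignment hierarchy, and the encoding is my own:

```python
from collections import Counter

def histogram_intersection(h1, h2):
    """Histogram intersection kernel: sum of elementwise minima of two
    (Counter) histograms."""
    return sum(min(c, h2[k]) for k, c in h1.items())

def wl_feature_histogram(adj, iterations=2):
    """Histogram of Weisfeiler-Lehman vertex colors accumulated over all
    iterations. Colors are represented by their nested signature tuples,
    so they are directly comparable across graphs without renaming."""
    colors = {v: 0 for v in adj}
    hist = Counter(colors.values())
    for _ in range(iterations):
        # new color: old color plus the sorted multiset of neighbor colors
        colors = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                  for v in adj}
        hist.update(colors.values())
    return hist
```

A graph's intersection with its own histogram is maximal, so the value against any other graph can only be smaller, which is the intuition behind using intersection as a similarity measure.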
Friday, 10.06.2016
12:15

13:15
work by Amir Abboud, Arturs Backurs, Thomas Dueholm Hansen
Friday, 03.06.2016
12:15

13:15
Dealing with automata on words or trees over infinite alphabets introduces many difficulties. One of the main problems here is defining the automata models in such a way that they have decent expressive power while still keeping the emptiness problem decidable. We will introduce a new automata model, the N-memory automaton, first working only on words and later on trees as well. We will discuss the closure properties of the classes of languages recognized by this model and show that the emptiness problem for N-memory automata is decidable.
Friday, 13.05.2016
12:15

13:15
work by Yann Disser, Jan Hackfeld, Max Klimm
Friday, 29.04.2016
12:00

13:00
Tuesday, 26.04.2016
11:00

11:45
Friday, 22.04.2016
10:00

11:00
Room 9222, E3
We study quantitative extensions of games, logics and automata with the idea of verification for systems with resources in mind. The resources are discrete and can be consumed step by step and replenished all at once. We formally model this by finitely many non-negative integer counters that can be incremented, reset to zero or left unchanged. This particular formalism is inspired by the model of B-automata for regular cost functions, which are a quantitative extension of regular languages with extensive closure properties and rich algorithmic results by Colcombet.
We consider resource games on resource pushdown systems as a formal model for interactive, recursive systems with resource consumption. A resource game is a two-player graph game with a qualitative ω-regular winning condition and an additional quantitative objective of the first player to minimize the resources used throughout a play. The game graph is the configuration graph of a resource pushdown system, whose transitions are annotated with actions for consuming or replenishing resources. We mainly study the bounded winning problem for these games: the question whether there is a uniform upper bound k such that the first player can win the game with an allowed resource usage of k from all positions in a given set A of initial configurations. We provide algorithms to solve this problem if the set A is regular and show that it also has applications in automata theory.
The second part of this work is dedicated to the various formal logics that have recently been considered around boundedness and unboundedness questions. There are the quantitative logics costMSO, which extends standard monadic second-order logic with an atomic formula to count the elements in a set variable, and cost first-order logic (CFO), which extends standard FO logic with a universal quantifier that allows for a certain number of exceptions. Furthermore, there is first-order+resource relations (FO+RR), a quantitative variant of first-order logic that is evaluated on structures with quantitative relations. Lastly, we also consider the qualitative logic MSO+U, which extends MSO with a quantifier to test whether there are arbitrarily large finite sets that satisfy some formula.
We show that CFO and FO+RR have the same expressive power on the infinite binary tree with equal-level predicate and both define the class of regular cost functions there. Moreover, we establish a general connection between costMSO over arbitrary relational structures and the two first-order logics on the structure with the respective powerset as universe. Then, we consider the logic quantitative counting MSO (qcMSO), which combines aspects of costMSO and MSO+U, and show that it can be translated into an extension of FO+RR with an infinity test (called FO+RR^∞) on the powerset structure. This allows us to algorithmically reason about qcMSO with a quantitative extension of automatic structures called resource automatic structures.
We study FO+RR^∞ on resource automatic structures and extensions to the area of infinite words and finite trees. The logic FO+RR^∞ can be effectively evaluated on resource automatic structures on finite words and trees. In the case of infinite words, we can only evaluate the weaker logic FO+RR. Altogether, we obtain algorithmic methods to evaluate weak qcMSO formulas, in which set quantification only ranges over finite sets, on the natural numbers with order and the infinite binary tree.
Wednesday, 20.04.2016
14:00

14:45
Torsten Rogoll
Friday, 08.04.2016
10:00

11:00
Choiceless Polynomial Time (CPT) is one of the most promising candidates for a logic for PTIME. As shown by Dawar, Richerby and Rossman, CPT can define the Cai-Fürer-Immerman query for the case that the underlying graphs are linearly ordered. However, the CFI query over general graphs remains one of the few known examples for which CPT-definability is open.
Our first contribution generalises the result by Dawar, Richerby and Rossman to the variant of the CFI query where the underlying graphs have colour classes of logarithmic size. (Note that the case of ordered graphs corresponds to colour class size one.) Secondly, for the CFI query over complete graphs, we establish CPT-definability using only sets of constant rank, which is known to be impossible for the general case. Furthermore, even though sets of bounded rank are sufficient, we prove that one has to use set-like objects. To do this, we first give a characterisation of set-like objects via the automorphism group. Considered from a more general perspective, our result shows that isomorphism-invariant mechanisms which use sets as data structures are strictly more powerful than those which only use sequences, a fact which was already known in the absence of counting.
Joint work with Wied Pakusa and Erkal Selman.
Thursday, 31.03.2016
10:00

11:00
Thursday, 17.03.2016
11:00

12:00
Marc Selders
Advice automata are finite automata that have an additional input tape on which an infinite word, the advice, is written. The automaton reads input and advice synchronously and changes its state according to both; thus, for different advice words an advice automaton may recognize different languages. An advice automatic presentation of a structure is a labelling of the universe with words from a language that is recognizable by a finite automaton with advice x, together with an interpretation of the relations by synchronized advice automata that recognize the induced languages of the labelling using the same advice x. For different advice words, a multitude of structures over a common signature may be presented by the same automata. We study classes of advice automatic structures that allow uniform representation with advice automata.
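The synchronous reading of input and advice described above is easy to sketch. The following Python sketch (the automaton and all names are invented for illustration) shows how a single fixed automaton recognizes different languages under different advice words:

```python
def run_with_advice(delta, start, accepting, word, advice):
    """Run a finite automaton that reads input and advice synchronously:
    at step i it consumes the pair (word[i], advice[i])."""
    state = start
    for a, b in zip(word, advice):
        state = delta(state, a, b)
    return state in accepting

def prefix_checker(state, a, b):
    """Transition function of a two-state automaton that accepts exactly the
    finite prefixes of the advice word: stay in 'ok' while input matches advice,
    and fall into the rejecting sink on the first mismatch."""
    return state if a == b else 'reject'
```

With advice "010101...", the recognized language is the set of finite prefixes of the advice; changing the advice changes the language while the automaton itself stays fixed.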
Friday, 19.02.2016
10:00

11:00
Order-invariant formulas access an ordering on a structure's universe, but the model relation is independent of the ordering used. Order-invariant formulas capture unordered problems of complexity classes, and they model the independence of the answer to a database query from low-level aspects of databases. We study the expressive power of order-invariant monadic second-order (MSO) and first-order (FO) logic on restricted classes of structures that admit certain forms of tree decompositions (not necessarily of bounded width).
While order-invariant MSO is more expressive than MSO and even CMSO (MSO with modulo-counting predicates), we show that order-invariant MSO and CMSO are equally expressive on graphs of bounded tree width and on planar graphs. This extends an earlier result for trees due to Courcelle. Moreover, we show that all properties definable in order-invariant FO are also definable in MSO on these classes. These results are applications of a theorem that shows how to lift definability results for order-invariant logics from the bags of a graph's tree decomposition to the graph itself.
Joint work with Michael Elberfeld and Martin Grohe.
Friday, 12.02.2016
10:15

11:15
Seminar room 9U09
Stefan Hegselmann
We explore several counting constructs for logics with team semantics.
Counting is an important task in numerous applications, but one with a somewhat delicate relationship to logic. Team semantics, on the other hand, is the mathematical basis of modern logics of dependence and independence, in which formulae are evaluated not for a single assignment of values to variables, but for a set of such assignments. It is therefore interesting to ask what kind of counting constructs are adequate in this context, and how such constructs influence the expressive power and the model-theoretic and algorithmic properties of logics with team semantics. Due to the second-order features of team semantics there is a rich variety of potential counting constructs. Here we study variations of two main ideas: forking atoms and counting quantifiers.
Forking counts how many different values for a tuple w occur in assignments with coinciding values for v. We call this the forking degree of v with respect to w. Forking is powerful enough to capture many of the previously studied atomic dependency properties. In particular, we exhibit logics with forking atoms that have, respectively, precisely the power of dependence logic and independence logic.
Our second approach uses counting quantifiers "there exist at least mu many x" of a similar kind as used in logics with Tarski semantics. The difference is that these quantifiers are now applied to teams of assignments that may give different values to mu. We show that, on finite structures, there is an intimate connection between inclusion logic with counting quantifiers and FPC, fixed-point logic with counting, a logic of fundamental importance for descriptive complexity theory. For sentences, the two logics have the same expressive power.
Our analysis is based on a new variant of model-checking games, called threshold safety games, on a trap condition for such games, and on game interpretations.
This is joint work with Erich Grädel.
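As a concrete illustration (the function and the toy team below are my own example, taking the natural reading of the definition in the abstract), the forking degree of v with respect to w in a finite team can be computed directly:

```python
from collections import defaultdict

def forking_degree(team, v, w):
    """Maximum, over values d of v, of the number of distinct w-values
    among assignments s in the team with s(v) = d."""
    buckets = defaultdict(set)
    for s in team:                 # each assignment is a dict variable -> value
        buckets[s[v]].add(s[w])
    return max(len(ws) for ws in buckets.values())

# assignments with v = 0 take two distinct w-values; those with v = 1 take one
team = [{'v': 0, 'w': 1}, {'v': 0, 'w': 2}, {'v': 1, 'w': 1}]
```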
Thursday, 11.02.2016
14:15

17:00
AH 5
Friday, 29.01.2016
10:00

11:00
This article is inspired by two works from the early 90s. The first one is by Bogaert and Tison, who considered a model of automata on finite ranked trees where one can check equality and disequality constraints between direct subtrees: they proved that this class of automata is closed under Boolean operations and that both the emptiness and the finiteness problem of the accepted language are decidable. The second one is by Niwinski, who showed that one can compute the cardinality of any ω-regular language of infinite trees.
Here, we generalise the model of automata of Tison and Bogaert to the setting of infinite binary trees. Roughly speaking, we consider parity tree automata where some transitions are guarded and can be used only when the two direct subtrees of the current node are equal/disequal. We show that the resulting class of languages encompasses that of ω-regular languages of infinite trees while sharing most of its closure properties; in particular, it is a Boolean algebra. Our main technical contribution is then to prove that it also enjoys a decidable cardinality problem.
Joint work with Arnaud Carayol and Olivier Serre.
Friday, 08.01.2016
10:00

11:00
Friday, 11.12.2015
11:00

12:00
Yanje Wang
Monday, 07.12.2015
10:30

11:30
Room 9222
The search for a logic which captures polynomial time is one of the most important challenges in finite model theory. During the last years, significant new insights were obtained by the systematic study of the descriptive complexity of queries from (linear) algebra. These investigations were initiated in 2007 by a striking result of Atserias, Bulatov, and Dawar, who showed that fixed-point logic with counting (FPC) cannot define the solvability of linear equation systems over finite Abelian groups. Their result triggered the development of new candidates for capturing polynomial time, for instance of rank logic (FPR), which was introduced by Dawar, Grohe, Holm, and Laubner in 2009. Before that, only few other candidates had been proposed, of which certainly the most important one is Choiceless Polynomial Time (CPT), developed by Blass, Gurevich, and Shelah in 1999. This thesis continues the search for a logic capturing polynomial time in the light of the following leading questions. (1) How can the algorithmic principles for solving linear equation systems be captured by logical mechanisms (such as operators or quantifiers)? (2) Are there interesting classes of structures on which the solvability of linear equation systems can be used to capture polynomial time?
Ad (1), we study the interdefinability of linear equation systems over finite Abelian groups, rings, and modules. Our aim is to transform linear equation systems over these algebraic domains into equivalent linear equation systems over simpler domains, such as fields or cyclic groups, via a reduction which is definable in fixed-point logic. For linear equation systems over ordered Abelian groups, rings, and modules, and also for certain interesting classes of unordered commutative rings, we obtain a reduction to cyclic groups. Moreover, we establish a reduction to commutative rings for the general case.
Further, we study rank logic (FPR), which extends FPC by operators to compute the rank of definable matrices over finite fields. Our main result validates a conjecture of Dawar and Holm: rank operators over different prime fields have incomparable expressive power. An important consequence is that rank logic, in the original definition with a distinct rank operator for every prime, fails to capture polynomial time, and should be replaced by a more powerful version with a uniform rank operator. We further show that, in the absence of counting, solvability quantifiers are weaker than rank operators.
Ad (2), we introduce a class of linear equation systems, so-called cyclic linear equation systems, which are structurally simple, but general enough to describe the Cai-Fürer-Immerman query, and thus separate FPC from polynomial time. Our main result is that CPT can express the solvability of cyclic linear equation systems. Further, we use this definability result to show that CPT captures polynomial time on structures with Abelian colours, a class containing many of the known queries which separate FPC from polynomial time. Our result further solves an open question of Blass, Gurevich, and Shelah: the isomorphism problem for multipedes is definable in CPT.
Friday, 04.12.2015
12:00

13:00
Local specifications of the isomorphism types of designated
substructures (as building blocks) with amalgamation patterns (as
gluing instructions for pairs of blocks) arise in various contexts.
This talk discusses a generic approach to "locally free" finite
realisations, in which groupoids (or inverse semigroups) with
characteristic acyclicity properties serve as global amalgamation
patterns between the building blocks.
Amalgamation techniques also play a role in the construction of those
underlying groupoids: they provide an algebraic-combinatorial route
to Cayley group(oid)s and graphs with strong acyclicity properties.
Among further model-theoretic applications of such groupoidal
constructions are a new proof of a powerful theorem of Herwig and
Lascar concerning the lifting of local to global symmetries in finite
structures and, not quite unrelated, finite model properties and
finite controllability results for guarded logics.
(Cf. arXiv:1404.4599 (v4) 2015)
Monday, 23.11.2015
16:00

16:45
Friday, 13.11.2015
12:30

13:30
We present two novel linear-fpt algorithms for Dominating Set which use
little space when parameterized by tree-depth, and give lower bounds for
the space consumption of dynamic programming algorithms for Vertex
Cover, 3-Coloring, and Dominating Set on tree, path, and tree-depth
decompositions.
Tuesday, 27.10.2015
16:00

17:00
Phillip Keldenich
Friday, 09.10.2015
12:30

13:30
The most prominent connection between automata theory and logic is the
expressive equivalence between finite automata over words and monadic
second-order logic MSO. We consider a more restricted, less popular, but also fundamental logic, "monadic transitive closure logic" MTC, which over words has the same expressive power as MSO. In MTC one can proceed from a formula F(z,z') to F*(x,y), which expresses that a sequence exists from x to y such that each step is in accordance with F.
Thus, MTC is a very natural framework to express reachability properties.
We survey old and recent results which clarify the expressive power and
possible uses of MTC. We start with the equivalence between MTC and MSO
over word models. Then we discuss MTC over trees and labelled grids,
addressing the relation to tree-walking and grid-walking automata.
We conclude with some open questions.
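For example (my own illustration of the star operator from the abstract), graph reachability along an edge relation E is obtained by a single application of the operator:

```latex
% taking F(z,z') := E(z,z'), the formula
\mathrm{Reach}(x,y) \;:=\; F^{*}(x,y)
% holds iff there exist z_0, \dots, z_n with
% x = z_0, \quad y = z_n, \quad \text{and } E(z_i, z_{i+1}) \text{ for all } i < n .
```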
Friday, 02.10.2015
11:00

12:00
Andreas Tollkötter
Thursday, 01.10.2015
11:00

12:00
Tuesday, 22.09.2015
11:45

12:45
Maximilian Gödicke
The (1-dimensional) Weisfeiler-Lehman algorithm, or Colour Refinement, is a widely used subroutine for graph isomorphism algorithms.
It iteratively refines the colouring of its input graph until the colouring is stable. If two graphs have different stable
colourings, they are not isomorphic. We call the number of iterations needed the iteration number. Because of its importance
in practical graph isomorphism algorithms, lower and upper bounds on the iteration number are of practical interest. Krebs
and Verbitsky give a lower bound for the maximal iteration number of uncoloured graphs by constructing graphs on which Colour
Refinement takes n - O(sqrt(n)) iterations. However, for random non-isomorphic graphs, the probability that the algorithm does
not distinguish them after only 2 iterations is exponentially small in the size of the graphs.
In order to find tight upper bounds on the maximal iteration number of graphs of size n, we establish a connection between the maximal iteration number for coloured and uncoloured graphs and present the new computational tool of split-procedures.
These can be used, given n and k, to find graphs of size n with iteration number k. This tool allows us to construct explicit examples of graphs that need n - 1 iterations of Colour Refinement, which shows that for these n the trivial upper bound on the maximal number of iterations is tight.
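A minimal sketch of the refinement loop described in the abstract (the function and its counting convention are mine, not the talk's code): each round, a vertex's new colour is its old colour together with the multiset of its neighbours' colours, and the loop stops once a round no longer increases the number of colour classes.

```python
def colour_refinement(adj):
    """adj: dict mapping each vertex to its set of neighbours.
    Returns the stable colouring and the number of refining iterations."""
    colour = {v: 0 for v in adj}        # uncoloured start: all vertices alike
    iterations = 0
    while True:
        # new colour = old colour plus sorted multiset of neighbour colours
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        # canonically rename signatures to small integers
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: ids[signature[v]] for v in adj}
        # stable once the partition stops getting finer
        if len(set(new_colour.values())) == len(set(colour.values())):
            return colour, iterations
        colour, iterations = new_colour, iterations + 1

# a path on 4 vertices: one round separates endpoints from inner vertices
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

On the 4-vertex path a single refining round already yields the stable colouring, so the iteration number here is 1.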
Tuesday, 22.09.2015
11:00

11:45
Daniel Neuen
The Graph Isomorphism Problem is one of the most famous open problems
in Theoretical Computer Science and is to date neither known to be
in P nor NP-complete. Although it seems unlikely for the Graph
Isomorphism Problem to be NP-complete, the best currently known
algorithm only runs in moderately exponential time. On the other hand,
efficient algorithms are known for many natural restricted graph
classes. However, most of these classes contain only sparse graphs that
do not have arbitrarily large cliques as subgraphs. A
number of natural graph classes that also contain dense graphs arise
from geometric settings, such as interval graphs, which admit
linear-time isomorphism tests. For several other geometric graph
classes proposed in the literature, the complexity of the Graph
Isomorphism Problem is still unknown. In this work we consider
unit square graphs, which are intersection graphs of unit squares in
the plane. One of the main results is a polynomial-time algorithm
solving graph isomorphism for unit square graphs of bounded diameter.
Wednesday, 26.08.2015
10:00

11:00
Nicolas Mazzocchi
A treatment of the synthesis problem in the framework of game theory was first proposed by McNaughton in 1965. This talk presents
a complete example of this problem, focusing on the uniformization of automatic relations by subsequential transducers as presented
in the paper "Uniformization in Automata Theory" written by Christof Löding and Arnaud Carayol. First I will present the
uniformization problem through different results on automatic relations. Then I will describe the decision procedure and the synthesis
in case of success. Finally, I will discuss complexity bounds.
Thursday, 16.07.2015
13:00

14:00
Room 9U09, E3
Given a k-SAT instance with the promise that there is an assignment satisfying at least t out of k literals in each clause, can one efficiently find a satisfying assignment (setting at least one literal to true in every clause)? The NP-hardness of 3-SAT implies that this problem is NP-hard when t <= k/3, and extensions of some 2-SAT algorithms give efficient solutions when t >= k/2.
We prove that for t < k/2, the problem is NP-hard. Thus, satisfiability becomes hard when the promised density of true literals falls below 1/2. One might thus say that the transition from easy to hard in 2-SAT vs. 3-SAT takes place just after two and not just before three.
A strengthening of this result shows that given a (2k+1)-uniform hypergraph that can be 2-colored such that each edge has near-perfect balance, it is NP-hard to even find a 2-coloring that avoids a monochromatic edge. This shows extreme hardness of discrepancy minimization for systems of bounded-size sets.
The talk will sketch our proof of the SAT result, which is based on the fact that the only functions passing a natural "dictatorship test" are "juntas" depending on few variables. We might also briefly mention the general principle, based on the lack of certain "weak polymorphisms", that underlies hardness of constraint satisfaction, and an improvement of the hypergraph coloring result ruling out coloring low-discrepancy hypergraphs with any constant number of colors.
Based on joint works with Per Austrin and Johan Håstad (FOCS 2014) and Euiwoong Lee (SODA 2015).
Friday, 03.07.2015
11:15

12:15
work by Fedor V. Fomin, Alexander Golovnev, Alexander S. Kulikov, Ivan Mihajlin
Friday, 26.06.2015
11:15

12:15
work by Mitsuru Kusumoto, Yuichi Yoshida
Friday, 19.06.2015
15:00

16:00
Seminar room 9222
Algorithmic formal methods such as model checking and reactive
synthesis were originally developed in the context of discrete systems
such as hardware circuits and software.
Over the last few years, these techniques have played an increasingly
important role in the development of high-assurance cyber-physical
systems, in which discrete components interact with continuous
physical systems.
I will survey some recent results applying formal methods to
continuous systems, pointing out how notions such as invariants,
variants, bisimulation,
and trace equivalence can be adapted in the context of continuous
systems.
Specifically, I will show two examples of theoretical ideas from
formal methods applied to continuous control:
a deductive system for controller synthesis for continuous systems
against temporal specifications,
and a semi-formal conformance testing procedure based on a metric
generalization of trace equivalence.
I will conclude with some current challenges in the area.
[Joint work with Jyo Deshmukh, Rayna Dimitrova, and Vinayak Prabhu]
Friday, 19.06.2015
11:15

12:15
work by Dana Silverbush and Roded Sharan
Friday, 12.06.2015
11:15

12:15
work by Martin Grohe, Stephan Kreutzer, Roman Rabinovich, Sebastian Siebertz, Konstantinos Stavropoulos
Friday, 05.06.2015
11:15

12:15
work by Bingkai Lin
Friday, 22.05.2015
11:15

12:15
work by Sandra Kiefer, Pascal Schweitzer, Erkal Selman
Wednesday, 13.05.2015
14:00

15:00
Room 9U09, Erweiterungsbau 3
In verification and synthesis, properties or models of interest are often quantitative, and many quantitative aspects involve counting. For example, one might be interested in the amount of memory required by a system, how many simultaneous requests a system has to process along a run, or how many infixes belonging to some regular language a word contains. In this talk, we present two quantitative counting logics and two models of games with counters.
We first study Qmu[#MSO], a counting logic based on the quantitative mu-calculus. This logic is designed specifically for transition systems whose states are labeled with finite relational structures. Counting is introduced to Qmu with the help of counting terms of monadic second-order logic, which are evaluated on the relational structures the states are labeled with. We prove that the model-checking problem for Qmu[#MSO] on a generalization of pushdown systems is reducible to solving finite counter parity games.
We then present the model of counter parity games, which are quantitative games in which a finite set of counters is updated along the edges. Payoffs of finite plays are obtained via the counters, while payoffs of infinite plays are determined via a parity condition. We show that finite counter parity games are effectively solvable, using games with imperfect recall to solve the unboundedness problem for the value. Thus, it follows that the above model-checking problem for Qmu[#MSO] is decidable.
In the last part of the talk, we discuss a quantitative counting extension of monadic second-order logic called qcMSO and a model of games with counters where payoffs are obtained via a mean-payoff objective on a special counter.
Wednesday, 13.05.2015
11:15

12:15
There are regular languages for: finite words, finite trees, infinite words, various kinds of infinite trees, queries, tuples of words, tuples of trees, and so on. I will show how monads, a concept from the Mordor of category theory, can be used to unify some of the theory of "regularity". The idea is that for each setting, like finite words, labelled countable total orders, or infinite trees with countably many branches, one chooses a monad, and then many definitions and some theorems come out for free.
Friday, 08.05.2015
12:30

13:30
The question whether there is a logic for polynomial time has been open for
more than 30 years, and it remains the most important challenge
in the field of descriptive complexity theory.
In my talk I summarise what is known about this question today, and
I report on very recent work which shows that rank logic, a powerful logic that
was proposed by Dawar, Grohe, Holm, and Laubner as a possible solution,
fails to express all polynomial-time properties (at least in its original definition).
Friday, 08.05.2015
11:15

12:15
work by Martin Grohe, Pascal Schweitzer
Friday, 24.04.2015
11:15

12:30
work by Michael Elberfeld
Monday, 16.03.2015
14:00

15:00
Model checking (for various temporal logics), as well as checking behavioural equivalence of processes (using various equivalence relations), are powerful tools in system verification. Both problems can be seen as comparing two behavioural specifications. Certain fixpoint logics let us easily express these problems as formulas that can be evaluated on a structure that represents both specifications.
The versatility of fixpoint logics makes them interesting as a basis for practical system analysis tools. Important for the usability of such tools is that they should provide diagnostics to their users. In this talk, we will look at proof graphs: game-like proof objects for LFP. They provide an alternative semantics for the LFP fixpoint operator, and can serve as the basis of a diagnostics generation framework. This is illustrated with some small examples. If time permits, we will see how this approach can be directly reused for parameterized Boolean equation systems (an equational fixpoint logic).
Friday, 20.02.2015
14:00

15:00
We investigate the power of algebraic techniques to test whether two given graphs are isomorphic.
In the algebraic setting, one encodes the graphs into a system of polynomial equations that is satisfiable if the graphs are isomorphic.
Afterwards, one tries to test satisfiability of this system using, for example, the Gröbner basis algorithm.
In some cases this can be done in polynomial time, in particular, if the equations admit a constant degree refutation in algebraic proof systems such as Nullstellensatz or Polynomial Calculus.
We compare this approach to other known relaxations and show that the k-dimensional Weisfeiler-Lehman algorithm can be characterized in terms of an algebraic proof system over the reals that lies between degree-k Nullstellensatz and degree-k Polynomial Calculus.
Furthermore, we prove a linear lower bound on the Polynomial Calculus degree of Cai-Fürer-Immerman graphs, which holds over any field of characteristic different from two.
This is joint work with Martin Grohe.
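One standard encoding along the lines sketched above (my own minimal sketch; the talk's actual polynomial system may differ) asks for a permutation matrix X with XA = BX, where A and B are the adjacency matrices. For tiny graphs, solvability of this 0/1 system can even be checked by brute force over permutations:

```python
from itertools import permutations

def isomorphic(A, B):
    """Check whether the system 'X is a permutation matrix and XA = BX'
    has a solution; A, B are adjacency matrices as tuples of tuples.
    Setting X[perm[i]][i] = 1 makes XA = BX equivalent to
    A[i][j] == B[perm[i]][perm[j]] for all i, j."""
    n = len(A)
    for perm in permutations(range(n)):
        if all(A[i][j] == B[perm[i]][perm[j]]
               for i in range(n) for j in range(n)):
            return True
    return False

triangle = ((0, 1, 1), (1, 0, 1), (1, 1, 0))
path3    = ((0, 1, 0), (1, 0, 1), (0, 1, 0))
```

The algebraic approach replaces this exponential search by asking whether the same system admits a low-degree refutation.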
Friday, 13.02.2015
12:30

13:40
We first give a brief introduction to rank-width and linear rank-width. Even though rank-width has been well developed by a series of papers, only few algorithmic properties are known for linear rank-width. In particular, the only known fpt algorithm for testing linear rank-width at most k, for fixed k, uses the pivot-minor obstructions for those classes. This algorithm is non-constructive: it uses the finiteness of the obstructions, which is guaranteed by the well-quasi-ordering property of graphs of bounded rank-width. We mainly show that the size of pivot-minor obstructions for linear rank-width at most k is doubly exponential in O(k). To show this, we extend the notion of the pseudo-minor order developed by Lagergren, which is a relaxation of the usual minor notion and is used to find an upper bound on the size of minor obstructions for graphs of bounded path-width. Interestingly, our result implies that the size of minor obstructions for F-representable matroids (where F is a finite field) of (matroid) path-width at most k is also doubly exponential in O(k).
Tuesday, 10.02.2015
10:00

11:00
Seminar room i1
Friday, 30.01.2015
10:00

12:00
work by Jochen Messner, Thomas Thierauf
Friday, 23.01.2015
10:00

12:00
work by Robin A. Moser, Gábor Tardos
Thursday, 22.01.2015
14:45

15:45
Delay games are two-player games of infinite duration in which one player may delay her moves to obtain a lookahead on her opponent's moves. For omega-regular winning conditions it is known that such games can be solved in doubly-exponential time and that doubly-exponential lookahead is sufficient.
We improve upon both results by giving an exponential-time algorithm and an exponential upper bound on the necessary lookahead. This is complemented by showing EXPTIME-hardness of the solution problem and tight exponential lower bounds on the lookahead. Both lower bounds already hold for safety conditions. Furthermore, solving delay games with reachability conditions is shown to be PSPACE-complete.
This is joint work with Martin Zimmermann.
Thursday, 22.01.2015
14:00

14:45
We determine the complexity of counting models of bounded size of specifications expressed in Linear-time Temporal Logic. Counting word models is #P-complete if the bound is given in unary, and as hard as counting accepting runs of nondeterministic polynomial-space Turing machines if the bound is given in binary. Counting tree models is as hard as counting accepting runs of nondeterministic exponential-time Turing machines if the bound is given in unary. For a binary encoding of the bound, the problem is at least as hard as counting accepting runs of nondeterministic exponential-space Turing machines. On the other hand, it is not harder than counting accepting runs of nondeterministic doubly-exponential-time Turing machines.
This is joint work with Hazem Torfah.
Friday, 16.01.2015
10:00

12:00
work by Todd L. Veldhuizen
Friday, 09.01.2015
11:00

12:00
work by Vikraman Arvind, Gaurav Rattan
Friday, 12.12.2014
13:00

14:00
We consider the fundamental problem of computing the natural join of a sequence of relations. We show that a simple linear program, which describes the so-called fractional edge cover number of a hypergraph associated with the join expression, gives nontrivial bounds on the size of such joins. Using linear programming duality, we can also show that these bounds are optimal.
This gives us better cost estimates to optimize join queries, which lead to better algorithms.
(This is joint work with Albert Atserias and Daniel Marx.)
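As a concrete illustration (my own sketch, not code from the paper): for the triangle join R(a,b) ⋈ S(b,c) ⋈ T(a,c), assigning fractional edge cover weight 1/2 to each relation covers every attribute, so the join has size at most (|R| · |S| · |T|)^(1/2), which a random instance respects:

```python
import random
from collections import defaultdict

def triangle_join_size(R, S, T):
    """Count tuples (a, b, c) with (a,b) in R, (b,c) in S, (a,c) in T."""
    S_by_b = defaultdict(list)
    for b, c in S:
        S_by_b[b].append(c)
    T_set = set(T)
    return sum(1 for a, b in R for c in S_by_b[b] if (a, c) in T_set)

random.seed(0)
N, V = 200, 30
R = set((random.randrange(V), random.randrange(V)) for _ in range(N))
S = set((random.randrange(V), random.randrange(V)) for _ in range(N))
T = set((random.randrange(V), random.randrange(V)) for _ in range(N))

# AGM-style bound for the fractional edge cover (1/2, 1/2, 1/2)
bound = (len(R) * len(S) * len(T)) ** 0.5
```

The bound is tight in the worst case, which is exactly the optimality statement obtained via LP duality in the abstract.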
Friday, 12.12.2014
11:30

12:30
Joachim Redies
Friday, 12.12.2014
10:00

12:00
work by Jiří Fiala, Pavel Klavík, Jan Kratochvíl, Roman Nedela
Wednesday, 10.12.2014
17:00

18:00
Room 9222, E3
The classification of recognizable omega-word languages into Borel levels is the basis of many specialized solutions in the fields of formal verification and algorithmic controller synthesis. Each of these levels is characterized by a class of deterministic omega-automata, namely deterministic weak (reachability and safety), deterministic Büchi, and deterministic Muller automata. This thesis analyses the more general setting of infinitary Mazurkiewicz traces, which model nonterminating, concurrent computation of distributed systems. The study of finitary trace languages generalizes that of word languages, and numerous results have been extended to infinitary trace languages (due to Gastin, Petit, Diekert, Muscholl, and others). However, the current definitions of asynchronous omega-automata fail to yield a classification of omega-trace languages that is compatible with the Borel hierarchy. We close this gap, which had been open since the 1990s, by introducing the family of ``synchronization-aware'' automata.
We also demonstrate the semi-decidability of the problem to determine whether a given recognizable omega-trace language is also recognized by a deterministic synchronization-aware Büchi automaton.
Although asynchronous automata are useful in implementing distributed monitors and distributed controllers, their constructions are prohibitively expensive even by automata-theoretic standards. On the other hand, ``linearizations'' of infinitary trace languages, which invoke the framework of ``trace-closed'' omega-word languages, can find immediate applications in model checking and formal verification of distributed systems. This is because word automata recognizing trace-closed languages support more efficient analyses of most of the interesting properties pertaining to distributed computations. In this setting, we present another classification of omega-trace languages in terms of a Borel-like hierarchy of trace-closed omega-word languages.
Thursday, 04.12.2014
10:30

11:30
AH VI
No efficient algorithms are known for the constraint satisfaction problem (CSP), the satisfiability problem of propositional logic (3-SAT), or the graph isomorphism problem. To solve them, heuristic algorithms with polynomial running time are therefore employed. We examine classical heuristics for these decision problems with respect to whether they can be implemented faster than by the currently known algorithms.
The k-consistency heuristic for the constraint satisfaction problem tries to produce locally consistent instances by successively deriving new constraints, and can be implemented with a running time of n^O(k). We show that the growth of the running time with increasing parameter k is unavoidable: we prove that it is not possible to decide faster whether local consistency can be achieved on a binary CSP instance.
A heuristic for 3-SAT is the search for a resolution refutation in which every clause contains at most k literals. Such a refutation of width k can be found in time n^O(k). We show that it is impossible to decide faster whether a resolution refutation of width k exists.
Finally, we discuss the complexity of the Color Refinement heuristic for the graph isomorphism problem.
Friday, 28.11.2014
10:00

12:00
work by Ittai Abraham, Cyril Gavoille, Anupam Gupta, Ofer Neiman, Kunal Talwar
Friday, 21.11.2014
12:30

13:30
In the network orientation problem one is given a mixed graph, consisting of directed and undirected edges, and a set of source-target vertex pairs. The goal is to orient the undirected edges so that a maximum number of pairs admit a directed path from the source to the target. This NP-complete problem arises in the context of analyzing physical networks of protein-protein and protein-DNA interactions. While the latter are naturally directed from a transcription factor to a gene, the direction of signal flow in protein-protein interactions is often unknown or cannot be measured en masse. One then tries to infer this information by using causality data on pairs of genes such that the perturbation of one gene changes the expression level of the other gene.
After introducing physical networks of protein interactions and the related orientation problem, the talk reports on two studies related to its solution: First, we have a look at optimal solutions to the problem that are based on an integer linear programming formulation; this formulation is applied to orient protein-protein interactions in yeast. Second, we consider approximate solutions and their algorithmic aspects. Finally, we will have a look at current and future work around graph orientations and their applications to network biology.
Friday, 21.11.2014
10:00

12:00
Friday, 14.11.2014
10:00

12:00
work by Diego Figueira, Leonid Libkin
Friday, 07.11.2014
10:00

12:00
work by Andreas Björklund, Thore Husfeldt
Friday, 31.10.2014
11:00

13:00
Seminarroom I11 (room 2202)
How can we affordably build trustworthy autonomous, networked systems?
Partly motivated by this question, I describe a shift from the traditional "design+verify" approach to "specify+synthesize" in model-based engineering. I then discuss our recent results on automated synthesis of correct-by-construction, hierarchical control protocols. These results account for hybrid dynamics that are subject to rich temporal logic specifications and heterogeneous uncertainties, and that operate in adversarial environments. They combine ideas from control theory with those from computer science, and exploit underlying system-theoretic interpretations to suppress the inherent computational complexity. The expressivity of the resulting design methodology enables us to formally investigate a number of emerging issues in autonomous, networked systems. I conclude my talk with a brief overview of several such issues from my ongoing projects: (i) compositional synthesis for the so-called fractionated systems; (ii) effects of perception imperfections on protocol synthesis; (iii) interfaces between learning modules and reactive controllers with provable guarantees of correctness; and (iv) human-embedded autonomy.
Friday, 24.10.2014
12:30

13:30
In our foray into social networks we have discovered several problems that we perceive would be of interest to theoreticians.
I will briefly describe some of these and then focus on the rather practical problem: that of finding community structure in social/complex networks.
The problem of discovering communities in networks has been very well-studied in the literature. There are two major roadblocks here: the first is that there is no generally accepted notion of what constitutes a community; the second is that any such algorithm is meant to be used on large networks and therefore must be fast enough.
On large networks, typically even quadratic running times are considered infeasible. Existing algorithms use their own definition of a community and, to the best of our knowledge, even the fastest of these take quadratic time.
I will talk about an approach that manages to do well on both these fronts. Instead of settling on a fixed definition of a community, we let the user decide how to define them. We let the user identify a (small) set of "seed nodes" which they classify into communities. We then use a random walk approach and classify the rest of the network. In order to estimate the random walk probabilities, we end up solving a system of linear equations, which turns out to be a symmetric diagonally dominant linear system; by the work of Spielman, Teng and others, such systems can be solved in almost linear time.
I will also describe associated experimental work which seems to be very promising.
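A minimal sketch of the seed-based random-walk classification, under assumptions made here: an adjacency-list encoding, and plain fixed-point iteration in place of the fast SDD solvers the talk relies on, so it is only viable on small graphs.

```python
def classify_by_random_walk(adj, seeds, iters=200):
    """For each community, estimate the probability that a random walk
    from each node is absorbed at that community's seeds (a harmonic
    function: each non-seed value is the average over its neighbors),
    then assign every node the community of highest probability.
    adj: {node: [neighbors]}, seeds: {node: community_label}."""
    labels = sorted(set(seeds.values()))
    prob = {v: {l: (1.0 if seeds.get(v) == l else 0.0) for l in labels}
            for v in adj}
    for _ in range(iters):
        for v in adj:
            if v in seeds:
                continue  # seed probabilities stay fixed (absorbing)
            for l in labels:
                prob[v][l] = sum(prob[u][l] for u in adj[v]) / len(adj[v])
    return {v: max(labels, key=lambda l: prob[v][l]) for v in adj}
```

On a path a-b-c-d with seeds a (community 0) and d (community 1), the walk from b reaches a first with probability 2/3, so b is assigned to community 0 and, symmetrically, c to community 1.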
Wednesday, 22.10.2014
10:00

12:00
Room 5052
We address the safety analysis of chemical plants controlled by programmable logic controllers (PLCs). We consider sequential function charts (SFCs) for the programming of the PLCs, extended with the specification of the dynamic plant behavior. The resulting hybrid SFC models can be transformed to hybrid automata, opening the way to the application of advanced techniques for their reachability analysis.
However, the hybrid automata models are often too large to be analyzed.
To keep the size of the models moderate, we propose a counterexample-guided abstraction refinement (CEGAR) approach, which starts with the purely discrete SFC model of the controller and extends it with those parts of the dynamic behavior that are relevant for proving or disproving safety.
Tuesday, 21.10.2014
14:00

15:00
We study the parameterized space complexity of model-checking first-order logic with a bounded number of variables. By restricting the number of quantifier alternations we obtain problems complete for a natural hierarchy between parameterized logarithmic space and FPT. We call this hierarchy the tree hierarchy, provide a machine characterization, and link it to the recently introduced classes PATH and TREE. We show that the lowest class PATH collapses to parameterized logarithmic space only if Savitch's theorem can be improved. Finally, we settle the complexity, with respect to the tree hierarchy, of finding short undirected paths and small undirected trees.
This is joint work with Moritz Mueller.