Upcoming events:
Friday, 27.10.2017
13:30

14:15
Lorena Reintgen
We develop a new approach to solving the Graph Isomorphism Problem (GI) for input graphs G and H over the same vertex set V
that are close with respect to permutation distance. We show that if the input pair of graphs has permutation distance at
most k, then it is possible to construct in quadratic time an equivalent instance of size at most f(k), for a computable function
f, and to decide whether there exists an isomorphism from G to H. Our algorithm is thus a kernelization algorithm and can be
used as a preprocessing step for inputs with low permutation distance. The algorithm consists of four steps. In the first
step we find a relabeling of the vertices of the symmetric difference $G \triangle H$ so as to restrict the maximal degree within
$G \triangle H$. Next, we construct from the previous result (G',H) a subset of vertices that contains the set of vertices
that have to be relabeled in order to identify G' with H. This result in turn is used to construct a smaller instance of
Colored Graph Isomorphism. In the final step this colored instance is transformed back into an uncolored graph, which constitutes
our kernel instance.
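For intuition about the parameter, permutation distance can be computed by brute force on tiny graphs (an illustrative baseline only, not the kernelization of the talk; the function name and examples are made up):

```python
# Brute-force permutation distance: the minimum number of vertices a
# permutation must move so that it turns G into H, over the shared
# vertex set. Exponential in n; for intuition only.
from itertools import permutations

def permutation_distance(G, H):
    """Adjacency matrices G, H over the same vertex set {0, ..., n-1}."""
    n = len(G)
    best = None
    for p in permutations(range(n)):
        # p is an isomorphism if relabeling G by p yields exactly H.
        if all(G[p[i]][p[j]] == H[i][j] for i in range(n) for j in range(n)):
            moved = sum(1 for i in range(n) if p[i] != i)
            best = moved if best is None else min(best, moved)
    return best  # None if G and H are not isomorphic at all

# Relabeling two vertices turns the path 0-1-2 into the path 1-0-2.
G = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # path with center 1
H = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]   # path with center 0
print(permutation_distance(G, H))        # → 2
```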
Past events:
Friday, 20.10.2017
10:00

11:00
The graph similarity problem, also known as the approximate graph
isomorphism or graph matching problem, has been extensively
studied in the machine learning community, but has not received
much attention in the algorithms community.
Given two graphs G, H with adjacency matrices A_G, A_H, a
well-studied measure of similarity is the Frobenius distance
dist(G,H) := min_P ||A_G^P - A_H||_F,
where P ranges over all permutations of the vertex set of G, A_G^P
denotes the matrix obtained from A_G by permuting rows and columns
according to P, and ||M||_F is the Frobenius norm of a matrix M.
The (weighted) graph similarity problem, denoted by SIM (WSIM),
is the problem of computing this distance for two graphs of the same order.
This problem is closely related to the notoriously hard
quadratic assignment problem (QAP), which is known to be
NP-hard even for severely restricted cases.
It is known that SIM (WSIM) is NP-hard; we strengthen this
hardness result by showing that the problem remains NP-hard
even for the class of trees. Identifying the boundary of
tractability for this problem is best done in the framework
of linear algebra. Our main result is a polynomial-time
algorithm for the special case of WSIM where both input
matrices are positive semidefinite, have bounded rank, and
where one of the matrices has a bounded clustering number.
The clustering number is a natural algorithmic parameter
arising from spectral clustering techniques. We complement
this result by showing NP-hardness for matrices of bounded rank
and for positive-semidefinite matrices.
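The Frobenius distance above can be evaluated by brute force on very small graphs (an illustrative sketch of the definition only, exponential in n; names and the example are mine):

```python
# Brute-force Frobenius distance dist(G,H) = min_P ||A_G^P - A_H||_F,
# minimizing over all vertex permutations P. For intuition only.
from itertools import permutations
from math import sqrt

def frobenius_distance(A_G, A_H):
    n = len(A_G)
    best = float("inf")
    for p in permutations(range(n)):
        # Entry (i, j) of A_G^P is A_G[p[i]][p[j]] (rows and columns
        # permuted according to p).
        sq = sum((A_G[p[i]][p[j]] - A_H[i][j]) ** 2
                 for i in range(n) for j in range(n))
        best = min(best, sqrt(sq))
    return best

# A triangle and a path on 3 vertices differ in one edge under the best
# permutation; since adjacency matrices are symmetric, the distance is
# sqrt(2).
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path     = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(frobenius_distance(triangle, path))  # → 1.414... (= sqrt(2))
```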
Friday, 06.10.2017
11:00

11:30
Seminarraum i7
Patrick Smandzich
Master's thesis presentation
Wednesday, 04.10.2017
14:00

15:00
Seminarraum i7
The paper develops a new technique to extract a characteristic subset from a random source that repeatedly samples from a set of elements. Here a characteristic subset is a set that, whenever it contains an element, contains all elements that have the same probability.
With this technique at hand, the paper looks at the special case of the tournament isomorphism problem, which stands in the way of a polynomial-time algorithm for the graph isomorphism problem. While there is a reduction from the automorphism (asymmetry) problem to the isomorphism problem, a reduction in the other direction is not known and remains a thorny open problem.
Applying the new technique, we develop a randomized polynomial-time Turing reduction from the tournament isomorphism problem to the tournament automorphism problem. This is the first such reduction for any kind of combinatorial object not known to have a polynomial-time solvable isomorphism problem.
Friday, 29.09.2017
12:15

12:45
Joshua Fürste
Conjunctive queries represent an important class of queries issued on relational databases. We study the problem of computing
conjunctive queries over large distributed databases. Using the structure of a query Q and the skew in the data, we examine
the amount of communication, over one or multiple communication rounds, that is required to compute Q.
Friday, 29.09.2017
11:30

12:00
Randomized hashing is a fundamental part of approximating frequency moments of data streams. To approximate the second frequency
moment of a data stream, a strongly 4-universal family of hash functions is required to give strong guarantees on the accuracy
of the returned estimate. When implementing these approximation algorithms, arguments against using a strongly 4-universal
family of hash functions are the need for a shorter runtime or only having access to few random bits. We experimentally evaluate
the effects of using different hashing schemes to approximate the second frequency moment of real data streams. We find
that universal hashing, strongly 2-universal hashing, and a strongly 3-universal tabulation hashing scheme break the algorithms. However,
two randomized hash functions without theoretical guarantees seem to work as well as strongly 4-universal hashing.
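The second-moment estimator in question is the classical AMS sketch; a minimal sketch of it follows, with the 4-universal family realized as a random degree-3 polynomial modulo a Mersenne prime (one standard construction; the code is my illustration, not the implementations evaluated in the talk):

```python
# AMS estimator for the second frequency moment F2 = sum_i f_i^2 of a
# stream, using sign hashes drawn from a (strongly) 4-universal family.
import random

P = (1 << 61) - 1  # Mersenne prime modulus

def make_sign_hash():
    """Random cubic polynomial mod P, mapped to a +1/-1 sign."""
    a, b, c, d = (random.randrange(P) for _ in range(4))
    def h(x):
        v = ((a * x + b) * x + c) * x + d
        return 1 if (v % P) & 1 else -1
    return h

def ams_estimate(stream, num_estimators=100):
    """Average of squared signed counters; its expectation is F2."""
    hashes = [make_sign_hash() for _ in range(num_estimators)]
    counters = [0] * num_estimators
    for x in stream:
        for i, h in enumerate(hashes):
            counters[i] += h(x)
    return sum(z * z for z in counters) / num_estimators

# For a single-item stream the estimate is exact regardless of the
# random signs: each counter is ±5, so each square is 25.
print(ams_estimate([7] * 5, 10))  # → 25.0
```

In practice one takes the median of several such averages to boost the success probability; the 4-universality is what bounds the variance of each average.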
Friday, 29.09.2017
10:45

11:15
Laurids Vollmann
In this thesis we will examine current research concerning the computation
of Pfaffian orientations on different classes of graphs. Pfaffian orientations
assign a direction to each edge of an undirected graph in a specific way and
allow us to efficiently calculate the number of perfect matchings. Not every
graph has a Pfaffian orientation, and for most graphs finding one is hard. We
will also briefly present some polynomial-time equivalent problems.
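The counting step a Pfaffian orientation enables can be sketched on a tiny example: for such an orientation, the number of perfect matchings equals |Pf(A)| = sqrt(det(A)), where A is the signed skew-symmetric adjacency matrix. The orientation below is one Pfaffian orientation of the 4-cycle; the helper names are mine.

```python
# Count perfect matchings from a Pfaffian orientation via sqrt(det(A)).
from math import isqrt

def det(M):
    """Integer determinant by cofactor expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def perfect_matchings(oriented_edges, n):
    """Number of perfect matchings, assuming the orientation is Pfaffian."""
    A = [[0] * n for _ in range(n)]
    for u, v in oriented_edges:       # edge oriented u -> v
        A[u][v], A[v][u] = 1, -1      # skew-symmetric sign pattern
    return isqrt(det(A))              # det(A) = Pf(A)^2

# The 4-cycle 0-1-2-3 with orientation 0→1, 1→2, 2→3, 0→3 is Pfaffian;
# the 4-cycle has exactly two perfect matchings.
print(perfect_matchings([(0, 1), (1, 2), (2, 3), (0, 3)], 4))  # → 2
```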
Friday, 29.09.2017
10:00

10:30
Mario Jörres
This colloquium gives an overview of several results concerning the processing of database queries. We restrict ourselves
to the class of conjunctive queries, since a number of well-known NP-complete problems can be expressed as such, and present
an alternative perspective on the complexity of the evaluation problem for such queries. We assume that it is not necessary
to obtain all answers to a query at once; instead, it suffices to output them with constant delay between two consecutive
answers, after a preprocessing phase taking linear time. The complexity of the enumeration problem can then be measured in
terms of the preprocessing time and the delay.
Thursday, 28.09.2017
11:00

12:00
Seminarraum i7
Bachelor colloquium
Tuesday, 26.09.2017
11:00

12:00
Tuesday, 26.09.2017
10:00

11:00
The talk will be given in English.
Wednesday, 13.09.2017
11:00

12:00
Seminarraum i7
Abhisekh Sankaran
We present a model-theoretic property of finite structures that can be
seen as a finitary analogue of the well-studied downward
Löwenheim-Skolem property from classical model theory. We call this
property the *equivalent bounded substructure property*, denoted EBSP.
Intuitively, EBSP states that a large finite structure contains a small
"logically similar" substructure, where logical similarity means
indistinguishability with respect to sentences of FO/MSO having a given
quantifier nesting depth. It turns out that this simply stated property
is enjoyed by a variety of classes of interest in computer science:
examples include regular languages of words, trees (unordered, ordered,
ranked or partially ranked) and nested words, and various classes of
graphs, such as cographs, graph classes of bounded tree-depth, those of
bounded shrub-depth, and m-partite cographs. Further, EBSP is
preserved in the classes generated from the above using various
well-studied operations, such as complementation, transpose, the
line-graph operation, disjoint union, join, and various products
including the Cartesian and tensor products.
All of the aforementioned classes admit natural tree representations for
their structures. We use this fact to show that the small and logically
similar substructure of a large structure in any of these classes can
be computed in time linear in the size of the tree representation of the
structure, giving linear-time fixed-parameter tractable (FPT)
algorithms for checking FO/MSO-definable properties of the large
structure. We conclude by presenting a strengthening of EBSP that
asserts "logical self-similarity at all scales" for a suitable notion
of scale. We call this the *logical fractal* property and show that most
of the classes mentioned above are indeed logical fractals.
Friday, 01.09.2017
10:00

11:00
Thursday, 31.08.2017
12:00

12:30
Seminarraum i7
Wednesday, 30.08.2017
11:00

12:00
We introduce graph motif parameters, a class of graph
parameters that depend only on the frequencies of constant-size induced
subgraphs. Classical works by Lovász show that many interesting
quantities have this form, including, for fixed graphs H, the number of
H-copies (induced or not) in an input graph G, and the number of
homomorphisms from H to G.
Using the framework of graph motif parameters, we obtain faster
algorithms for counting subgraph copies of fixed graphs H in host graphs
G: For graphs H on k edges, we show how to count subgraph copies of H in
time k^{O(k)}⋅n^{0.174k+o(k)} by a surprisingly simple algorithm. This
improves upon previously known running times, such as O(n^{0.91k+c}) time
for k-edge matchings or O(n^{0.46k+c}) time for k-cycles.
Furthermore, we prove a general complexity dichotomy for evaluating
graph motif parameters: Given a class C of such parameters, we consider
the problem of evaluating f ∈ C on input graphs G, parameterized by the
number of induced subgraphs that f depends upon. For every recursively
enumerable class C, we prove the above problem to be either FPT or
#W[1]-hard, with an explicit dichotomy criterion. This allows us to
recover known dichotomies for counting subgraphs, induced subgraphs, and
homomorphisms in a uniform and simplified way, together with improved
lower bounds.
Finally, we extend graph motif parameters to colored subgraphs and prove
a complexity trichotomy: For vertex-colored graphs H and G, where H is
from a fixed class H, we want to count color-preserving H-copies in G.
We show that this problem is either polynomial-time solvable or FPT or
#W[1]-hard, and that the FPT cases indeed need FPT time under reasonable
assumptions.
Joint work with Radu Curticapean and Dániel Marx.
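One concrete instance of the Lovász-style identities behind motif parameters: hom(K3, G) = trace(A^3) for the adjacency matrix A, and since every homomorphism from a triangle is injective, the number of triangle subgraphs is trace(A^3)/|Aut(K3)| = trace(A^3)/6. A minimal sketch (code and names are my illustration):

```python
# Count triangle subgraph copies via the homomorphism count hom(K3, G).
def triangle_count(A):
    n = len(A)
    # trace(A^3) counts closed walks of length 3, i.e. homs from K3.
    t = sum(A[i][j] * A[j][k] * A[k][i]
            for i in range(n) for j in range(n) for k in range(n))
    return t // 6  # divide out the 6 automorphisms of the triangle

# The complete graph K4 contains exactly 4 triangles.
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(triangle_count(K4))  # → 4
```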
Friday, 25.08.2017
15:00

16:00
Closing the gap between the lower and upper bounds for the computational complexity of the Graph Isomorphism problem is still
a big challenge for mathematicians and computer scientists. While resolving this problem in the general case seems out
of reach, substantial progress has been made for restricted graph classes such as planar graphs. In particular, planar graph
isomorphism has recently been shown to be in L (and thus is L-complete) by Datta, Limaye, Nimbhorkar, Thierauf and Wagner.
We generalize the decomposition
technique used in their result to work with minor-closed graph classes whose 3-connected members fulfill a fixability condition and admit a logspace canonization algorithm.
Thursday, 24.08.2017
10:30

10:30
Seminarraum i1
Jan Tönshoff
Many real-world applications, such as analysing social networks or internet traffic, require the analysis of large sets of
graph-structured data. Teaching neural networks to recognise patterns in graphs seems like an intuitive way of simplifying
these tasks. To achieve this, we need to represent graphs as vectors in a suitable way that enables neural networks to find
patterns in the underlying structure of the graphs.
In this thesis we examine the performance of a vector representation for graphs obtained from the Weisfeiler-Lehman subtree kernel as an input for neural networks. We test the performance on standard graph classification tasks with datasets based in bioinformatics. The results are shown to be competitive with those of state-of-the-art graph kernels for support vector machines.
To solve scalability problems, we show that the size of these vectors can be reduced significantly without lowering the learning performance.
Additionally, we show that the adjacency matrix is not a suitable vector representation of graphs when using neural networks. While the networks are able to learn some patterns with these inputs, the learning performance is significantly lower than for the vectors from the Weisfeiler-Lehman subtree kernel.
Before we make these observations we provide brief introductions to Weisfeiler-Lehman graph kernels and neural networks.
Monday, 14.08.2017
11:00

12:00
Fixing sets are a way to break symmetries and determine automorphisms of a graph. The fixing number of a graph G is the smallest
cardinality of a set of vertices S such that only the trivial automorphism of G fixes every vertex in S. In this talk I present
an algorithm for computing the fixing number of a planar graph by decomposing it into its triconnected components and using
bottom-up recursion on these components. For this to work I introduce a generalization of the fixing number, the so-called
vertex-arc fixing number: in addition to vertices, the fixing set may also contain edges and arcs of the graph. The provided
algorithm can be applied to any minor-closed graph class that allows isomorphism testing and computing the vertex-arc fixing
number of its triconnected graphs in polynomial time.
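As a toy illustration of the definition (not the decomposition-based algorithm of the talk), the fixing number of a very small graph can be computed by brute force over automorphisms; the function names and example are my own:

```python
# Brute-force fixing number: the smallest set S of vertices such that
# the identity is the only automorphism fixing every vertex of S.
from itertools import permutations, combinations

def automorphisms(adj):
    n = len(adj)
    return [p for p in permutations(range(n))
            if all(adj[i][j] == adj[p[i]][p[j]]
                   for i in range(n) for j in range(n))]

def fixing_number(adj):
    n = len(adj)
    autos = automorphisms(adj)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            # S is a fixing set iff only the identity fixes S pointwise.
            if sum(1 for p in autos if all(p[v] == v for v in S)) == 1:
                return k
    return n

# The path 0-1-2 has one non-trivial automorphism (swapping the
# endpoints); fixing either endpoint kills it, so the fixing number is 1.
P3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(fixing_number(P3))  # → 1
```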
Friday, 28.07.2017
10:15

11:45
Seminarraum i7
Thursday, 27.07.2017
11:00

12:00
Seminarraum i7
Gaëtan Douéneau
We investigate a certain notion of comparison between infinite words. In a general way, if M is a model of computation (e.g.
Turing machines) and C a class of objects (e.g. languages), the complexity of an infinite word w can be measured with respect
to the objects from C that are presentable with machines from M using w as an advice.
In our case, the model is finite automata and the objects are either recognized languages or presentable structures, known respectively as advice regular languages and advice automatic structures. This leads to several different classifications that are studied in detail; equivalent logical and computational characterizations are derived. Our main results explore the connections between classes of advice automatic structures, MSO-transductions, and two-way transducers.
Friday, 21.07.2017
10:15

11:45
Seminarraum i7
work by Mohsen Ghaffari
Thursday, 20.07.2017
14:00

15:00
Seminarraum i7
Graph kernels are a popular approach to graph comparison and are at the heart of many machine learning applications in bioinformatics, imaging, and social-network analysis. In this talk we shall first present a graph kernel that takes "local" and "global" graph properties into account. To this end, we develop a "local" version of the k-dimensional Weisfeiler-Lehman algorithm. In order to make our kernel scalable, we devise a randomized version of the kernel with provable approximation guarantees using conditional Rademacher averages. On bounded-degree graphs, it can even be computed in constant time. In the second part, we will give an overview of recent progress in the area of graph classification and graph regression via so-called "deep learning".
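The classical 1-dimensional Weisfeiler-Lehman color refinement underlying such kernels can be sketched as follows (a minimal illustration with my own function names; the talk's "local" k-dimensional variant is not shown):

```python
# One round of 1-dimensional Weisfeiler-Lehman (color refinement): each
# vertex's new color encodes its old color plus the multiset of its
# neighbors' colors. Iterating this yields the WL subtree features.
def wl_refine(adj, colors):
    """adj: adjacency lists; colors: current color per vertex."""
    signatures = []
    for v, nbrs in enumerate(adj):
        signatures.append((colors[v], tuple(sorted(colors[u] for u in nbrs))))
    # Compress signatures to small integers so rounds can be iterated.
    palette = {s: i for i, s in enumerate(sorted(set(signatures)))}
    return [palette[s] for s in signatures]

# On the path 0-1-2, one round separates endpoints from the middle vertex.
adj = [[1], [0, 2], [1]]
print(wl_refine(adj, [0, 0, 0]))  # → [0, 1, 0]
```

The WL subtree kernel then compares two graphs by counting, per round, how many vertices of each refined color they have in common.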
Wednesday, 19.07.2017
16:00

17:00
Stochastic dominance is a technique for evaluating the performance of online algorithms that provides an intuitive, yet powerful
stochastic order between the compared algorithms. The same holds for bijective analysis, which can be interpreted
as stochastic dominance assuming the uniform distribution over requests. These techniques have been applied to some online
problems, and have provided a clear separation between algorithms whose performance varies significantly in practice. However,
there are situations in which they are not readily applicable, due to the fact that they stipulate a stringent relation between
the compared algorithms.
In this presentation, we propose remedies for these shortcomings. Our contribution is twofold. First, we establish sufficient conditions that allow us to prove the bijective optimality of a certain class of algorithms for a wide range of problems. Second, to account for situations in which two algorithms are incomparable or there is no clear optimum, we introduce the bijective ratio as a natural extension of (exact) bijective analysis. This renders the concept of bijective analysis (and that of stochastic dominance) applicable to all online problems, and allows for the incorporation of other useful techniques such as amortized analysis. We demonstrate the applicability of the bijective ratio to one of the fundamental online problems, namely the continuous k-server problem on metrics such as the line, the circle, and the star.
Joint work with Marc Renault and Pascal Schweitzer.
Tuesday, 18.07.2017
15:00

15:30
Seminarraum i7
Steffen van Bergerem
Friday, 07.07.2017
11:15

12:15
Many intractable problems on graphs have efficient solvers when the graphs are trees or forests. Tree decompositions often allow
us to apply such solvers to general graphs by grouping nodes into bags laid out in a tree structure, thereby decomposing the
problem into the subproblems induced by the bags. This approach has applications in a plethora of domains, partly because
it allows one to optimize inference on probabilistic graphical models, as well as the evaluation of database queries. Nevertheless,
a graph can have exponentially many tree decompositions, and finding an ideal one is challenging, for two main reasons. First,
the measure of goodness often depends on subtleties of the specific application at hand. Second, theoretical hardness is met
already for the simplest measures, such as the maximal size of a bag (a.k.a. "width"). Therefore, we explore the approach of
producing a large space of high-quality tree decompositions for the application to choose from.
I will describe our application of tree decompositions in the context of "worst-case optimal" joins, a new breed of in-memory join algorithms that satisfy strong theoretical guarantees and were found to feature a significant speedup compared to traditional approaches. Specifically, I will explain how this development led us to the challenge of enumerating tree decompositions. Then, I will describe a novel enumeration algorithm for tree decompositions with a theoretical guarantee on the delay (the time between consecutive answers), and an experimental study thereof (on graphs from various relevant domains). Finally, I will describe recent results that provide guarantees on both the delay and the quality of the generated tree decompositions.
The talk will be based on papers that appeared in EDBT 2017 and PODS 2017, co-authored with Nofar Carmeli, Yoav Etsion, Oren Kalinsky and Batya Kenig.
Friday, 07.07.2017
10:15

11:15
Seminarraum i7
Tuesday, 04.07.2017
17:00

18:00
Property testing (for a property P) asks for a given input, whether it has property P, or is "far" from having that property. A "testing algorithm" is a probabilistic algorithm that answers this question with high probability correctly, by only looking at small parts of the input. Testing algorithms are thought of as "extremely efficient", making them relevant in the context of big data.
We extend the bounded-degree model of property testing from graphs to relational structures, and we show that every property definable in first-order logic is testable with a constant number of queries in constant time. On structures of bounded treewidth, monadic second-order logic can be tested with a constant number of queries in polylogarithmic time.
This is joint work with Frederik Harwath.
Tuesday, 04.07.2017
16:00

17:00
The dichotomy conjecture for the parameterized embedding problem states that
the problem of deciding whether a given graph G from some class K of
"pattern graphs" can be embedded into a given graph H (that is, is
isomorphic to a subgraph of H) is fixed-parameter tractable if K is a class
of graphs of bounded treewidth and W[1]-complete otherwise.
In this talk, I will explain the background and history of this conjecture,
and also our recent result that the embedding problem is W[1]-complete if K
is the class of all grids or the class of all walls.
This is joint work with Martin Grohe and Bingkai Lin.
Friday, 30.06.2017
10:15

11:45
Seminarraum i7
work by Wojciech Czerwiński and Sławomir Lasota
Friday, 23.06.2017
10:15

11:45
Seminarraum i7
work by Amir Abboud and Greg Bodwin
Friday, 16.06.2017
10:15

11:45
Seminarraum i7
We prove that the Weisfeiler-Leman (WL) dimension of the class of all finite planar graphs is at most 3. In particular, every finite planar graph is definable in fixed-point
logic with counting using at most 4 variables. The previously best known upper bounds for the dimension and number of variables were 14 and 15, respectively.
First we show that, for dimension 3 and higher, the WL algorithm correctly tests isomorphism of graphs in a minor-closed class whenever it determines the orbits
of the automorphism group of any vertex/arc-colored 3-connected graph belonging to this class.
Then we prove that, apart from several exceptional graphs (which have WL dimension at most 2), the individualization of two suitably chosen vertices of a colored 3-connected planar graph followed by the 1-dimensional WL algorithm
produces the discrete vertex partition. This implies that the 3-dimensional WL algorithm determines the orbits of a colored 3-connected planar graph.
As a byproduct of the proof, we get a classification of the 3-connected planar graphs with fixing number 3.
This is joint work with Ilia Ponomarenko and Pascal Schweitzer.
Friday, 02.06.2017
10:15

11:45
Seminarraum i7
Tuesday, 30.05.2017
10:30

11:30
Seminarraum i7
Bachelor thesis final talk
Friday, 19.05.2017
10:15

11:45
Seminarraum i7
Wednesday, 17.05.2017
15:30

16:30
Seminarraum i7
Query evaluation is one of the most fundamental tasks in databases, and a vast amount of literature is devoted to the complexity of this problem. This talk will focus on query evaluation in the “dynamic setting”, where the database may be updated by inserting or deleting tuples. In this setting, an evaluation algorithm receives a query Q and an initial database D and starts with a preprocessing phase that computes a suitable data structure to represent the result of evaluating Q on D. After every database update, the data structure is updated so that it represents the result of evaluating Q on the updated database. The data structure shall be designed in such a way that it quickly provides the query result, preferably in constant time (i.e., independent of the database size). We focus on the following flavours of query evaluation.
(1) Testing: Decide whether a given tuple t is contained in Q(D).
(2) Counting: Compute the number of tuples that belong to Q(D).
(3) Enumeration: Enumerate Q(D) with a bounded delay between the output tuples.
Here, as usual, Q(D) denotes the k-ary relation obtained by evaluating a k-ary query Q on a relational database D. For Boolean queries, all three tasks boil down to
(4) Answering: Decide if Q(D) is nonempty.
Compared to the dynamic descriptive complexity framework introduced by Patnaik and Immerman (1997), which focuses on the expressive power of first-order logic on dynamic databases and has led to a rich body of literature, we are interested in the computational complexity of query evaluation. We say that a query evaluation algorithm is efficient if the update time is either constant or at most polylogarithmic in the size of the database. In this talk I want to give an overview of recent results in this area.
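The flavour of such constant-update-time maintenance can be illustrated on an assumed example query (my own toy, not from the talk): for the join Q = R(x) ⋈ S(x), keeping per-key multiplicities after linear preprocessing lets each insert or delete update the answer count in constant time.

```python
# Dynamic counting for the join Q = R(x) ⋈ S(x): maintain per-key
# multiplicities so that |Q(D)| is updated in O(1) per database update.
from collections import Counter

class DynamicJoinCount:
    def __init__(self, R=(), S=()):
        # Linear-time preprocessing over the initial database.
        self.r, self.s = Counter(R), Counter(S)
        self.count = sum(self.r[x] * self.s[x] for x in self.r)

    def insert_R(self, x):
        self.count += self.s[x]   # new pairs formed with existing S-tuples
        self.r[x] += 1

    def insert_S(self, x):
        self.count += self.r[x]
        self.s[x] += 1

    def delete_R(self, x):
        self.r[x] -= 1
        self.count -= self.s[x]   # pairs lost with existing S-tuples

q = DynamicJoinCount(R=[1, 2], S=[2, 2])
print(q.count)     # → 2   (key 2 contributes 1 * 2 pairs)
q.insert_R(2)
print(q.count)     # → 4
```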
Friday, 12.05.2017
10:15

11:45
Seminarraum i7
Friday, 05.05.2017
10:15

11:45
Seminarraum i7
Friday, 28.04.2017
10:15

11:45
Seminarraum i7
Unfolding proofs are a heuristic used for verifying properties of heap-manipulating programs with recursively defined data structures (like lists or trees). The heuristic reduces the property to be verified to a finite set of quantifier-free formulas, which can be tested for satisfiability by an SMT solver. These formulas are essentially obtained by instantiating (unfolding) the recursive definitions with concrete terms referring to heap elements.
In this joint ongoing work with P. Madhusudan and Lucas Pena (University of Illinois Urbana-Champaign), we aim at understanding which properties can be proven using this heuristic.
Tuesday, 25.04.2017
10:30

11:30
Seminarraum i7
Bachelor thesis final talk
Friday, 21.04.2017
11:45

12:45
Seminarraum i7
Bachelor thesis final talk
Wednesday, 29.03.2017
10:00

11:00
Seminarraum i7
Philipp Niemietz
Friday, 24.03.2017
10:15

11:45
Katrin Dannert
The problem of deciding parity games is known to be in NP and in coNP, but it is unknown whether it is solvable in polynomial time. Recently it has been shown that the problem can be solved both in quasipolynomial time and in FPT with the number of colours as the parameter. To obtain these runtimes, space-efficient winning statistics for a play are maintained by building up and combining "favourable" sequences whose lengths are powers of two. This yields an alternating polylogarithmic space algorithm, which can be translated both into a quasipolynomial time algorithm and an FPT algorithm.
Based on the paper "Deciding Parity Games in Quasipolynomial Time" by Calude, Jain, Khoussainov, Li and Stephan.
Monday, 20.03.2017
12:00

13:00
Seminarraum i7
Yordan Manolov
Monday, 20.03.2017
11:00

12:00
Seminarraum i7
Tuesday, 14.03.2017
11:00

12:00
Seminarraum i7
Richard Wilke
Classical logics are evaluated using so-called Tarski semantics, where an assignment gives the variables occurring in a formula their meaning. This concept reaches its limits as soon as one wants to express dependencies between variables. Various logics have been developed to make such statements possible. The most recent concept is team semantics: there, one considers for a formula a set of assignments that interact with one another. For example, one can express that the value of a variable y depends solely on the value of a variable x.
A team can be viewed as a database, where each assignment represents an entry. In real-world applications it can be of great importance how often a certain entry occurs. We therefore consider logics with multi-team semantics, which address exactly this challenge.
Friday, 03.03.2017
10:15

11:15
We investigate the logical foundations of hyperproperties. Hyperproperties generalize trace properties, which are sets of traces, to sets of sets of traces. The most prominent application of hyperproperties is information flow security: information flow policies characterize the secrecy and integrity of a system by comparing two or more execution traces, for example by comparing the observations made by an external observer on execution traces that result from different values of a secret variable. In this paper, we establish the first connection between temporal logics for hyperproperties and first-order logic. Kamp's seminal theorem (in the formulation due to Gabbay et al.) states that linear-time temporal logic (LTL) is expressively equivalent to first-order logic over the natural numbers with order. We introduce first-order logic over sets of traces and prove that HyperLTL, the extension of LTL to hyperproperties, is strictly subsumed by this logic. We furthermore exhibit a fragment that is expressively equivalent to HyperLTL, thereby establishing Kamp's theorem for hyperproperties.
Based on joint work with Bernd Finkbeiner (Saarland University).
Friday, 17.02.2017
10:15

11:15
Using the LP formulation for GI, we describe the class of compact graphs, proposed by Tinhofer. Isomorphism testing for such graphs can be done efficiently, using the linear program for GI. The recognition problem for such graphs remains open. Towards a partial understanding of this class, we describe some connections with Color Refinement and Individualization techniques.
Friday, 03.02.2017
10:15

11:15
Seminarraum i7
The individualization and refinement paradigm provides a strong toolbox for testing isomorphism of two graphs and indeed, it provides the only methods for isomorphism testing which are also viable for practical use. From the theoretical point of view, no meaningful lower bounds on the worst-case complexity of these tools are known. In fact, it is an open question whether these simple methods might provide upper bounds similar to those of existing algorithms combining complex group-theoretic and combinatorial tools.
In this work we give a negative answer to this question and construct a family of graphs that provides an exponential lower bound for methods based on the individualization and refinement paradigm. Unlike previous constructions such as Miyazaki graphs, our construction is immune to changing the cell selector or invariants used within the algorithm. Furthermore, our graphs also provide exponential lower bounds in the case where k-dimensional Weisfeiler-Leman is used for refinement instead of color refinement (i.e. 1-dimensional Weisfeiler-Leman), and the arguments even work when the automorphism group is initially known to the algorithm. In particular, under some moderate assumptions, our lower bounds are independent of which heuristics are used for cell selection, node invariants and automorphism detection.
Friday, 27.01.2017
10:15

11:15
Seminarraum i7
An automaton model for words over the alphabet of the natural numbers is presented: the so-called N-memory automaton.
We explain its close connection to MSO logic and discuss its expressive power and closure properties.
Additionally, among other decidability results, we show the solvability of the non-emptiness problem.
Friday, 20.01.2017
10:15

11:15
Seminarraum i7
Lars Beckers
Friday, 13.01.2017
14:15

15:15
Propositional proof systems such as Resolution or the Polynomial Calculus have been studied as approaches for solving computationally difficult problems, e.g. Graph Isomorphism. As this problem requires proofs of exponential size, this motivates the question of which problems have only polynomial proof complexity. We address this by analysing the expressive power of logics incorporating suitably restricted versions of Resolution and monomial-PC (a weaker variant of the Polynomial Calculus) which are guaranteed to produce polynomial-size proofs. It turns out that the power of these restricted proof systems can be described in terms of other well-known logics, namely we obtain equivalences with least fixed-point logic (LFP), existential least fixed-point logic (EFP) and fixed-point logic with counting (FPC).
Friday, 13.01.2017
10:15

11:15
Seminarraum i7
Andreas Klinger
Friday, 16.12.2016
10:15

11:15
Seminarraum i7
Erik van Loo
Friday, 02.12.2016
10:15

11:15
Thursday, 17.11.2016
10:00

11:00
Marc Meskes
Friday, 11.11.2016
10:15

11:15
Seminarraum i7
Friday, 04.11.2016
10:15

11:15
Seminarraum i7
Friday, 28.10.2016
11:00

12:00
Kaspar Scharf
The graph isomorphism problem is the problem of deciding whether two given graphs are isomorphic, that is, whether they have the same structure. For each natural number k there exists an algorithm, the k-dimensional Weisfeiler-Leman algorithm, that decides the graph isomorphism problem on some particular class of graphs. The lower the dimension k, the lower the worst-case runtime of the algorithm. Closely related to this algorithm is the bijective k-pebble game played on pairs of graphs, where k is the number of pebble pairs used. The player Spoiler has a winning strategy in the game with k pebble pairs on two given graphs exactly if the Weisfeiler-Leman algorithm of dimension k-1 detects non-isomorphism of these two graphs.
We prove that Spoiler wins the bijective 3-pebble game on all pairs of non-isomorphic outerplanar graphs, which implies that the 2-dimensional Weisfeiler-Leman algorithm decides the graph isomorphism problem on outerplanar graphs. For that we show that if Spoiler wins the game on pairs of non-isomorphic 2-connected components of a pair of given non-isomorphic graphs, then he also wins on these graphs.
Monday, 24.10.2016
14:30

15:30
Sebastian Schaub
The problem of encoding a set in a data structure is a common problem in computer science, with well-known efficient runtime solutions. In a streaming situation with low memory and unknown set size, dynamic filters provide approximate representations. They produce small fractions of false positives in exchange for good space efficiency. The Bloom filter is the de facto standard for filtering, but newer algorithms are known as well. This bachelor thesis proposes a new filter, the linear probing filter. It is based on an intuitive approach which resembles a dictionary and uses constant-length signatures as representatives for keys. 5-independent hash functions have to be used to ensure that the running time is independent of the set size, although in practice this is only relevant for very large sets. The experimental results of this thesis validate the theoretical results of the algorithm and show the strengths and weaknesses of the linear probing filter. Its strengths are constant and fast lookup and insertion times, independent of the false positive rate and the set size. Its main weakness is poor space efficiency in comparison to the other filters. The linear probing filter could thus be a potential competitor for large set sizes.
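For contrast with the linear probing filter, the Bloom filter mentioned above can be sketched in a few lines. This illustrative version (the parameters m and k are arbitrary choices, not taken from the thesis) admits false positives but never false negatives:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k hash functions map a key into m bits.
    A lookup may wrongly say "present" (false positive) but never "absent"
    for a key that was added."""

    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _indices(self, key):
        # Derive k independent-looking indices by salting one hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for i in self._indices(key):
            self.bits[i] = 1

    def __contains__(self, key):
        return all(self.bits[i] for i in self._indices(key))
```

The space/error trade-off the thesis measures shows up directly here: shrinking m raises the false positive rate, while lookups and insertions always touch exactly k bits regardless of how many keys were added.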
Wednesday, 19.10.2016
11:00

12:00
Christopher Spinrath
Friday, 30.09.2016
13:00

13:45
Marcel Faßbender
Tuesday, 27.09.2016
16:00

17:00
Seminarraum i7
Daniel Wiebking
Monday, 26.09.2016
12:15

13:15
Seminar room i1
Oliver Major
In recent decades, the fields of dynamic complexity and its logical counterpart, dynamic definability, have been consolidated as interesting fields of study with connections to abstract database models and their implementation. As a standard model of this field, DynFO was shown to have desirable properties such as low update complexity and the ability to express transitive closures.
On the other hand, Choiceless Polynomial Time, a contender for a logic for PTIME, was recently shown to be equivalent to the very young polynomial-time interpretation logic (PIL). Although PIL is a logic with semantics in the classical sense, it uses an iteration system which resembles the one DynFO uses.
In this work, we attempt to develop a dynamic framework similar to DynFO which enables us to transfer our knowledge about the PIL iteration mechanism to the new framework. One major limitation of DynFO is that it does not allow changes of the underlying universe. We therefore introduce the constant additions DynFO framework (CADF), which allows new elements to be added to a structure in each update step. We show that alternative frameworks with up to a polynomial number of added elements turn out to have the same expressive power as CADF.
In a final part, we explore the expressive power of deterministic variants of CADF and compare them to DynFO, where determinism can be imposed as a true restriction. Surprisingly, certain deterministic restrictions of CADF turn out to be no stronger than DynFO.
Friday, 23.09.2016
13:00

14:00
Room 9222 (E3)
Model-theoretic interpretations are an important tool of algorithmic model theory. Their applications range from reductions between logical theories to the construction of efficient algorithms for hard problems on restricted classes of structures, like graphs of bounded treewidth. We investigate this tool in three different areas of algorithmic model theory: automata-based decision procedures for logical theories, algorithmic metatheorems, and descriptive complexity.
One of the main focus points of this dissertation is automata-based presentations of infinite objects, which are closely related to monadic second-order interpretations over set variables. We consider advice automatic presentations for several automata models and develop algebraic and combinatorial tools, which enable us to prove that certain structures cannot have an omega-automatic presentation. The main result in this direction is that the field of reals is not omega-automatic with any advice. This answers a weakened version of a question of Rabin, who asked whether the field of reals is interpretable in the infinite binary tree. This weak version, too, has been considered an open problem since the introduction of omega-automatic presentations. Further, we consider uniformly automatic classes, which are classes generated by a fixed presentation and a set of advices. Prototypic examples are graphs of bounded treewidth, the torsion-free abelian groups of rank 1, and the class of all countable linear orders. Uniformly automatic classes are also found at the core of several algorithmic metatheorems. We investigate the efficiency of this approach by analysing the runtime of the generic automata-based model checking algorithm in terms of the complexity of the presentation. We show that the runtime on a presentation of the direct product closure is only one exponential higher than the runtime on the primal class.
We apply these findings to show that first-order model checking is fixed-parameter tractable on the class of all finite boolean algebras and the class of all finite abelian groups. In both cases the parameter dependence of the runtime is elementary. The runtime we achieve on these classes is either provably optimal or outperforms other known approaches. Further, we show that the runtime of the generic automata-based algorithm for MSO model checking on graphs of tree-depth at most d has a (d+1)-fold exponential parameter dependence, which matches the runtime of the best known algorithm for model checking on these classes.
In the last part of this dissertation we turn our attention to logics with a built-in interpretation mechanism. Polynomial Time Interpretation Logic (PIL) is an alternative characterisation of Choiceless Polynomial Time (CPT). The logic CPT is currently considered the most promising candidate for a logic capturing PTIME. We contribute to the exploration of the expressive power of CPT by showing that there is a CPT-definable canonisation procedure on classes of structures with bounded abelian colours. A structure has bounded abelian colours if it is of bounded colour class size and the automorphism group on every colour class is abelian. Examples of such classes emerge from the classical examples that separate Fixed Point Logic with Counting from PTIME: the CFI graphs of Cai, Fürer, and Immerman, as well as the Multipedes of Blass, Gurevich, and Shelah, have bounded abelian colours. Consequently, the isomorphism problems for these classes are solvable in CPT. For Multipedes this was an open question. In fact, Blass, Gurevich, and Shelah conjectured that the isomorphism problem for Multipedes might not be solvable by a CPT procedure.
Friday, 23.09.2016
11:30

12:15
Erik Van Loo
Friday, 23.09.2016
10:45

11:30
Friday, 23.09.2016
10:00

10:45
Thursday, 22.09.2016
10:45

11:45
Many learning problems can be formalized by parameterized formulas of first-order logic or monadic second-order logic. We evaluate the algorithmic complexity of learning such parameters in Valiant's probably approximately correct (PAC) learning model. The underlying structures in this thesis are restricted finite graphs such as trees, graphs of bounded degree and graphs from a nowhere dense graph class.
For the evaluation of the complexity of the learning algorithms we consider two variants of the learning problem. In the first variant, the structure over which the formulas are evaluated is part of the input. The second variant contains only the size of the universe of the structure as part of the input. This way the size of the input in the first variant is dominated by the structure, whereas in the second variant the size of the input is logarithmic in the structure. The learning problems in the first variant are learning monadic second-order formulas over trees and graphs of bounded clique-width and first-order formulas over graphs from a nowhere dense graph class. In the second setting, the learning problems for quantifier-free formulas over words with a linear order and first-order formulas over graphs of bounded degree are considered. Those problems are evaluated using parameterized complexity and an XP algorithm is presented for each of the cases. When restricting the input formulas such that an example consists of a single node, and thus subsets of the structure are learned, all of the above problems are fixed-parameter tractable (FPT) with a linear dependence on the number of examples.
Wednesday, 21.09.2016
11:00

12:00
Oliver Scheel
Nowadays, the focus in the automobile industry has shifted. Whereas in the past cars were a product of classical mechanical engineering, today hardly any function works without the help of electronics. Cars are controlled by complicated software systems, and many innovations are made in this field. Naturally, this increased amount of software comes with an increased complexity, challenging car manufacturers to become experts in, amongst others, software engineering. New methods and tools are needed to handle this complexity and reduce the number of errors in the software to a minimum, as errors can not only be dangerous to traffic participants, but are also expensive to fix. One such practice is continuous integration, in which continuous testing allows an always executable build. To reduce costs and allow more testing, in the automobile industry much of the testing is done on test benches, which can simulate whole cars.
In this work a scheduling algorithm will be developed for scheduling tests on such test benches. The scheduling of jobs to machines alone is already a very complicated task; here, additional constraints further complicate it. These consist of certain characteristics of the tests, like deadlines and specific requirements, which cause setup times and costs, but also limitations of the scenario in general, like the available times of the test benches. This results in a strongly NP-complete problem. Thus no optimal solutions can be expected, and the main goal of this work is to experimentally test and compare different approximate solution approaches. Methods known from the literature are adapted and tested for the specific problem, as well as novel approaches developed, since this problem in its full form has not been examined yet. The examined algorithms include greedy heuristics, genetic algorithms, simulated annealing, machine learning approaches and mathematical programming.
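As an illustration of the greedy heuristics mentioned above, the following sketch (a hypothetical toy instance, not the thesis' actual algorithm) assigns tests in earliest-deadline-first order to the bench that would finish them soonest, charging a setup time whenever the required rig type changes:

```python
SETUP = 2  # hypothetical setup time when a bench switches rig type

def greedy_schedule(jobs, n_benches):
    """Earliest-deadline-first list scheduling.
    jobs: list of (duration, deadline, rig_type) tuples.
    Returns the bench states and the number of deadline violations."""
    benches = [{"free_at": 0, "rig": None, "plan": []} for _ in range(n_benches)]
    late = 0
    for dur, deadline, rig in sorted(jobs, key=lambda j: j[1]):
        def finish(b):
            # Completion time on bench b, including setup if the rig changes.
            setup = SETUP if b["rig"] not in (None, rig) else 0
            return b["free_at"] + setup + dur
        b = min(benches, key=finish)      # pick the bench finishing soonest
        end = finish(b)
        b["free_at"], b["rig"] = end, rig
        b["plan"].append((rig, end))
        late += end > deadline
    return benches, late
```

Real instances add further constraints (bench availability windows, per-test requirements), which is where the genetic, annealing and mathematical programming approaches compared in the thesis come in.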
Friday, 16.09.2016
11:00

12:00
Vinzent Skawran
Monday, 29.08.2016
11:00

12:00
Sonja Zimmermann
Friday, 29.07.2016
11:00

12:00
Jochen Schmücking
Tuesday, 26.07.2016
10:00

11:00
Andreas Tollkötter
Wednesday, 20.07.2016
11:00

12:00
Synthesis, the automatic construction of objects related to hardware and software, is one of the great challenges of computer science. Although synthesis problems are impossible to solve in general, learning-based approaches, in which the synthesis of an object is based on learning from examples, have recently been used to build elegant and extremely effective solutions for a large number of difficult problems. Examples include automatically fixing bugs, translating programs from one language into another, program verification, as well as the generation of high-level code from given specifications.
This talk gives an introduction to learning-based synthesis. First, we develop a generic view on learning-based synthesis, called abstract learning frameworks for synthesis, which introduces a common terminology to compare and contrast learning-based synthesis techniques found in the literature. Then, we present a learning-based program verifier, which can prove the correctness of numeric programs (mainly) automatically, and show how this technique can be modeled as an abstract learning framework for synthesis. During the talk, we present various examples that highlight the power of learning-based synthesis.
Friday, 15.07.2016
12:15

13:15
work by Samir Datta, Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, Thomas Zeume
Friday, 08.07.2016
12:15

13:15
work by Denis Kuperberg, Michał Skrzypczak
Friday, 01.07.2016
12:15

13:15
work by Mikołaj Bojanczyk
Friday, 24.06.2016
12:15

13:15
work by Mikołaj Bojanczyk, Michał Pilipczuk
Wednesday, 22.06.2016
13:30

14:30
Friday, 17.06.2016
12:15

13:15
We show that on graphs with n vertices the 2-dimensional Weisfeiler-Leman algorithm requires at most O(n^2/log(n)) iterations to reach stabilization. This in particular shows that the previously best, trivial upper bound of O(n^2) is asymptotically not tight. In the logic setting this translates to the statement that if two graphs of size n can be distinguished by a formula in first-order logic with counting with 3 variables (i.e., in C3), then they can also be distinguished by a C3-formula that has quantifier depth at most O(n^2/log(n)).
To prove the result we define a game between two players that enables us to decouple the causal dependencies between the processes happening simultaneously over several iterations of the algorithm. This allows us to treat large color classes and small color classes separately. As part of our proof we show that for graphs with bounded color class size, the number of iterations until stabilization is at most linear in the number of vertices. This also yields a corresponding statement in first-order logic with counting.
Similar results can be obtained for the respective logic without counting quantifiers, i.e., for the logic L3.
This is joint work with Pascal Schweitzer.
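For reference, the 1-dimensional analogue of the algorithm whose iteration count is studied here, color refinement, can be sketched compactly. This illustrative version repeatedly replaces each vertex colour by the pair (old colour, multiset of neighbour colours) and returns the stable colouring together with the number of refinement rounds:

```python
def color_refinement(adj):
    """Color refinement (1-dimensional Weisfeiler-Leman) sketch.
    adj: dict mapping each vertex to a list of its neighbours.
    Returns (stable colouring, number of refinement rounds)."""
    colors = {v: 0 for v in adj}
    for rounds in range(len(adj)):  # stabilizes after at most n rounds
        # New signature: own colour plus sorted multiset of neighbour colours.
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        # Canonically rename signatures back to small integers.
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == colors:           # partition stable: done
            return colors, rounds
        colors = new
    return colors, len(adj)
```

On a path with three vertices, one round suffices to separate the endpoints from the middle vertex; the talk's result bounds how many such rounds the 2-dimensional variant (which colours vertex pairs instead) can need in the worst case.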
Wednesday, 15.06.2016
11:00

12:00
Graphs have become ubiquitous for representing structured data in many application domains like computer vision, social network analysis or chem- and bioinformatics. Techniques for graph comparison are a prerequisite for the application of a variety of data mining algorithms to these domains. This talk is dedicated to two different strategies for comparing graphs: maximum common subgraph problems and graph kernels.
Maximum common subgraph problems are based on classical graph-theoretical concepts for graph comparison and are NP-hard in the general case. We consider variants of the maximum common subgraph problem in restricted graph classes, which are highly relevant for applications in cheminformatics. We develop a polynomial-time algorithm that computes a maximum common subgraph under so-called block-and-bridge-preserving isomorphism in series-parallel graphs. To solve the problem for this class of graphs, the separators represented by standard graph decompositions are insufficient. We introduce the concept of potential separators and use them algorithmically to overcome this issue. Finally, we discuss polynomial-time computable graph distance metrics in restricted graph classes.
In the second part of the talk we consider graph kernels, which have their origin in specific data mining algorithms, so-called kernel methods. Graph kernels are positive semidefinite functions which measure the similarity between pairs of graphs. A leading design paradigm for graph kernels is the convolution kernel, which decomposes graphs into their parts and sums over all pairs of parts. Assignment kernels, in contrast, are obtained from an optimal bijection between parts. In general, however, optimal assignments (maximum weight bipartite matchings) yield indefinite functions, which complicates their use in kernel methods. We characterize a class of base kernels, used to compare parts, that guarantees positive semidefinite optimal assignment kernels. These base kernels give rise to hierarchies from which the optimal assignment kernels are computed in linear time by histogram intersection. We apply these results by developing the Weisfeiler-Lehman optimal assignment kernel for graphs. It provides high classification accuracy on widely used benchmark data sets, improving over the original Weisfeiler-Lehman kernel.
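The histogram intersection mentioned above is simple to state: given two feature multisets (e.g. vertex colours produced by Weisfeiler-Lehman refinement), sum the minimum of the two counts for each feature. A minimal sketch, not tied to the talk's specific kernel construction:

```python
from collections import Counter

def histogram_intersection(g_feats, h_feats):
    """Histogram intersection of two feature multisets: for each feature
    occurring in both, add the smaller of its two multiplicities."""
    cg, ch = Counter(g_feats), Counter(h_feats)
    return sum(min(cg[f], ch[f]) for f in cg.keys() & ch.keys())
```

For example, comparing the colour multisets {a, a, b, b, c} and {a, b, c, c, c} yields 1 + 1 + 1 = 3; computed over the colour hierarchies from refinement, this is the linear-time evaluation step of the optimal assignment kernel described in the talk.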
Friday, 10.06.2016
12:15

13:15
work by Amir Abboud, Arturs Backurs, Thomas Dueholm Hansen
Friday, 03.06.2016
12:15

13:15
Dealing with automata on words or trees over infinite alphabets introduces many difficulties. One of the main problems here is defining the automata models in such a way that they have decent expressive power while still keeping the emptiness problem decidable. We will introduce a new automata model, the N-memory automaton, first working only on words and later on trees as well. We will discuss the closure properties of the classes of languages recognized by this model and show that the emptiness problem for N-memory automata is decidable.
Friday, 13.05.2016
12:15

13:15
work by Yann Disser, Jan Hackfeld, Max Klimm
Friday, 29.04.2016
12:00

13:00
Tuesday, 26.04.2016
11:00

11:45
Friday, 22.04.2016
10:00

11:00
Room 9222, E3
We study quantitative extensions of games, logics and automata with the idea of verification for systems with resources in mind. The resources are discrete and can be consumed step by step and replenished all at once. We formally model this by finitely many nonnegative integer counters that can be incremented, reset to zero or left unchanged. This particular formalism is inspired by the model of B-automata for regular cost functions, which are a quantitative extension of regular languages with extensive closure properties and rich algorithmic results by Colcombet.
We consider resource games on resource pushdown systems as a formal model for interactive, recursive systems with resource consumption. A resource game is a two-player graph game with a qualitative ω-regular winning condition and an additional quantitative objective of the first player to minimize the resources used throughout a play. The game graph is the configuration graph of a resource pushdown system, whose transitions are annotated with actions for consuming or replenishing resources. We mainly study the bounded winning problem for these games. This is the question whether there is a uniform upper bound k such that the first player can win the game with an allowed resource usage of k from all positions in a given set A of initial configurations. We provide algorithms to solve this problem if the set A is regular and show that it also has applications in automata theory.
The second part of this work is dedicated to studying the various formal logics that have been considered around boundedness and unboundedness questions recently. There are the quantitative logics costMSO, which extends standard monadic second-order logic with an atomic formula to count the elements in a set variable, and cost first-order logic (CFO), which extends standard FO logic with a universal quantifier that allows for a certain number of exceptions. Furthermore, there is first-order+resource relations (FO+RR), a quantitative variant of first-order logic that is evaluated on structures with quantitative relations. Lastly, we also consider the qualitative logic MSO+U, which extends MSO with a quantifier to test whether there are arbitrarily large finite sets that satisfy some formula.
We show that CFO and FO+RR have the same expressive power on the infinite binary tree with equal level predicate and both define the class of regular cost functions there. Moreover, we establish a general connection between costMSO over arbitrary relational structures and the two first-order logics on the structure with the respective powerset as universe. Then, we consider the logic quantitative counting MSO (qcMSO), which combines the aspects of costMSO and MSO+U, and show that it can be translated into an extension of FO+RR with an infinity test (called FO+RR^∞) on the powerset structure. This allows us to algorithmically reason about qcMSO with a quantitative extension of automatic structures called resource automatic structures.
We study FO+RR^∞ on resource automatic structures and extensions to the area of infinite words and finite trees. The logic FO+RR^∞ can be effectively evaluated on resource automatic structures on finite words and trees. In the case of infinite words, we can only evaluate the weaker logic FO+RR. Altogether, we obtain algorithmic methods to evaluate weak qcMSO formulas, in which set quantification only ranges over finite sets, on the natural numbers with order and the infinite binary tree.
Wednesday, 20.04.2016
14:00

14:45
Torsten Rogoll
Friday, 08.04.2016
10:00

11:00
Choiceless Polynomial Time (CPT) is one of the most promising candidates for
a logic for PTIME. As shown by Dawar, Richerby and Rossman, CPT can define
the Cai-Fürer-Immerman query in the case that the underlying graphs are linearly
ordered. However, the CFI query over general graphs remains one of the few known
examples for which CPT-definability is open.
Our first contribution generalises the result by Dawar, Richerby and Rossman to
the variant of the CFI query where the underlying graphs have colour classes of
logarithmic size. (Note that the case of ordered graphs corresponds to colour
class size one.) Secondly, for the CFI query over complete graphs, we establish
CPT-definability using only sets of constant rank, which is known to be impossible
in the general case. Furthermore, even though sets of bounded rank are sufficient,
we prove that one has to use set-like objects. To do this, we first give a
characterisation of set-like objects via the automorphism group. Considered
from a more general perspective, our result shows that isomorphism-invariant
mechanisms which use sets as data structures are strictly more powerful than
those which only use sequences, a fact which was already known in the absence of
counting.
Joint work with Wied Pakusa and Erkal Selman
Thursday, 31.03.2016
10:00

11:00
Thursday, 17.03.2016
11:00

12:00
Marc Selders
Advice automata are finite automata that have an additional input tape on which an infinite word, the advice, is written. The automaton reads input and advice synchronously and changes its state according to both; thus, for different advice words an advice automaton may recognize different languages. An advice automatic presentation of a structure is a labelling of the universe with words from a language that is recognizable by a finite automaton with advice x, together with an interpretation of the relations by synchronized advice automata that recognize the induced languages of the labelling using the same advice x. For different advice words, a multitude of structures over a common signature may be presented by the same automata. We study classes of advice automatic structures that allow uniform representation with advice automata.
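The synchronous reading of input and advice described above can be simulated directly by a transition function over pairs of letters. A minimal sketch, with all names invented for illustration:

```python
# Minimal simulation of a finite automaton with advice (a sketch based
# on the definition above; names are illustrative). The automaton reads
# the input word and a prefix of the advice word synchronously, so its
# transition function takes a pair (input letter, advice letter).

def accepts(delta, start, accepting, word, advice):
    """Run the automaton on `word`, consuming `advice` in lockstep."""
    state = start
    for a, b in zip(word, advice):  # advice must be at least as long as word
        state = delta[(state, a, b)]
    return state in accepting

# Example: accept exactly when input and advice agree at every position,
# so the language recognized genuinely depends on the advice.
states = {'ok', 'bad'}
delta = {(q, a, b): ('ok' if q == 'ok' and a == b else 'bad')
         for q in states for a in '01' for b in '01'}

print(accepts(delta, 'ok', {'ok'}, '0110', '0110'))  # -> True
print(accepts(delta, 'ok', {'ok'}, '0110', '0111'))  # -> False
```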
Friday, 19.02.2016
10:00

11:00
Order-invariant formulas access an ordering on a structure's universe, but the model relation is independent of the ordering used. Order-invariant formulas capture unordered problems of complexity classes and they model the independence of the answer to a database query from low-level aspects of databases. We study the expressive power of order-invariant monadic second-order (MSO) and first-order (FO) logic on restricted classes of structures that admit certain forms of tree decompositions (not necessarily of bounded width).
While order-invariant MSO is more expressive than MSO and even CMSO (MSO with modulo-counting predicates), we show that order-invariant MSO and CMSO are equally expressive on graphs of bounded tree width and on planar graphs. This extends an earlier result for trees due to Courcelle. Moreover, we show that all properties definable in order-invariant FO are also definable in MSO on these classes. These results are applications of a theorem that shows how to lift definability results for order-invariant logics from the bags of a graph's tree decomposition to the graph itself.
Joint work with Michael Elberfeld and Martin Grohe.
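On small finite structures, order-invariance as described above can be checked by brute force over all linear orderings of the universe. A hypothetical sketch (the property functions are invented for illustration):

```python
from itertools import permutations

# Brute-force check of order-invariance on a tiny structure. A formula
# that accesses an ordering is order-invariant if its truth value is the
# same for every linear ordering of the universe.

def order_invariant(prop, universe, edges):
    """Check that prop(universe, edges, ordering) is constant over orderings."""
    values = {prop(universe, edges, order) for order in permutations(universe)}
    return len(values) == 1

# Order-invariant: "the number of edges is even" ignores the ordering.
even_edges = lambda U, E, order: len(E) % 2 == 0
# Not order-invariant: "the first vertex in the ordering has an edge".
first_has_edge = lambda U, E, order: any(order[0] in e for e in E)

U = (1, 2, 3)
E = {frozenset({1, 2})}
print(order_invariant(even_edges, U, E))      # -> True
print(order_invariant(first_has_edge, U, E))  # -> False
```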
Friday, 12.02.2016
10:15

11:15
Seminar room 9U09
Stefan Hegselmann
We explore several counting constructs for logics with team semantics.
Counting is an important task in numerous applications, but it has a somewhat delicate relationship to logic. Team semantics, on the other hand, is the mathematical basis of modern logics of dependence and independence, in which formulae are evaluated not for a single assignment of values to variables, but for a set of such assignments. It is therefore interesting to ask what kinds of counting constructs are adequate in this context, and how such constructs influence the expressive power and the model-theoretic and algorithmic properties of logics with team semantics. Due to the second-order features of team semantics there is a rich variety of potential counting constructs. Here we study variations of two main ideas: forking atoms and counting quantifiers.
Forking counts how many different values for a tuple w occur in assignments with coinciding values for v. We call this the forking degree of v with respect to w. Forking is powerful enough to capture many of the previously studied atomic dependency properties. In particular, we exhibit logics with forking atoms that have, respectively, precisely the power of dependence logic and independence logic.
Our second approach uses counting quantifiers "there exist at least mu many x" of a kind similar to those used in logics with Tarski semantics. The difference is that these quantifiers are now applied to teams of assignments that may give different values to mu. We show that, on finite structures, there is an intimate connection between inclusion logic with counting quantifiers and FPC, fixed-point logic with counting, which is a logic of fundamental importance for descriptive complexity theory. For sentences, the two logics have the same expressive power.
Our analysis is based on a new variant of model-checking games, called threshold safety games, on a trap condition for such games, and on game interpretations.
This is joint work with Erich Grädel.
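The forking degree described above can be computed directly on a finite team. The sketch below is illustrative only: the abstract leaves open whether the degree is taken per v-value or aggregated, so we take the maximum over v-values, and all names are our own.

```python
# Illustrative computation of the forking degree from the abstract above:
# group the team's assignments by their value for v, and count how many
# distinct values for w occur within a group (here: the maximum count).

def forking_degree(team, v, w):
    """Max over v-values of the number of distinct w-values in the team."""
    groups = {}
    for s in team:
        groups.setdefault(s[v], set()).add(s[w])
    return max(len(ws) for ws in groups.values())

team = [
    {'v': 0, 'w': 'a'},
    {'v': 0, 'w': 'b'},
    {'v': 1, 'w': 'a'},
]
# For v = 0 the team forks into two w-values, for v = 1 into one.
print(forking_degree(team, 'v', 'w'))  # -> 2
```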
Thursday, 11.02.2016
14:15

17:00
AH 5
Friday, 29.01.2016
10:00

11:00
This article is inspired by two works from the early 90s. The first one is by Bogaert and Tison, who considered a model of automata on finite ranked trees where one can check equality and disequality constraints between direct subtrees: they proved that this class of automata is closed under Boolean operations and that both the emptiness and the finiteness problem of the accepted language are decidable. The second one is by Niwiński, who showed that one can compute the cardinality of any ω-regular language of infinite trees.
Here, we generalise the model of automata of Bogaert and Tison to the setting of infinite binary trees. Roughly speaking, we consider parity tree automata where some transitions are guarded and can be used only when the two direct subtrees of the current node are equal/disequal. We show that the resulting class of languages encompasses the ω-regular languages of infinite trees while sharing most of their closure properties; in particular, it is a Boolean algebra. Our main technical contribution is then to prove that it also enjoys a decidable cardinality problem.
Joint work with Arnaud Carayol and Olivier Serre
Friday, 08.01.2016
10:00

11:00
Friday, 11.12.2015
11:00

12:00
Yanje Wang
Monday, 07.12.2015
10:30

11:30
Room 9222
The search for a logic which captures polynomial time is one of the most important challenges in finite model theory. During the last years, significant new insights were obtained by the systematic study of the descriptive complexity of queries from (linear) algebra. These investigations were initiated in 2007 by a striking result of Atserias, Bulatov, and Dawar, who showed that fixed-point logic with counting (FPC) cannot define the solvability of linear equation systems over finite Abelian groups. Their result triggered the development of new candidates for capturing polynomial time, for instance rank logic (FPR), which was introduced by Dawar, Grohe, Holm, and Laubner in 2009. Before that, only a few other candidates had been proposed, of which certainly the most important one is Choiceless Polynomial Time (CPT), developed by Blass, Gurevich, and Shelah in 1999. This thesis continues the search for a logic capturing polynomial time in the light of the following leading questions. (1) How can the algorithmic principles for solving linear equation systems be captured by logical mechanisms (such as operators or quantifiers)? (2) Are there interesting classes of structures on which the solvability of linear equation systems can be used to capture polynomial time? Ad (1), we study the interdefinability of linear equation systems over finite Abelian groups, rings, and modules. Our aim is to transform linear equation systems over these algebraic domains into equivalent linear equation systems over simpler domains, such as fields or cyclic groups, via a reduction which is definable in fixed-point logic. For linear equation systems over ordered Abelian groups, rings, and modules, and also for certain interesting classes of unordered commutative rings, we obtain a reduction to cyclic groups. Moreover, we establish a reduction to commutative rings for the general case.
Further, we study rank logic (FPR), which extends FPC by operators to compute the rank of definable matrices over finite fields. Our main result validates a conjecture of Dawar and Holm: rank operators over different prime fields have incomparable expressive power. An important consequence is that rank logic, in its original definition with a distinct rank operator for every prime, fails to capture polynomial time and should be replaced by a more powerful version with a uniform rank operator. We further show that, in the absence of counting, solvability quantifiers are weaker than rank operators. Ad (2), we introduce a class of linear equation systems, so-called cyclic linear equation systems, which are structurally simple, but general enough to describe the Cai-Fürer-Immerman query, and thus separate FPC from polynomial time. Our main result is that CPT can express the solvability of cyclic linear equation systems. Further, we use this definability result to show that CPT captures polynomial time on structures with Abelian colours, a class containing many of the known queries which separate FPC from polynomial time. Our result also solves an open question of Blass, Gurevich, and Shelah: the isomorphism problem for multipedes is definable in CPT.
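As a concrete instance of the solvability questions discussed above: over a finite field, solvability of a linear equation system is decided by Gaussian elimination, comparing the rank of the coefficient matrix with that of the augmented matrix. The sketch below does this over the two-element field GF(2); it is textbook material, not the definability constructions of the thesis.

```python
# Deciding solvability of a linear system over GF(2) (a sketch; the
# function name is ours). Arithmetic over GF(2) is XOR, so elimination
# adds rows with ^. The system is unsolvable iff elimination produces a
# row reading 0 = 1.

def solvable_gf2(A, b):
    """Decide whether A x = b has a solution over GF(2)."""
    rows = [row + [rhs] for row, rhs in zip(A, b)]  # augmented matrix
    n_cols = len(A[0])
    pivot_row = 0
    for col in range(n_cols):
        pivot = next((r for r in range(pivot_row, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col]:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    # Unsolvable iff some remaining row reads 0 = 1.
    return all(row[-1] == 0 for row in rows[pivot_row:])

# x + y = 1 together with x + y = 0 has no solution over GF(2):
print(solvable_gf2([[1, 1], [1, 1]], [1, 0]))  # -> False
print(solvable_gf2([[1, 1], [1, 0]], [1, 0]))  # -> True
```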
Friday, 04.12.2015
12:00

13:00
Local specifications of the isomorphism types of designated
substructures (as building blocks) with amalgamation patterns (as
gluing instructions for pairs of blocks) arise in various contexts.
This talk discusses a generic approach to "locally free" finite
realisations, in which groupoids (or inverse semigroups) with
characteristic acyclicity properties serve as global amalgamation
patterns between the building blocks.
Amalgamation techniques also play a role in the construction of those
underlying groupoids: they provide an algebraic-combinatorial route
to Cayley group(oid)s and graphs with strong acyclicity properties.
Among further model-theoretic applications of such groupoidal
constructions are a new proof of a powerful theorem of Herwig and
Lascar concerning the lifting of local to global symmetries in finite
structures and, not quite unrelated, finite model properties and
finite controllability results for guarded logics.
(Cf. arXiv:1404.4599 (v4) 2015)
Monday, 23.11.2015
16:00

16:45
Friday, 13.11.2015
12:30

13:30
We present two novel linear-FPT algorithms for Dominating Set which use
little space when parameterized by treedepth, and we give lower bounds for
the space consumption of dynamic programming algorithms for Vertex
Cover, 3-Coloring and Dominating Set on tree, path and treedepth
decompositions.
Tuesday, 27.10.2015
16:00

17:00
Phillip Keldenich
Friday, 09.10.2015
12:30

13:30
The most prominent connection between automata theory and logic is the
expressive equivalence between finite automata over words and monadic
second-order logic MSO. We consider a more restricted, less popular, but also fundamental logic, “monadic transitive closure logic” MTC, which over words has the same expressive power as MSO. In MTC one can proceed from a formula F(z,z’) to F*(x,y), which expresses that a sequence exists from x to y such that each step is in accordance with F.
Thus, MTC is a very natural framework to express reachability properties.
We survey old and recent results which clarify the expressive power and
possible uses of MTC. We start with the equivalence between MTC and MSO
over word models. Then we discuss MTC over trees and labelled grids,
addressing the relation to tree-walking and grid-walking automata.
We conclude with some open questions.
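The F* construct above can be read operationally as reachability: on a finite structure, the pairs satisfying F*(x, y) are obtained by closing the step relation F under composition. A minimal sketch (names are illustrative):

```python
# Reachability semantics of the F*(x, y) construct: is there a sequence
# x = z0, z1, ..., zn = y such that each step (zi, zi+1) satisfies F?
# (Includes n = 0, i.e. the closure is reflexive.)

def closure_reachable(step, x, y, universe):
    """Decide F*(x, y) for the step relation `step` on `universe`."""
    seen, frontier = {x}, [x]
    while frontier:
        z = frontier.pop()
        for z2 in universe:
            if step(z, z2) and z2 not in seen:
                seen.add(z2)
                frontier.append(z2)
    return y in seen

# On word positions 0..4, the step "z' = z + 2" reaches 4 from 0 but not 3.
universe = range(5)
step = lambda z, z2: z2 == z + 2
print(closure_reachable(step, 0, 4, universe))  # -> True
print(closure_reachable(step, 0, 3, universe))  # -> False
```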
Friday, 02.10.2015
11:00

12:00
Andreas Tollkötter
Thursday, 01.10.2015
11:00

12:00
Tuesday, 22.09.2015
11:45

12:45
Maximilian Gödicke
The (1-dimensional) Weisfeiler-Lehman algorithm, or Colour Refinement, is a widely used subroutine for graph isomorphism algorithms. It iteratively refines the colouring of its input graph until the colouring is stable. If two graphs have different stable colourings, they are not isomorphic. We call the number of iterations needed the iteration number. Because of its importance in practical graph isomorphism algorithms, lower and upper bounds on the iteration number are of practical interest. Krebs and Verbitsky give a lower bound for the maximal iteration number of uncoloured graphs by constructing graphs on which Colour Refinement takes n − O(sqrt(n)) iterations. However, for random non-isomorphic graphs, the probability that the algorithm does not distinguish them after only 2 iterations is exponentially small in the size of the graphs.
In order to find tight upper bounds on the maximal iteration number of graphs of size n, we establish a connection between the maximal iteration numbers for coloured and uncoloured graphs and present the new computational tool of split procedures. These can be used, given n and k, to find graphs of size n with iteration number k. This tool allows us to construct explicit examples of graphs that need n − 1 iterations of Colour Refinement, which shows that for these n the trivial upper bound on the maximal number of iterations is tight.
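For concreteness, the refinement loop itself fits in a few lines. The sketch below is a naive quadratic implementation (not the near-linear-time variant used in practice), and the iteration count it reports is the number of strictly refining rounds:

```python
# A minimal Colour Refinement (1-dimensional Weisfeiler-Lehman) sketch:
# repeatedly refine vertex colours by the multiset of neighbour colours
# until the colouring stabilizes, counting the refining rounds.

def colour_refinement(adj):
    """adj: dict vertex -> set of neighbours. Returns (colouring, rounds)."""
    colours = {v: 0 for v in adj}  # start with the uniform colouring
    rounds = 0
    while True:
        signatures = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                      for v in adj}
        # Rename signatures to small integers to get the refined colouring.
        ids = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new = {v: ids[signatures[v]] for v in adj}
        if len(set(new.values())) == len(set(colours.values())):
            return colours, rounds  # no strict refinement: stable
        colours, rounds = new, rounds + 1

# A path on 4 vertices: one round separates endpoints from inner vertices,
# after which the colouring is stable.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
colouring, rounds = colour_refinement(path)
print(rounds, colouring[0] == colouring[3], colouring[0] != colouring[1])
```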
Tuesday, 22.09.2015
11:00

11:45
Daniel Neuen
The Graph Isomorphism Problem is one of the most famous open problems
in Theoretical Computer Science and is to date neither known to be
in P nor NP-complete. Although it seems unlikely for the Graph
Isomorphism Problem to be NP-complete, the best currently known
algorithm only runs in moderately exponential time. On the other hand,
for many natural restricted graph classes efficient algorithms are
known. However, most of these classes contain only sparse graphs that
do not have arbitrarily large cliques as subgraphs. A
number of natural graph classes that also contain dense graphs arise
from geometric settings, such as interval graphs, which admit
linear-time isomorphism tests. For several other geometric graph
classes proposed in the literature, the complexity of the Graph
Isomorphism Problem is still unknown. In this work we consider
unit square graphs, which are intersection graphs of unit squares in
the plane. One of the main results is a polynomial-time algorithm
solving graph isomorphism for unit square graphs of bounded diameter.
Wednesday, 26.08.2015
10:00

11:00
Nicolas Mazzocchi
A treatment of the synthesis problem in the framework of game theory was first proposed by McNaughton in 1965. This talk presents a complete example of this problem, with a focus on the uniformization of automatic relations by subsequential transducers as presented in the paper "Uniformization in Automata Theory" by Christof Löding and Arnaud Carayol. First I will present the uniformization problem through different results on automatic relations. Then, I will describe the decision procedure and the synthesis in case of success. Finally, I will discuss complexity bounds.
Thursday, 16.07.2015
13:00

14:00
Room 9U09, E3
Given a k-SAT instance with the promise that there is an assignment satisfying at least t out of k literals in each clause, can one efficiently find a satisfying assignment (setting at least one literal to true in every clause)? The NP-hardness of 3-SAT implies that this problem is NP-hard when t <= k/3, and extensions of some 2-SAT algorithms give efficient solutions when t >= k/2.
We prove that for t < k/2, the problem is NP-hard. Thus, satisfiability becomes hard when the promised density of true literals falls below 1/2. One might thus say that the transition from easy to hard in 2-SAT vs. 3-SAT takes place just after two and not just before three.
A strengthening of this result shows that given a (2k+1)-uniform hypergraph that can be 2-colored such that each edge has near-perfect balance, it is NP-hard to even find a 2-coloring that avoids a monochromatic edge. This shows extreme hardness of discrepancy minimization for systems of bounded-size sets.
The talk will sketch our proof of the SAT result, which is based on the fact that the only functions passing a natural "dictatorship test" are "juntas" depending on few variables. We might also briefly mention the general principle, based on the lack of certain "weak polymorphisms", that underlies hardness of constraint satisfaction, and an improvement of the hypergraph coloring result ruling out coloring low-discrepancy hypergraphs with any constant number of colors.
Based on joint works with Per Austrin and Johan Håstad (FOCS 2014) and Euiwoong Lee (SODA 2015).
Friday, 03.07.2015
11:15

12:15
work by Fedor V. Fomin, Alexander Golovnev, Alexander S. Kulikov, Ivan Mihajlin
Friday, 26.06.2015
11:15

12:15
work by Mitsuru Kusumoto, Yuichi Yoshida
Friday, 19.06.2015
15:00

16:00
Seminar room 9222
Algorithmic formal methods such as model checking and reactive
synthesis were originally developed in the context of discrete systems
such as hardware circuits and software.
Over the last few years, these techniques have played an increasingly
important role in the development of high-assurance cyber-physical
systems, in which discrete components interact with continuous
physical systems.
I will survey some recent results applying formal methods to
continuous systems, pointing out how notions such as invariants,
variants, bisimulation,
and trace equivalence can be adapted in the context of continuous
systems.
Specifically, I will show two examples of theoretical ideas from
formal methods applied to continuous control:
a deductive system for controller synthesis for continuous systems
against temporal specifications,
and a semi-formal conformance testing procedure based on a metric
generalization of trace equivalence.
I will conclude with some current challenges in the area.
[Joint work with Jyo Deshmukh, Rayna Dimitrova, and Vinayak Prabhu]
Friday, 19.06.2015
11:15

12:15
work by Dana Silverbush and Roded Sharan
Friday, 12.06.2015
11:15

12:15
work by Martin Grohe, Stephan Kreutzer, Roman Rabinovich, Sebastian Siebertz, Konstantinos Stavropoulos
Friday, 05.06.2015
11:15

12:15
work by Bingkai Lin
Friday, 22.05.2015
11:15

12:15
work by Sandra Kiefer, Pascal Schweitzer, Erkal Selman
Wednesday, 13.05.2015
14:00

15:00
Room 9U09, Erweiterungsbau 3
In verification and synthesis, properties or models of interest are often quantitative, and many quantitative aspects involve counting. For example, one might be interested in the amount of memory required by a system, how many simultaneous requests a system has to process along a run, or how many infixes belonging to some regular language a word contains. In this talk, we present two quantitative counting logics and two models of games with counters. We first study Qmu[#MSO], a counting logic based on the quantitative mu-calculus. This logic is designed specifically for transition systems whose states are labeled with finite relational structures. Counting is introduced to Qmu with the help of counting terms of monadic second-order logic, which are evaluated on the relational structures the states are labeled with. We prove that the model-checking problem for Qmu[#MSO] on a generalization of pushdown systems is reducible to solving finite counter parity games. We then present the model of counter parity games, which are quantitative games in which a finite set of counters is updated along the edges. Payoffs of finite plays are obtained via the counters, while payoffs of infinite plays are determined via a parity condition. We show that finite counter parity games are effectively solvable, using games with imperfect recall to solve the unboundedness problem for the value. Thus, it follows that the above model-checking problem for Qmu[#MSO] is decidable. In the last part of the talk, we discuss a quantitative counting extension of monadic second-order logic called qcMSO and a model of games with counters where payoffs are obtained via a mean-payoff objective on a special counter.
Wednesday, 13.05.2015
11:15

12:15
There are regular languages for: finite words, finite trees, infinite words, various kinds of infinite trees, queries, tuples of words, tuples of trees, and so on. I will show how monads, a concept from the Mordor of category theory, can be used to unify some of the theory of “regularity”. The idea is that for each setting, like finite words, labelled countable total orders, or infinite trees with countably many branches, one chooses a monad, and then many definitions and some theorems come out for free.
Friday, 08.05.2015
12:30

13:30
The question whether there is a logic for polynomial time has been open for
more than 30 years, and it remains the most important challenge
in the field of descriptive complexity theory.
In my talk I summarise what is known about this question today, and
I report on very recent work which shows that rank logic, a powerful logic that
was proposed by Dawar, Grohe, Holm and Laubner as a possible solution,
fails to express all polynomial-time properties (at least in its original definition).
Friday, 08.05.2015
11:15

12:15
work by Martin Grohe, Pascal Schweitzer
Friday, 24.04.2015
11:15

12:30
work by Michael Elberfeld
Monday, 16.03.2015
14:00

15:00
Model checking (for various temporal logics), as well as checking behavioural equivalence of processes (using various equivalence relations), are powerful tools in system verification. Both problems can be seen as comparing two behavioural specifications. Certain fixpoint logics let us easily express these problems as formulas that can be evaluated on a structure that represents both specifications.
The versatility of fixpoint logics makes them interesting as a basis for practical system analysis tools. Important for the usability of such tools is that they provide diagnostics to their users. In this talk, we will look at proof graphs: game-like proof objects for LFP. They provide an alternative semantics for the LFP fixpoint operator and can serve as the basis of a diagnostics generation framework. This is illustrated with some small examples. If time permits, we will see how this approach can be directly reused for parameterized Boolean equation systems (an equational fixpoint logic).
Friday, 20.02.2015
14:00

15:00
We investigate the power of algebraic techniques to test whether two given graphs are isomorphic.
In the algebraic setting, one encodes the graphs into a system of polynomial equations that is satisfiable if the graphs are isomorphic.
Afterwards, one tries to test satisfiability of this system using, for example, the Gröbner basis algorithm.
In some cases this can be done in polynomial time, in particular, if the equations admit a constant degree refutation in algebraic proof systems such as Nullstellensatz or Polynomial Calculus.
We compare this approach to other known relaxations and show that the k-dimensional Weisfeiler-Lehman algorithm can be characterized in terms of an algebraic proof system over the reals that lies between degree-k Nullstellensatz and degree-k Polynomial Calculus.
Furthermore, we prove a linear lower bound on the Polynomial Calculus degree of Cai-Fürer-Immerman graphs, which holds over any field of characteristic different from two.
This is joint work with Martin Grohe.
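As an illustration of the encoding step, the following sketch builds 0/1 permutation-matrix constraints for graph isomorphism. The abstract does not commit to one specific system; this common encoding is one possibility, and instead of Gröbner bases we simply check satisfiability by brute force on tiny graphs.

```python
from itertools import product

# Sketch of a standard polynomial encoding of graph isomorphism with 0/1
# variables x[i][j] meaning "vertex i of G maps to vertex j of H". The
# system is satisfiable over {0, 1} iff the graphs are isomorphic.

def iso_system_satisfiable(n, edges_g, edges_h):
    def satisfies(x):
        # Permutation constraints: each row and each column sums to one.
        if any(sum(x[i][j] for j in range(n)) != 1 for i in range(n)):
            return False
        if any(sum(x[i][j] for i in range(n)) != 1 for j in range(n)):
            return False
        # Edge constraints: x[i][j]*x[k][l] = 0 unless (i,k) edge iff (j,l) edge.
        for i, k in product(range(n), repeat=2):
            for j, l in product(range(n), repeat=2):
                if x[i][j] and x[k][l] and ((i, k) in edges_g) != ((j, l) in edges_h):
                    return False
        return True
    # Tiny instances only: brute-force all 0/1 assignments.
    for bits in product((0, 1), repeat=n * n):
        x = [bits[i * n:(i + 1) * n] for i in range(n)]
        if satisfies(x):
            return True
    return False

def sym(edges):  # make an edge set symmetric
    return {(a, b) for a, b in edges} | {(b, a) for a, b in edges}

path = sym({(0, 1), (1, 2)})       # path 0-1-2
path2 = sym({(1, 0), (0, 2)})      # path 1-0-2, an isomorphic relabelling
triangle = sym({(0, 1), (1, 2), (2, 0)})
print(iso_system_satisfiable(3, path, path2))     # -> True
print(iso_system_satisfiable(3, path, triangle))  # -> False
```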
Friday, 13.02.2015
12:30

13:40
We first give a brief introduction to rank-width and linear rank-width. Even though rank-width has been well developed in a series of papers, only a few algorithmic properties are known for linear rank-width. In particular, the only known FPT algorithm for testing linear rank-width at most k, for fixed k, uses the pivot-minor obstructions for these classes. This algorithm is non-constructive: it uses the finiteness of the obstructions, which is guaranteed by the well-quasi-ordering property of graphs of bounded rank-width. We mainly show that the size of pivot-minor obstructions for linear rank-width at most k is doubly exponential in O(k). To show this, we extend the notion of the pseudo-minor order developed by Lagergren, which is a relaxation of the usual minor notion and is used to find an upper bound on the size of minor obstructions for graphs of bounded pathwidth. Interestingly, our result implies that the size of minor obstructions for F-representable matroids (where F is a finite field) of (matroid) pathwidth at most k is also doubly exponential in O(k).
Tuesday, 10.02.2015
10:00

11:00
Seminar room i1
Friday, 30.01.2015
10:00

12:00
work by Jochen Messner, Thomas Thierauf
Friday, 23.01.2015
10:00

12:00
work by Robin A. Moser, Gábor Tardos
Thursday, 22.01.2015
14:45

15:45
Delay games are two-player games of infinite duration in which one player may delay her moves to obtain a lookahead on her opponent's moves. For omega-regular winning conditions it is known that such games can be solved in doubly-exponential time and that doubly-exponential lookahead is sufficient.
We improve upon both results by giving an exponential-time algorithm and an exponential upper bound on the necessary lookahead. This is complemented by showing EXPTIME-hardness of the solution problem and tight exponential lower bounds on the lookahead. Both lower bounds already hold for safety conditions. Furthermore, solving delay games with reachability conditions is shown to be PSPACE-complete.
This is joint work with Martin Zimmermann.
Thursday, 22.01.2015
14:00

14:45
We determine the complexity of counting models of bounded size of specifications expressed in Linear-time Temporal Logic. Counting word models is #P-complete if the bound is given in unary, and as hard as counting accepting runs of nondeterministic polynomial-space Turing machines if the bound is given in binary. Counting tree models is as hard as counting accepting runs of nondeterministic exponential-time Turing machines if the bound is given in unary. For a binary encoding of the bound, the problem is at least as hard as counting accepting runs of nondeterministic exponential-space Turing machines. On the other hand, it is not harder than counting accepting runs of nondeterministic doubly-exponential-time Turing machines.
This is joint work with Hazem Torfah.
Friday, 16.01.2015
10:00

12:00
work by Todd L. Veldhuizen
Friday, 09.01.2015
11:00

12:00
work by Vikraman Arvind, Gaurav Rattan
Friday, 12.12.2014
13:00

14:00
We consider the fundamental problem of computing the natural join of a sequence of relations. We show that a simple linear program, which describes the so-called fractional edge cover number of a hypergraph associated with the join expression, gives nontrivial bounds on the size of such joins. Using linear programming duality, we can also show that these bounds are optimal.
This gives us better cost estimates to optimize join queries, which lead to better algorithms.
(This is joint work with Albert Atserias and Daniel Marx.)
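On the triangle query R(a,b) ⋈ S(b,c) ⋈ T(c,a), for example, the optimal fractional edge cover puts weight 1/2 on each relation, so the bound described above is (|R|·|S|·|T|)^(1/2). A small sketch checking this on concrete relations (the data is invented for illustration):

```python
from itertools import product

# Checking the fractional edge cover bound on a concrete triangle join:
# the optimal cover assigns weight 1/2 to each of R, S, T, bounding the
# join size by (|R| * |S| * |T|) ** (1/2).

def triangle_join(R, S, T):
    """Naive evaluation of R(a,b) ⋈ S(b,c) ⋈ T(c,a)."""
    return {(a, b, c) for (a, b), (b2, c), (c2, a2) in product(R, S, T)
            if b == b2 and c == c2 and a == a2}

R = {(0, 1), (0, 2), (1, 2)}
S = {(1, 2), (2, 0), (2, 3)}
T = {(2, 0), (0, 1), (3, 0)}
result = triangle_join(R, S, T)
bound = (len(R) * len(S) * len(T)) ** 0.5
print(len(result), len(result) <= bound)  # the join respects the bound
```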
Friday, 12.12.2014
11:30

12:30
Joachim Redies
Friday, 12.12.2014
10:00

12:00
work by Jiří Fiala, Pavel Klavík, Jan Kratochvíl, Roman Nedela
Wednesday, 10.12.2014
17:00

18:00
Raum 9222, E3
The classification of recognizable omega-word languages into Borel levels is the basis of many specialized solutions in the fields of formal verification and algorithmic controller synthesis. Each of these levels is characterized by a class of deterministic omega-automata, namely deterministic weak (reachability and safety), deterministic Büchi, and deterministic Muller automata. This thesis analyses the more general setting of infinitary Mazurkiewicz traces, which model non-terminating, concurrent computation of distributed systems. The study of finitary trace languages generalizes that of word languages, and numerous results have been extended to infinitary trace languages (due to Gastin, Petit, Diekert, Muscholl, and others). However, the current definitions of asynchronous omega-automata fail to yield a classification of omega-trace languages that is compatible with the Borel hierarchy. We close this gap, which had been open since the 1990s, by introducing the family of "synchronization-aware" automata.
We also demonstrate the semi-decidability of the problem of determining whether a given recognizable omega-trace language is also recognized by a deterministic synchronization-aware Büchi automaton.
Although asynchronous automata are useful in implementing distributed monitors and distributed controllers, their constructions are prohibitively expensive even by automata-theoretic standards. On the other hand, "linearizations" of infinitary trace languages, which invoke the framework of "trace-closed" omega-word languages, can find immediate applications in model checking and formal verification of distributed systems. This is because word automata recognizing trace-closed languages support more efficient analyses of most of the interesting properties pertaining to distributed computations. In this setting, we present another classification of omega-trace languages in terms of a Borel-like hierarchy of trace-closed omega-word languages.
Thursday, 04.12.2014
10:30

11:30
AH VI
No efficient algorithms are known for the constraint satisfaction problem (CSP), the satisfiability problem of propositional logic (3-SAT), or the graph isomorphism problem. Heuristic algorithms with polynomial running time are therefore used to solve them. We examine classical heuristics for these decision problems with respect to whether they can be implemented faster than by the known algorithms.
The k-consistency heuristic for the constraint satisfaction problem attempts to produce locally consistent instances by successively deriving new constraints, and can be implemented with running time n^O(k). We show that the growth of the running time with increasing parameter k is unavoidable. To this end, we prove that it is not possible to decide faster whether local consistency can be established on a binary CSP instance.
One heuristic for 3-SAT is to search for a resolution refutation in which every clause contains at most k literals. Such a refutation of width k can be found in time n^O(k). We show that it is impossible to decide faster whether a resolution refutation of width k exists.
Finally, we discuss the complexity of the color refinement heuristic for the graph isomorphism problem.
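The k=2 case of the k-consistency heuristic is ordinary arc consistency. The following minimal Python sketch (an illustration under simplifying assumptions, not code from the thesis) shows the pruning it performs on a toy binary CSP:

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency (the k=2 case of k-consistency) on a binary CSP.

    domains: dict mapping variable -> set of values.
    constraints: dict mapping (x, y) -> set of allowed (value_x, value_y) pairs.
    Prunes domains in place; returns False if a domain becomes empty
    (which proves the instance unsatisfiable).
    """
    # Make constraints symmetric so every arc can be revised.
    arcs = {}
    for (x, y), allowed in constraints.items():
        arcs.setdefault((x, y), set()).update(allowed)
        arcs.setdefault((y, x), set()).update((b, a) for (a, b) in allowed)

    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        # Keep only values of x that have a support in y's domain.
        supported = {a for a in domains[x]
                     if any((a, b) in arcs[(x, y)] for b in domains[y])}
        if supported != domains[x]:
            domains[x] = supported
            if not supported:
                return False
            # Revisit arcs pointing into x.
            queue.extend((z, w) for (z, w) in arcs if w == x)
    return True

# Toy instance: x < y < z over {1, 2, 3}; arc consistency prunes each
# domain down to a single value.
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}, "z": {1, 2, 3}}
lt = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
constraints = {("x", "y"): lt, ("y", "z"): lt}
ok = ac3(domains, constraints)
print(ok, domains)
```

This simple propagation runs in polynomial time for fixed k; the lower bound discussed in the talk says that the exponent's dependence on k cannot be avoided.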
Friday, 28.11.2014
10:00

12:00
work by Ittai Abraham, Cyril Gavoille, Anupam Gupta, Ofer Neiman, Kunal Talwar
Friday, 21.11.2014
12:30

13:30
In the network orientation problem one is given a mixed graph, consisting of directed and undirected edges, and a set of source-target vertex pairs. The goal is to orient the undirected edges so that a maximum number of pairs admit a directed path from the source to the target. This NP-complete problem arises in the context of analyzing physical networks of protein-protein and protein-DNA interactions. While the latter are naturally directed from a transcription factor to a gene, the direction of signal flow in protein-protein interactions is often unknown or cannot be measured en masse. One then tries to infer this information by using causality data on pairs of genes such that the perturbation of one gene changes the expression level of the other gene.
After introducing physical networks of protein interactions and the related orientation problem, the talk reports on two studies related to its solution: First, we have a look at optimal solutions to the problem that are based on an integer linear programming formulation; this formulation is applied to orient proteinprotein interactions in yeast. Second, we consider approximate solutions and their algorithmic aspects. Finally, we will have a look at current and future work around graph orientations and their applications to network biology.
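For intuition, here is a hypothetical brute-force sketch of the orientation objective on a tiny mixed graph. The ILP formulation from the talk scales far beyond this, but the quantity being maximized is the same:

```python
import itertools

def best_orientation(directed, undirected, pairs):
    """Brute-force the network orientation problem on a small mixed graph:
    orient each undirected edge so as to maximize the number of (s, t) pairs
    connected by a directed path. Exponential in |undirected|; for intuition only.
    """
    def reachable(edges, s):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for (a, b) in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    best = -1
    for choice in itertools.product([0, 1], repeat=len(undirected)):
        edges = list(directed) + [
            (u, v) if c == 0 else (v, u)
            for (u, v), c in zip(undirected, choice)
        ]
        score = sum(1 for (s, t) in pairs if t in reachable(edges, s))
        best = max(best, score)
    return best

# Toy instance: one fixed edge, two orientable edges, three source-target pairs.
# No orientation satisfies all three pairs; two is the optimum.
directed = [("a", "b")]
undirected = [("b", "c"), ("c", "d")]
pairs = [("a", "c"), ("a", "d"), ("d", "b")]
print(best_orientation(directed, undirected, pairs))
```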
Friday, 21.11.2014
10:00

12:00
Friday, 14.11.2014
10:00

12:00
work by Diego Figueira, Leonid Libkin
Friday, 07.11.2014
10:00

12:00
work by Andreas Björklund, Thore Husfeldt
Friday, 31.10.2014
11:00

13:00
Seminarroom I11 (room 2202)
How can we affordably build trustworthy autonomous, networked systems?
Partly motivated by this question, I describe a shift from the traditional "design+verify" approach to "specify+synthesize" in model-based engineering. I then discuss our recent results on automated synthesis of correct-by-construction, hierarchical control protocols. These results account for hybrid dynamics that are subject to rich temporal logic specifications and heterogeneous uncertainties, and that operate in adversarial environments. They combine ideas from control theory with those from computer science, and exploit underlying system-theoretic interpretations to suppress the inherent computational complexity. The expressivity of the resulting design methodology enables us to formally investigate a number of emerging issues in autonomous, networked systems. I conclude my talk with a brief overview of several such issues from my ongoing projects: (i) compositional synthesis for the so-called fractionated systems; (ii) effects of perception imperfections on protocol synthesis; (iii) interfaces between learning modules and reactive controllers with provable guarantees of correctness; and (iv) human-embedded autonomy.
Friday, 24.10.2014
12:30

13:30
In our foray into social networks we have discovered several problems that we perceive would be of interest to theoreticians.
I will briefly describe some of these and then focus on the rather practical problem: that of finding community structure in social/complex networks.
The problem of discovering communities in networks has been very well studied in the literature. There are two major roadblocks here: the first is that there is no generally accepted notion of what constitutes a community; and second, any such algorithm is meant to be used on large networks and therefore must be fast enough.
On large networks, typically even quadratic running times are considered infeasible. Existing algorithms use their own definition of a community and, to the best of our knowledge, even the fastest of these take quadratic time.
I will talk about an approach that manages to do well on both these fronts. Instead of settling on a fixed definition of a community, we let the user decide how to define them. We let the user identify a (small) set of "seed nodes" which they classify into communities. We then use a random walk approach to classify the rest of the network. In order to estimate the random walk probabilities, we end up solving a system of linear equations which turns out to be what is called a symmetric diagonally dominant (SDD) linear system, which by the work of Spielman, Teng, and others can be solved in nearly linear time.
I will also describe associated experimental work which seems to be very promising.
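As a toy illustration of the seed-based idea (an assumption-laden sketch, not the speaker's system): the probability that a random walk hits one community's seeds before the other's is a harmonic function, and on a small graph it can be approximated by simple relaxation instead of the near-linear-time SDD solvers mentioned above:

```python
def seed_classify(adj, seeds, iters=2000):
    """Classify unlabeled nodes by the probability that a random walk started
    there hits a seed of community 1 before a seed of community 0.

    This probability p is harmonic: p(v) equals the average of p over v's
    neighbors, with p fixed at the seeds. Here it is approximated by
    Gauss-Seidel relaxation rather than by solving the SDD system directly.
    """
    p = {v: float(seeds.get(v, 0)) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v not in seeds:
                p[v] = sum(p[u] for u in adj[v]) / len(adj[v])
    # Assign each node to the community whose value its probability is closer to.
    return {v: (1 if p[v] >= 0.5 else 0) for v in adj}, p

# Two triangles joined by one bridge edge; one seed per community.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
seeds = {0: 0, 5: 1}   # node 0 seeds community 0, node 5 seeds community 1
labels, p = seed_classify(adj, seeds)
print(labels)
```

On this example the bridge edge separates the two triangles: nodes 0, 1, 2 are assigned to community 0 and nodes 3, 4, 5 to community 1.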
Wednesday, 22.10.2014
10:00

12:00
Room 5052
We address the safety analysis of chemical plants controlled by programmable logic controllers (PLCs). We consider sequential function charts (SFCs) for the programming of the PLCs, extended with the specification of the dynamic plant behavior. The resulting hybrid SFC models can be transformed to hybrid automata, opening the way to the application of advanced techniques for their reachability analysis.
However, the hybrid automata models are often too large to be analyzed.
To keep the size of the models moderate, we propose a counterexample-guided abstraction refinement (CEGAR) approach, which starts with the purely discrete SFC model of the controller and extends it with those parts of the dynamic behavior that are relevant for proving or disproving safety.
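The CEGAR loop itself can be sketched generically. The toy below (my own simplification, not the talk's tool) works on a finite discrete system rather than a hybrid automaton, and its refinement step simply draws the next predicate from a user-supplied list; real refinement would derive predicates from the spurious counterexample:

```python
def cegar(states, init, step, bad, candidate_preds):
    """Toy CEGAR loop over a finite concrete transition system.

    Starts from the coarsest abstraction (bad vs. not bad) and, upon a
    spurious abstract counterexample, refines by adding the next candidate
    predicate. Returns ("safe", number of predicates used) or ("unsafe", path).
    """
    preds = [bad]                      # initial abstraction: bad vs. not bad
    while True:
        abstract = lambda s: tuple(p(s) for p in preds)
        # Existential abstraction of the finite system.
        a_init = {abstract(s) for s in init}
        a_trans = {(abstract(s), abstract(t)) for s in states for t in step(s)}
        a_bad = {abstract(s) for s in states if bad(s)}
        path = bfs_path(a_init, a_trans, a_bad)
        if path is None:
            return "safe", len(preds)
        if concretize(path, init, step, abstract) is not None:
            return "unsafe", path
        if not candidate_preds:
            raise RuntimeError("out of refinement predicates")
        preds.append(candidate_preds.pop(0))   # refine and retry

def bfs_path(inits, trans, bads):
    """Shortest abstract path from an initial to a bad abstract state."""
    frontier, seen = [[a] for a in inits], set(inits)
    while frontier:
        path = frontier.pop(0)
        if path[-1] in bads:
            return path
        for (a, b) in trans:
            if a == path[-1] and b not in seen:
                seen.add(b)
                frontier.append(path + [b])
    return None

def concretize(path, init, step, abstract):
    """Check whether the abstract path is realizable in the concrete system."""
    layer = {s for s in init if abstract(s) == path[0]}
    for a in path[1:]:
        layer = {t for s in layer for t in step(s) if abstract(t) == a}
        if not layer:
            return None   # spurious counterexample
    return layer

# Toy plant: a counter stepping by 2 from 0; state 5 is the bad state.
# The first abstraction is too coarse; adding a parity predicate proves safety.
states = range(11)
step = lambda s: {s + 2} if s + 2 <= 10 else set()
verdict, n_preds = cegar(states, {0}, step, lambda s: s == 5,
                         [lambda s: s % 2 == 0])
print(verdict)
```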
Tuesday, 21.10.2014
14:00

15:00
We study the parameterized space complexity of model-checking first-order logic with a bounded number of variables. By restricting the number of quantifier alternations we obtain problems complete for a natural hierarchy between parameterized logarithmic space and FPT. We call this hierarchy the tree hierarchy, provide a machine characterization, and link it to the recently introduced classes PATH and TREE. We show that the lowest class PATH collapses to parameterized logarithmic space only if Savitch's theorem can be improved. Finally, we settle the complexity with respect to the tree hierarchy of finding short undirected paths and small undirected trees.
This is joint work with Moritz Mueller.
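To make the setting concrete, here is a hypothetical evaluator (not from the paper) for first-order sentences over a finite directed graph. Reusing variable names keeps the number of distinct variables bounded, which is exactly the restriction studied in the talk; the nesting of quantifier alternations is what determines the level in the hierarchy:

```python
def eval_fo(phi, universe, edge, assignment=None):
    """Evaluate a first-order sentence over a finite directed graph.

    Formulas are nested tuples: ("E", x, y), ("not", f), ("and", f, g),
    ("exists", x, f), ("forall", x, f). With k variable names, an
    assignment never stores more than k values.
    """
    a = assignment or {}
    op = phi[0]
    if op == "E":
        return (a[phi[1]], a[phi[2]]) in edge
    if op == "not":
        return not eval_fo(phi[1], universe, edge, a)
    if op == "and":
        return (eval_fo(phi[1], universe, edge, a)
                and eval_fo(phi[2], universe, edge, a))
    if op == "exists":
        return any(eval_fo(phi[2], universe, edge, {**a, phi[1]: v})
                   for v in universe)
    if op == "forall":
        return all(eval_fo(phi[2], universe, edge, {**a, phi[1]: v})
                   for v in universe)
    raise ValueError("unknown operator: %r" % (op,))

# "Some vertex has an edge to every vertex": exists x forall y E(x, y).
# One exists/forall alternation, two variables.
universe = {0, 1, 2}
edge = {(0, 0), (0, 1), (0, 2), (1, 2)}
phi = ("exists", "x", ("forall", "y", ("E", "x", "y")))
print(eval_fo(phi, universe, edge))
```

The naive evaluator above uses space proportional to the quantifier depth; the point of the talk is to pin down how much of that can be traded away, depending on the number of alternations.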