9th workshop of the Center of Modern Computer Science
The ninth workshop of the Center of Modern Computer Science will take place on Monday, May 2, 2016, in room S1 of the Malá Strana building of the Faculty of Mathematics and Physics, Charles University. The programme consists of talks by junior members of the centre.
| 9:00 | Roman Neruda (ÚI AV ČR): Meta-learning for data mining |
| 9:50 | Vít Jelínek: The Beer index and other measures of convexity |
| 10:40 | Martin Pilát: User preferences in evolutionary multi-objective optimization |
| 11:10 | David Obdržálek: Particle filters in robot localization |
| 11:40 | Pavel Surynek: Boolean satisfiability approach to optimal multi-agent path finding |
Roman Neruda: Meta-learning for data mining
The meta-learning approach utilizes experience with learning algorithms to improve the results of machine learning applications. By applying machine learning principles to the machine learning process itself, it is possible to automate tasks that have so far depended on human experience and experimentation. One of the tasks of meta-learning for data mining is to recommend a suitable method for a new data set. We focus on generating and testing complete workflows that embed machine learning methods together with preprocessing steps and their combinations, such as ensembles.
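The recommendation task can be sketched as nearest-neighbour lookup over dataset meta-features. The following toy example is illustrative only (the meta-features, datasets, and scores are made up and are not from the talk):

```python
# Toy meta-learning recommender: pick the method that performed best on
# the most similar previously seen dataset (1-NN over meta-features).
# All datasets, meta-features, and method names below are hypothetical.

past = [
    # (meta-features: n_samples, n_features, class_entropy), best method
    ((1000, 20, 0.9), "random_forest"),
    ((150, 4, 1.5), "knn"),
    ((50000, 300, 0.3), "linear_svm"),
]

def recommend(meta, history=past):
    """Return the method that worked best on the nearest past dataset."""
    def dist(a, b):
        # Relative distance, since the meta-features have very
        # different ranges.
        return sum(abs(x - y) / (abs(x) + abs(y) + 1e-9)
                   for x, y in zip(a, b))
    return min(history, key=lambda h: dist(meta, h[0]))[1]

print(recommend((1200, 25, 0.8)))  # nearest to the first past dataset
```

A real meta-learning system would of course learn the recommender itself and score whole workflows rather than single methods.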
Vít Jelínek: The Beer index and other measures of convexity
The Beer index of a planar set S is the probability that two points A, B chosen uniformly and independently at random in S can "see each other" in S, i.e., that the whole segment AB is contained in S. This quantity expresses how close a given set S is to being convex. In my talk, I will present several recent results about the relationship between the Beer index and other known measures of convexity. I will also mention generalizations of these results to higher dimensions. The talk is based on my joint work with M. Balko, P. Valtr and B. Walczak.
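For a concrete feel for the definition, the Beer index of a simple set can be estimated by Monte Carlo sampling. The sketch below (illustrative, not from the talk) approximates segment containment by checking sample points along the segment:

```python
import random

def segment_in_set(a, b, contains, steps=100):
    """Approximate test that the whole segment AB lies inside the set,
    by checking evenly spaced sample points along it."""
    return all(
        contains(a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        for t in (i / steps for i in range(steps + 1))
    )

def beer_index_estimate(contains, sample, trials=5000):
    """Monte Carlo estimate of the Beer index: the probability that two
    uniform random points of the set see each other inside the set."""
    hits = sum(
        segment_in_set(sample(), sample(), contains) for _ in range(trials)
    )
    return hits / trials

# A convex set (the unit square) has Beer index 1; removing a quadrant
# makes the set non-convex and pushes the estimate below 1.
in_square = lambda x, y: 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
rand_square = lambda: (random.random(), random.random())
print(beer_index_estimate(in_square, rand_square, trials=1000))
```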
Martin Pilát: User preferences in evolutionary multi-objective optimization
Multi-objective evolutionary algorithms are among the best multi-objective optimizers. Their main advantage is that they make almost no assumptions about the functions they optimize. However, the resulting distribution of points along the Pareto front is often given implicitly by the selection scheme of the evolutionary algorithm. For a specific class of evolutionary algorithms (those based on decomposition, which decompose the multi-objective problem into a set of single-objective problems and solve them all at once), the distribution of points along the Pareto front is given by a set of pre-selected "weight vectors". These vectors can be tuned to provide distributions preferred by the user. This leads to better Pareto approximations for the user, and it also leads to better utilization of computing resources, as uninteresting parts of the Pareto front are not explored and the computation is focused on the interesting parts. In the talk, we will discuss two different ways to tune these vectors in a general and robust way.
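The role of the weight vectors can be illustrated with a toy weighted-sum decomposition. This is a deliberate simplification (decomposition-based algorithms such as MOEA/D use an evolutionary subsolver and more refined scalarizations; the problem below is made up):

```python
# Toy bi-objective problem: f1(x) = x^2 and f2(x) = (x - 2)^2 on [0, 2].
# Every x in [0, 2] is Pareto-optimal; which points of the front we
# obtain is decided by the weight vectors used to decompose the problem.

def f1(x):
    return x * x

def f2(x):
    return (x - 2.0) ** 2

def solve_subproblem(w1, w2, steps=2000):
    """Minimize the scalarized objective w1*f1 + w2*f2 by grid search
    (a stand-in for the single-objective subsolver)."""
    xs = (2.0 * i / steps for i in range(steps + 1))
    return min(xs, key=lambda x: w1 * f1(x) + w2 * f2(x))

# Uniform weights spread the solutions along the front; weights biased
# toward f1 (a user preference for small f1) concentrate them there.
uniform = [(w, 1.0 - w) for w in (0.1, 0.3, 0.5, 0.7, 0.9)]
biased = [(w, 1.0 - w) for w in (0.6, 0.7, 0.8, 0.9, 0.95)]
front_uniform = [solve_subproblem(w1, w2) for w1, w2 in uniform]
front_biased = [solve_subproblem(w1, w2) for w1, w2 in biased]
```

Tuning the weights toward a user preference thus shifts the whole approximation set into the preferred region of the front, without wasting evaluations elsewhere.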
David Obdržálek: Particle filters in robot localization
Particle filters have recently gained popularity for autonomous robot localization. They may formerly have been considered clumsy (in comparison with single-pose tracking or parametric filters), but thanks to advances in hardware, their abilities can now be well exploited in real life. In this talk, I will briefly introduce their basic form and focus on their practical usage, showing various tricks for using them efficiently and fruitfully.
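The basic form mentioned above is the predict-update-resample cycle. The following 1D localization sketch is illustrative only (the scenario, landmark, and noise parameters are made up, and this is not the speaker's implementation):

```python
import math
import random

def particle_filter_step(particles, control, measurement, landmark,
                         motion_noise=0.1, meas_noise=0.2):
    """One predict-update-resample cycle of a basic (bootstrap)
    particle filter for 1D localization against a known landmark."""
    # Predict: move every particle by the control input plus noise.
    moved = [p + control + random.gauss(0.0, motion_noise)
             for p in particles]
    # Update: weight each particle by the likelihood of the range
    # measurement under a Gaussian sensor model.
    def likelihood(p):
        expected = landmark - p  # expected distance to the landmark
        return math.exp(-0.5 * ((measurement - expected) / meas_noise) ** 2)
    weights = [likelihood(p) for p in moved]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 5.0) for _ in range(1000)]  # unknown start
true_pos, landmark = 0.0, 10.0
for _ in range(5):
    true_pos += 1.0                                    # robot moves one unit
    z = (landmark - true_pos) + random.gauss(0.0, 0.2) # noisy range reading
    particles = particle_filter_step(particles, 1.0, z, landmark)
estimate = sum(particles) / len(particles)  # concentrates near true_pos = 5
```

Even this naive version converges quickly; the practical tricks in the talk concern things like resampling schedules and particle counts.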
Pavel Surynek: Boolean satisfiability approach to optimal multi-agent path finding
We focus on finding optimal solutions to the multi-agent path finding (MAPF) problem over undirected graphs, where the task is to find non-colliding paths for multiple agents, each with a different start and goal position. An encoding of MAPF into Boolean satisfiability (SAT) is already known for the makespan-optimal variant of the problem. We will instead show the first SAT-based solver that minimizes the sum of costs, enabled by introducing cardinality constraints into the SAT encoding. An experimental evaluation on grid graphs indicates promising performance of the new SAT-based method in comparison with the best variants of previous sum-of-costs search-based solvers.
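A cardinality constraint bounds how many of a set of Boolean variables may be true, which is what lets a SAT encoding cap the sum of costs. One standard way to express it in CNF is Sinz's sequential-counter encoding of "at most k"; the sketch below (an illustration of the general technique, not the encoding from the talk) generates DIMACS-style clauses:

```python
def at_most_k(xs, k, next_var):
    """Sinz sequential-counter CNF encoding of sum(xs) <= k.
    xs: positive ints (DIMACS-style variable numbers), len(xs) >= 2.
    next_var: first unused variable number for the auxiliary counters.
    Returns (clauses, next_free_var); a negative literal -v means NOT v."""
    n = len(xs)
    # s[i][j] is an auxiliary variable meaning "at least j+1 of
    # xs[0..i] are true".
    s = [[next_var + i * k + j for j in range(k)] for i in range(n - 1)]
    clauses = [[-xs[0], s[0][0]]]           # x1 implies s(1,1)
    clauses += [[-s[0][j]] for j in range(1, k)]  # counter starts below 2
    for i in range(1, n - 1):
        clauses.append([-xs[i], s[i][0]])         # xi raises the counter
        clauses.append([-s[i - 1][0], s[i][0]])   # counter never decreases
        for j in range(1, k):
            clauses.append([-xs[i], -s[i - 1][j - 1], s[i][j]])
            clauses.append([-s[i - 1][j], s[i][j]])
        clauses.append([-xs[i], -s[i - 1][k - 1]])  # forbid overflow
    clauses.append([-xs[n - 1], -s[n - 2][k - 1]])  # forbid overflow at xn
    return clauses, next_var + (n - 1) * k

clauses, top = at_most_k([1, 2, 3, 4], 2, next_var=5)
```

The encoding uses O(n·k) clauses and auxiliary variables, which keeps the SAT instance compact even when many cost bounds are posted.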