Photo: Sunny start to the new semester (April 2023). Universität Paderborn, Besim Mazhiqi

Marcel Wever


Intelligent Systems and Machine Learning

Member - Research Associate

Sonderforschungsbereich 901 (Collaborative Research Centre 901)

Member - Former - B2: Configuration and Evaluation

Phone:
+49 5251 60-3352
Office:
O4.149
Visitor address:
Pohlweg 51
33098 Paderborn
Vita

Miscellaneous
02.06.2017 - 31.07.2021

Research Associate

01.04.2015 - 31.03.2017

Master's studies in Computer Science



Publications

2022

A Survey of Methods for Automated Algorithm Configuration

E. Schede, J. Brandt, A. Tornede, M.D. Wever, V. Bengs, E. Hüllermeier, K. Tierney, in: arXiv:2202.01651, 2022

Algorithm configuration (AC) is concerned with the automated search of the most suitable parameter configuration of a parametrized algorithm. There is currently a wide variety of AC problem variants and methods proposed in the literature. Existing reviews do not take into account all derivatives of the AC problem, nor do they offer a complete classification scheme. To this end, we introduce taxonomies to describe the AC problem and features of configuration methods, respectively. We review existing AC literature within the lens of our taxonomies, outline relevant design choices of configuration approaches, contrast methods and problem variants against each other, and describe the state of AC in industry. Finally, our review provides researchers and practitioners with a look at future research directions in the field of AC.


Algorithm Selection on a Meta Level

A. Tornede, L. Gehring, T. Tornede, M.D. Wever, E. Hüllermeier, in: Machine Learning, 2022

The problem of selecting an algorithm that appears most suitable for a specific instance of an algorithmic problem class, such as the Boolean satisfiability problem, is called instance-specific algorithm selection. Over the past decade, the problem has received considerable attention, resulting in a number of different methods for algorithm selection. Although most of these methods are based on machine learning, surprisingly little work has been done on meta learning, that is, on taking advantage of the complementarity of existing algorithm selection methods in order to combine them into a single superior algorithm selector. In this paper, we introduce the problem of meta algorithm selection, which essentially asks for the best way to combine a given set of algorithm selectors. We present a general methodological framework for meta algorithm selection as well as several concrete learning methods as instantiations of this framework, essentially combining ideas of meta learning and ensemble learning. In an extensive experimental evaluation, we demonstrate that ensembles of algorithm selectors can significantly outperform single algorithm selectors and have the potential to form the new state of the art in algorithm selection.
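
One simple instantiation of the framework can be sketched as follows (an illustrative assumption, not one of the specific methods proposed in the paper): several trained algorithm selectors each recommend an algorithm for a given problem instance, and the ensemble returns the majority vote.

```python
# Illustrative sketch of meta algorithm selection by majority voting over
# several algorithm selectors; the selectors below are hypothetical stand-ins.
from collections import Counter

def ensemble_select(selectors, instance_features):
    """selectors: callables mapping instance features to an algorithm name."""
    votes = Counter(selector(instance_features) for selector in selectors)
    return votes.most_common(1)[0][0]

# Hypothetical base selectors (e.g., a k-NN selector, a regression-based
# selector, and a ranking-based selector), each recommending a solver.
selectors = [lambda x: "solver_a", lambda x: "solver_b", lambda x: "solver_a"]
print(ensemble_select(selectors, instance_features=None))  # -> solver_a
```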


A comparison of heuristic, statistical, and machine learning methods for heated tool butt welding of two different materials

K. Gevers, A. Tornede, M.D. Wever, V. Schöppner, E. Hüllermeier, Welding in the World (2022)

Heated tool butt welding is a method often used for joining thermoplastics, especially when the components are made out of different materials. The quality of the connection between the components crucially depends on a suitable choice of the parameters of the welding process, such as heating time, temperature, and the precise way how the parts are then welded. Moreover, when different materials are to be joined, the parameter values need to be tailored to the specifics of the respective material. To this end, in this paper, three approaches to tailor the parameter values to optimize the quality of the connection are compared: a heuristic by Potente, statistical experimental design, and Bayesian optimization. With the suitability for practice in mind, a series of experiments are carried out with these approaches, and their capabilities of proposing well-performing parameter values are investigated. As a result, Bayesian optimization is found to yield peak performance, but the costs for optimization are substantial. In contrast, the Potente heuristic does not require any experimentation and recommends parameter values with competitive quality.


2021

AutoML for Multi-Label Classification: Overview and Empirical Evaluation

M.D. Wever, A. Tornede, F. Mohr, E. Hüllermeier, IEEE Transactions on Pattern Analysis and Machine Intelligence (2021), pp. 1-1

Automated machine learning (AutoML) supports the algorithmic construction and data-specific customization of machine learning pipelines, including the selection, combination, and parametrization of machine learning algorithms as main constituents. Generally speaking, AutoML approaches comprise two major components: a search space model and an optimizer for traversing the space. Recent approaches have shown impressive results in the realm of supervised learning, most notably (single-label) classification (SLC). Moreover, first attempts at extending these approaches towards multi-label classification (MLC) have been made. While the space of candidate pipelines is already huge in SLC, the complexity of the search space is raised to an even higher power in MLC. One may wonder, therefore, whether and to what extent optimizers established for SLC can scale to this increased complexity, and how they compare to each other. This paper makes the following contributions: First, we survey existing approaches to AutoML for MLC. Second, we augment these approaches with optimizers not previously tried for MLC. Third, we propose a benchmarking framework that supports a fair and systematic comparison. Fourth, we conduct an extensive experimental study, evaluating the methods on a suite of MLC problems. We find a grammar-based best-first search to compare favorably to other optimizers.


Predicting Machine Learning Pipeline Runtimes in the Context of Automated Machine Learning

F. Mohr, M.D. Wever, A. Tornede, E. Hüllermeier, IEEE Transactions on Pattern Analysis and Machine Intelligence (2021)

Automated Machine Learning (AutoML) seeks to automatically find so-called machine learning pipelines that maximize the prediction performance when being used to train a model on a given dataset. One of the main and yet open challenges in AutoML is an effective use of computational resources: An AutoML process involves the evaluation of many candidate pipelines, which are costly but often ineffective because they are canceled due to a timeout. In this paper, we present an approach to predict the runtime of two-step machine learning pipelines with up to one pre-processor, which can be used to anticipate whether or not a pipeline will time out. Separate runtime models are trained offline for each algorithm that may be used in a pipeline, and an overall prediction is derived from these models. We empirically show that the approach increases successful evaluations made by an AutoML tool while preserving or even improving on the previously best solutions.
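
The idea of separate per-algorithm runtime models can be illustrated with a small sketch (one possible realization under stated assumptions, not the paper's implementation): each algorithm gets its own regressor trained offline on dataset meta-features and observed runtimes, a pipeline's predicted runtime is the sum of its steps' predictions, and that sum is compared against the timeout.

```python
# Hedged sketch: per-algorithm runtime regressors whose predictions are summed
# for a two-step pipeline and compared to a timeout. Meta-features and
# algorithm names are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class PipelineRuntimePredictor:
    def __init__(self):
        self.models = {}  # one runtime model per algorithm

    def fit(self, runtime_observations):
        """runtime_observations: dict algorithm -> (meta-feature matrix X, runtimes y in seconds)."""
        for algo, (X, y) in runtime_observations.items():
            model = RandomForestRegressor(n_estimators=100, random_state=0)
            model.fit(X, np.log1p(y))  # log scale to dampen outliers (an assumption)
            self.models[algo] = model

    def predict_pipeline_runtime(self, pipeline, metafeatures):
        """pipeline: sequence of algorithm names, e.g. ("PCA", "RandomForest")."""
        total = 0.0
        for algo in pipeline:
            log_runtime = self.models[algo].predict(metafeatures.reshape(1, -1))[0]
            total += np.expm1(log_runtime)
        return total

    def will_time_out(self, pipeline, metafeatures, timeout):
        return self.predict_pipeline_runtime(pipeline, metafeatures) > timeout
```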


Coevolution of Remaining Useful Lifetime Estimation Pipelines for Automated Predictive Maintenance

T. Tornede, A. Tornede, M.D. Wever, E. Hüllermeier, in: Proceedings of the Genetic and Evolutionary Computation Conference, 2021


Automated Machine Learning, Bounded Rationality, and Rational Metareasoning

E. Hüllermeier, F. Mohr, A. Tornede, M.D. Wever, 2021



Towards Green Automated Machine Learning: Status Quo and Future Directions

T. Tornede, A. Tornede, J.M. Hanselle, M.D. Wever, F. Mohr, E. Hüllermeier, in: arXiv:2111.05850, 2021

Automated machine learning (AutoML) strives for the automatic configuration of machine learning algorithms and their composition into an overall (software) solution - a machine learning pipeline - tailored to the learning task (dataset) at hand. Over the last decade, AutoML has developed into an independent research field with hundreds of contributions. While AutoML offers many prospects, it is also known to be quite resource-intensive, which is one of its major points of criticism. The primary cause for a high resource consumption is that many approaches rely on the (costly) evaluation of many machine learning pipelines while searching for good candidates. This problem is amplified in the context of research on AutoML methods, due to large scale experiments conducted with many datasets and approaches, each of them being run with several repetitions to rule out random effects. In the spirit of recent work on Green AI, this paper is written in an attempt to raise the awareness of AutoML researchers for the problem and to elaborate on possible remedies. To this end, we identify four categories of actions the community may take towards more sustainable research on AutoML, i.e. Green AutoML: design of AutoML systems, benchmarking, transparency and research incentives.




2020

Extreme Algorithm Selection with Dyadic Feature Representation

A. Tornede, M.D. Wever, E. Hüllermeier, in: Discovery Science, 2020


Hybrid Ranking and Regression for Algorithm Selection

J.M. Hanselle, A. Tornede, M.D. Wever, E. Hüllermeier, in: KI 2020: Advances in Artificial Intelligence, 2020


AutoML for Predictive Maintenance: One Tool to RUL Them All

T. Tornede, A. Tornede, M.D. Wever, F. Mohr, E. Hüllermeier, in: Proceedings of the ECMLPKDD 2020, 2020



Reliable Part-of-Speech Tagging of Historical Corpora through Set-Valued Prediction

S.H. Heid, M.D. Wever, E. Hüllermeier, in: Journal of Data Mining and Digital Humanities, 2020

Syntactic annotation of corpora in the form of part-of-speech (POS) tags is a key requirement for both linguistic research and subsequent automated natural language processing (NLP) tasks. This problem is commonly tackled using machine learning methods, i.e., by training a POS tagger on a sufficiently large corpus of labeled data. While the problem of POS tagging can essentially be considered as solved for modern languages, historical corpora turn out to be much more difficult, especially due to the lack of native speakers and sparsity of training data. Moreover, most texts have no sentences as we know them today, nor a common orthography. These irregularities render the task of automated POS tagging more difficult and error-prone. Under these circumstances, instead of forcing the POS tagger to predict and commit to a single tag, it should be enabled to express its uncertainty. In this paper, we consider POS tagging within the framework of set-valued prediction, which allows the POS tagger to express its uncertainty via predicting a set of candidate POS tags instead of guessing a single one. The goal is to guarantee a high confidence that the correct POS tag is included while keeping the number of candidates small. In our experimental study, we find that extending state-of-the-art POS taggers to set-valued prediction yields more precise and robust taggings, especially for unknown words, i.e., words not occurring in the training data.
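
One common way to realize such set-valued predictions can be sketched as follows (a hedged illustration; the paper's construction may differ in detail): given the tagger's probability distribution over POS tags for a token, return the smallest set of tags whose cumulative probability reaches a chosen confidence level.

```python
# Minimal sketch: smallest candidate set of POS tags reaching a target confidence.
def set_valued_prediction(tag_probabilities, confidence=0.95):
    """tag_probabilities: dict mapping POS tag -> probability for one token."""
    ranked = sorted(tag_probabilities.items(), key=lambda kv: kv[1], reverse=True)
    prediction, mass = [], 0.0
    for tag, prob in ranked:
        prediction.append(tag)
        mass += prob
        if mass >= confidence:
            break
    return prediction

# An uncertain token yields a larger candidate set than a confident one.
print(set_valued_prediction({"NN": 0.60, "NE": 0.36, "ADJ": 0.04}))  # ['NN', 'NE']
print(set_valued_prediction({"VVFIN": 0.97, "VVINF": 0.03}))         # ['VVFIN']
```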


Towards Meta-Algorithm Selection

A. Tornede, M.D. Wever, E. Hüllermeier, in: Workshop MetaLearn 2020 @ NeurIPS 2020, 2020


Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis

A. Tornede, M.D. Wever, S. Werner, F. Mohr, E. Hüllermeier, in: ACML 2020, 2020

Algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidate algorithms most suitable for a specific instance of an algorithmic problem class, where "suitability" often refers to an algorithm's runtime. Due to possibly extremely long runtimes of candidate algorithms, training data for algorithm selection models is usually generated under time constraints in the sense that not all algorithms are run to completion on all instances. Thus, training data usually comprises censored information, as the true runtime of algorithms timed out remains unknown. However, many standard AS approaches are not able to handle such information in a proper way. On the other side, survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime, as we demonstrate in this work. We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive. Moreover, taking advantage of a framework of this kind, we advocate a risk-averse approach to algorithm selection, in which the avoidance of a timeout is given high priority. In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
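
The decision-theoretic selection rule can be sketched in a few lines, assuming hypothetical runtime distributions in place of the survival models actually learned in the paper: each candidate algorithm's predicted runtime distribution is converted into a risk-averse expected cost in which timeouts are heavily penalized, and the algorithm with the lowest cost is chosen.

```python
# Hedged sketch of a risk-averse, distribution-based selection rule.
import numpy as np
from scipy.stats import lognorm

def risk_averse_choice(runtime_models, timeout, penalty_factor=10.0, n_samples=10_000):
    """runtime_models: dict algorithm -> frozen scipy distribution of predicted runtime."""
    costs = {}
    for algo, dist in runtime_models.items():
        samples = dist.rvs(size=n_samples, random_state=0)
        # Runs exceeding the timeout are charged a PAR-style penalty (an assumption).
        cost = np.where(samples <= timeout, samples, penalty_factor * timeout)
        costs[algo] = cost.mean()
    return min(costs, key=costs.get)

# Hypothetical runtime distributions predicted for one problem instance.
models = {"solver_a": lognorm(s=0.5, scale=40.0),   # usually moderate, rarely times out
          "solver_b": lognorm(s=1.5, scale=20.0)}   # faster on average, but heavy-tailed
print(risk_averse_choice(models, timeout=100.0))
```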


LiBRe: Label-Wise Selection of Base Learners in Binary Relevance for Multi-Label Classification

M.D. Wever, A. Tornede, F. Mohr, E. Hüllermeier, Springer, 2020

In multi-label classification (MLC), each instance is associated with a set of class labels, in contrast to standard classification where an instance is assigned a single label. Binary relevance (BR) learning, which reduces a multi-label to a set of binary classification problems, one per label, is arguably the most straightforward approach to MLC. In spite of its simplicity, BR proved to be competitive to more sophisticated MLC methods, and still achieves state-of-the-art performance for many loss functions. Somewhat surprisingly, the optimal choice of the base learner for tackling the binary classification problems has received very little attention so far. Taking advantage of the label independence assumption inherent to BR, we propose a label-wise base learner selection method optimizing label-wise macro averaged performance measures. In an extensive experimental evaluation, we find that our approach, called LiBRe, can significantly improve generalization performance.
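
A minimal sketch of label-wise base learner selection, assuming a small hypothetical pool of scikit-learn candidates (this is not the authors' code): for each label, every candidate is scored by cross-validation on that label's binary problem, and the best-scoring one is trained and kept for that label only.

```python
# Hedged sketch of label-wise base learner selection for binary relevance.
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

CANDIDATES = [LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(),
              RandomForestClassifier(n_estimators=50)]

def fit_label_wise_br(X, Y):
    """X: (n_samples, n_features); Y: (n_samples, n_labels) binary label matrix."""
    models = []
    for j in range(Y.shape[1]):
        y = Y[:, j]
        # Pick the candidate with the best cross-validated F1 score for this label.
        scores = [cross_val_score(clone(c), X, y, cv=3, scoring="f1").mean()
                  for c in CANDIDATES]
        best = clone(CANDIDATES[int(np.argmax(scores))])
        models.append(best.fit(X, y))
    return models

def predict(models, X):
    return np.column_stack([m.predict(X) for m in models])
```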


Multi-Oracle Coevolutionary Learning of Requirements Specifications from Examples in On-The-Fly Markets

M.D. Wever, L. van Rooijen, H. Hamann, Evolutionary Computation (2020), 28(2), pp. 165–193

In software engineering, the imprecise requirements of a user are transformed to a formal requirements specification during the requirements elicitation process. This process is usually guided by requirements engineers interviewing the user. We want to partially automate this first step of the software engineering process in order to enable users to specify a desired software system on their own. With our approach, users are only asked to provide exemplary behavioral descriptions. The problem of synthesizing a requirements specification from examples can partially be reduced to the problem of grammatical inference, to which we apply an active coevolutionary learning approach. However, this approach would usually require many feedback queries to be sent to the user. In this work, we extend and generalize our active learning approach to receive knowledge from multiple oracles, also known as proactive learning. The ‘user oracle’ represents input received from the user and the ‘knowledge oracle’ represents available, formalized domain knowledge. We call our two-oracle approach the ‘first apply knowledge then query’ (FAKT/Q) algorithm. We compare FAKT/Q to the active learning approach and provide an extensive benchmark evaluation. As result we find that the number of required user queries is reduced and the inference process is sped up significantly. Finally, with so-called On-The-Fly Markets, we present a motivation and an application of our approach where such knowledge is available.


2019

Grammatikwandel digital-kulturwissenschaftlich erforscht. Mittelniederdeutscher Sprachausbau im interdisziplinären Zugriff

M. Merten, N. Seemann, M.D. Wever, Niederdeutsches Jahrbuch (2019), 142, pp. 124-146


Towards Automated Machine Learning for Multi-Label Classification

M.D. Wever, F. Mohr, E. Hüllermeier, A. Hetzer, in: European Conference on Data Analytics (ECDA), Bayreuth, Germany, 2019


Algorithm Selection as Recommendation: From Collaborative Filtering to Dyad Ranking

A. Tornede, M.D. Wever, E. Hüllermeier, in: Proceedings - 29. Workshop Computational Intelligence, Dortmund, 28. - 29. November 2019, KIT Scientific Publishing, Karlsruhe, 2019, pp. 135-146


From Automated to On-The-Fly Machine Learning

F. Mohr, M.D. Wever, A. Tornede, E. Hüllermeier, in: Informatik 2019, Kassel, 2019


Automating Multi-Label Classification Extending ML-Plan

M.D. Wever, F. Mohr, A. Tornede, E. Hüllermeier, 2019

Existing tools for automated machine learning, such as Auto-WEKA, TPOT, auto-sklearn, and more recently ML-Plan, have shown impressive results for the tasks of single-label classification and regression. Yet, there is only little work on other types of machine learning problems so far. In particular, there is almost no work on automating the engineering of machine learning solutions for multi-label classification (MLC). We show how the scope of ML-Plan, an AutoML-tool for multi-class classification, can be extended towards MLC using MEKA, which is a multi-label extension of the well-known Java library WEKA. The resulting approach recursively refines MEKA's multi-label classifiers, nesting other multi-label classifiers for meta algorithms and single-label classifiers provided by WEKA as base learners. In our evaluation, we find that the proposed approach yields strong results and performs significantly better than a set of baselines we compare with.


2018


Programmatic Task Network Planning

F. Mohr, T. Lettmann, E. Hüllermeier, M.D. Wever, in: Proceedings of the 1st ICAPS Workshop on Hierarchical Planning, AAAI, 2018, pp. 31-39


On-The-Fly Service Construction with Prototypes

F. Mohr, M.D. Wever, E. Hüllermeier, in: SCC, IEEE Computer Society, 2018


ML-Plan: Automated Machine Learning via Hierarchical Planning

F. Mohr, M.D. Wever, E. Hüllermeier, Machine Learning (2018), pp. 1495-1515

Automated machine learning (AutoML) seeks to automatically select, compose, and parametrize machine learning algorithms, so as to achieve optimal performance on a given task (dataset). Although current approaches to AutoML have already produced impressive results, the field is still far from mature, and new techniques are still being developed. In this paper, we present ML-Plan, a new approach to AutoML based on hierarchical planning. To highlight the potential of this approach, we compare ML-Plan to the state-of-the-art frameworks Auto-WEKA, auto-sklearn, and TPOT. In an extensive series of experiments, we show that ML-Plan is highly competitive and often outperforms existing approaches.


Reduction Stumps for Multi-Class Classification

F. Mohr, M.D. Wever, E. Hüllermeier, in: Proceedings of the Symposium on Intelligent Data Analysis, 2018


ML-Plan for Unlimited-Length Machine Learning Pipelines

M.D. Wever, F. Mohr, E. Hüllermeier, in: ICML 2018 AutoML Workshop, 2018

In automated machine learning (AutoML), the process of engineering machine learning applications with respect to a specific problem is (partially) automated. Various AutoML tools have already been introduced to provide out-of-the-box machine learning functionality. More specifically, by selecting machine learning algorithms and optimizing their hyperparameters, these tools produce a machine learning pipeline tailored to the problem at hand. Except for TPOT, all of these tools restrict the maximum number of processing steps of such a pipeline. However, as TPOT follows an evolutionary approach, it suffers from performance issues when dealing with larger datasets. In this paper, we present an alternative approach leveraging hierarchical planning to configure machine learning pipelines that are unlimited in length. We evaluate our approach and find its performance to be competitive with other AutoML tools, including TPOT.


Ensembles of Evolved Nested Dichotomies for Classification

M.D. Wever, F. Mohr, E. Hüllermeier, in: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2018, Kyoto, Japan, July 15-19, 2018, ACM, 2018

In multinomial classification, reduction techniques are commonly used to decompose the original learning problem into several simpler problems. For example, by recursively bisecting the original set of classes, so-called nested dichotomies define a set of binary classification problems that are organized in the structure of a binary tree. In contrast to the existing one-shot heuristics for constructing nested dichotomies and motivated by recent work on algorithm configuration, we propose a genetic algorithm for optimizing the structure of such dichotomies. A key component of this approach is the proposed genetic representation that facilitates the application of standard genetic operators, while still supporting the exchange of partial solutions under recombination. We evaluate the approach in an extensive experimental study, showing that it yields classifiers with superior generalization performance.
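
To make the underlying data structure concrete, the following sketch builds a random nested dichotomy as a binary tree over a class set (an illustration only; the genetic representation and operators proposed in the paper are not reproduced here).

```python
# Hedged sketch: a nested dichotomy as a recursively bisected class set.
# Each inner node would hold a binary classifier separating its two subsets.
import random

def random_nested_dichotomy(classes, rng=random.Random(0)):
    classes = list(classes)
    if len(classes) == 1:
        return classes[0]                      # leaf: a single class
    rng.shuffle(classes)
    split = rng.randint(1, len(classes) - 1)   # bisect the class set
    left, right = classes[:split], classes[split:]
    return (random_nested_dichotomy(left, rng), random_nested_dichotomy(right, rng))

# A genetic algorithm would maintain a population of such trees, recombine
# subtrees, and evaluate fitness via the performance of the induced classifiers.
print(random_nested_dichotomy(["cat", "dog", "bird", "fish"]))
```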




Supporting the Cognitive Process in Annotation Tasks

N. Seemann, M. Geierhos, M. Merten, D. Tophinke, M.D. Wever, E. Hüllermeier, in: Postersession Computerlinguistik der 40. Jahrestagung der Deutschen Gesellschaft für Sprachwissenschaft, Stuttgart, Germany, 2018


2017

Automatic Machine Learning: Hierarchical Planning Versus Evolutionary Optimization

M.D. Wever, F. Mohr, E. Hüllermeier, in: 27th Workshop Computational Intelligence, 2017

These days, there is a strong rise in the need for machine learning applications, requiring an automation of machine learning engineering, which is referred to as AutoML. In AutoML, the selection, composition, and parametrization of machine learning algorithms are automated and tailored to a specific problem, resulting in a machine learning pipeline. Current approaches reduce the AutoML problem to the optimization of hyperparameters. Based on recursive task networks, in this paper we present one approach from the field of automated planning and one evolutionary optimization approach. Instead of simply parametrizing a given pipeline, this allows for structure optimization of machine learning pipelines as well. We evaluate the two approaches in an extensive evaluation, finding both approaches to have their strengths in different areas. Moreover, the two approaches outperform the state-of-the-art tool Auto-WEKA in many settings.



Active Coevolutionary Learning of Requirements Specifications from Examples

M.D. Wever, L. van Rooijen, H. Hamann, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2017, pp. 1327-1334

Within software engineering, requirements engineering starts from imprecise and vague user requirements descriptions and infers precise, formalized specifications. Techniques, such as interviewing by requirements engineers, are typically applied to identify the user’s needs. We want to partially automate even this first step of requirements elicitation by methods of evolutionary computation. The idea is to enable users to specify their desired software by listing examples of behavioral descriptions. Users initially specify two lists of operation sequences, one with desired behaviors and one with forbidden behaviors. Then, we search for the appropriate formal software specification in the form of a deterministic finite automaton. We solve this problem known as grammatical inference with an active coevolutionary approach following Bongard and Lipson [2]. The coevolutionary process alternates between two phases: (A) additional training data is actively proposed by an evolutionary process and the user is interactively asked to label it; (B) appropriate automata are then evolved to solve this extended grammatical inference problem. Our approach leverages multi-objective evolution in both phases and outperforms the state-of-the-art technique [2] for input alphabet sizes of three and more, which are relevant to our problem domain of requirements specification.


