


Photo: Universität Paderborn

Julian Lienen


Intelligent Systems and Machine Learning

Research Associate

+49 5251 60-3345
Pohlweg 51
33098 Paderborn



Instance weighting through data imprecisiation

J. Lienen, E. Hüllermeier, International Journal of Approximate Reasoning (2021)

Monocular Depth Estimation via Listwise Ranking using the Plackett-Luce Model

J. Lienen, E. Hüllermeier, R. Ewerth, N. Nommensen, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 2021

From Label Smoothing to Label Relaxation

J. Lienen, E. Hüllermeier, in: Proceedings of the 35th AAAI Conference on Artificial Intelligence, AAAI, AAAI Press, 2021, pp. 8583-8591

Credal Self-Supervised Learning

J. Lienen, E. Hüllermeier, in: arXiv:2106.11853, 2021

Self-training is an effective approach to semi-supervised learning. The key idea is to let the learner itself iteratively generate "pseudo-supervision" for unlabeled instances based on its current hypothesis. In combination with consistency regularization, pseudo-labeling has shown promising performance in various domains, for example in computer vision. To account for the hypothetical nature of the pseudo-labels, these are commonly provided in the form of probability distributions. Still, one may argue that even a probability distribution represents an excessive level of informedness, as it suggests that the learner precisely knows the ground-truth conditional probabilities. In our approach, we therefore allow the learner to label instances in the form of credal sets, that is, sets of (candidate) probability distributions. Thanks to this increased expressiveness, the learner is able to represent uncertainty and a lack of knowledge in a more flexible and more faithful manner. To learn from weakly labeled data of that kind, we leverage methods that have recently been proposed in the realm of so-called superset learning. In an exhaustive empirical evaluation, we compare our methodology to state-of-the-art self-supervision approaches, showing competitive to superior performance especially in low-label scenarios incorporating a high degree of uncertainty.
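The core idea of labeling with credal sets can be illustrated with a minimal sketch. Here a point prediction is relaxed into per-class probability intervals, and membership of a candidate distribution is tested against those intervals. Note that this interval-based construction and the function names are illustrative assumptions for exposition; the paper builds its credal sets differently.

```python
def credal_set_from_prediction(probs, eps):
    """Relax a point prediction into a simple credal set given by
    per-class probability intervals [max(0, p_i - eps), min(1, p_i + eps)].
    Illustrative construction only, not the paper's exact method."""
    return [(max(0.0, p - eps), min(1.0, p + eps)) for p in probs]

def contains(credal, q, tol=1e-9):
    """Check whether a candidate distribution q lies in the credal set:
    q must be a proper distribution and respect every class interval."""
    if abs(sum(q) - 1.0) > 1e-6:
        return False
    return all(lo - tol <= qi <= hi + tol
               for (lo, hi), qi in zip(credal, q))

# A confident prediction yields a narrow set; eps widens it to
# express the learner's lack of knowledge about the pseudo-label.
pred = [0.7, 0.2, 0.1]
credal = credal_set_from_prediction(pred, 0.1)
```

The larger `eps` is, the more candidate distributions the credal set admits, so a less confident learner commits to weaker pseudo-supervision instead of a single, possibly wrong, probability vector.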


Monocular Depth Estimation via Listwise Ranking using the Plackett-Luce model

J. Lienen, E. Hüllermeier, in: arXiv:2010.13118, 2020

In many real-world applications, the relative depth of objects in an image is crucial for scene understanding, e.g., to calculate occlusions in augmented reality scenes. Predicting depth in monocular images has recently been tackled using machine learning methods, mainly by treating the problem as a regression task. Yet, being interested in an order relation in the first place, ranking methods suggest themselves as a natural alternative to regression, and indeed, ranking approaches leveraging pairwise comparisons as training information ("object A is closer to the camera than B") have shown promising performance on this problem. In this paper, we elaborate on the use of so-called "listwise" ranking as a generalization of the pairwise approach. Listwise ranking goes beyond pairwise comparisons between objects and considers rankings of arbitrary length as training information. Our approach is based on the Plackett-Luce model, a probability distribution on rankings, which we combine with a state-of-the-art neural network architecture and a sampling strategy to reduce training complexity. An empirical evaluation on benchmark data in a "zero-shot" setting demonstrates the effectiveness of our proposal compared to existing ranking and regression methods.


