23rd International Workshop on Learning Classifier Systems IWLCS 2020

Preliminary Agenda

Session 1

Welcome Note

Anthony Stein, Masaya Nakata, David Pätzel

Keynote: Interpretability challenges and opportunities in rule-based machine learning

Ryan J. Urbanowicz

An Overview of LCS Research from IWLCS 2019 to 2020

David Pätzel, Anthony Stein, Masaya Nakata

A Scikit-learn Compatible Learning Classifier System

Robert F. Zhang, Ryan J. Urbanowicz

XCS as a Reinforcement Learning Approach to Automatic Test Case Prioritization

Lukas Rosenbauer, Anthony Stein, Roland Maier, David Pätzel, Jörg Hähner


Session 2

Generic Approaches for Parallel Rule Matching in Learning Classifier Systems

Lukas Rosenbauer, Anthony Stein, Jörg Hähner

An Adaption Mechanism for the Error Threshold of XCSF

Tim Hansmeier, Paul Kaufmann, Marco Platzner

Learning Classifier Systems: Appreciating the Lateralized Approach

Abubakar Siddique, Will N. Browne, Gina M. Grimshaw

PEPACS: Integrating Probability-Enhanced Predictions to ACS2

Romain Orhand, Anne Jeannin-Girardon, Pierre Parrend, Pierre Collet

Investigating Exploration Techniques for ACS in Discretized Real-Valued Environments

Norbert Kozlowski, Olgierd Unold

General discussion

Keynote Announcement

We are excited to announce that Prof. Dr. Ryan J. Urbanowicz from the University of Pennsylvania (US) has kindly agreed to give a keynote at IWLCS 2020.


Interpretability challenges and opportunities in rule-based machine learning

One of the key advantages of learning classifier system (LCS) algorithms is their potential for interpretability as machine learners. This is largely a product of their classic rule-based representation, which utilizes a set of human-readable IF:THEN rules. But what does it mean for an algorithm to be interpretable? How do aspects of a given problem or application hamper interpretability in the context of an LCS algorithm? In this talk, we will consider challenges and opportunities with respect to the interpretability of different LCS algorithms, contrasting them with other types of evolutionary and non-evolutionary machine learners.


Dr. Urbanowicz is an Assistant Professor of Informatics in the Department of Biostatistics, Epidemiology, and Informatics at the Perelman School of Medicine of the University of Pennsylvania, Philadelphia, PA, USA. His educational background is interdisciplinary, at the intersection of biology, engineering, computer science, and biostatistics. His current research focuses on the development, evaluation, and application of bioinformatics, artificial intelligence, and machine learning methods to biomedical and clinical problems. This includes work focused largely on the development and application of evolutionary rule-based machine learning methods, for which he pioneered statistical and visualization strategies to make automated knowledge discovery a practical reality for these types of algorithms.

Aims and Scope of IWLCS

Learning Classifier Systems (LCSs) are a class of powerful Evolutionary Machine Learning (EML) algorithms that combine the global search of evolutionary algorithms with the local optimization of reinforcement or supervised learning. They form predictions by combining an evolving set of localized models, each of which is responsible for a part of the problem space. While the localized models themselves are trained using machine learning techniques ranging from simple adaptive filters to more complex ones such as artificial neural networks, their responsibilities are optimized by powerful heuristics such as genetic algorithms (GAs).
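To make this scheme concrete, the following is a minimal, purely illustrative Python sketch of the prediction side of such a system: rules with interval conditions hold simple local models (here, XCS-style payoff predictions trained by the Widrow-Hoff delta rule, i.e. a simple adaptive filter), and the system prediction mixes the local predictions of all matching rules, weighted by fitness. All names are our own, the rule population is fixed by hand, and the evolutionary component that would adapt conditions and fitnesses is omitted.

```python
import random

class Rule:
    """A classifier: an interval condition plus a scalar local prediction
    trained with the Widrow-Hoff delta rule (a simple adaptive filter)."""
    def __init__(self, lower, upper, fitness=1.0):
        self.lower, self.upper = lower, upper
        self.prediction = 0.0   # local model: running estimate of y in this niche
        self.fitness = fitness  # mixing weight; in a full LCS the GA adapts this

    def matches(self, x):
        return self.lower <= x <= self.upper

    def update(self, y, beta=0.1):
        # Widrow-Hoff update: move the estimate toward the observed target.
        self.prediction += beta * (y - self.prediction)

def system_prediction(population, x):
    """Fitness-weighted mixing of the local predictions of all matching rules."""
    match_set = [r for r in population if r.matches(x)]
    if not match_set:
        return None  # a full LCS would trigger covering here
    total = sum(r.fitness for r in match_set)
    return sum(r.fitness * r.prediction for r in match_set) / total

# Two rules partitioning [0, 1]; each learns the mean of y = 2x in its niche.
rng = random.Random(0)
pop = [Rule(0.0, 0.5), Rule(0.5, 1.0)]
for _ in range(2000):
    x = rng.random()
    for r in pop:
        if r.matches(x):
            r.update(2.0 * x)
```

After training, `system_prediction(pop, 0.25)` is close to 0.5 and `system_prediction(pop, 0.75)` close to 1.5: each rule has specialized on its niche, and the mixing function turns the set of local estimates into a global prediction.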

Over the last four decades, LCSs have shown great potential in various problem domains such as behaviour modeling, online control, function approximation, classification, prediction, and data mining. Their unique strengths lie in their adaptability and flexibility, in their making only a minimal set of assumptions and, most importantly, in their transparency. Topics that have been central to LCS research for many years are increasingly becoming matters of high interest for other machine learning communities as well; the prime example is the human interpretability of generated models, which the booming Deep Learning community in particular is keen on obtaining (Explainable AI). This workshop serves as a critical spotlight to disseminate the long experience of LCS research in these areas, to present new and ongoing research in the field, to attract new interest, and to expose the machine learning community to an alternative, often advantageous modeling paradigm. Particular topics of interest include (but are not limited to):

  • advances in LCS methods (local models, problem space partitioning, classifier mixing, …)
  • evolutionary reinforcement learning (multi-step LCS, neuroevolution, …)
  • formal developments in LCSs (provably optimal parametrization, time bounds, generalization, …)
  • interpretability of evolved knowledge bases (knowledge extraction, visualization, …)
  • advances in LCS paradigms (Michigan/Pittsburgh style, hybrids, iterative rule learning, …)
  • hyperparameter optimization (hyperparameter selection, online self-adaption, …)
  • applications (medical domains, bio-informatics, intelligence in games, self-adaptive cyber-physical systems, …)
  • optimizations and parallel implementations (GPU acceleration, matching algorithms, …)
  • other evolutionary rule-based ML systems (artificial immune/evolving fuzzy rule-based systems, …)

Deadlines

(extended due to COVID-19 pandemic)

Submission deadline: April 17, 2020

Decisions due: May 1, 2020

Camera-ready version: May 8, 2020

Early and mandatory author registration deadline: May 11, 2020


Submission Information

Papers are expected to report on innovative ideas and novel research results around the topic of LCSs and Evolutionary Rule-based Machine Learning (ERBML) in general. Reported results and findings must be situated within the current state of the art and should provide details and metrics that allow an assessment of practical as well as statistical significance. Contributions bringing in novel ideas and concepts from related fields such as general ML and EC are explicitly solicited, but authors are at the same time strongly encouraged to clearly state their relevance and relation to the field of LCS and ERBML.

Submissions must


Instructions for presenters

The presenting authors of accepted workshop papers are asked to prepare a 15 minute oral presentation to be held via live stream. Each presentation is followed by a 5 minute discussion. The sequence of presentations is given in the final program (to be published soon).

Please find the general instructions for the presentation of workshop papers here.


Organization Committee

Anthony Stein

University of Hohenheim (DE)

Anthony Stein is a Tenure-Track Professor at the University of Hohenheim, where he heads the Department of Artificial Intelligence in Agricultural Engineering. He received his B.Sc. in Business Information Systems from the University of Applied Sciences Augsburg in 2012. He then moved to the University of Augsburg for his master's degree (M.Sc.) in computer science with a minor in information economics, which he received in 2014. Since November 2019, he has also held a doctorate (Dr. rer. nat.) in computer science from the University of Augsburg. Since his master's thesis project, he has delved into the nature of Learning Classifier Systems and has been a passionate follower of, and contributor to, ongoing research in this field. His research focuses on the applicability of EML techniques in self-learning adaptive systems that have to act in real-world environments bearing challenges such as data imbalance and ongoing change. In his work, he therefore investigates the utilization of interpolation and active learning methods to change how classifiers are initialized, how insufficiently covered problem space niches are filled, and how adequate actions are selected. A further aspect he investigates is the question of how Learning Classifier Systems can be enhanced toward proactive knowledge construction. Since 2018, he has been an elected organizing committee member of the International Workshop on Learning Classifier Systems (IWLCS) and serves as a reviewer for GECCO's EML track. He has also co-organized the Workshop Series on Autonomously Learning and Optimizing Systems (SAOS) for four years. At GECCO 2019, he started the next edition of the introductory tutorial on Learning Classifier Systems.

Masaya Nakata

Yokohama National University (JP)

Masaya Nakata is an associate professor at the Faculty of Engineering, Yokohama National University. He received the B.A., M.Sc., and Ph.D. degrees in informatics from the University of Electro-Communications, Japan, in 2011, 2013, and 2016, respectively. He was a visiting student at the School of Engineering and Computer Science at Victoria University of Wellington from 2014, at the Department of Electronics and Information, Politecnico di Milano, Milan, Italy, in 2013, and at the Department of Computer Science, University of Bristol, Bristol, UK, in 2014. His research interests are in evolutionary computation, reinforcement learning, and data mining and, more specifically, in learning classifier systems. He received the best paper award and the IEEE Computational Intelligence Society Japan Chapter Young Researcher Award at the Japanese Symposium of Evolutionary Computation 2012. He was a co-organizer of the International Workshop on Learning Classifier Systems (IWLCS) in 2015-2016 as well as 2018-2019.

David Pätzel

University of Augsburg (DE)

David Pätzel is a PhD student at the Department of Computer Science at the University of Augsburg, Germany. He received his B.Sc. in Computer Science from the University of Augsburg in 2015 and his M.Sc. in the same field in 2017. His main research is directed towards Learning Classifier Systems, especially XCS and its derivatives, with a focus on developing a more formal understanding of LCS that can be used to improve existing algorithms by alleviating known weaknesses as well as discovering new ones. Besides that, his research interests include reinforcement learning, evolutionary machine learning algorithms and pure functional programming.

Advisory Board

    • Jaume Bacardit, Newcastle University, UK
    • Will Browne, Victoria University of Wellington, New Zealand
    • Martin V. Butz, University of Tübingen, Germany
    • John Holmes, University of Pennsylvania, US
    • Muhammad Iqbal, Xtracta, New Zealand
    • Pier Luca Lanzi, Politecnico Di Milano, Italy
    • Kamran Shafi, University of New South Wales, Australia
    • Wolfgang Stolzmann, CMORE Automotive, Germany
    • Ryan J. Urbanowicz, University of Pennsylvania, US
    • Stewart W. Wilson, Prediction Dynamics, US

Program Committee

    • Jaume Bacardit, Newcastle University, UK
    • Lashon B. Booker, The MITRE Corporation, US
    • Will N. Browne, Victoria University of Wellington, New Zealand
    • Larry Bull, The University of the West of England, UK
    • Martin V. Butz, University of Tübingen, Germany
    • Ali Hamzeh, Shiraz University, Iran
    • Luis Miramontes Hercog, University of Notre Dame, US
    • Daniele Loiacono, Politecnico di Milano, Italy
    • Masaya Nakata, Yokohama National University, Japan
    • Yusuke Nojima, Osaka Prefecture University, Japan
    • David Pätzel, University of Augsburg, Germany
    • Sonia Schulenburg, Level E Research Limited, UK
    • Shinichi Shirakawa, Yokohama National University, Japan
    • Anthony Stein, University of Hohenheim, Germany
    • Sven Tomforde, Kiel University, Germany
    • Ryan J. Urbanowicz, University of Pennsylvania, US
    • Danilo V. Vargas, Kyushu University, Japan
    • Stewart W. Wilson, Prediction Dynamics, US