23rd International Workshop on Learning Classifier Systems (IWLCS 2020)

News: Keynote Announcement

We are excited to announce that Prof. Dr. Ryan J. Urbanowicz from the University of Pennsylvania (US) has kindly agreed to give a keynote at IWLCS 2020.

Aims and Scope of IWLCS

Learning Classifier Systems (LCSs) are a class of powerful Evolutionary Machine Learning (EML) algorithms that combine the global search of evolutionary algorithms with the local optimization of reinforcement or supervised learning. They form predictions by combining an evolving set of localized models, each of which is responsible for a part of the problem space. While the localized models themselves are trained using machine learning techniques ranging from simple adaptive filters to more complex ones such as artificial neural networks, their responsibilities are optimized by powerful heuristics such as genetic algorithms (GAs).
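To make this architecture concrete, the following is a minimal, purely illustrative Python sketch (all names such as Classifier, predict and evolve are hypothetical and not taken from any particular LCS implementation): each classifier holds an interval condition defining the part of a one-dimensional problem space it is responsible for plus a simple local model updated by an adaptive filter; the system's prediction mixes the local models of all matching classifiers, while a toy GA mutates conditions and thereby adapts the classifiers' responsibilities.

    # Minimal sketch of the LCS idea described above; hypothetical names, not a
    # reference implementation of any specific LCS such as XCS.
    import random

    class Classifier:
        def __init__(self, lower, upper):
            self.lower, self.upper = sorted((lower, upper))  # interval condition
            self.prediction = 0.0   # local model: a single constant value
            self.error = 1.0        # running estimate of the absolute error
            self.beta = 0.2         # learning rate of the adaptive filter

        def matches(self, x):
            # Is this classifier responsible for input x?
            return self.lower <= x <= self.upper

        def update(self, x, y):
            # Widrow-Hoff-style update of the local model and its error estimate.
            self.prediction += self.beta * (y - self.prediction)
            self.error += self.beta * (abs(y - self.prediction) - self.error)

    def predict(population, x):
        # Global prediction: error-weighted mixing of all matching local models.
        matching = [cl for cl in population if cl.matches(x)]
        if not matching:
            return 0.0
        weights = [1.0 / (cl.error + 1e-6) for cl in matching]
        return sum(w * cl.prediction for w, cl in zip(weights, matching)) / sum(weights)

    def evolve(population, n_offspring=5, max_size=20):
        # Toy GA: copy low-error classifiers and mutate their interval bounds,
        # thereby adapting the partitioning of the problem space.
        parents = sorted(population, key=lambda cl: cl.error)[:n_offspring]
        for p in parents:
            child = Classifier(p.lower + random.uniform(-0.1, 0.1),
                               p.upper + random.uniform(-0.1, 0.1))
            child.prediction, child.error = p.prediction, p.error
            population.append(child)
        # Keep the population bounded by deleting the worst classifiers.
        population.sort(key=lambda cl: cl.error)
        del population[max_size:]

    if __name__ == "__main__":
        target = lambda x: 1.0 if x < 0.5 else -1.0   # simple piecewise target
        population = [Classifier(random.random(), random.random()) for _ in range(20)]
        for step in range(2000):
            x = random.random()
            for cl in population:
                if cl.matches(x):
                    cl.update(x, target(x))
            if step % 50 == 0:
                evolve(population)
        print(predict(population, 0.25), predict(population, 0.75))  # roughly 1.0 and -1.0

Real LCS variants differ substantially from this sketch, e.g. in how fitness is defined, how the GA selects and recombines classifiers, and in using richer local models; the sketch only illustrates the division of labor between local learning and evolutionary partitioning described above.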

Over the last four decades, LCSs have shown great potential in various problem domains such as behaviour modeling, online control, function approximation, classification, prediction and data mining. Their unique strengths lie in their adaptability and flexibility, the minimal set of assumptions they make and, most importantly, their transparency. Topics that have been central to LCS research for many years are increasingly becoming a matter of high interest for other machine learning communities as well; the prime example is the human interpretability of learned models, which especially the booming Deep Learning community is keen on obtaining (Explainable AI). This workshop serves as a critical spotlight to disseminate the long experience of LCS research in these areas, to present new and ongoing research in the field, to attract new interest and to expose the machine learning community to an alternative, often advantageous, modeling paradigm. Particular topics of interest include (but are not limited to):

  • advances in LCS methods (local models, problem space partitioning, classifier mixing, …)
  • evolutionary reinforcement learning (multi-step LCS, neuroevolution, …)
  • formal developments in LCSs (provably optimal parametrization, time bounds, generalization, …)
  • interpretability of evolved knowledge bases (knowledge extraction, visualization, …)
  • advances in LCS paradigms (Michigan/Pittsburgh style, hybrids, iterative rule learning, …)
  • hyperparameter optimization (hyperparameter selection, online self-adaption, …)
  • applications (medical domains, bio-informatics, intelligence in games, self-adaptive cyber-physical systems, …)
  • optimizations and parallel implementations (GPU acceleration, matching algorithms, …)
  • other evolutionary rule-based ML systems (artificial immune/evolving fuzzy rule-based systems, …)

Deadlines

Submission deadline: April 3, 2020

Decisions due: April 17, 2020

Camera-ready version: April 24, 2020

Author registration deadline: April 27, 2020


Submission Information

Papers are expected to report on innovative ideas and novel research results around LCSs and Evolutionary Rule-based Machine Learning (ERBML) in general. Reported results and findings have to be put in context with the current state of the art and should provide details and metrics allowing for an assessment of practical as well as statistical significance. Contributions bringing in novel ideas and concepts from related fields such as general ML and EC are explicitly solicited, but authors are at the same time strongly encouraged to clearly state the relevance to and relation with the field of LCS and ERBML.

Submissions must

Organization Committee

Anthony Stein

University of Augsburg (DE)

Anthony Stein is a research associate at the Department of Computer Science of the University of Augsburg, Germany. He received his B.Sc. in Business Information Systems from the University of Applied Sciences in Augsburg in 2012. He then moved to the University of Augsburg for his master's degree (M.Sc.) in computer science with a minor in information economics, which he received in 2014. He completed his doctorate (Dr. rer. nat.) in computer science in November 2019. Since his master's thesis project, he has delved into the nature of Learning Classifier Systems and has been a passionate follower of and contributor to ongoing research in this field. His research focuses on the applicability of EML techniques in self-learning adaptive systems that have to act in real-world environments bearing challenges such as data imbalance and ongoing change. In his work he therefore investigates the use of interpolation and active learning methods to change how classifiers are initialized, how insufficiently covered niches of the problem space are filled, and how adequate actions are selected. A further aspect he investigates is how Learning Classifier Systems can be enhanced toward proactive knowledge construction. Since 2018, he has been an elected organizing committee member of the International Workshop on Learning Classifier Systems (IWLCS) and serves as a reviewer for GECCO's EML track. He has also co-organized the Workshop Series on Autonomously Learning and Optimizing Systems (SAOS) for four years. At GECCO 2019, he started the next edition of the introductory tutorial on Learning Classifier Systems.

Masaya Nakata

Yokohama National University (JP)

Masaya Nakata is an associate professor at the Faculty of Engineering, Yokohama National University. He received the B.A., M.Sc. and Ph.D. degrees in informatics from the University of Electro-Communications, Japan, in 2011, 2013 and 2016, respectively. He was a visiting student at the School of Engineering and Computer Science, Victoria University of Wellington, from 2014, at the Department of Electronics and Information, Politecnico di Milano, Milan, Italy, in 2013, and at the Department of Computer Science, University of Bristol, Bristol, UK, in 2014. His research interests are in evolutionary computation, reinforcement learning and data mining, more specifically in learning classifier systems. He received the best paper award and the IEEE Computational Intelligence Society Japan Chapter Young Researcher Award at the Japanese Symposium of Evolutionary Computation 2012. He was a co-organizer of the International Workshop on Learning Classifier Systems (IWLCS) in 2015-2016 as well as 2018-2019.

David Pätzel

University of Augsburg (DE)

David Pätzel is a PhD student at the Department of Computer Science at the University of Augsburg, Germany. He received his B.Sc. in Computer Science from the University of Augsburg in 2015 and his M.Sc. in the same field in 2017. His main research is directed towards Learning Classifier Systems, especially XCS and its derivatives, with a focus on developing a more formal understanding of LCS that can be used to improve existing algorithms by alleviating known weaknesses as well as to discover new ones. Besides that, his research interests include reinforcement learning, evolutionary machine learning algorithms and purely functional programming.

Advisory Board

    • Jaume Bacardit, Newcastle University, UK
    • Will Browne, Victoria University of Wellington, New Zealand
    • Martin V. Butz, University of Tübingen, Germany
    • John Holmes, University of Pennsylvania, US
    • Muhammad Iqbal, Xtracta, New Zealand
    • Pier Luca Lanzi, Politecnico Di Milano, Italy
    • Kamran Shafi, University of New South Wales, Australia
    • Wolfgang Stolzmann, CMORE Automotive, Germany
    • Ryan J. Urbanowicz, University of Pennsylvania, US
    • Stewart W. Wilson, Prediction Dynamics, US

Preliminary Program Committee

    • Jaume Bacardit, Newcastle University, UK
    • Lashon B. Booker, The MITRE Corporation, US
    • Will N. Browne, Victoria University of Wellington, New Zealand
    • Larry Bull, The University of the West of England, UK
    • Martin V. Butz, University of Tübingen, Germany
    • Ali Hamzeh, Shiraz University, Iran
    • Luis Miramontes Hercog, University of Notre Dame, US
    • John Holmes, University of Pennsylvania, US
    • Muhammad Iqbal, Xtracta, New Zealand
    • Karthik Kuber, Microsoft, US
    • Pier Luca Lanzi, Politecnico Di Milano, Italy
    • Daniele Loiacono, Politecnico di Milano, Italy
    • Masaya Nakata, Yokohama National University, Japan
    • Yusuke Nojima, Osaka Prefecture University, Japan
    • David Pätzel, University of Augsburg, Germany
    • Sonia Schulenburg, Level E Research Limited, UK
    • Kamran Shafi, University of New South Wales, Australia
    • Shinichi Shirakawa, Yokohama National University, Japan
    • Anthony Stein, University of Augsburg, Germany
    • Wolfgang Stolzmann, CMORE Automotive, Germany
    • Takato Tatsumi, University of Electro-Communications, Japan
    • Ryan J. Urbanowicz, University of Pennsylvania, US
    • Danilo V. Vargas, Kyushu University, Japan
    • Stewart W. Wilson, Prediction Dynamics, US