22nd International Workshop on Learning Classifier Systems (IWLCS 2019)
IWLCS 2019 Keynote
We are excited to announce that Prof. Dr. Martin V. Butz from the University of Tübingen, Germany, has kindly agreed to give a keynote at IWLCS.
Prediction, Anticipation, and Behavior in Learning Cognitive Systems
by Martin V. Butz, University of Tübingen
Various disciplines and lines of cognitive science research view our brain as a generative, event-predictive cognitive system. My own cognitive psychological research confirms that our mind continuously reaches into the future -- at least to a certain extent -- dependent on current event-predictive estimates. Accordingly, I will sketch out an integrative theory of learning predictions and anticipations, and of generating behavior with the help of the learned predictive models. Moreover, I will give a short overview of our computational models and artificial cognitive systems, which learn hierarchical, event-predictive encodings from sensorimotor experiences for the purpose of optimizing flexible, highly adaptive, interactive goal-directed behavior.
Prof. Martin Butz is a full professor at the Department of Computer Science and the Department of Psychology at the University of Tübingen. While his main background lies in computer science (Diploma and PhD in Computer Science), and in machine learning in particular, he has been collaborating with cognitive psychologists, computational neuroscientists, roboticists, and linguists for many years. His research focuses on neuro-computational cognitive modeling and cognitive science more generally. Currently, he is modeling the development of conceptual, cognitive language structures from sensorimotor experiences within artificial learning systems. His recent monograph is "How the Mind Comes Into Being: Introducing Cognitive Science from a Functional and Computational Perspective" (Oxford University Press, 2017).
Aims and Scope of IWLCS
In the research field of Evolutionary Machine Learning (EML), Learning Classifier Systems (LCS) provide a powerful technique that has received substantial research attention over nearly four decades. Since John Holland's formalization of the Genetic Algorithm (GA) and his conceptualization of the first LCS - the Cognitive System One (CS-1) - in the 1970s, the LCS paradigm has broadened greatly into a framework encompassing many algorithmic architectures, knowledge representations, rule discovery mechanisms, credit assignment schemes, and additional integrated heuristics. This particular kind of EML technique holds great potential for application to various problem domains such as behavior modeling, online control, function approximation, classification, prediction, and data mining. These systems uniquely benefit from their adaptability, flexibility, minimal assumptions, and interpretability.
The working principle of an LCS is to evolve a set of IF(condition)-THEN(action) rules, so-called classifiers, which partition the problem space into smaller subspaces. Each of these knowledge-encoding elements can either be represented by rather straightforward schemes such as simple IF-THEN rules, or be realized by more complex models such as Artificial Neural Networks. Accordingly, LCSs can carry out different kinds of local predictions for the various niches of the problem space. The size and shape of the subspace each single classifier covers are optimized via a steady-state Genetic Algorithm (GA), which pursues maximally general subspaces while at the same time striving for maximally accurate local predictions. This tension was formalized as the "Generalization Hypothesis" by Stewart Wilson in 1995 when he presented today's most intensively investigated LCS derivative - the eXtended Classifier System (XCS). Following this working principle, a generic LCS can also be understood as an evolving ensemble of local models which in combination produce a problem-dependent prediction output. This raises the question: How can we model these classifiers? Or, put another way: Which kinds of machine learning and evolutionary computation algorithms can be utilized within the well-understood algorithmic structure of an LCS? For example, Radial Basis Function Interpolation and Approximation Networks, Multi-Layer Perceptrons (MLP), as well as Support Vector Machines (SVM) have been used to model classifier predictions.
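To illustrate this ensemble view, the following minimal Python sketch shows how the local predictions of matching classifiers can be combined into a single system prediction. It is a simplified illustration under assumed conventions (ternary conditions with '#' wildcards, fitness-weighted mixing in the spirit of XCS), not a full LCS implementation; the names `Classifier` and `mixed_prediction` are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str     # e.g. "1#0", where '#' is a wildcard ("don't care")
    action: int
    prediction: float  # expected payoff when this rule fires (local model)
    fitness: float     # accuracy-derived weight used for mixing

    def matches(self, state: str) -> bool:
        # A classifier matches if every non-wildcard bit agrees with the state.
        return all(c == '#' or c == s for c, s in zip(self.condition, state))

def mixed_prediction(population, state, action):
    """Fitness-weighted average of the local predictions of all matching
    classifiers that advocate `action` (the 'system prediction' in XCS terms)."""
    relevant = [cl for cl in population
                if cl.matches(state) and cl.action == action]
    if not relevant:
        return None  # niche not covered; a real LCS would trigger covering here
    total_fitness = sum(cl.fitness for cl in relevant)
    return sum(cl.prediction * cl.fitness for cl in relevant) / total_fitness

# Example: two overlapping rules cover state "101" for action 1.
pop = [
    Classifier("1#1", 1, prediction=900.0, fitness=0.8),
    Classifier("10#", 1, prediction=500.0, fitness=0.2),
    Classifier("0##", 1, prediction=100.0, fitness=0.9),  # does not match "101"
]
print(mixed_prediction(pop, "101", 1))  # → 820.0
```

The same mixing scheme works unchanged if each classifier's scalar `prediction` is replaced by a more complex local model (e.g. an MLP or SVM output evaluated at `state`), which is precisely the modeling question raised above.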
This workshop provides a forum for ongoing research in the field of LCS as well as for the design and implementation of novel LCS-style EML systems that make use of evolutionary computation techniques to improve the prediction accuracy of the evolved classifiers. Furthermore, it solicits researchers from related fields such as (Evolutionary) Machine Learning, (Multi-Objective) Evolutionary Optimization, and Neuroevolution to bring in their experience. In the era of Deep Learning and its recent successes, topics that have been central to LCS for many years, such as the human interpretability of the generated models, are now gaining high interest in other machine learning communities ("Explainable AI"). Hence, this workshop serves as a spotlight to disseminate the long experience of LCS in these areas, to attract new interest, and to expose the machine learning community to an alternative, advantageous modeling paradigm.
Topics of interest include but are not limited to:
- New approaches for classifier modeling (e.g. ANN, GP, SVM, RBFs, ...)
- New means for the problem space partitioning (condition shapes, ensemble formation, ...)
- New ways of classifier mixing (combination of local predictions, ensemble voting schemes, ...)
- Evolutionary Reinforcement Learning (Multi-step LCS, Neuroevolution, ...)
- Theoretical developments in LCS (provably optimal parametrization, learning bounds, ...)
- LCS flexibility regarding target problems (control tasks, regression, classification, ...)
- Interpretability of evolved knowledge bases (knowledge extraction, visualization, ...)
- System enhancements (operators, problem structure identification, linkage learning, ...)
- Input encoding / representations (binary, real-valued, oblique, non-linear, fuzzy, ...)
- Paradigms of LCS (Michigan, Pittsburgh, Hybrids)
- LCS for Cognitive Control (architectures, emergent behaviors, ...)
- Applications (data mining, medical domains, bio-informatics, intelligence in games, ...)
- Optimizations and parallel implementations (GPU acceleration, matching algorithms, ...)
- Similar Evol. Rule-Based ML systems (Artificial Immune Systems, Evol. FRI Systems, ...)
Papers are expected to report on innovative ideas and novel research results around the topic of LCS and Evolutionary Rule-based Machine Learning (ERBML) in general. Reported results and findings have to be situated within the current state of the art and should provide details and metrics that allow for an assessment of practical as well as statistical significance. Contributions bringing in novel ideas and concepts from related fields such as general ML and EC are explicitly solicited, but authors are at the same time strongly encouraged to clearly state their relevance and relation to the field of LCS and ERBML.
Important Dates
- Submission deadline (non-extensible): 3 April 2019
- Notification of acceptance: 17 April 2019
- Camera-ready submission: 24 April 2019
- Early author registration: 24 April 2019
Organizers
Masaya Nakata
Yokohama National University (JP)
Masaya Nakata is an assistant professor at the Faculty of Engineering, Yokohama National University. He received the B.A., M.Sc., and Ph.D. degrees in informatics from the University of Electro-Communications, Japan, in 2011, 2013, and 2016, respectively. He was a visiting student at the School of Engineering and Computer Science, Victoria University of Wellington, from 2014, at the Department of Electronics and Information, Politecnico di Milano, Milan, Italy, in 2013, and at the Department of Computer Science, University of Bristol, Bristol, UK, in 2014. His research interests are in evolutionary computation, reinforcement learning, and data mining, more specifically in learning classifier systems. He received the Best Paper Award and the IEEE Computational Intelligence Society Japan Chapter Young Researcher Award at the Japanese Symposium of Evolutionary Computation 2012. He was a co-organizer of the International Workshop on Learning Classifier Systems (IWLCS) in 2015-2016.
Anthony Stein
University of Augsburg (DE)
Anthony Stein is a research associate and Ph.D. candidate at the Department of Computer Science of the University of Augsburg, Germany. He received his B.Sc. in Business Information Systems from the University of Applied Sciences Augsburg in 2012. He then moved to the University of Augsburg for his master's degree (M.Sc.) in computer science with a minor in information economics, which he received in 2014. In his master's thesis, he delved into the nature of Learning Classifier Systems for the first time; since then, he has been a passionate follower of, and contributor to, ongoing research in this field. Alongside his position in the Organic Computing Group at the University of Augsburg, he is currently finishing his Ph.D. in computer science. His research focuses on the applicability of EML techniques in self-learning adaptive systems that have to act in real-world environments exhibiting challenges such as data imbalance and non-stationarity. In his work, he investigates the use of interpolation and active learning methods to change how classifiers are initialized, how insufficiently covered problem space niches are filled, and how adequate actions are selected. A further aspect he investigates is how Learning Classifier Systems can be enhanced toward proactive knowledge construction. Since 2017, he has been an organizing committee member of the International Workshop on Learning Classifier Systems (IWLCS). For the third year now, he also co-organizes the Workshop Series on Autonomously Learning and Optimizing Systems (SAOS). Among others, he serves as a reviewer for GECCO's EML track, ACM's Transactions on Autonomous and Adaptive Systems (TAAS), and several workshops.
Takato Tatsumi
University of Electro-Communications (JP)
Takato Tatsumi is a Ph.D. student at the University of Electro-Communications, Japan, and a research fellow of the Japan Society for the Promotion of Science. He received B.Sc. and M.Sc. degrees in informatics from the University of Electro-Communications, Japan, in 2015 and 2017, respectively. His research interests are in data mining, evolutionary computation, and reinforcement learning, more specifically in learning classifier systems. He has authored more than seven peer-reviewed LCS research papers, some of them at venues such as GECCO, CEC, and IWLCS. He received a GECCO 2016 Student Travel Grant and the Excellent Paper Award from the Japanese Symposium of Informatics (SSI) in 2017.
- Jaume Bacardit, Newcastle University, UK
- Will Browne, Victoria University of Wellington, New Zealand
- Martin V. Butz, University of Tübingen, Germany
- John Holmes, University of Pennsylvania, US
- Muhammad Iqbal, Xtracta, New Zealand
- Pier Luca Lanzi, Politecnico di Milano, Italy
- Kamran Shafi, University of New South Wales, Australia
- Wolfgang Stolzmann, CMORE Automotive, Germany
- Ryan J. Urbanowicz, University of Pennsylvania, US
- Stewart W. Wilson, Prediction Dynamics, US
Program Committee (tentative)
- Jaume Bacardit, Newcastle University, UK
- Lashon B. Booker, The MITRE Corporation, US
- Will N. Browne, Victoria University of Wellington, New Zealand
- Larry Bull, The University of the West of England, UK
- Martin V. Butz, University of Tübingen, Germany
- Ali Hamzeh, Shiraz University, Iran
- Luis Miramontes Hercog, University of Notre Dame, US
- John Holmes, University of Pennsylvania, US
- Muhammad Iqbal, Xtracta, New Zealand
- Karthik Kuber, Microsoft, US
- Pier Luca Lanzi, Politecnico Di Milano, Italy
- Daniele Loiacono, Politecnico di Milano, Italy
- Masaya Nakata, Yokohama National University, Japan
- Yusuke Nojima, Osaka Prefecture University, Japan
- Sonia Schulenburg, Level E Research Limited, UK
- Kamran Shafi, University of New South Wales, Australia
- Shinichi Shirakawa, Yokohama National University, Japan
- Anthony Stein, University of Augsburg, Germany
- Wolfgang Stolzmann, CMORE Automotive, Germany
- Takato Tatsumi, University of Electro-Communications, Japan
- Ryan J. Urbanowicz, University of Pennsylvania, US
- Danilo V. Vargas, Kyushu University, Japan
- Stewart W. Wilson, Prediction Dynamics, US
to be completed.
Previous Editions of IWLCS
- IWLCS@GECCO 2018 - http://itslab.inf.kyushu-u.ac.jp/~vargas/iwlcs_2018/
- IWLCS@GECCO 2017 - http://itslab.inf.kyushu-u.ac.jp/~vargas/erbml_2017/
- IWLCS@GECCO 2016 - http://www.cas.hc.uec.ac.jp/conferences/iwlcs2016/
- IWLCS@GECCO 2014 - http://homepages.ecs.vuw.ac.nz/~iqbal/iwlcs2014/index.html
- IWLCS@GECCO 2013 - http://homepages.ecs.vuw.ac.nz/~iqbal/iwlcs2013/index.html
- IWLCS@GECCO 2012 - http://home.deib.polimi.it/loiacono/iwlcs2012/index.php?n=Main.HomePage
- IWLCS@GECCO 2011 - http://home.deib.polimi.it/loiacono/iwlcs2011/