Genetic Programming
Symbolic models can be learned with algorithms like Genetic Programming (GP). By manipulating symbolic expressions, which combine likely-to-be-meaningful symbols, operators and constants, GP algorithms produce models that come closer to human analytical reasoning and should therefore generalise better in practice. The challenges here are efficiency (because the search covers both structure and parameters) and parsimony (i.e. keeping expressions short and comprehensible). These issues are being addressed with new algorithms, such as Memetic Semantic GP and GP-GOMEA.
Different variants of these algorithms have been developed. Memetic Semantic GP has been extended to regression problems (the original algorithm was limited to classification) and enhanced with boosting. GP-GOMEA has been enhanced in multiple ways, including joint evolution of tree structure and constants, and multi-modal, multi-tree, multi-objective and function-class learning.
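As a rough illustration of the kind of search GP performs, the sketch below evolves symbolic expression trees for a toy regression target. The operator set, mutation scheme and selection strategy are simplified assumptions for illustration only; they do not reflect the actual GP-GOMEA or Memetic Semantic GP implementations.

```python
# Minimal symbolic-regression GP sketch (illustrative assumptions throughout;
# not the project's GP-GOMEA or Memetic Semantic GP algorithms).
import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}
TERMINALS = ['x'] + [round(random.uniform(-1, 1), 2) for _ in range(3)]

def random_expr(depth=3):
    """Grow a random expression tree of operators, the variable x and constants."""
    if depth <= 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    """Recursively evaluate an expression tree for a given input x."""
    if isinstance(expr, tuple):
        op, left, right = expr
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return x if expr == 'x' else expr

def fitness(expr, data):
    """Mean squared error against the target samples (lower is better)."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data) / len(data)

def mutate(expr, depth=3):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(depth)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def evolve(data, pop_size=100, generations=50):
    """Evolve expression trees with truncation selection and subtree mutation."""
    population = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda e: fitness(e, data))
        survivors = population[:pop_size // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=lambda e: fitness(e, data))

if __name__ == '__main__':
    # Toy target: y = x^2 + x, sampled on a small grid.
    samples = [(x / 10, (x / 10) ** 2 + x / 10) for x in range(-10, 11)]
    best = evolve(samples)
    print('best expression:', best, 'MSE:', fitness(best, samples))
```

The sketch keeps trees shallow to favour parsimony; the structure of the expression and its constants are both subject to search, which is exactly where the efficiency challenge mentioned above arises.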
See relevant publications →

Human-guided learning
Due to their black-box nature, existing artificial intelligence (AI) models are difficult to interpret and, hence, to trust. Practical, real-world solutions to this issue cannot come from computer science alone. TRUST-AI proposes involving human intelligence in the discovery process. It employs 'explainable-by-design' symbolic models and learning algorithms, and adopts a human-centric, 'guided empirical' learning process that integrates cognition.
The project is designing TRUST, a trustworthy and collaborative AI platform, ensuring it is suited to tackling both predictive and prescriptive problems, and creating an innovation ecosystem in which academics and companies can work independently or together. The proposed 'human-guided learning' should become the next 'go-to' paradigm for a wide range of sectors where human agency and accountability are essential, including healthcare, retail, energy and manufacturing.
See relevant publications →

Counterfactual Analysis
Human-guided learning relies on (human) cognitive processes to evaluate intermediate results and provide further guidance to the machine. Counterfactual analysis is one such process, and TRUST-AI is making progress in this area. We formalised and quantified two human heuristics relevant to producing and evaluating causal explanations: the feasibility of a counterfactual explanation and its directed coherence. These terms, together with different types of user constraints, were incorporated into a new algorithm for counterfactual search: CoDiCE.
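To make these ideas concrete, the sketch below runs a simple random search for counterfactuals whose cost combines proximity to the original instance, a soft feasibility penalty and hard user constraints. The penalty definitions, weights and toy model are illustrative assumptions; this is not the CoDiCE algorithm, and the directed-coherence term is omitted for brevity.

```python
# Illustrative counterfactual search with a feasibility penalty and user
# constraints. A simplified sketch of the general idea, not CoDiCE itself.
import numpy as np

def counterfactual_search(model, x, target, bounds, immutable=(),
                          increase_only=(), n_samples=5000,
                          w_dist=1.0, w_feas=1.0, seed=0):
    """Random-search counterfactuals for a classifier.

    model         : callable mapping a feature vector to a class label
    x             : original instance (1-D numpy array)
    target        : desired class label for the counterfactual
    bounds        : (low, high) arrays giving the sampling range per feature
    immutable     : indices of features the user forbids changing (hard constraint)
    increase_only : indices of features that may only increase (soft feasibility term)
    """
    rng = np.random.default_rng(seed)
    low, high = bounds
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        cand = rng.uniform(low, high)
        cand[list(immutable)] = x[list(immutable)]   # enforce hard user constraints
        if model(cand) != target:
            continue                                  # must achieve the desired outcome
        distance = np.abs(cand - x).sum()             # proximity to the original instance
        # Soft feasibility penalty: discourage decreases in features declared
        # increase-only (an illustrative stand-in for a feasibility heuristic).
        idx = list(increase_only)
        feasibility = np.clip(x[idx] - cand[idx], 0, None).sum()
        cost = w_dist * distance + w_feas * feasibility
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

if __name__ == '__main__':
    # Toy model: classify positive if the feature sum exceeds a threshold.
    model = lambda v: int(v.sum() > 1.0)
    x = np.array([0.2, 0.3, 0.1])
    bounds = (np.zeros(3), np.ones(3))
    cf, cost = counterfactual_search(model, x, target=1, bounds=bounds,
                                     immutable=[2], increase_only=[0])
    print('counterfactual:', cf, 'cost:', cost)
```

In this toy setup the user forbids changes to the third feature and states that the first may only increase; the returned counterfactual is the lowest-cost sampled point that still flips the model's prediction.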
See relevant publications →