TRUST-AI is a Horizon project delivering methods and technology in the broad area of interpretability and human-driven AI. The areas where our approaches are being tested include Health, Retailing, and Energy.
Apintech LTD joins the Organisation Committee of EnerVision 2025, presenting TRUST-AI and its innovative approaches in the energy sector.
Zehra Cataltepe, CEO of TAZI AI, discusses AI transformation in financial services and the role of TRUST AI in ensuring safe and compliant AI applications.
TAZI AI showcases no-code machine learning solutions at Finovate London, demonstrating how to democratize AI tools for both technical and business users in the fintech sector.
LTPLabs shares expertise in AI trust-building, leading to TRUST-AI's inclusion in a European project's compendium of case studies on AI explainability and best practices.
LTPLabs contributes to the AI Leaders project, sharing expertise on AI transparency and ethical implications through insights from Senior Associate André Craveiro Morim.
INESC TEC contributes to APDIO's Workshop on Operations Research and AI, exploring the integration of artificial intelligence with dynamic operations management challenges.
CWI researchers present advanced work on GP-GOMEA algorithm enhancement at PPSN 2024, improving symbolic regression through simultaneous optimization of expression structures and constants.
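For readers unfamiliar with the idea, the sketch below illustrates, in a deliberately simplified form, why fitting an expression's constants before comparing candidate structures matters in symbolic regression. It is not GP-GOMEA itself, and the candidate structures, target function, and data are hypothetical.

```python
# Minimal sketch of jointly considering expression structure and constants
# in symbolic regression. NOT GP-GOMEA: it only illustrates the principle
# that each candidate structure is judged after its constants are fitted.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical target: y = 2.5 * sin(1.3 * x) + noise
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = 2.5 * np.sin(1.3 * x) + rng.normal(scale=0.1, size=x.size)

# Candidate expression structures, each with free constants c0, c1
structures = {
    "c0 + c1*x":      lambda x, c0, c1: c0 + c1 * x,
    "c0 * sin(c1*x)": lambda x, c0, c1: c0 * np.sin(c1 * x),
    "c0 * exp(c1*x)": lambda x, c0, c1: c0 * np.exp(c1 * x),
}

best = None
for name, f in structures.items():
    try:
        # Fit this structure's constants to the data before scoring it
        params, _ = curve_fit(f, x, y, p0=[1.0, 1.0], maxfev=5000)
    except RuntimeError:
        continue
    mse = np.mean((f(x, *params) - y) ** 2)
    if best is None or mse < best[2]:
        best = (name, params, mse)

print(f"best structure: {best[0]}, constants: {best[1].round(2)}, mse: {best[2]:.4f}")
```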
University of Tartu researchers present innovative work on counterfactual explanations at XAI 2024, introducing diffusion distance and directional coherence to enhance AI interpretability.
CWI researchers present innovative work on explainable meta-learning for tumor growth modeling using genetic programming at GECCO 2024, advancing transparent AI applications in medical research.
INESC TEC explores AI applications in logistics at Rangel's Logistics Day, showcasing how artificial intelligence can optimize operations and tackle real-time challenges in the logistics sector.
INESC TEC and LTPLabs collaborate with SONAE in two validation workshops, strengthening industry-academia partnerships in AI technology validation and implementation.
Interested in Genetic Programming and Explainable AI? Check out the webinar on the top-performing MSGP algorithm, organized by Marc Schoenauer and Alessandro Leite from INRIA, France, our TRUST-AI partner.
Interested in Genetic Programming and Explainable AI? Check out the webinar on the top-performing GP-GOMEA algorithm, organized by Peter Bosman from CWI, Netherlands, our TRUST-AI partner.
University of Tartu presents CODICE, a counterfactual search framework that enhances explainability in AI systems, aligning with TRUST-AI's goals for transparent and reliable AI.
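As a rough illustration of what counterfactual search means, the sketch below perturbs an input to a toy logistic model until the prediction flips, while a proximity penalty keeps it close to the original point. It is a generic example only; it does not reproduce CODICE's actual method, distances, or coherence terms, and the model weights and parameters are invented.

```python
# Generic gradient-based counterfactual search on a toy classifier.
# NOT the CODICE algorithm; weights and hyperparameters are hypothetical.
import numpy as np

w = np.array([1.5, -2.0])   # hypothetical logistic-regression weights
b = -0.25

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x0, target=1.0, lam=0.05, lr=0.05, steps=500):
    """Gradient descent on: (p(x) - target)^2 + lam * ||x - x0||^2."""
    x = x0.copy()
    for _ in range(steps):
        p = predict_proba(x)
        # gradient of the squared prediction gap plus the proximity term
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x - x0)
        x -= lr * grad
    return x

x0 = np.array([-1.0, 1.0])          # originally classified as class 0
x_cf = counterfactual(x0)
print("original:      ", x0, "p =", round(float(predict_proba(x0)), 3))
print("counterfactual:", x_cf.round(3), "p =", round(float(predict_proba(x_cf)), 3))
```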
G. Figueira from INESC TEC presents on explainable AI and the evolution from static optimization to sequential decision-making at Data Makers Fest 2023.
INRIA researchers present groundbreaking work on Interactive Latent Diffusion Model at the Genetic and Evolutionary Computation Conference (GECCO 2023) in Lisbon, combining evolutionary frameworks with text-to-image models.
In the course of the TRUST-AI Horizon project, we faced the challenge of pushing a specific forecasting component up the TRL ladder, a component that, by its very nature, had rather limited exploitation potential.
INRIA researchers present innovative work on Memetic Semantic Genetic Programming at the 26th European Conference on Genetic Programming (EuroGP 2023), advancing symbolic regression methods.
The European Innovation Council (EIC) and the European Medicines Agency (EMA) organised the EIC-EMA Info Day on January 31, 2023.
An article entitled "Explainable Approaches for Forecasting Building Electricity Consumption" is now available online as a preprint.
Data sharing is a long-established practice. There are several general-purpose cloud platforms (Google, Dropbox, etc.) as well as more specialised sharing services such as, for example, those provided by Zenodo.
The world of the 4th Industrial Revolution is underpinned by sharing and collaboration approaches. More and more, these are entering the mainstream and gradually putting aside the proprietary mindset and models of the past.
Energy demand forecasting is practiced over several time frames, and different explanatory variables are used in each case to serve different decision-support mandates. For example, in the short term, a daily building-level forecast may serve as a performance baseline.
Estimates of energy demand are also essential when planning infrastructure and grid investments (Tadahiro N., Shigeyuki H., 2010).
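As a rough illustration of the short-term, building-level case (synthetic data, hypothetical variable names, not the project's actual models), the sketch below compares a naive "same hour yesterday" baseline with a simple regression on explanatory variables such as outdoor temperature and hour of day.

```python
# Minimal sketch: hourly building-level load forecasting with a naive
# baseline versus a regression on explanatory variables. Data and
# coefficients are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(14 * 24)                       # two weeks of hourly data
hour_of_day = hours % 24
temperature = 12 + 8 * np.sin(2 * np.pi * (hour_of_day - 15) / 24) + rng.normal(0, 1, hours.size)
# Hypothetical consumption: base load + daily occupancy pattern + cooling load
load = (50 + 30 * np.exp(-((hour_of_day - 13) ** 2) / 18)
        + 1.2 * np.clip(temperature - 18, 0, None)
        + rng.normal(0, 2, hours.size))

# Train on the first 13 days, forecast the last day
train, test = hours < 13 * 24, hours >= 13 * 24

# Naive baseline: same hour on the previous day
baseline = load[hours[test] - 24]

# Simple explanatory-variable model: hour-of-day dummies + temperature
X = np.column_stack([np.eye(24)[hour_of_day], temperature])
coef, *_ = np.linalg.lstsq(X[train], load[train], rcond=None)
forecast = X[test] @ coef

mae = lambda pred: np.mean(np.abs(pred - load[test]))
print(f"baseline MAE:   {mae(baseline):.2f} kWh")
print(f"regression MAE: {mae(forecast):.2f} kWh")
```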
We are honoured to announce that, within the framework of Horizon 2020, our TRUST-AI project (No. 952060, Call: H2020-EIC-FETPROACT-2019) started in October 2020.
How can doctors rely on a system that tells them the right time to operate on a patient with a rare tumour? How can a retailer be sure that the algorithm did not favour a supplier over one of the competitors? And what about the consumers?
Artificial Intelligence (AI) is a game-changer for a variety of sectors. AI's boost to the global economy is forecast to reach $15.7 trillion by 2030 (source: PwC), and by 2021, 80% of emerging technologies will have AI foundations (source: Gartner).
Artificial intelligence, and deep learning in particular, produces ever more impressive results, but through processes that are often inaccessible to human reasoning.
The project that uses Darwin's theory of evolution to explain Artificial Intelligence
Artificial Intelligence (AI) is everywhere. Thanks to this entire hidden mechanism, a machine can learn up to a billion parameters and then use them to give the right answer in a matter of seconds.