The workshop starts online at 12:00 PM CET (Milan time). To participate you must:
1) register for ICPR;
2) create an Underline account using the same e-mail address you used for ICPR registration;
3) access the workshop via this link: https://www.micc.unifi.it/icpr2020/index.php/access-to-virtual-icpr2020/ (note the tutorial video on the right).
About
The recent focus of the AI and Pattern Recognition communities on supervised learning approaches, and particularly on Deep Learning, has resulted in a considerable increase in the performance of Pattern Recognition and AI systems, but it has also raised the question of the trustworthiness and explainability of their predictions for decision-making. Instead of developing and using Deep Learning as a black box and adapting known neural network architectures to a variety of problems, the goal of explainable Deep Learning / AI is to propose methods to “understand” and “explain” how these systems produce their decisions. AI systems may produce errors, can exhibit overt or subtle bias, may be sensitive to noise in the data, and often lack technical and judicial transparency and explainability. These shortcomings raise many ethical and policy concerns that impede wider adoption of this potentially very beneficial technology. In various Pattern Recognition and AI application domains such as health, ecology, autonomous driving, security, and culture, it is mandatory to understand how predictions correlate with the information perception and decision-making of experts, and how they impact society and business. The goal of this full-day workshop is to bring together the research community working on improving the explainability of AI and Pattern Recognition algorithms and systems. The workshop is part of ICPR 2020 and is supported by the research project XAI-LABRI.
Topics
- “Sensing” or “salient” features of neural networks and AI systems: explaining which features, for a given configuration, yield predictions in both spatial (images) and temporal (time series, video) data;
- Optimal visualization of salient features and of the contribution of input data regions to decision-making;
- Attention mechanisms in deep neural networks and their explanation;
- For temporal data, explaining which features are most prominent for the prediction and at what time, and identifying the time intervals during which the contribution of each data source is important;
- How explanations can help make deep learning architectures sparser (pruning) and more lightweight;
- For multimodal data, how predictions across data streams are correlated and explain one another;
- Automatic generation of explanations / justifications for the decisions of algorithms and systems;
- Decision uncertainty and explainability;
- Evaluation of the explanations generated by deep learning and other AI systems.
Program Committee
- Christophe Garcia (LIRIS, France)
- Dragutin Petkovic (SFSU, USA)
- Alexandre Benoît (LISTIC, France)
- Mark T. Keane (UCD, Ireland)
- Georges Quénot (LIG, France)
- Stefanos Kollias (NTUA, Greece)
- Jenny Benois-Pineau (LaBRI, France)
- Hervé Le Borgne (LIST, France)
- Noel O’Connor (DCU, Ireland)
- Nicolas Thome (CNAM, France)
Dates
- Submission deadline: October 17th, 2020, 23:59 GMT (extended from October 10th, 2020)
- Author notification: November 10th, 2020
- Camera-ready submission: November 18th, 2020
- Finalized workshop program: December 7th, 2020
- Workshop event: online, January 11, 2021
Paper Submission
The proceedings of the EDL-AI 2020 workshop will be published in the Springer Lecture Notes in Computer Science (LNCS) series. Papers will be selected through a single-blind review process (reviewers are anonymous). All selected papers will be published, and a subset of them will be presented at the workshop. Submissions must be formatted in accordance with Springer's Computer Science Proceedings guidelines. Two types of contributions will be considered:
- Full papers (12-15 pages)
- Short papers (6-8 pages)
Submission site: Submission
Workshop Program
12:00 - 12:15 Welcome and overview of the workshop (workshop organizers)
12:15 - 13:00 Plenary talk: “Towards AI Ethics and Explainability”, Prof. D. Petkovic, San Francisco State University (USA)
13:00 - 14:20 Morning Session
- 1. C. Henin, D. Le Métayer (France): “A Multi-layered Approach for Tailored Black-box Explanations”
- 2. M. T. Keane, E. Kenny (Ireland): “Explanatory Variations for Deep Learning Using Twin Systems”
- 3. S. M. Muddamsetty, M. N. S. Jahromi and T. B. Moeslund (Denmark): “Expert level evaluations for explainable AI (XAI) methods in the medical domain”
- 4. A. Halnaut, R. Giot, R. Bourqui, D. Auber (France): “Pixel oriented visualization of samples across layers of a classification based DNN”
14:20 - 14:30 Break
14:30 - 15:50 Afternoon Session
- 1. D. Petkovic, A. Alavi, D. Cai and M. Wong (USA): “Toward Explainable AI: Random Forest Model and Sample Explainer”
- 2. H. Jung, Y. Oh, J. Park and M.-S. Kim (South Korea): “Jointly Optimize Positive and Negative Saliencies for Black Box Classifiers”
- 3. P. Zhu, R. Zhu, S. Mishra and V. Saligrama (USA): “Low Dimensional Visual Attributes: An Interpretable Image Encoding”
- 4. F. Cruciani, L. Brusini, M. Zucchelli, G. R. Pinheiro, F. Setti, I. Boscolo Galazzo, R. Deriche, L. Rittner, M. Calabrese and G. Menegaz (Italy, France, Brazil): “Explainable 3D-CNN for Multiple Sclerosis patients stratification”
15:50 - 16:00 Break
16:00 - 17:00 Poster Session
- 1. A. Lopez-Cifuentes, M. Escudero-Viñolo and J. Bescós (Spain): “Visualizing the Effect of Semantic Classes in the Attribution of Scene Recognition Models”
- 2. K. Huesmann, L. Garcia-Rodriguez, L. Linsen and B. Risse (Germany): “The Impact of Activation Sparsity on Overfitting in Convolutional Neural Networks”
- 3. K. Abdiyeva, M. Lukac and N. Ahuja (Singapore, Kazakhstan, USA): “Remove To Improve?”
- 4. G. Nguyen, Sh. Chen, T. Joon Jun and D. Kim (South Korea): “Explaining How Deep Neural Networks Forget by Deep Visualization”
- 5. M. Jacquemont, Th. Vuillaume, A. Benoit, G. Maurin and P. Lambert (France): “Deep Learning for Astrophysics, Understanding the Impact of Attention on Variability Induced by Parameter Initialization”
- 6. A. Apicella, S. Giugliano, F. Isgro and R. Prevete (Italy): “A general approach to compute the relevance of middle-level input features”
- 7. M. Veerappa, M. Anneken and N. Burkart (Germany): “Evaluation of Interpretable Association Rule Mining Methods on time-series in the Maritime Domain”
- 8. G. Jouis, H. Mouchère, F. Picarougne and A. Hardouin (France): “Anchors vs Attention: comparing XAI on a real-life use case”
- 9. M. Scalas, K. Rieck and G. Giacinto (Italy): “Explanation-driven Characterisation of Android Ransomware”
- 10. A. Galli, S. Marrone, V. Moscato and C. Sansone (Italy): “Reliability of eXplainable Artificial Intelligence in Adversarial Perturbation Scenarios”
- 11. M. Oussalah (Finland): “AI Explainability. A Bridge between Machine Vision and Natural Language Processing”
- 12. O. Gorokhovatskyi and O. Peredrii (Ukraine): “Recursive Division of Image for Explanation of Shallow CNN Models”
17:00 - 17:50 Panel Discussion
- Panelists: R. Cucchiara (UNIMORE, IT), P. Radeva (UB, SP), G. Quenot (CNRS-LIG, FR), J. Benois-Pineau (UB, FR), D. Petkovic (SFSU, USA)
- Moderator: D. Petkovic (SFSU, USA)
Each panelist will present their position and the challenges they see in a 5-minute statement, after which the audience will be engaged in a moderated discussion.
17:50 - 18:00 Closing remarks (workshop organizers)