DISARGUE will improve the interaction of domain experts (fact-checkers, journalists) with AI systems that detect and fight disinformation by allowing them to: (i) better understand the decisions taken by the system, and (ii) deploy NLG-based natural language argumentation techniques to counteract misinformation on social media in real time. While current misinformation detection and fact-checking systems achieve good classification performance, their ability to explain how they reach their predictions remains highly limited. The science-to-technology breakthrough of DISARGUE lies in generating argumentation that explains, in the multimodal and multilingual detection phase, the decisions taken by the AI system, thereby supporting domain experts in making informed decisions in their day-to-day fact-checking activities. Furthermore, in the mitigation phase, it will counteract disinformation by automatically generating counter-arguments, with the aim of mitigating the effects of the spread of misinformation. Finally, by experimenting with socially and domain-expert-guided argumentation generation via few-shot learning, it will help produce high-performing, deployable technology for each of the domains and topics of interest related to misinformation. DISARGUE opens a new and exciting avenue of research in AI-based explanatory argumentation to fight misinformation.