Misinformation Fighting Directory — Defudger: A Conversation With Dominik Mate Kovacs
This article is part of our new Misinformation Fighting Directory where we interview organizations and projects that have built solutions or launched initiatives to help fight fake news and misinformation online.
The following is an interview we recently had with Dominik Mate Kovacs, COO and co-founder of Defudger.
Tell us about the team behind your project:
Defudger is a deep-tech startup developing robust AI-based algorithms for the verification and detection of manipulated and synthetically generated media content. Defudger was started in Denmark in 2018 and is now based in Berlin, Germany. Defudger’s core IP is a multi-layer detection system that verifies the authenticity of audio-visual content and detects manipulations. The company has built an exceptional team comprising experts in computer vision, machine learning, AI, and blockchain, along with operational, management, and business expertise. The team is supported by a strong expert advisory board and academic partners. Defudger’s vision is to make the digital world transparent, protecting truth, democracy, and freedom of speech. The three founders combine a strong understanding of the industry with deep technical expertise in computer science, data science, machine learning, and blockchain.
Kristof is a graduate of the Copenhagen Business School with an MSc in Marketing and Management. He has been an active member of the Danish startup scene since 2013 and has worked at and founded startups in the IT, media, and marketing fields. Before founding Defudger, he ran his own video marketing and IT agency. He is experienced in business development and growth hacking and has a special interest in emerging technologies.
Zoltan is a data scientist from the Technical University of Denmark. He has experience with machine learning and AI and worked as a teaching assistant during his university studies. As the company’s CTO, he has overseen the development of the Defudger algorithm as well as the curation of the datasets the models have been trained on. Zoltan also has excellent insight into synthetic media, which brings tremendous value to the company.
I’m a computer vision expert with extensive knowledge of Python and AI-related programming methods. I also studied at the Technical University of Denmark and worked as a Python/Java software developer before founding Defudger. I oversee the development processes and coordinate the core team, and I’m also responsible for building the framework for the ML and detection models.
What’s the mission behind your organization?
Our mission is to make the world a transparent place, protecting democracy and freedom of speech across the EU and worldwide through the implementation of cutting-edge computer vision, blockchain, and AI technologies. The focus of the company is to facilitate the detection of synthetically generated audio-visual content. Defudger will create new market opportunities, initially targeting media and online platforms with next-generation verification and detection tools, in line with the company’s vision. Our solution uniquely combines data forensics, computer vision, machine learning, and blockchain technologies into a robust system. These cutting-edge technologies allow Defudger to detect even the most sophisticated forgeries, outperforming existing tools. Our focus is to create a system that is superior in detection and validation while also being commercially viable, socially impactful, and scalable.
How do you help fight misinformation and fake news?
Defudger’s multi-layer system can detect even minor manipulations made to digital audio-visual content using digital media forensics, computer vision, AI, and ML, and it prevents the proliferation of fake content by hashing verified content onto our proprietary blockchain. As deepfake technology is democratized, it is becoming increasingly urgent to provide news and media agencies and digital platforms with a reliable, user-friendly, and efficient system that can determine whether an image or video file has been tampered with by malicious actors. Our three-layer tool integrates state-of-the-art technology to counter these actors and detects fakes with high accuracy (currently at an 85% confidence level). As the technology develops quickly, we need to keep our system up to date; this is made possible through machine learning, our blockchain database, and the implementation of new AI models and technologies. We provide publishers and media organizations with a tool that works on content manipulated with traditional forgery methods (editing software, altered metadata) as well as with the most advanced techniques (deepfakes, AI, and synthetic generation). Our web-based application enables journalists and fact-checkers to check images and videos on demand before they publish them. Current tools and technologies are niche solutions that will not be effective or scalable for the volume of manipulated content we will see in this decade; we, however, offer an effective and commercially viable verification tool.
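To make the hashing idea concrete, here is a minimal, illustrative Python sketch of how a media file’s fingerprint could be recorded on an append-only ledger and later rechecked. This is not Defudger’s proprietary system: the SimpleLedger class, the fingerprint helper, and the use of plain SHA-256 hashing are assumptions chosen only to demonstrate the general technique of anchoring content hashes so that any later modification becomes detectable.

```python
import hashlib
import json
import time

# Conceptual sketch only: Defudger's actual multi-layer detection system and
# blockchain are proprietary. This illustrates the general idea of anchoring
# a content fingerprint so later copies can be checked for tampering.

def fingerprint(file_path: str) -> str:
    """Compute a SHA-256 fingerprint of a media file's raw bytes."""
    h = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class SimpleLedger:
    """A toy append-only ledger standing in for a blockchain layer."""

    def __init__(self):
        self.blocks = []

    def register(self, file_path: str, source: str) -> dict:
        """Record the fingerprint of an original upload, chained to the previous block."""
        record = {
            "fingerprint": fingerprint(file_path),
            "source": source,
            "timestamp": time.time(),
            "prev_hash": self._tip_hash(),
        }
        record["block_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(record)
        return record

    def verify(self, file_path: str) -> bool:
        """Return True if the file's fingerprint matches a registered original."""
        fp = fingerprint(file_path)
        return any(block["fingerprint"] == fp for block in self.blocks)

    def _tip_hash(self) -> str:
        return self.blocks[-1]["block_hash"] if self.blocks else "0" * 64

# Hypothetical usage: a newsroom registers the original upload, then later
# checks a suspect copy. Even a one-byte change makes verification fail.
# ledger = SimpleLedger()
# ledger.register("original_interview.mp4", source="newsroom-upload")
# ledger.verify("suspect_copy.mp4")
```

A real system would likely fingerprint perceptual features rather than raw bytes (so re-encoded but unaltered copies still match) and would write to a distributed ledger rather than an in-memory list; this sketch only shows the verification flow.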
What is the impact of misinformation online on society?
The real impact of the growing interest in fake news has been the realization that the public may not be well equipped to separate quality information from the rest, and may not have the right insight into the world of fake news either. Content that lacks authenticity or is suspicious, misleading, or potentially disinformative can have an international impact. Fact-checkers are very effective at detecting misleading and false content. However, they have a hard time communicating their results to media consumers. Moreover, fake news spreads even after it has been debunked and continues to have malicious effects on society. Therefore, fact-checkers need tools that give them access to reliable data.
Our aim is to restore trust in traditional fact-checking organizations and mediators.
What’s the future of misinformation online?
The wider availability of computing power and the development of AI technology have accelerated the commodification of deepfake technology. Since its first appearance in late 2017, synthetic audio-visual content generation has developed rapidly, both in terms of technological sophistication and societal impact. The weaponization of deepfakes and synthetic media is reshaping the cybersecurity landscape, amplifying traditional cyber threats and enabling entirely new attack vectors.
The quality of synthetic content is improving exponentially, and creating it is becoming faster and easier, a technology that endangers society and democracy if it ends up in the wrong hands. China, the USA, and Russia in particular are already moving forward with R&D and legislation.