SEOUL, March 30 (Yonhap) -- The Korea Advanced Institute of Science and Technology (KAIST) said Tuesday its research team launched the country's first mobile app that detects deepfakes -- images or videos digitally manipulated with artificial intelligence (AI) -- to curb misinformation and prevent potential harm to victims targeted by the technology.
The software, named KaiCatch, detects deepfakes by using AI technology that recognizes abnormal distortions in a subject's face, according to KAIST, South Korea's top science and technology university.
Users can upload images or video frames to the app, which calculates the likelihood that the image has been manipulated, for 2,000 won (US$1.76) per image, according to Lee Heung-kyu, a professor at KAIST's school of computing who is behind KaiCatch.
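The article does not describe KaiCatch's internals, but the flow it outlines, an uploaded image scored with a manipulation likelihood, can be sketched roughly as below. The function name, the variance-based heuristic, and the threshold are purely illustrative assumptions; a real detector like KaiCatch would run a trained neural network over facial regions rather than a pixel statistic.

```python
def detect_manipulation(pixels: list[list[float]]) -> float:
    """Return an illustrative likelihood in [0, 1] that an image is manipulated.

    Hypothetical stand-in for a trained detector: unusually low local
    pixel variance (over-smoothed skin is a common generative artifact)
    raises the suspicion score. Not KaiCatch's actual method.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    variance = sum((p - mean) ** 2 for p in flat) / len(flat)
    # Map low variance to high suspicion; clamp the score into [0, 1].
    # The 0.05 scale factor is an arbitrary illustrative choice.
    return max(0.0, min(1.0, 1.0 - variance / 0.05))


# A perfectly uniform (over-smoothed) patch scores as highly suspicious,
# while a high-variance (noisy, natural-looking) patch scores low.
smooth_patch = [[0.5, 0.5], [0.5, 0.5]]
noisy_patch = [[0.0, 1.0], [1.0, 0.0]]
print(detect_manipulation(smooth_patch))  # 1.0
print(detect_manipulation(noisy_patch))   # 0.0
```

In the app itself this score would be computed server-side per uploaded image, which is consistent with the per-image pricing the article describes.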
Lee said he has been developing image manipulation detection software since 2015, amassing a large database of image and video data.
The researcher expects KaiCatch to help the broader public detect deepfakes, which have become a major concern in South Korea as they have been used to create porn involving female celebrities.
A petition on the presidential office's website early this year, which called for strong punishment against deepfake porn users, earned over 390,000 signatures.
"This is just the starting point for KaiCatch," Lee said. "We plan to keep updating the software to detect new deepfake technology."
KaiCatch is currently only available on the Android operating system in Korean, but Lee said his team is planning to release an iOS version for Apple users and support other languages, including English, Chinese and Japanese.