UNDERSTANDING THE TRUE NATURE OF FAKE IMAGES AND VIDEOS

Keywords

Deepfake, GAN, autoencoders, fake images, fake videos, artificial intelligence, neural networks.

Abstract

Deepfake is a technology that allows a user to create images, audio, and video that appear highly realistic but are in fact fake. This level of capability has been achieved through advances in Deep Learning, Machine Learning, Artificial Intelligence, and Neural Networks. The process may use a combination of algorithms such as generative adversarial networks (GANs) and autoencoders.

Like any technology, deepfakes clearly have both positive and negative consequences. On the positive side, deepfake technology can help create a new, improved, high-quality voice for people who have lost the ability to speak or who have a speech impairment. In the commercial sphere, it can raise the quality of animation and film, bring creative ideas to life, or provide psychological support to people who have lost loved ones by digitally recreating them through deepfakes.

The negative side is that highly realistic fake images, videos, or audio recordings can threaten personal privacy, the operation of organizations, democracy, and even national security.

This article covers the history of deepfake technology, how it works, and the main algorithms it employs. It also analyzes key studies in the scientific literature, and reviews deepfake detection methods and the effective preventive measures that have been taken to avert harmful consequences.
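As a minimal sketch of the autoencoder-based face-swap idea mentioned above (all dimensions and "faces" here are hypothetical toy stand-ins, not the architecture of any real deepfake tool): a single shared encoder is trained together with one decoder per identity, and the swap is produced by decoding identity A's encoding with identity B's decoder.

```python
import numpy as np

# Toy shared-encoder / per-identity-decoder layout used in
# autoencoder-based face swapping. Sizes are assumptions;
# real systems use deep convolutional networks.
rng = np.random.default_rng(0)
DIM, LATENT = 16, 4          # toy "image" and latent sizes

W_enc = rng.normal(0.0, 0.1, (LATENT, DIM))            # shared encoder
W_dec = {"A": rng.normal(0.0, 0.1, (DIM, LATENT)),     # decoder for face A
         "B": rng.normal(0.0, 0.1, (DIM, LATENT))}     # decoder for face B

def encode(x):
    return W_enc @ x

def decode(z, who):
    return W_dec[who] @ z

def train_step(x, who, lr=0.01):
    """One gradient step on 0.5 * ||decode(encode(x)) - x||^2."""
    global W_enc
    z = encode(x)
    err = decode(z, who) - x
    g_dec = np.outer(err, z)                 # gradient w.r.t. W_dec[who]
    g_enc = np.outer(W_dec[who].T @ err, x)  # gradient w.r.t. W_enc
    W_dec[who] -= lr * g_dec
    W_enc -= lr * g_enc
    return float(0.5 * err @ err)

# Two fixed vectors standing in for aligned face crops of identities A and B.
face_A = rng.normal(size=DIM)
face_B = rng.normal(size=DIM)

losses = []
for _ in range(500):
    losses.append(train_step(face_A, "A"))
    train_step(face_B, "B")

# The swap: encode identity A, then decode with identity B's decoder.
swapped = decode(encode(face_A), "B")
```

Because both decoders are trained against the same shared latent space, decoder B learns to render B-like outputs from any encoding, including A's; production pipelines replace these linear maps with convolutional networks and often add an adversarial (GAN) loss on top of the reconstruction objective.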

