Generating Modern Persian Carpet Maps by Style Transfer

Document Type: Research Paper

Authors

1 Department of Computer Engineering, Shahid Bahonar University of Kerman, Kerman, Iran

2 Department of Carpet, Saba Faculty of Art and Architecture, Shahid Bahonar University of Kerman, Kerman, Iran

Abstract

Today, Deep Neural Networks (DNNs) have demonstrated strong performance in various fields, and one of their most attractive applications is the production of artistic designs. A carpet, regarded as a work of art, is one of the most important items in a house and has enthusiasts all over the world. The first stage of producing a carpet is preparing its map, which is a difficult, time-consuming, and expensive task. In this research, our purpose is to use DNNs to generate modern Persian carpet maps. To reach this aim, three different DNN style-transfer methods are proposed and compared against each other. In the proposed methods, Style-Swap is used to create the initial carpet map, and then, to generate more diverse designs, the CLIPstyler, Gatys, and Style-Swap methods are applied separately. In addition, several methods for coloring the produced carpet maps are examined and introduced. The designed maps are evaluated via questionnaires, and the results of the user evaluations confirm the popularity of the generated carpet maps. To the best of our knowledge, this is the first time intelligent methods have been used to produce carpet maps, reducing human intervention. The proposed methods can successfully produce diverse carpet designs at a higher speed than traditional approaches.
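To make the second stage more concrete, the sketch below illustrates a Gatys-style optimization (content loss plus Gram-matrix style loss over VGG-19 features) in PyTorch. It is a minimal illustration only: the layer indices, loss weights, iteration count, and input file names (e.g., an initial carpet map as the content image) are assumptions for the example, not the exact configuration used in this work.

```python
# Minimal sketch of Gatys-style transfer (illustrative; not the paper's exact setup).
# ImageNet normalization is omitted for brevity.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def gram(feat):
    # Gram matrix of a feature map: captures style statistics.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Pretrained VGG-19 used only as a fixed feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # assumed conv layers for style features
CONTENT_LAYER = 21                  # assumed conv layer for content features

def extract(x):
    styles, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            styles.append(x)
        if i == CONTENT_LAYER:
            content = x
    return styles, content

# Hypothetical inputs: an initial carpet map (content) and a style reference image.
content_img = load_image("initial_carpet_map.png")
style_img = load_image("style_reference.png")

with torch.no_grad():
    style_targets = [gram(s) for s in extract(style_img)[0]]
    content_target = extract(content_img)[1]

# Optimize the output image directly, starting from the content image.
result = content_img.detach().clone().requires_grad_(True)
opt = torch.optim.Adam([result], lr=0.02)

for step in range(300):
    opt.zero_grad()
    styles, content = extract(result)
    style_loss = sum(F.mse_loss(gram(s), t) for s, t in zip(styles, style_targets))
    content_loss = F.mse_loss(content, content_target)
    loss = 1e5 * style_loss + content_loss   # weights are illustrative
    loss.backward()
    opt.step()
```

The same optimization loop can be reused with different style references to produce varied designs; the Style-Swap and CLIPstyler stages mentioned in the abstract follow different formulations and are not shown here.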

Keywords

Main Subjects

References
[1] L. A. Gatys, A. S. Ecker, and M. Bethge, "Image style transfer using convolutional neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2414-2423.
[2] N. Ashikhmin, "Fast texture transfer," IEEE Computer Graphics and Applications, vol. 23, no. 4, pp. 38-43, 2003.
[3] C. Zhao, "A survey on image style transfer approaches using deep learning," in Journal of Physics: Conference Series, vol. 1453, no. 1. IOP Publishing, 2020, p. 012129.
[4] L. Sheng, Z. Lin, J. Shao, and X. Wang, "Avatar-Net: Multi-scale zero-shot style transfer by feature decoration," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8242-8250.
[5] S. Gu, C. Chen, J. Liao, and L. Yuan, "Arbitrary style transfer with deep feature reshuffle," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8222-8231.
[6] C. Li and M. Wand, "Combining Markov random fields and convolutional neural networks for image synthesis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2479-2486.
[7] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang, "Universal style transfer via feature transforms," Advances in Neural Information Processing Systems, vol. 30, 2017.
[8] X. Li, S. Liu, J. Kautz, and M.-H. Yang, "Learning linear transformations for fast image and video style transfer," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3809-3817.
[9] X. Huang and S. Belongie, "Arbitrary style transfer in real-time with adaptive instance normalization," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1501-1510.
[10] Y. Zhang, F. Tang, W. Dong, H. Huang, C. Ma, T.-Y. Lee, and C. Xu, "Domain enhanced arbitrary image style transfer via contrastive learning," arXiv preprint arXiv:2205.09542, 2022.
[11] T. Q. Chen and M. Schmidt, "Fast patch-based style transfer of arbitrary style," arXiv preprint arXiv:1612.04337, 2016.
[12] Y. Zhang, C. Fang, Y. Wang, Z. Wang, Z. Lin, Y. Fu, and J. Yang, "Multimodal style transfer via graph cuts," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 5943-5951.
[13] M. Afifi, A. Abuolaim, M. Hussien, M. A. Brubaker, and M. S. Brown, "CAMS: Color-aware multi-style transfer," arXiv preprint arXiv:2106.13920, 2021.
[14] G. Kwon and J. C. Ye, "CLIPstyler: Image style transfer with a single text condition," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 18062-18071.
[15] Z.-S. Liu, L.-W. Wang, W.-C. Siu, and V. Kalogeiton, "Name your style: An arbitrary artist-aware image style transfer," arXiv preprint arXiv:2202.13562, 2022.
[16] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., "Learning transferable visual models from natural language supervision," in International Conference on Machine Learning. PMLR, 2021, pp. 8748-8763.
[17] T. Karras, S. Laine, and T. Aila, "A style-based generator architecture for generative adversarial networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401-4410.
[18] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223-2232.
[19] H. Chang, O. Fried, Y. Liu, S. DiVerdi, and A. Finkelstein, "Palette-based photo recoloring," ACM Transactions on Graphics, vol. 34, no. 4, art. 139, 2015.
[20] R. Gal, O. Patashnik, H. Maron, G. Chechik, and D. Cohen-Or, "StyleGAN-NADA: CLIP-guided domain adaptation of image generators," arXiv preprint arXiv:2108.00946, 2021.
[21] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in European Conference on Computer Vision. Springer, 2014, pp. 740-755.
[22] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma et al., "Visual Genome: Connecting language and vision using crowdsourced dense image annotations," International Journal of Computer Vision, vol. 123, no. 1, pp. 32-73, 2017.
[23] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li, "YFCC100M: The new data in multimedia research," Communications of the ACM, vol. 59, no. 2, pp. 64-73, 2016.