[1] L. A. Gatys, A. S. Ecker, and M. Bethge, "Image style transfer using convolutional neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2414–2423.
[2] N. Ashikhmin, "Fast texture transfer," IEEE Computer Graphics and Applications, vol. 23, no. 4, pp. 38–43, 2003.
[3] C. Zhao, "A survey on image style transfer approaches using deep learning," in Journal of Physics: Conference Series, vol. 1453, no. 1. IOP Publishing, 2020, p. 012129.
[4] L. Sheng, Z. Lin, J. Shao, and X. Wang, "Avatar-Net: Multi-scale zero-shot style transfer by feature decoration," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8242–8250.
[5] S. Gu, C. Chen, J. Liao, and L. Yuan, "Arbitrary style transfer with deep feature reshuffle," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8222–8231.
[6] C. Li and M. Wand, "Combining Markov random fields and convolutional neural networks for image synthesis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2479–2486.
[7] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang, "Universal style transfer via feature transforms," Advances in Neural Information Processing Systems, vol. 30, 2017.
[8] X. Li, S. Liu, J. Kautz, and M.-H. Yang, "Learning linear transformations for fast image and video style transfer," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3809–3817.
[9] X. Huang and S. Belongie, "Arbitrary style transfer in real-time with adaptive instance normalization," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1501–1510.
[10] Y. Zhang, F. Tang, W. Dong, H. Huang, C. Ma, T.-Y. Lee, and C. Xu, "Domain enhanced arbitrary image style transfer via contrastive learning," arXiv preprint arXiv:2205.09542, 2022.
[11] T. Q. Chen and M. Schmidt, "Fast patch-based style transfer of arbitrary style," arXiv preprint arXiv:1612.04337, 2016.
[12] Y. Zhang, C. Fang, Y. Wang, Z. Wang, Z. Lin, Y. Fu, and J. Yang, "Multimodal style transfer via graph cuts," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 5943–5951.
[13] M. Afifi, A. Abuolaim, M. Hussien, M. A. Brubaker, and M. S. Brown, "CAMS: Color-aware multi-style transfer," arXiv preprint arXiv:2106.13920, 2021.
[14] G. Kwon and J. C. Ye, "CLIPstyler: Image style transfer with a single text condition," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 18062–18071.
[15] Z.-S. Liu, L.-W. Wang, W.-C. Siu, and V. Kalogeiton, "Name your style: An arbitrary artist-aware image style transfer," arXiv preprint arXiv:2202.13562, 2022.
[16] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., "Learning transferable visual models from natural language supervision," in International Conference on Machine Learning. PMLR, 2021, pp. 8748–8763.
[17] T. Karras, S. Laine, and T. Aila, "A style-based generator architecture for generative adversarial networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401–4410.
[18] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223–2232.
[19] H. Chang, O. Fried, Y. Liu, S. DiVerdi, and A. Finkelstein, "Palette-based photo recoloring," ACM Trans. Graph., vol. 34, no. 4, pp. 139:1–139:15, 2015.
[20] R. Gal, O. Patashnik, H. Maron, G. Chechik, and D. Cohen-Or, "StyleGAN-NADA: CLIP-guided domain adaptation of image generators," arXiv preprint arXiv:2108.00946, 2021.
[21] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in European Conference on Computer Vision. Springer, 2014, pp. 740–755.
[22] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma et al., "Visual Genome: Connecting language and vision using crowdsourced dense image annotations," International Journal of Computer Vision, vol. 123, no. 1, pp. 32–73, 2017.
[23] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li, "YFCC100M: The new data in multimedia research," Communications of the ACM, vol. 59, no. 2, pp. 64–73, 2016.