[1] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[2] CHENG W H, SONG S J, CHEN C Y, et al. Fashion meets computer vision: A survey[J]. ACM Computing Surveys, 2022, 54(4): 1-41.
[3] SHI Qian, LUO Ronglei. Research progress of clothing image generation based on generative adversarial networks[J/OL]. Advanced Textile Technology, 2022, 31: 19-29 [2023-01-23]. https://kns.cnki.net. DOI: 10.19398/j.att.202203056.
[4] CHEN H, LEI S, ZHANG S G, et al. Man-algorithm cooperation intelligent design of clothing products in multi links[J]. Fibres and Textiles in Eastern Europe, 2022, 30(1): 59-66.
[5] ZHAO Mengru. Application progress of artificial intelligence in clothing style design[J]. China Textile Leader, 2021(12): 74-77.
[6] JIANG S, LI J, FU Y. Deep learning for fashion style generation[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(9): 4538-4550.
[7] TANG Renwei, LIU Qihe, TAN Hao. Review of neural style transfer models[J]. Computer Engineering and Applications, 2021, 57(19): 32-43.
[8] JING Y C, YANG Y Z, FENG Z L, et al. Neural style transfer: A review[J]. IEEE Transactions on Visualization and Computer Graphics, 2020, 26(11): 3365-3385.
[9] MO D, ZOU X, WONG W K. Neural stylist: Towards online styling service[J]. Expert Systems with Applications, 2022, 203: 117333.
[10] GATYS L, ECKER A, BETHGE M. A neural algorithm of artistic style[J]. Journal of Vision, 2016, 16(12): 356.
[11] WANG H Y, XIONG H T, CAI Y Y. Image localized style transfer to design clothes based on CNN and interactive segmentation[J]. Computational Intelligence and Neuroscience, 2020, 2020: 8894309.
[12] LI Y J, FANG C, YANG J M, et al. Universal style transfer via feature transforms[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, California, USA. New York: ACM, 2017: 385-395.
[13] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems: Volume 2. Montreal, Canada. New York: ACM, 2014: 2672-2680.
[14] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA. IEEE, 2017: 5967-5976.
[15] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//IEEE International Conference on Computer Vision (ICCV). Venice, Italy. IEEE, 2017: 2242-2251.
[16] MO S, CHO M, SHIN J. InstaGAN: Instance-aware image-to-image translation[J]. ArXiv, 2018: 1812.10889. https://arxiv.org/abs/1812.10889.
[17] SALIMANS T, GOODFELLOW I, ZAREMBA W, et al. Improved techniques for training GANs[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain. New York: ACM, 2016: 2234-2242.
[18] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[19] YI Z, ZHANG H, TAN P, et al. DualGAN: Unsupervised dual learning for image-to-image translation[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy. IEEE, 2017: 2868-2876.
[20] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA. IEEE, 2016: 770-778.
[21] LIN Hong, REN Shuo, YANG Yi, et al. Unsupervised image-to-image translation with self-attention and relativistic discriminator adversarial networks[J]. Acta Automatica Sinica, 2021, 47(9): 2226-2237.
[22] GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of Wasserstein GANs[J]. ArXiv, 2017: 1704.00028. https://arxiv.org/abs/1704.00028.
[23] LIANG X D, LIN L, YANG W, et al. Clothes co-parsing via joint image segmentation and labeling with application to clothing retrieval[J]. IEEE Transactions on Multimedia, 2016, 18(6): 1175-1186.
[24] CHEN Huaiyuan, ZHANG Guangchi, CHEN Gao, et al. Research progress of image style transfer based on deep learning[J]. Computer Engineering and Applications, 2021, 57(11): 37-45.
[25] LI Min, LIU Bingqing, PENG Qinglong, et al. A camouflage suit pattern design method based on the CycleGAN algorithm[J]. Journal of Silk, 2022, 59(8): 100-106.
[26] ZENG Xianhua, LU Yuzhe, TONG Shiyue, et al. Photorealism style transfer combining MRFs-based and Gram-based features[J]. Journal of Nanjing University (Natural Science), 2021, 57(1): 1-9.
[27] QUINON P. Engineered emotions[J]. Science, 2017, 358(6364): 729.