Style Conversion of Zhuang Brocade Patterns Based on Low-Rank Adaptation and Deep Learning
As a key bearer of traditional Zhuang culture, Zhuang brocade patterns face challenges such as low design efficiency and a stylistic disconnect from modern demands. To address this, the study proposes an integrated model that combines low-rank adaptation (LoRA) fine-tuning, a Stable Diffusion model, and a generative adversarial network (GAN) for intelligent pattern generation and style transformation. The model adopts a GAN-based image-to-image method for pattern generation and introduces an improved synthesis-and-cascading transformation method to achieve differentiated style conversion of brocade patterns. In performance-validation experiments, the model shows clear clustering boundaries in complex pattern-feature recognition, with a peak signal-to-noise ratio of 32.5 dB, an initial style-transfer score of 78.3, and an average diversity index of 0.78. When processing 400 data samples, the model requires 1123 MB of memory and responds within 122 ms, significantly outperforming the comparison models in both performance and efficiency. These results show that the proposed model offers advantages in the intelligent generation of Zhuang brocade patterns in terms of style-transfer quality, style adaptability, and overall resource usage. The findings provide an effective solution for intelligent pattern generation and modern style adaptation, with broader potential applications in protecting and innovating intangible cultural heritage.
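The core of LoRA fine-tuning mentioned above can be illustrated with a minimal sketch: a frozen weight matrix W is adapted by adding a trainable low-rank product B·A, so only the two small factors are trained. The dimensions, rank, and NumPy implementation below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Minimal LoRA sketch: the frozen pretrained weight W (d_out x d_in) stays
# fixed; only two small factors B (d_out x r) and A (r x d_in) are trained,
# so the adapted layer computes (W + B @ A) @ x. With rank r << d, the
# trainable parameter count drops from d_out*d_in to r*(d_out + d_in).

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4          # illustrative sizes, not from the paper

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                 # zero init: update starts at zero

def lora_forward(x, scale=1.0):
    """Forward pass of a LoRA-adapted linear layer."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 at initialization, the adapted layer matches the frozen one.
assert np.allclose(lora_forward(x), W @ x)

full_params = d_out * d_in          # 4096 parameters in the full weight
lora_params = r * (d_out + d_in)    # 512 trainable LoRA parameters
```

This is why LoRA makes fine-tuning a large generator tractable: the diffusion backbone stays frozen, and only the small rank-r factors are updated for the brocade-style domain.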
- This work (including HTML and PDF Files) is licensed under a Creative Commons Attribution 4.0 International License.