Abstract
Traditional Chinese ink painting and Western oil painting are two distinctive artistic media of significant cultural value. This study integrates multimodal visual networks to enable real-time interaction between artists and AI systems in creative settings, combining the smooth, subtle gradients of Chinese ink with the bright, textured brushstrokes of oil painting. The objective is to investigate human-AI collaborative artistic production by building a multimodal visual network that fuses the two painting traditions using advanced AI techniques. A dataset of labelled images of Chinese ink paintings and Western oil paintings was used for training. Gabor filters capture texture and pattern, separating the thin, smooth gradients of ink from the rough brushstrokes of oil; Histogram of Oriented Gradients (HOG) and Oriented FAST and Rotated BRIEF (ORB) descriptors are then extracted and fused to improve texture and shape detection. The proposed generation method, a Flexible Firefly Algorithm Driven Causality-Aware Convolutional Neural Network (FFA-CausalConvoNet), blends these features across multiple network layers, incorporating artists' distinctive visual elements to create hybrid artwork that convincingly merges the two styles. The method is implemented in Python 3.10.1. The resulting artworks display a unique blend of oil painting's texture and the delicate brushwork of Chinese ink, achieving a style-fusion accuracy of 94.5%, a texture quality score of 9.1, a processing time of 3.5 s, and a cultural fidelity score of 8.5. These results highlight the potential of AI-human collaboration in hybrid art: FFA-CausalConvoNet effectively incorporates the distinctive qualities of both art forms, opening new paths for creative expression in cross-cultural art development.
