Exploiting generative AI as a collaborator to enhance creators’ dissertations
Abstract
The concept dissertation is a critical and analytical text through which the author communicates with the public. However, creators may be too subjective and associative in their descriptions, losing the ability to communicate effectively with the public. This study aims to enhance public understanding by incorporating moderately objective descriptions, thereby fostering wider acceptance of art and design. Utilizing the text-to-image function of Artificial Intelligence-Generated Content (AIGC), this study explores a method in which artificial intelligence (AI) acts as a collaborative partner for creators, helping them improve the concept dissertation promptly through faithful, data-based simulated feedback. The researcher conducted a practical experiment involving 22 experienced creators in a master’s program, all well-versed in the art and design field, collecting texts, generated images, and observational notes, which were then cross-analyzed using content analysis and the general inductive approach. The results indicated that the participants practiced writing objectively by generating AI images; the immediate feedback fostered reflection and awareness and encouraged participants to remain objective and to explore ways of improving the text. This study reveals that AI is both a tool for content generation and a collaborative partner that respects and responds to the participants’ creative intent in the learning environment. With the AI collaborative method, the creators controlled the subjective–objective balance with the public while emphasizing the complexity of their thinking and inspired expression. This study enhances the collaborative potential between participants and AI, facilitating rapid exploration and transformation and improving communication between artists and the public.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
All rights reserved. Apart from citations for the purposes of research, private study, or criticism and review, no part of this publication may be reproduced, stored, or transmitted in any other form without prior written permission from the publisher.