A recent study conducted by researchers from North Carolina State University has revealed diverse opinions among parents regarding the use of AI-generated images in children’s books. While many parents are open to integrating these images, they express significant concerns about the potential impact on children’s understanding and safety. The findings highlight the necessity of human oversight in the creation of AI illustrations.
The study involved 13 parent-child pairs, in which children aged 4 to 8 each read two of three stories, each illustrated in a different way: with AI-generated art, with human art enhanced by AI, or with illustrations created solely by humans. After reading, children rated their enjoyment of the stories and their reactions to the accompanying images, while parents were interviewed in depth about their views on the illustrations, with a focus on concerns about safety and emotional accuracy.
Qiao Jin, the first author of the paper and an assistant professor at North Carolina State University, noted, “We wanted to explore how and whether AI-generated images affected the experience of reading stories for parents and kids.” The research revealed that children were particularly sensitive to the emotional content of illustrations, often noticing discrepancies between the emotions portrayed in the images and those expressed in the text.
Concerns About Accuracy and Safety
Parents' and children's concerns varied with the type of story they read. With realistic or science-based narratives, for example, parents and older children worried more about the accuracy of the AI-generated images. Older children were quick to spot discrepancies in the size or behavior of what was depicted, while parents were especially troubled by errors that could encourage unsafe behavior.
Despite their general openness to AI images, many parents voiced deeper reservations about AI replacing human artists, and they were particularly uncomfortable with AI being used to generate the text of stories, preferring narratives written by human authors. The study also tested labeling individual images as AI-generated, finding that most participants either overlooked the labels or found them distracting while reading.
The Importance of Human Review
According to Jin, parents expressed a desire for a clear notification on the cover of books indicating whether AI had been involved in the creation of illustrations. This would enable them to make informed purchasing decisions. “Parents preferred a simple cover label, not page-level flags, stating whether AI was used to illustrate the book,” Jin explained.
The study underscores the importance of expert review when AI is used to generate illustrations for children's literature. Irene Ye Yuan, the paper's corresponding author from McMaster University, echoed these findings, stating, "Certain types of errors in AI-generated images can pose problems for parents and children, depending on the nature of the stories."
The paper, titled “’They all look mad with each other’: Understanding the Needs and Preferences of Children and Parents in AI-Generated Images for Stories,” has been published in the International Journal of Child-Computer Interaction. This research was supported by the OpenAI Researcher Access Program. The results underline the need for a balanced approach to integrating AI technology in children’s literature, ensuring that safety, emotional connection, and artistic integrity remain priorities.