Predicting adolescent violence in Wartegg Zeichen Test drawing images based on deep learning

Bibliographic Details
Main Authors: Kyung-yeul Kim, Young-bo Yang, Mi-ra Kim, Ji Su Park, Jihie Kim
Format: Article
Language: English
Published: Taylor & Francis Group 2023-12-01
Series: Connection Science
Online Access: https://www.tandfonline.com/doi/10.1080/09540091.2023.2286186
Description
Summary: This study addresses the problem of negative behaviour arising from the mental and physical stresses of adolescence, and in particular the health-care problems of students exposed to violence. A projective test using pictures can elicit information from adolescents through the direct experiences represented in drawings to which the subject unconsciously reacts, yet few methods analyse the images adolescents draw as image data. This study analyses drawings from 134 students who received fifth-degree punishment for violent behaviour and 134 nonviolent students. We use a convolutional neural network (CNN) with a softmax classifier, a CNN with a support vector machine (SVM) classifier, a style-transfer generative adversarial network, and ensemble techniques to analyse images drawn in the Wartegg Zeichen Test (WZT) and to predict violence through deep learning, achieving an accuracy of 93%–98%. This study is the first to automatically analyse and predict violence with a deep learning model from images adolescents draw on the WZT, and it shows how the WZT can be used to conduct proactive violence screening and improve student health care. Advances in deep learning for image feature extraction are expected to provide further research opportunities.
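The abstract names two classifier variants, CNN (softmax) and CNN (SVM). As a rough, hypothetical sketch only (the authors' architecture, input size, and training details are not given in this record), the following Python code shows one way such a pair could be set up: a small CNN trained end to end with a softmax (cross-entropy) head, and an SVM fitted on the same CNN's penultimate-layer features. The class name WZTCNN, the 128x128 grayscale input, and the random placeholder data are all assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from sklearn.svm import SVC

class WZTCNN(nn.Module):
    """Toy CNN for drawing images with a shared feature extractor (assumed architecture)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.embed = nn.Linear(32 * 4 * 4, 64)  # penultimate feature layer
        self.head = nn.Linear(64, n_classes)    # softmax head (via CrossEntropyLoss)

    def forward(self, x, return_features=False):
        z = torch.relu(self.embed(self.features(x).flatten(1)))
        return z if return_features else self.head(z)

# Placeholder data: stand-ins for scanned WZT drawings and labels
# (0 = nonviolent, 1 = violent); real inputs would come from the dataset.
x = torch.randn(8, 1, 128, 128)
y = torch.randint(0, 2, (8,))

# CNN (softmax) variant: cross-entropy loss applies softmax internally.
model = WZTCNN()
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

# CNN (SVM) variant: fit an SVM on the frozen CNN features.
with torch.no_grad():
    feats = model(x, return_features=True).numpy()
svm = SVC(kernel="rbf").fit(feats, y.numpy())
print(svm.predict(feats))

An ensemble, also mentioned in the abstract, could then combine the two heads' predictions, for example by majority vote; the style-transfer GAN applied to the drawing images is outside the scope of this sketch.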
ISSN: 0954-0091, 1360-0494