Health & Wellness, Artificial Intelligence & UX Design
Visual Artwork for Human Affections (VAHA)
A groundbreaking app blending generative AI and journaling to transform emotions into abstract visual art for deeper self-expression.
Project Type
School project
Project Duration
5 weeks
Team Members
Liqian Z. & Yunfeng Q.
Tools Stack
Python (with libraries such as TensorFlow and Keras) and Figma
My Roles
Product Designer, AI Engineer, and Project Manager
Deliverables
✅ A predictive and generative AI model that explores the relationship between human emotions and visual artwork, uncovering how different emotional states can be represented through abstract visual art.
✅ An innovative mobile emotion tracker and journaling app designed to help users understand and express their emotions through personalized visual art.
Design Impacts
🔹Led the creation of a groundbreaking platform that transforms emotional experiences into visual representations, redefining how users express and communicate emotions.
🔹Designed and implemented an AI-driven system that generates artwork based on emotional input, showcasing AI’s ability to interpret and visualize complex human emotions.
01 Background
Many individuals struggle to express or interpret their emotions effectively. This challenge is even more pronounced in children and adults with Autism Spectrum Disorder (ASD), who may find it difficult to articulate their feelings through conventional means. According to recent statistics, approximately 1 in 36 children and 1 in 45 adults in the U.S. are diagnosed with ASD (CDC, 2023).
Existing tools lack the ability to provide a nuanced, personalized representation of emotions, leading to a gap in emotional understanding and communication.
1 in 36
U.S. children
1 in 45
U.S. adults
Reference: Centers for Disease Control and Prevention. (2023). Data & Statistics on Autism Spectrum Disorder. Retrieved from https://www.cdc.gov/autism/data-research/index.html.
02 Design Ideation
02.1 AI Model
The VAHA system integrates two established models: VGG16 and a GAN. VGG16, a Convolutional Neural Network (CNN), is adapted in our system to recognize emotions from facial expressions. The GAN (Generative Adversarial Network) then generates the visual artwork, using the emotion classifications produced by the VGG16 model as its input.
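The case study does not include the model code, but a minimal Keras sketch of the emotion classifier might look like the following. The function name, layer sizes, and label ordering are illustrative assumptions rather than the actual VAHA implementation; the sketch only shows the standard transfer-learning pattern of reusing a pretrained VGG16 backbone with a small classification head for the seven emotion categories.

```python
# Illustrative sketch (not the actual VAHA code): adapting a pretrained
# VGG16 backbone to classify the seven emotion categories used by VAHA.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Assumed label ordering; the real project may order these differently.
EMOTIONS = ["angry", "disgusted", "fearful", "happy", "neutral", "sad", "surprised"]

def build_emotion_classifier(input_shape=(224, 224, 3)):
    # Load VGG16 pretrained on ImageNet, dropping its original classifier head.
    backbone = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    backbone.trainable = False  # freeze convolutional features for transfer learning

    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(len(EMOTIONS), activation="softmax"),  # one probability per emotion
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```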
The VAHA model can produce captivating artistic representations of seven emotional states: happy, angry, fearful, disgusted, surprised, neutral, and sad, each rendered in a diverse range of associated artistic styles.
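As a companion sketch, here is one plausible shape for the GAN's generator: a conditional generator that takes a random noise vector plus an emotion label, embeds the label, and upsamples to a small RGB image. The conditioning scheme, resolution, and layer sizes are assumptions for illustration only.

```python
# Illustrative sketch (not the actual VAHA code): a conditional GAN generator
# that turns a noise vector plus an emotion label into a 64x64 abstract image.
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7   # happy, angry, fearful, disgusted, surprised, neutral, sad
LATENT_DIM = 128

def build_generator():
    noise = layers.Input(shape=(LATENT_DIM,))
    label = layers.Input(shape=(1,), dtype="int32")

    # Embed the emotion label and concatenate it with the noise vector,
    # so each emotion steers the generated artwork differently.
    label_emb = layers.Flatten()(layers.Embedding(NUM_EMOTIONS, 32)(label))
    x = layers.Concatenate()([noise, label_emb])

    x = layers.Dense(8 * 8 * 256, activation="relu")(x)
    x = layers.Reshape((8, 8, 256))(x)
    x = layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(x)  # 16x16
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)   # 32x32
    out = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)  # 64x64 RGB

    return models.Model([noise, label], out, name="vaha_generator")
```

In a conditional GAN like this, the generator is trained adversarially against a discriminator that also sees the emotion label, which is what allows different labels to map to visibly different artistic styles.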
02.2 Model Pipeline
03 Mobile App Design
03.1 UI Assets / Branding
03.2 Wireframes
03.3 User Flow
Detect & Generate
Use your phone's camera to detect your facial expression with VAHA's emotion prediction model, right within the app.
Next, let VAHA's generative AI create a never-before-seen artwork based on your emotion (a code sketch of this step follows the user flow).
Record & Review
Record your daily emotions and artworks with VAHA's journaling feature.
Review your emotional trends and other aspects of your mental well-being.
Post & Share
Share your artworks via social media.
Or post them to VAHA's global gallery, where you can like and comment on other users' artworks.
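To make the Detect & Generate step concrete, here is a hedged sketch of how the two models described in Section 02.1 might be chained at inference time. It assumes the hypothetical `build_emotion_classifier` and `build_generator` sketches above; the real in-app pipeline may differ.

```python
# Illustrative sketch: the "Detect & Generate" step, chaining the emotion
# classifier and the conditional generator sketched in Section 02.1.
import numpy as np

def detect_and_generate(face_frame, classifier, generator, latent_dim=128):
    """face_frame: a preprocessed face crop of shape (224, 224, 3), values in [0, 1]."""
    # 1. Predict the user's emotional state from the camera frame.
    probs = classifier.predict(face_frame[np.newaxis, ...], verbose=0)
    emotion_id = int(np.argmax(probs, axis=-1)[0])

    # 2. Sample fresh noise so every generated artwork is unique.
    noise = np.random.normal(size=(1, latent_dim)).astype("float32")

    # 3. Generate an abstract artwork conditioned on the predicted emotion.
    label = np.array([[emotion_id]], dtype="int32")
    artwork = generator.predict([noise, label], verbose=0)[0]

    # Map the generator's tanh output from [-1, 1] back to a displayable [0, 1] image.
    return emotion_id, (artwork + 1.0) / 2.0
```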
04 Product Demo
Video Demo
Figma Demo
Limitations
🔹In the VAHA model, both emotion and visual artwork possess a highly abstract nature, making them inherently subjective to individual interpretation.
🔹The accuracy of emotion recognition and the quality of generated artwork depend heavily on the underlying AI models and datasets. Current emotion recognition technologies may not fully capture the subtleties and nuances of human emotions, particularly across diverse populations with varying cultural backgrounds and expressions.
Future Implications
🔹In the realm of art therapy, the VAHA model holds significant potential for enhancing therapeutic outcomes. By employing the VAHA model, therapists can create personalized visual representations that accurately capture a client's emotional state, thereby facilitating a deeper understanding of their emotions and experiences.
🔹The VAHA model also shows promise for enhancing communication for individuals with autism spectrum disorder. Individuals with ASD often face challenges in interpreting and conveying emotions, making traditional therapeutic approaches less effective. By utilizing an image generation model tailored to their unique emotional experiences, these individuals can benefit from personalized visual representations that help bridge the communication gap.