Mirrored Empathy Project Documentation
UI Example of Output - Sad

UI Example of Output - Happy

Physical Build Plan

Electronic Build Plan



OpenAI testing
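A minimal sketch of the OpenAI request used in testing, assuming the v1 `openai` Python client and an `OPENAI_API_KEY` environment variable; the model name and system prompt below are placeholders, not the project's exact values:

```python
import os

def build_messages(user_text):
    """Wrap the user's words in a short empathetic system prompt (placeholder wording)."""
    return [
        {"role": "system",
         "content": "You are a supportive smart mirror. Reply in one or two sentences."},
        {"role": "user", "content": user_text},
    ]

def ask_gpt(user_text):
    # Imported here so the sketch loads even without the package installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=build_messages(user_text),
    )
    return resp.choices[0].message.content

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask_gpt("I had a rough day."))
```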

Mediapipe Testing
Emotional Classifier
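One way the emotion classifier can work on Mediapipe Face Mesh output is a mouth-geometry heuristic: if the mouth corners sit above the lip midpoint, call it happy; below, sad. The landmark indices below are Mediapipe's real mouth indices, but the threshold and the happy/sad rule are assumptions to tune on live video:

```python
import os

# Mediapipe Face Mesh landmark indices for the mouth.
LEFT_CORNER, RIGHT_CORNER = 61, 291
UPPER_LIP, LOWER_LIP = 13, 14

def classify_mouth(left_y, right_y, upper_y, lower_y, threshold=0.01):
    """Crude happy/sad/neutral call from normalized landmark y values.

    Image y grows downward, so corners *above* the lip midpoint
    (smaller y) suggest a smile. Threshold is an assumption to tune.
    """
    mid_y = (upper_y + lower_y) / 2
    corner_y = (left_y + right_y) / 2
    if corner_y < mid_y - threshold:
        return "happy"
    if corner_y > mid_y + threshold:
        return "sad"
    return "neutral"

def classify_frame(landmarks):
    """`landmarks` is one face's `face_landmarks.landmark` list from Mediapipe."""
    return classify_mouth(
        landmarks[LEFT_CORNER].y, landmarks[RIGHT_CORNER].y,
        landmarks[UPPER_LIP].y, landmarks[LOWER_LIP].y,
    )

if __name__ == "__main__" and os.environ.get("RUN_CAMERA_DEMO"):
    import cv2
    import mediapipe as mp
    mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            print(classify_frame(result.multi_face_landmarks[0].landmark))
    cap.release()
```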

Speech Testing
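A sketch of the speech test using the SpeechRecognition library: listen on the microphone, transcribe with `recognize_google`, then scan the transcript for emotion keywords. The keyword sets are assumptions; extend them as the project's vocabulary grows:

```python
import os

# Keyword sets are assumptions, not the project's exact lists.
EMOTION_KEYWORDS = {
    "sad": {"sad", "down", "unhappy", "upset"},
    "happy": {"happy", "great", "good", "excited"},
}

def detect_emotion_keyword(transcript):
    """Return the first emotion whose keyword appears in the transcript, else None."""
    words = set(transcript.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return None

if __name__ == "__main__" and os.environ.get("RUN_MIC_DEMO"):
    import speech_recognition as sr
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    text = recognizer.recognize_google(audio)
    print(text, "->", detect_emotion_keyword(text))
```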

Arduino code mapping emotions to colors; the Python script sends an emotion code over serial to change the LED color
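The Python side of that serial link can be sketched with pyserial: map the classified emotion to a one-byte code and write it to the Arduino. The letter codes, baud rate, and port path are assumptions standing in for the project's actual wiring:

```python
import os

# One-byte codes the Arduino sketch would switch on (assumed letters).
EMOTION_CODES = {"happy": b"H", "sad": b"S", "neutral": b"N"}

def emotion_to_code(emotion):
    """Map an emotion label to the single byte sent over serial; default neutral."""
    return EMOTION_CODES.get(emotion, b"N")

if __name__ == "__main__" and os.environ.get("RUN_SERIAL_DEMO"):
    import serial  # pyserial
    # Port path and baud rate are assumptions; match the Arduino sketch.
    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as ser:
        ser.write(emotion_to_code("sad"))
```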

Lighting test after keyword "sad"

Arduino Build

Testing
As of 6/1/25, the face-tracking feature freezes the live feed after a face is detected, and the GPT conversation starts lagging.
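One common cause of this symptom is running detection and the GPT call on the same thread that draws the video feed. A hedged sketch of decoupling them, assuming an OpenCV `cap.read()` loop: a background thread keeps a single-slot buffer filled with the newest frame, so the display and the classifier each read at their own pace instead of blocking one another:

```python
import threading

class LatestFrame:
    """Single-slot buffer: writers overwrite, readers always get the newest frame."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame

    def get(self):
        with self._lock:
            return self._frame

def capture_loop(cap, slot, stop_event):
    """Run in a background thread so slow detection never stalls the feed."""
    while not stop_event.is_set():
        ok, frame = cap.read()
        if ok:
            slot.put(frame)
```

The UI thread polls `slot.get()` every refresh, while the classifier thread pulls frames whenever it is ready; dropped frames are intentional.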



GUI of the Mirror text

Adding Date and Time to GUI
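The date-and-time overlay can be sketched with Tkinter: a white-on-black label (black disappears behind the one-way mirror) refreshed once a second with `after()`. The `strftime` format string is an assumption, not the project's exact layout:

```python
import os
import time

def clock_text(t=None):
    """Date/time string shown on the mirror; format is an assumption."""
    return time.strftime("%A, %B %d  %I:%M %p",
                         t if t is not None else time.localtime())

if __name__ == "__main__" and os.environ.get("DISPLAY"):
    import tkinter as tk
    root = tk.Tk()
    root.configure(bg="black")  # black pixels vanish behind the two-way mirror
    label = tk.Label(root, font=("Helvetica", 32), fg="white", bg="black")
    label.pack(padx=20, pady=20)

    def tick():
        label.config(text=clock_text())
        label.after(1000, tick)  # refresh once per second

    tick()
    root.mainloop()
```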


One way Mirror Acrylic Panel

Monitor setup behind mirror
Materials Needed
Hardware Requirements
- Two-Way Acrylic Mirror Panel (18"x24") - For a reflective surface that allows screen light to pass through
- Display Monitor (HD or higher) - For minimal UI text and output overlays behind the mirror
- Raspberry Pi 4 - Runs AI scripts, processes webcam input, and controls LEDs
- USB Webcam - Mounted above or below the mirror to capture facial expressions
- Microphone + Speaker Combo - For real-time speech input and AI-generated audio output
- Individually Addressable RGB LED Strip - Mounted on the back of the mirror for ambient emotional lighting
- Power Supply and USB Hubs - To support the Raspberry Pi and LED strips
Software & Platform Requirements
- Mediapipe (Python vision library, used with OpenCV) - For basic face and expression recognition
- PYt2s (Text-to-Speech Library) - For AI voice responses
- ChatGPT (OpenAI API) - For generating personalized responses based on input
- SpeechRecognition (Python Library) - For capturing user voice queries
- rpi_ws281x (LED control library) - For driving mood-based lighting based on sentiment analysis
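When driving the strip directly from the Pi with rpi_ws281x, the core step is mapping the detected emotion to an RGB color and painting every pixel. The color choices, strip length, and GPIO pin below are assumptions to adjust for the actual build:

```python
import os

# Emotion-to-RGB choices are assumptions; tune to taste.
EMOTION_COLORS = {
    "happy": (255, 180, 0),    # warm yellow
    "sad": (0, 60, 255),       # cool blue
    "neutral": (255, 255, 255),
}

def emotion_color(emotion):
    """RGB tuple for an emotion, defaulting to neutral white."""
    return EMOTION_COLORS.get(emotion, EMOTION_COLORS["neutral"])

if __name__ == "__main__" and os.environ.get("RUN_LED_DEMO"):
    from rpi_ws281x import PixelStrip, Color
    NUM_LEDS, GPIO_PIN = 30, 18  # assumptions: strip length and data pin
    strip = PixelStrip(NUM_LEDS, GPIO_PIN)
    strip.begin()
    r, g, b = emotion_color("sad")
    for i in range(strip.numPixels()):
        strip.setPixelColor(i, Color(r, g, b))
    strip.show()
```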