The Story of "Heal Afrika" and Xenophobic Voices


An interactive 3D experience exploring xenophobia in South Africa. A 3D head rises from dark, displaced lines, blending art, data, and tech to spark conversation.

Published Jan 10, 2025
Last Modified Jan 16, 2025

Inspiration and Starting Point

The spark for this project came from a YouTube video by Yuri Artiukh, where he delved into depthTexture post-processing. This technique renders a scene's depth to a texture via a framebuffer object (FBO), offering powerful tools for visual effects. Yuri also referenced Voices of Racism by the Human Rights Commission of New Zealand, a site using similar methods.
I began by implementing a grid of lines. The depth of the scene was saved as a texture, allowing me to displace the lines based on the scene’s depth using custom shaders. Additionally, I incorporated the curl noise algorithm to create fluid, wave-like displacements along the Y-axis.
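Curl noise earns its fluid look from a vector-calculus identity: the curl of any field is divergence-free, so the displacement swirls like an incompressible fluid instead of piling up. The project computes this in a shader; below is a minimal CPU-side sketch of the same idea, where `noise3` is a cheap deterministic stand-in for the simplex noise a real implementation would use.

```javascript
// Cheap deterministic pseudo-noise; a real implementation would use simplex noise.
function noise3(x, y, z) {
  return Math.sin(x * 1.7 + Math.sin(y * 2.3) + Math.sin(z * 1.1));
}

// Curl of a noise-derived potential field, via central differences.
// Three offset samples of the scalar noise act as the field's components.
function curlNoise(x, y, z, eps = 1e-4) {
  const p1 = (x, y, z) => noise3(x, y, z);
  const p2 = (x, y, z) => noise3(x + 31.3, y + 12.1, z + 7.7);
  const p3 = (x, y, z) => noise3(x + 4.2, y + 97.5, z + 51.9);

  const dp3dy = (p3(x, y + eps, z) - p3(x, y - eps, z)) / (2 * eps);
  const dp2dz = (p2(x, y, z + eps) - p2(x, y, z - eps)) / (2 * eps);
  const dp1dz = (p1(x, y, z + eps) - p1(x, y, z - eps)) / (2 * eps);
  const dp3dx = (p3(x + eps, y, z) - p3(x - eps, y, z)) / (2 * eps);
  const dp2dx = (p2(x + eps, y, z) - p2(x - eps, y, z)) / (2 * eps);
  const dp1dy = (p1(x, y + eps, z) - p1(x, y - eps, z)) / (2 * eps);

  // curl F = (dFz/dy - dFy/dz, dFx/dz - dFz/dx, dFy/dx - dFx/dy)
  return [dp3dy - dp2dz, dp1dz - dp3dx, dp2dx - dp1dy];
}
```

Feeding each line vertex's position into `curlNoise` and adding the result (scaled, and weighted by the depth texture) to its Y displacement gives the wave-like motion described above.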

The Challenge of Morph Targets

Animating the 3D head was the most complex part of the project. Morph targets, which enable animations by transitioning between predefined 3D mesh states, were critical for achieving lifelike movements.
Initially, I experimented with Face Cap for real-time facial animations using my iPhone. However, the results weren’t crisp enough. I pivoted to Blender, coupled with a Python script, but struggled to achieve the desired precision. Eventually, Autodesk Maya provided the solution, allowing me to refine the animation process and convert FBX files to the GLB format using another Python script.
After trial and error with five different 3D head models, I settled on a low-poly version that already included morph targets. This streamlined the process, allowing me to focus on driving animations programmatically.
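Once a model ships with morph targets, driving them from code is mostly index bookkeeping: three.js exposes a `morphTargetDictionary` (name to index) and a `morphTargetInfluences` array of per-target weights. A small sketch of that plumbing, using a plain object in place of a real loaded mesh:

```javascript
// Set a named morph target's weight on a mesh-like object. Assumes the
// three.js convention: morphTargetDictionary maps names to indices, and
// morphTargetInfluences holds one weight per target.
function setMorphInfluence(mesh, name, value) {
  const index = mesh.morphTargetDictionary[name];
  if (index === undefined) return false; // unknown target: leave mesh untouched
  // Weights outside [0, 1] can distort the mesh, so clamp.
  mesh.morphTargetInfluences[index] = Math.min(1, Math.max(0, value));
  return true;
}
```

Animating an expression then comes down to calling this each frame with an eased value.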

Using AI for Audio

The audio aspect was just as integral to the project as the visuals. I began by extracting negative sentiments from a spreadsheet of Twitter data. These text-based sentiments were then fed into an AI audio generator.
To heighten the emotional impact, I selected an angry voice from the AI’s library and generated audio files based on the tweets. The generated audio reflected the sentiments’ tone and amplified the intensity of the experience. This approach not only brought the dataset to life but also added a visceral layer to the project.

Driving the Animation

To bring the head to life, I used Rhubarb Lip Sync, a tool that generates mouth cues from audio files. These cues were mapped to specific morph targets, such as aPose, oPose, and smile. This mapping ensured the head’s expressions synced perfectly with the audio.
In the code, I implemented a system to play audio clips randomly upon interaction and matched the mouth movements to the audio cues. The head would move forward and backward dynamically, creating an immersive effect as users engaged with the scene.
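Rhubarb's JSON output is a list of timed mouth cues of the shape `{ start, end, value }`, where the values are the letters A–H plus X for silence. A sketch of the lookup described above; the cue-to-morph-target table here is illustrative, not the exact mapping the project uses:

```javascript
// Illustrative mapping from Rhubarb mouth-cue letters to this project's
// morph target names (the real table depends on the model's shapes).
const CUE_TO_MORPH = {
  A: 'aPose', B: 'aPose', C: 'oPose', D: 'aPose',
  E: 'oPose', F: 'oPose', G: 'aPose', H: 'aPose',
  X: null, // rest / silence: no mouth shape
};

// Find the morph target active at playback time t (seconds), or null between cues.
function activeMouthShape(mouthCues, t) {
  for (const cue of mouthCues) {
    if (t >= cue.start && t < cue.end) return CUE_TO_MORPH[cue.value] ?? null;
  }
  return null;
}
```

Each animation frame can then read the audio element's `currentTime`, look up the active shape, and raise that morph target's influence while easing the others back to zero.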

Building the Visuals

The grid of lines was created using Three.js, with a custom shader for displacement. Each line's movement was influenced by the scene’s depth data and enhanced with curl noise. The depthTexture provided a sense of depth and dimension, making the scene more dynamic.
The 3D head was loaded as a GLTF model and added to the scene. I ensured the lighting complemented the dark and eerie aesthetic, using ambient light to bring subtle highlights to the model.


Hosting the Project with EC2

To bring this project to the world, I hosted it using Amazon EC2. This step was both a technical challenge and a learning experience. I revisited skills such as SSH access to a virtual machine, using the AWS Console, and setting up a web server with Nginx. Additionally, I got hands-on with essential Linux commands like sudo and yum to manage installations and configurations. For version control, I configured SSH for GitHub, allowing me to securely clone my repository to the server.
Below are the steps I followed to set up, configure, and deploy the project:
1. Set Up the EC2 Instance
To host the Xenophobic Voices project, I started by launching an EC2 instance on AWS.

Launch an EC2 Instance:

  1. I logged in to the AWS Management Console and navigated to the EC2 Dashboard.
  2. From there, I clicked on "Launch Instance" to create a new instance.
  3. I selected an Amazon Machine Image (AMI). For my project, I chose Amazon Linux since it's optimised for EC2 instances and offers a good balance between performance and simplicity.
  4. Next, I selected the t2.micro instance type because it's eligible for the free tier, which suited my requirements for this project.

Configure the Instance:

  1. I then configured the instance details, such as the network settings.
  2. I made sure the storage was large enough to handle both my web application and any associated data.

Configure Security Group:

  1. I set up a security group to allow traffic on essential ports. This included:
    • HTTP/HTTPS traffic on ports 80/443 to ensure my app could be accessed through a web browser.
    • SSH access on port 22 to allow secure access to the instance for management and configuration.
After setting these up, I launched the instance and downloaded the private key (.pem file) for secure SSH access.
With these steps complete, I was ready to connect to the instance and move forward with setting up the environment.
2. Connect to the EC2 Instance and Set Up the Environment

Connect to the EC2 Instance:

  1. After launching the EC2 instance, I used SSH to connect to it.
    • In the terminal, I ran the following command to securely connect to the EC2 instance:

      ssh -i "your-key.pem" ec2-user@your-ec2-public-ip

      • I replaced "your-key.pem" with the actual name of the private key file I downloaded and "your-ec2-public-ip" with the public IP address of the EC2 instance.

Clone the Repository:

  1. Once connected to the EC2 instance, I needed to set up the project files. I used git to clone the repository for the Xenophobic Voices project.
    • I installed Git first if it wasn't already available:

      sudo yum install git -y

    • Then, I navigated to the desired directory and cloned the repository:

      git clone https://github.com/your-repo/xenophobic-voices.git
      cd xenophobic-voices

Set Up the Web Server (Nginx):

  1. With the project cloned, the next step was to set up a web server to serve the application. I chose Nginx, which is a powerful and efficient web server.
    • To install Nginx on Amazon Linux, I ran:

      sudo yum install nginx -y

    • After Nginx was installed, I needed to configure it to serve my project. I created a server block for my app (sudo is needed to write under /etc):

      sudo nano /etc/nginx/conf.d/xenophobic-voices.conf

    • Inside the configuration file, I added the following block to proxy requests to the backend:

      server {
          listen 80;
          server_name your-domain.com www.your-domain.com;

          location / {
              proxy_pass http://localhost:3000;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection 'upgrade';
              proxy_set_header Host $host;
              proxy_cache_bypass $http_upgrade;
          }
      }
    • After saving the configuration file, I validated the configuration, then started and enabled Nginx so the changes took effect and would persist across reboots:

      sudo nginx -t
      sudo systemctl start nginx
      sudo systemctl enable nginx
      sudo systemctl restart nginx
With these steps, I successfully connected to the EC2 instance, cloned the project repository, and set up Nginx to serve the application. The project was now accessible via a web browser on port 80.
The hosting process reminded me of the importance of mastering fundamental tools in tech and how they intersect with the creative process. Each step reinforced the technical backbone of the project, ensuring that the 3D experience was accessible and performant.

Reflections and Lessons Learned

This project was both an artistic and technical journey. It challenged me to push the boundaries of what’s possible with shaders, Three.js, and morph target animations. There were moments of frustration, especially during the animation phase, but each hurdle taught me valuable lessons about persistence and problem-solving.
By blending art and technology, this project speaks to the power of visual storytelling. It’s a representation of sentiments, a conversation starter, and a reflection of the complex narratives surrounding immigration and identity.
For a deeper dive into the inspiration and the broader social context behind this project, read my LinkedIn article: From Tweets to Action: The Story Behind Xenophobic Voices.
To anyone looking to explore similar projects, my advice is simple: embrace the challenges, dive deep into the documentation, and don’t shy away from experimenting. Creativity often lies in the spaces where logic and imagination meet.
 
