sighs

by Chelly Jin, Jack Turpin, and Youjin Chung

An exploration of how analyzing and applying nonverbal communication in computers can illuminate different perspectives on human/computer interfaces.

Collecting Data

We provided a list of six characteristics and prompted each person to sigh in a manner they felt best identified or characterized each word.

The Characteristics

  • dismay
  • dissatisfaction
  • boredom
  • futility
  • relief
  • lovelorn

We collected data from friends and acquaintances; however, a good portion of our data came from our Mechanical Turk request. The parameters of our request were:

[Image: Mechanical Turk request parameters]

Define

  1. openFrameworks to retrieve the audio data (a minimal stand-in sketch follows this list).

  2. Wekinator to train the model. We recorded the audio samples in Wekinator and used them to train the model. [Image: Wekinator and data training]

  3. Processing to showcase the output. [Image: Processing output]
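The openFrameworks app itself isn't reproduced in this writeup, but the first two steps of the pipeline are easy to sketch. Below is a minimal Python stand-in (not our oF code) that streams two crude audio features to Wekinator over OSC; Wekinator's default input address (`/wek/inputs` on port 6448) is real, while the `sounddevice`/`python-osc` libraries, feature choices, and block size are all assumptions:

```python
import numpy as np
import sounddevice as sd
from pythonosc.udp_client import SimpleUDPClient

RATE = 44100   # sample rate (assumption)
BLOCK = 4096   # analysis window in samples (assumption)

# Wekinator listens for input features at /wek/inputs on port 6448 by default.
client = SimpleUDPClient("127.0.0.1", 6448)

def features(samples):
    # Two crude features: RMS loudness and zero-crossing rate.
    rms = float(np.sqrt(np.mean(samples ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(samples))) > 0))
    return [rms, zcr]

def callback(indata, frames, time, status):
    # Send one feature vector per audio block to Wekinator.
    client.send_message("/wek/inputs", features(indata[:, 0]))

with sd.InputStream(channels=1, samplerate=RATE, blocksize=BLOCK, callback=callback):
    input("Streaming features to Wekinator; press Enter to stop.\n")
```

Wekinator then sends its classification back out over OSC (to `/wek/outputs` on port 12000 by default), which is what the Processing sketch listens for to drive the visuals.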

Generate

We used Word2Vec and t-SNE word embeddings. Given a word, the system links it to the closest characteristic (dismay, dissatisfaction, boredom, futility, relief, lovelorn) and plays an audio snippet of the respective characteristic.
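The notebook's lookup logic is simple to sketch. This hedged Python example uses pretrained GloVe vectors via gensim as a stand-in for the project's embeddings; the `sighs/<label>/*.wav` folder layout and the `simpleaudio` playback are assumptions:

```python
import glob
import random

import gensim.downloader as api  # downloads pretrained vectors on first use
import simpleaudio as sa

CHARACTERISTICS = ["dismay", "dissatisfaction", "boredom",
                   "futility", "relief", "lovelorn"]

# Pretrained GloVe vectors stand in for whatever embeddings the notebook used.
vectors = api.load("glove-wiki-gigaword-100")

def closest_characteristic(word):
    """Return the characteristic whose vector is nearest the input word's."""
    word = word.lower()
    if word not in vectors:
        raise KeyError(f"{word!r} is not in the embedding vocabulary")
    labels = [c for c in CHARACTERISTICS if c in vectors]  # skip any OOV label
    return max(labels, key=lambda c: vectors.similarity(word, c))

def play_sigh(characteristic):
    # Folder layout (one directory of .wav clips per label) is an assumption.
    clip = random.choice(glob.glob(f"sighs/{characteristic}/*.wav"))
    sa.WaveObject.from_wave_file(clip).play().wait_done()

play_sigh(closest_characteristic("exhausted"))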

This is a screenshot of the audio sample folder: [Image: audio sample folder]

Utilizing Jupyter to train the model: [Images: Word2Vec Jupyter notebook]
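The notebook screenshots aren't reproduced here; as a rough stand-in, the sketch below trains a tiny gensim Word2Vec model on a toy corpus and projects its vectors to 2-D with t-SNE. The corpus, hyperparameters, and plotting details are all assumptions, not the notebook's actual settings:

```python
import matplotlib.pyplot as plt
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

# Toy corpus; the writeup doesn't show the actual training text.
sentences = [["a", "deep", "sigh", "of", "relief"],
             ["sighing", "in", "boredom", "and", "futility"],
             ["dismay", "and", "dissatisfaction", "in", "a", "sigh"]]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

# Project the learned vectors to 2-D with t-SNE for plotting.
words = list(model.wv.index_to_key)
coords = TSNE(n_components=2, perplexity=5,
              init="random").fit_transform(model.wv[words])

plt.scatter(coords[:, 0], coords[:, 1])
for word, (x, y) in zip(words, coords):
    plt.annotate(word, (x, y))
plt.show()
```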

These are the trained text outputs: [Images: Word2Vec Jupyter notebook]
