most RNNs belong to the general category of “generative” models, in that they depart from strict classification or prediction tasks. it’s important to remember, though, that even generative models still tell us a lot about how we project rational thought onto the world, and depend greatly on our design choices, contextualization, objectives, and training data!

for a loose narrative of using RNNs for artistic purposes, see Ross Goodwin’s “Adventures in Narrated Reality” - see also “Sunspring”, the movie he made with an RNN-generated script

Allison Parrish: “When we teach computers to write, the computers don’t replace us any more than pianos replace pianists—in a certain way, they become our pens, and we become more than writers. We become writers of writers.”

some thoughts from Allison Parrish on exploring semantic space with experimental writing, and these two posts from katie rose pipkin (see also their moth generator twitter bot)

for technical background on RNNs:

compared to maximum-likelihood language models (e.g. Markov chains), RNNs balance structure at multiple scales (context-awareness) while generating completely new material (no copying) - see the sketch below for the contrast.
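
as a point of reference, here’s a minimal sketch of a maximum-likelihood (character-level Markov chain) text generator in Python - the filename, `order` parameter, and seed are all illustrative. note that generation is conditioned only on the last few characters, which is exactly the limitation RNNs address:

```python
import random
from collections import defaultdict

def build_chain(text, order=4):
    """Map each `order`-character context to the characters observed after it."""
    chain = defaultdict(list)
    for i in range(len(text) - order):
        chain[text[i:i + order]].append(text[i + order])
    return chain

def generate(chain, seed, length=200):
    """Sample one character at a time, conditioned ONLY on the last `order` characters."""
    order = len(seed)
    out = seed
    for _ in range(length):
        followers = chain.get(out[-order:])
        if not followers:
            break  # dead end: this context never appeared in the training text
        out += random.choice(followers)
    return out

text = open("corpus.txt").read()   # any plain-text training file (illustrative name)
chain = build_chain(text, order=4)
print(generate(chain, seed=text[:4]))
```

because every choice is copied from observed followers of a tiny context window, the output is locally plausible but has no memory of anything beyond the last `order` characters.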

compared to using other natural language processing tools on their own (like “bag-of-words”), RNNs in general, and LSTMs in particular, can keep track of overall sentiment or longer arcs - a sketch follows below.
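
for a rough sense of what that looks like in code, here’s a minimal character-level LSTM sketch using Keras (assuming TensorFlow is installed) - the corpus filename, sequence length, and hyperparameters are illustrative, not prescriptive:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

text = open("corpus.txt").read()          # any plain-text training file (illustrative name)
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}
seq_len = 40                              # how many characters of context the model sees

# slice the text into overlapping (input sequence -> next character) pairs,
# one-hot encoded
X = np.zeros((len(text) - seq_len, seq_len, len(chars)), dtype=np.float32)
y = np.zeros((len(text) - seq_len, len(chars)), dtype=np.float32)
for i in range(len(text) - seq_len):
    for t, c in enumerate(text[i:i + seq_len]):
        X[i, t, char_to_idx[c]] = 1.0
    y[i, char_to_idx[text[i + seq_len]]] = 1.0

# unlike a bag-of-words model, the LSTM's cell state carries information
# across the whole sequence, so it can hold onto sentiment or a longer arc
model = Sequential([
    Input(shape=(seq_len, len(chars))),
    LSTM(128),
    Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=10)
```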

some projects using RNNs:

some code repositories of RNN-based projects: