<h1 id="find-by-image">Find By Image</h1>
<p>"Find By Image; Machine Learning For Artists" is a class in the UCLA School of the Arts and Architecture (Art+Arc 100).</p>
<h1 id="day-5-notes-wekinator">Day 5 Notes - Wekinator</h1>
<p><a href="http://www.wekinator.org/">Wekinator</a> is a (GUI-based) application that makes it easier to connect real-time inputs to real-time outputs, with a machine learning model in the middle.</p>
<p>The objective of Wekinator, to paraphrase the application’s designer <a href="http://www.doc.gold.ac.uk/~mas01rf/Rebecca_Fiebrink_Goldsmiths/welcome.html">Rebecca Fiebrink</a>, is to treat the training data itself as a user interface, opening the possibility of designing new types of interaction.</p>
<p>Wekinator makes use of “supervised learning” - you train the model by providing examples of inputs with their corresponding outputs: “if this, then that”. Using Wekinator, you can quickly train a model to recognize patterns in an input stream by associating points, states, or ranges within the input to a corresponding point, state, or range in an output. Once the model has been trained, set it to run - any new input data will be mapped by the model to fit the output stream. (See <a href="http://www.wekinator.org/examples/">this list of examples</a> for suggestions of inputs and outputs to use with Wekinator.)</p>
<p>Wekinator receives and outputs data using the <a href="https://en.wikipedia.org/wiki/Open_Sound_Control">Open Sound Control (OSC)</a> protocol - any application that can send and receive OSC can easily work with Wekinator. OSC is similar to MIDI in that it is a form of control data, but it can do much more, faster, and at a finer grain. OSC is typically carried over UDP, the lightweight transport protocol commonly used in network communications. Messages are sent over a port that you choose, either between programs on one machine or between machines over a network. Each application can only listen on one port at a time, but it is possible to filter messages using as many symbolic “addresses” as you care to set up: for example, /wek/inputs/01/parameter01 and /wek/inputs/01/parameter02 might represent two separate message streams sent to Wekinator from another application on port 6448.</p>
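<p>As a concrete illustration, here is a minimal sketch that streams two input parameters to Wekinator from Python. It assumes the <code class="highlighter-rouge">python-osc</code> library (any OSC-capable library or application works just as well) and Wekinator’s default port and input address:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># a minimal sketch: stream two input parameters to Wekinator over OSC.
# assumes the python-osc package (pip install python-osc) and Wekinator
# listening on its default port 6448 at the default address /wek/inputs.
import math
import time

from pythonosc import udp_client

client = udp_client.SimpleUDPClient("127.0.0.1", 6448)

for step in range(200):
    t = step * 0.05
    # two float parameters in a single message; Wekinator reads them as inputs
    client.send_message("/wek/inputs", [math.sin(t), math.cos(t)])
    time.sleep(0.05)  # roughly 20 messages per second
</code></pre></div></div>
<p>Wekinator then sees these two values as its input stream, ready to be recorded as training examples or run through a trained model.</p>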
<p>Wekinator works in one of three modes (or a combination of the three):</p>
<ul>
<li>classification
<ul>
<li>training creates a map of the input space (with any number of parameters) separated into any number of regions; new data is classified into these regions, and an output message is sent for each classification (a trigger for each activated region).</li>
</ul>
</li>
<li>continuous
<ul>
<li>inputs are mapped to sliders; a continuous input stream (with any number of parameters) is interpolated to a continuous output stream (with any number of parameters)</li>
</ul>
</li>
<li>dynamic time warping
<ul>
<li>given examples of a specific input action that takes place over time (a gesture, or sequence), a specific output is triggered whenever that action is detected. Multiple types of input actions can be mapped to multiple types of outputs.</li>
</ul>
</li>
</ul>
<h1 id="abstraction-and-composition-in-machine-learning">Abstraction and Composition in Machine Learning</h1>
<h4 id="by-zoe-hillary-and-vita">by Zoe, Hillary, and Vita</h4>
<p>We spent this quarter exploring how machine learning models can interpret and generate abstraction with images.
Initially, we looked into style and drawing, to see what it would take to train a model to draw like we do.</p>
<p>Starting with Sketch RNN, we set up a sample program that could use a set of SVG drawings to generate new drawings based on the inputs we fed it. It was successful, and worked with only 80 input images.</p>
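<p>Under the hood, sketch-rnn doesn’t read SVGs directly: each drawing is first converted into a sequence of pen offsets, as described in the sketch-rnn paper. A sketch of that representation (the numbers here are invented for illustration):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># sketch-rnn consumes drawings as stroke sequences, not raw SVGs: each row
# below is (dx, dy, pen_lifted) - a relative pen movement plus a flag that
# marks the end of a stroke. A tiny two-stroke "drawing" might look like:
import numpy as np

drawing = np.array([
    [ 10,   0, 0],   # move right, pen down
    [  0,  10, 0],   # move down, pen down
    [-10,   0, 1],   # move left, then lift the pen (first stroke ends)
    [  5,  -5, 0],   # second stroke begins
    [  5,   5, 1],   # ...and ends
], dtype=np.int16)

# training data is many such arrays; the model learns a distribution over
# offset sequences and samples new ones - which is why it recombines parts
# of the input drawings rather than copying any single one.
</code></pre></div></div>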
<p><img src="http://unsolicitedairdrop.show/assets/forVita/fish1.png" alt="fish SVGS" />
the inputs we used</p>
<p><img src="http://unsolicitedairdrop.show/assets/forVita/fish2.png" alt="generated fish" />
some of the fish we generated</p>
<p>But we were using a set of drawings that were all fish, and we quickly realized that the model was reading different parts of the fish and mashing them together.
Thus, it wouldn’t work nearly as well with abstract images, where each SVG has drastically different elements and compositions.</p>
<p>We did some experiments, but they all came out either as messy lines, or exact copies of one input drawing.</p>
<p>We then tried DCGAN, training it on Kandinsky’s paintings, to see how that might work with abstract imagery:</p>
<iframe src="https://www.youtube.com/embed/kU629Z3bHt0?ecver=2" width="640" height="360" frameborder="0" allowfullscreen=""></iframe>
<p>This project is still in the works. In the future, we would like to run other abstract imagery through different models, and see if we could write our own code to generate abstract images.
Overall, the questions we are left with are mostly about what abstraction even is. Sometimes it means taking something concrete or figurative and transforming it into a correlated but wholly unrecognizable form - but does it always mean this?
When a person makes abstract images, we feel like they are making choices with their intuition, or through some guided process. But when a machine creates abstract images, it either feels super random or super contrived. Is there a way around this?</p>
<p>Sources:</p>
<ul>
<li><a href="https://github.com/hardmaru/sketch-rnn">sketch-rnn</a></li>
<li><a href="https://github.com/carpedm20/DCGAN-tensorflow">DCGAN</a></li>
</ul>
<h1 id="reactionary">reactionary</h1>
<h5 id="stalgia">stalgia</h5>
<h2 id="concept">Concept:</h2>
<p>Create a machine-defined relationship between observed content and human facial response using YouTube ‘reaction videos’. Use this model to create new facial reactions to content that I create, and compile it into a single looping video. There is an inherent performativity to the ‘reaction video’ format, which exposes the social utility of the reaction: to express a relative and categorizable ethical framework. This points to a tendency to integrate the exterior image into our understanding of our own relative righteousness.</p>
<h2 id="tools">Tools</h2>
<p><a href="https://github.com/phillipi/pix2pix">Pix2Pix</a>
cGAN using Torch and Lua</p>
<p><a href="https://github.com/alexjc/neural-enhance">Neural Enhance</a>
CNN using Theano and Python</p>
<h2 id="methods">Methods</h2>
<p>After several failed attempts to adapt <a href="https://github.com/dyelax/Adversarial_Video_Generation">Deep Multi-Scale Video Prediction</a> to my needs, I turned to Pix2Pix.</p>
<p>I trained Pix2Pix on the dataset, made new videos, and used those to generate new reaction faces; these new videos were then enlarged with Neural Enhance.</p>
<h2 id="datasets">Datasets</h2>
<p>2,400 PNGs taken from 40 different YouTube reaction videos. Three out of every four frames were dropped, and one second at 30 fps was taken from each resulting reaction and source video.</p>
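<p>Pix2Pix trains on paired images, conventionally stored as a single image with input and target side by side. A hedged sketch of how such pairs might be assembled from matching frames (the folder names and sizes here are assumptions):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># build pix2pix training pairs: source frame and reaction frame pasted
# side by side into a single image, the paired format pix2pix trains on.
# assumes Pillow and matching filenames in two placeholder folders.
import os

from PIL import Image

SIZE = 256  # pix2pix's usual training resolution

if not os.path.isdir("dataset/train"):
    os.makedirs("dataset/train")

for name in sorted(os.listdir("frames/source")):
    src = Image.open(os.path.join("frames/source", name)).resize((SIZE, SIZE))
    rxn = Image.open(os.path.join("frames/reaction", name)).resize((SIZE, SIZE))
    pair = Image.new("RGB", (SIZE * 2, SIZE))
    pair.paste(src, (0, 0))     # input frame on the left
    pair.paste(rxn, (SIZE, 0))  # target reaction on the right
    pair.save(os.path.join("dataset/train", name))
</code></pre></div></div>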
<h2 id="training">Training</h2>
<p>Trained six different times, with slight changes to the dataset between each training. Models were trained on a Google Cloud machine with 32 vCPUs and 128 GB of RAM. Training took roughly 3 days for each model.</p>
<p><img src="https://raw.githubusercontent.com/publicityreform/findbyimage/master/assets/stalgia_training_image.png" alt="image of training progress" /></p>
<h2 id="output">Output</h2>
<p>I made a series of videos and then generated reactions to them using the model. The final results were composed into a single-channel looping video piece, along with a stream-of-consciousness text I wrote that addresses several of the themes.</p>
<h2 id="final-piece">Final Piece</h2>
<p><a href="https://drive.google.com/file/d/0B4v9wGHsYuR2WEQwZ2xwR1dIS0k/view?usp=sharing">reactionary</a></p>
<h2 id="texts-used">Texts Used</h2>
<p><a href="https://arxiv.org/abs/1511.05440">Deep multi-scale video prediction beyond mean square error</a>
<a href="https://arxiv.org/abs/1609.04802">Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network</a>
<a href="https://arxiv.org/pdf/1611.07004v1.pdf">Image-to-Image Translation with Conditional Adversarial Networks</a></p>
<h1 id="final-documentation-one-scene">Final Documentation: One Scene</h1>
<h5 id="group-9---sarah-alice-and-eric">Group 9 - Sarah, Alice and Eric</h5>
<p><a href="https://youtu.be/zwdh9L1OKEc"><img src="https://img.youtube.com/vi/zwdh9L1OKEc/0.jpg" alt="youtube video" /></a></p>
<h2 id="concept">Concept:</h2>
<p>Neural network system as a submissive-responsive controller of the virtual space</p>
<ul>
<li>“Branching” of the state based on user head movement</li>
<li>Machine trying to maximize the time the user stays in the VR environment</li>
</ul>
<h2 id="model">Model</h2>
<p>Deep Q-Learning:
<a href="https://github.com/keon/deep-q-learning">https://github.com/keon/deep-q-learning</a></p>
<p>Reinforcement learning in Python: Theano + Keras</p>
<p>The source code was originally written for playing the CartPole game on OpenAI Gym:</p>
<p><a href="https://gym.openai.com/envs/CartPole-v1">https://gym.openai.com/envs/CartPole-v1</a></p>
<p><img src="https://keon.io/images/deep-q-learning/animation.gif" alt="cartpole animation" /></p>
<p>“A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track”</p>
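<p>For reference, the environment loop that code is built around looks roughly like this (a sketch using the OpenAI Gym API of the time, with random actions for illustration):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># the interaction loop the original DQN code was written against:
# observe a state, pick an action, receive a reward, repeat until
# the episode terminates.
import gym

env = gym.make("CartPole-v1")
state = env.reset()
total_reward, done = 0, False
while not done:
    action = env.action_space.sample()  # a trained agent chooses here instead
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
</code></pre></div></div>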
<h2 id="methods">Methods</h2>
<p>We replaced the CartPole game with our Unity VR system.</p>
<h4 id="input">Input</h4>
<ul>
<li>Form: head direction (coordinate in 3d-space), head movement (acceleration data)</li>
<li>Dataset: collected from a human subject - 50+ trials</li>
</ul>
<h4 id="training">Training</h4>
<ul>
<li>When a human subject is bored, they push the “END” button.</li>
<li>This resets the scene, and the machine retries, attempting to maximize its rewards</li>
</ul>
<h4 id="learning">Learning</h4>
<p>by collecting this data array (a condensed sketch of the resulting learning update follows this list):</p>
<ul>
<li>state 1 (VR gear readings)</li>
<li>action (determined by the machine)</li>
<li>state 2 (VR gear readings)</li>
<li>rewards: for each unit of time the user stays within the VR system, the system gets a +1 reward; when terminated by the user, it gets a -10 penalty</li>
</ul>
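<p>A condensed sketch of the learning update this produces, patterned after the keon/deep-q-learning code (the network shape and hyperparameters here are illustrative assumptions, not our exact configuration):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># replay a minibatch of (state, action, reward, next_state, done) tuples
# and nudge the Q-network toward the Bellman target, as in keon's DQN.
import random

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

state_size, action_size, gamma = 6, 5760, 0.95  # illustrative sizes

model = Sequential([
    Dense(24, input_dim=state_size, activation="relu"),
    Dense(24, activation="relu"),
    Dense(action_size, activation="linear"),  # one Q-value per action
])
model.compile(loss="mse", optimizer="adam")

def replay(memory, batch_size=32):
    # memory is a list of (state, action, reward, next_state, done) tuples,
    # with states as numpy arrays of shape (1, state_size)
    for state, action, reward, next_state, done in random.sample(memory, batch_size):
        target = reward
        if not done:
            # Bellman target: reward plus the discounted best future Q-value
            target = reward + gamma * np.amax(model.predict(next_state)[0])
        target_f = model.predict(state)
        target_f[0][action] = target  # only the taken action's Q-value moves
        model.fit(state, target_f, epochs=1, verbose=0)
</code></pre></div></div>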
<h4 id="output">Output</h4>
<p>The model selected 1 action from our discrete action space (30 × 6 × 4 × 4 × 2 = 5760 possible actions)
which determined each object’s:</p>
<ul>
<li>visibility,</li>
<li>position,</li>
<li>size (scale), and</li>
<li>angular orientation</li>
</ul>
<h2 id="results">Results</h2>
<p><a href="https://youtu.be/zwdh9L1OKEc"><img src="https://img.youtube.com/vi/zwdh9L1OKEc/0.jpg" alt="youtube video" /></a></p>
<p>Best record?</p>
<h2 id="contribution">Contribution</h2>
<ul>
<li>Sarah: sound + training + documentation</li>
<li>Alice: python + Unity + documentation</li>
<li>Eric: Unity + VR + visuals</li>
</ul>
<h1 id="sighs">sighs</h1>
<p>by Chelly Jin, Jack Turpin, and Youjin Chung</p>
<p>An exploration of how the analysis and application of nonverbal communication in computers might illuminate different perspectives on human/computer interfaces.</p>
<h2 id="collecting-data">Collecting Data</h2>
<p>We provided a list of 6 characteristics and prompted each person to sigh in a manner that they felt best identified or characterized each word.</p>
<h3 id="the-characteristics">The Characteristics</h3>
<ul>
<li>dismay</li>
<li>dissatisfaction</li>
<li>boredom</li>
<li>futility</li>
<li>relief</li>
<li>lovelorn</li>
</ul>
<p>We collected data from friends and acquaintances; however, a good portion of our data came from our Mechanical Turk request.
Our parameters for the Mechanical Turk request:</p>
<p><img src="http://diversity.p5js.org/sigh.png" alt="Mechanical Turk Parameters Image" /></p>
<ul>
<li><a href="https://drive.google.com/drive/folders/0B7TIlH6CR6CZQ3NNZUJlckJ1azQ?usp=sharing">Listen to some of the crowdsourced sighs here</a></li>
</ul>
<h2 id="define">Define</h2>
<ol>
<li>
<p>Open Frameworks to retrieve the audio data.</p>
</li>
<li>
<p>Wekinator to train the model:
we recorded the audio samples into Wekinator as training examples.
<img src="http://diversity.p5js.org/sighs6.png" alt="Image of Wekinator and Data Training" /></p>
</li>
<li>
<p>Processing to showcase the output.
<img src="http://diversity.p5js.org/sighs5.png" alt="Image of Processing output" /></p>
</li>
</ol>
<h2 id="generate">Generate</h2>
<p>We utilized Word2Vec and t-SNE word embeddings.
When given a word, the system links it to the closest characteristic (dismay, dissatisfaction, boredom, futility, relief, lovelorn) and plays an audio snippet of the respective characteristic.</p>
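<p>A minimal sketch of that lookup, assuming the gensim library and a pre-trained word2vec model (the vector file name is a placeholder):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># map an arbitrary word to the nearest sigh characteristic by word2vec
# similarity. assumes the gensim package and a pre-trained vector file.
from gensim.models import KeyedVectors

CHARACTERISTICS = ["dismay", "dissatisfaction", "boredom",
                   "futility", "relief", "lovelorn"]

vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

def closest_characteristic(word):
    # pick the characteristic whose embedding is most similar to the word
    return max(CHARACTERISTICS, key=lambda c: vectors.similarity(word, c))

print(closest_characteristic("yawn"))  # then play a sigh from that folder
</code></pre></div></div>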
<p>This is a screenshot of the audio sample folder:
<img src="http://diversity.p5js.org/sigh1.png" alt="word2vec jupyter image" /></p>
<p>Utilizing Jupyter to train the model:
<img src="http://diversity.p5js.org/sigh2.png" alt="word2vec jupyter image" />
<img src="http://diversity.p5js.org/sigh3.png" alt="word2vec jupyter image" />
<img src="http://diversity.p5js.org/sigh4.png" alt="word2vec jupyter image" /></p>
<p>These are the trained texts:
<img src="http://diversity.p5js.org/sigh7.png" alt="word2vec jupyter image" />
<img src="http://diversity.p5js.org/sigh8.png" alt="word2vec jupyter image" /></p>
<h2 id="footnotes">Footnotes</h2>
<ul>
<li><a href="http://openframeworks.cc">Open Frameworks</a></li>
<li><a href="http://www.wekinator.org">Wekinator</a></li>
<li><a href="https://processing.org">Processing</a></li>
<li><a href="https://www.tensorflow.org/tutorials/word2vec">Word2Vec</a></li>
</ul>
<h1 id="generative-narrative">Generative Narrative</h1>
<p>by Tyler Yin, Reginald Lin, Annie Yu</p>
<p>To explore how machine learning might be used to generate narratives in the context of art-making, we experimented with several different neural networks to produce stories in the form of text and moving imagery.</p>
<h3 id="resources">Resources</h3>
<h4 id="text-based-models">Text-based Models:</h4>
<ul>
<li>Char-RNN, a multi-layer RNN with long short-term memory that generates character-level text</li>
<li>Torch-RNN, a high-performance, reusable RNN, similar to Char-RNN</li>
<li>Neural Storyteller, a model that analyzes an image and produces a story</li>
<li>Word-RNN (tensorflow), similar to Char-RNN but operating at the word level</li>
</ul>
<h4 id="image-based-models">Image-based Models:</h4>
<ul>
<li>ConvNetJS – Image Painting, a neural network demo that learns to predict the pixel colors of an image from pixel coordinates</li>
<li>Butterflow, for motion interpolation / smoothing / fluid slow motion videos</li>
</ul>
<h3 id="links">Links</h3>
<p>For more context on this project, go to these iterations:</p>
<ul>
<li><a href="https://publicityreform.github.io/findbyimage/generative-narrative-project-proposal.html">Project Proposal</a></li>
<li><a href="https://publicityreform.github.io/findbyimage/generative-narrative-proof-of-concept.html">Proof-of-Concept</a></li>
</ul>
<h3 id="training-data">Training Data</h3>
<p>In order to avoid simply trying to emulate a singular work or author, we compiled text from a variety of sources, including novels written by Haruki Murakami, Joseph Conrad, and Albert Camus. We wanted to focus on works written by authors interested in the human condition, given to introspection, and generally writing in first-person perspective. For access to the text used for training and sampling, refer <a href="assets/a-r-t-folder/fpn.txt">here</a>.</p>
<h3 id="process">Process</h3>
<h4 id="word-rnn">Word-RNN</h4>
<p>Using the tensorflow version of Word-RNN, we sampled text at different checkpoints to track the progress of the model, keeping the primetext and other hyperparameters consistent. After the model finished training, text was generated and used to prime another generation, which was used to prime another generation…and so on, for seven cycles / chains.</p>
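<p>The chaining step looked roughly like this (a sketch; sample.py’s flags are taken from the word-rnn-tensorflow README, and the starting primetext here is hypothetical):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># chain generations: each sample primes the next, seven times over.
# assumes word-rnn-tensorflow's sample.py and its --prime / -n flags.
import subprocess

prime = "I woke before dawn."  # hypothetical starting primetext
for cycle in range(7):
    out = subprocess.check_output(
        ["python", "sample.py", "--save_dir", "save",
         "-n", "300", "--prime", prime])
    text = out.decode("utf-8")
    with open("chain_{}.txt".format(cycle), "w") as f:
        f.write(text)
    prime = text[-200:]  # the tail of one generation primes the next
</code></pre></div></div>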
<h4 id="convnetjs--image-painting">ConvNetJS – Image Painting</h4>
<p><img src="assets/a-r-t-folder/5.PNG" alt="A screencapture of the original image next to the generated image" /><br />
In an attempt to produce abstracted videos, we uploaded source images based on some of the generated texts to an image-learning neural net and generated frames individually. Each frame was selected for its ability to stand alone; then 6-10 were compiled into a .gif, later converted into an .mp4 for experiments with Butterflow.</p>
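<p>Assembling the selected frames into a .gif takes only a few lines (a sketch assuming the imageio package and a placeholder frames folder; any gif tool works):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># compile hand-picked ConvNetJS frames into a looping gif.
# assumes the imageio package and a frames/ folder of selected stills.
import os

import imageio

frames = [imageio.imread(os.path.join("frames", name))
          for name in sorted(os.listdir("frames"))]
imageio.mimsave("narrative.gif", frames, duration=0.5)  # seconds per frame
</code></pre></div></div>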
<h4 id="butterflow">Butterflow</h4>
<p>We processed our results with Butterflow, in an attempt to discover and exploit something like latent space interpolation. Butterflow renders intermediate frames between existing frames using motion interpolation. Its most successful and desirable usage was interpolating between a few source frames to produce “time-lapse” videos.</p>
<h3 id="results">Results</h3>
<h4 id="written-narrative">Written Narrative</h4>
<p>A curation of passages from the original chain of text, produced by Word-RNN:<br />
<img src="assets/a-r-t-folder/editingWordRNN-3.png" alt="A page of written text generated from Word-RNN" /> <br />
Excerpts:</p>
<ul>
<li>“In other words, I learned about what isn’t. But she was such serious pain. My heart would throb.”</li>
<li>“I have experienced no moral obligations.”</li>
<li>“Can your shadow come back here? I swam some place. In this Town, the Gatekeeper will bring the seat for full men in a race about those things.”</li>
<li>“Fine, he said. He had his reason for his purpose with her studies, of this violent channel, the girls who were started constantly in the world of interest.”</li>
<li>“She could hear the touch of Tengo’s consciousness, a huge, shot recorder and the occasional night-transport truck.”</li>
</ul>
<h4 id="painted-narrative">Painted Narrative</h4>
<p><img src="assets/a-r-t-folder/n1.gif" alt="A moving abstraction generated from ConvNetJS" />
<img src="assets/a-r-t-folder/n1_40sec.gif" alt="A moving abstraction generated from ConvNetJS and processed with butterflow" /> <br />
<img src="assets/a-r-t-folder/n2.gif" alt="A moving abstraction generated from ConvNetJS" />
<img src="assets/a-r-t-folder/n2_33sec_artifacts.gif" alt="A moving abstraction generated from ConvNetJS and processed with butterflow" /> <br />
<img src="assets/a-r-t-folder/n3_1.gif" alt="A moving abstraction generated from ConvNetJS" />
<img src="assets/a-r-t-folder/n3_48sec_artifacts.gif" alt="A moving abstraction generated from ConvNetJS and processed with butterflow" /></p>
<h4 id="interpolated-narrative">Interpolated Narrative</h4>
<iframe src="https://player.vimeo.com/video/220438705?title=0&byline=0&portrait=0" width="640" height="369" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<p>By interpolating between samples from different checkpoints to obfuscate the text on a visual and surface level, we intended to bring attention to the algorithmic nature of the constructed narrative.</p>
<h3 id="issues">Issues</h3>
<ul>
<li>Neural Storyteller was ideal for the purposes of the project, but skip-thought vectors were too difficult to get working</li>
<li>Focusing on just skip-thought vectors through tensorflow yielded similar results: nothing</li>
<li>Training data could have been more diverse, with an even distribution of text across the original authors</li>
<li>Training the model took approximately 12 hours</li>
<li>After 49 epochs with Word-RNN, training loss = 2.687</li>
</ul>
<h3 id="footnotes">Footnotes</h3>
<ul>
<li><a href="https://github.com/hunkim/word-rnn-tensorflow">Word-RNN-Tensorflow</a></li>
<li><a href="http://cs.stanford.edu/people/karpathy/convnetjs/demo/image_regression.html">ConvNetJS</a></li>
<li><a href="https://github.com/dthpham/butterflow">Butterflow</a></li>
<li><a href="http://cs.stanford.edu/people/karpathy/">Andrej Karpathy</a></li>
</ul>
<h1 id="interpreting-clouds-final-ideation">Interpreting Clouds: Final Ideation</h1>
<h5 id="rosalind-sophia-and-zoe">Rosalind, Sophia and Zoe</h5>
<p><img src="https://s3-us-west-1.amazonaws.com/zoeingramsite/c_9.jpg" alt="alt text" /></p>
<p>This series explores the relationship between human psychology and digital machines. It is rooted in the concept of pareidolia, a psychological phenomenon triggered in the temporal lobe of the brain, where the neurons responsible for face and object recognition reside. This biological ability is vital in identifying predators, as well as parents, for sustenance during infant development.</p>
<p>Using the intuitive pattern recognition skills that have developed and evolved over thousands of years of human history, paired with ideas that have only become possible with the invention of computers in the past 70 years, we can explore the way machine learning tools make use of pareidolia.</p>
<p>Feeding refined and intentional image data through the many layers of a Convolutional Neural Network, via TensorFlow, we created a series of images and titles, given by our machine, in an attempt to invoke human psychological phenomena through code.</p>
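<p>The label lists below were produced by the Inception classifier; a batch run over the whole folder might look like this (a sketch assuming TensorFlow’s classify_image.py script from the tensorflow/models repository, with a placeholder folder name):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># label every cloud photograph with the Inception model's top-5 guesses.
# assumes TensorFlow's classify_image.py script (from tensorflow/models),
# which prints lines in the "label (score = 0.12345)" format shown below.
import glob
import subprocess

for path in sorted(glob.glob("clouds/*.jpg")):
    print(path)
    subprocess.call(["python", "classify_image.py", "--image_file", path])
</code></pre></div></div>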
<p><img src="https://raw.githubusercontent.com/tensorflow/models/master/inception/g3doc/inception_v3_architecture.png" alt="alt text" />
TensorFlow Inception model for ImageNet</p>
<p><a href="https://publicityreform.github.io/findbyimage/interpreting-clouds.html">Proposal</a></p>
<p><a href="https://publicityreform.github.io/findbyimage/interpreting-proof.html">Proof of Concept</a></p>
<p><a href="https://github.com/tensorflow/models/tree/master/inception">Model</a></p>
<p><img src="https://s3-us-west-1.amazonaws.com/zoeingramsite/c_5.jpg" alt="alt text" /></p>
<h4 id="image-description-data">Image Description Data</h4>
<p>c_1.jpg <br />
alp (score = 0.13540)
valley, vale (score = 0.12731)
lakeside, lakeshore (score = 0.07377)
wing (score = 0.06108)
volcano (score = 0.04491)</p>
<p>c_2.jpg <br />
wing (score = 0.38125)
parachute, chute (score = 0.07606)
cliff, drop, drop-off (score = 0.06124)
megalith, megalithic structure (score = 0.03066)
valley, vale (score = 0.02833)</p>
<p>c_3.jpg <br />
geyser (score = 0.15055)
missile (score = 0.14142)
wing (score = 0.11640)
projectile, missile (score = 0.08466)
space shuttle (score = 0.07960)</p>
<p>c_4.jpg <br />
geyser (score = 0.42768)
volcano (score = 0.04334)
megalith, megalithic structure (score = 0.02740)
beacon, lighthouse, beacon light, pharos (score = 0.02340)
rapeseed (score = 0.02084)</p>
<p>c_5.jpg <br />
wing (score = 0.56471)
volcano (score = 0.05122)
alp (score = 0.02148)
valley, vale (score = 0.01712)
maze, labyrinth (score = 0.01375)</p>
<p>c_6.jpg <br />
rapeseed (score = 0.16813)
wing (score = 0.06743)
volcano (score = 0.06542)
alp (score = 0.04578)
parachute, chute (score = 0.04453)</p>
<p>c_7.jpg <br />
geyser (score = 0.34126)
volcano (score = 0.29207)
parachute, chute (score = 0.08914)
flagpole, flagstaff (score = 0.00994)
rapeseed (score = 0.00941)</p>
<p>c_8.jpg <br />
volcano (score = 0.68900)
space shuttle (score = 0.07361)
geyser (score = 0.02306)
alp (score = 0.00344)
missile (score = 0.00326)</p>
<p>c_9.jpg <br />
parachute, chute (score = 0.21870)
alp (score = 0.16039)
balloon (score = 0.14397)
wing (score = 0.10658)
cliff, drop, drop-off (score = 0.09097)</p>
<p>c_10.jpg <br />
rapeseed (score = 0.05555)
beacon, lighthouse, beacon light, pharos (score = 0.05364)
hay (score = 0.04863)
parachute, chute (score = 0.03568)
balloon (score = 0.03279)</p>
<p>c_11.jpg <br />
volcano (score = 0.24023)
parachute, chute (score = 0.18067)
jellyfish (score = 0.08899)
geyser (score = 0.04050)
space shuttle (score = 0.01674)</p>
<p>c_12.jpg <br />
space shuttle (score = 0.61488)
volcano (score = 0.10118)
geyser (score = 0.03653)
alp (score = 0.01912)
megalith, megalithic structure (score = 0.01141)</p>
<p>c_13.jpg <br />
hay (score = 0.22265)
rapeseed (score = 0.08806)
beacon, lighthouse, beacon light, pharos (score = 0.04284)
geyser (score = 0.03793)
balloon (score = 0.02594)</p>
<p>c_14.jpg <br />
volcano (score = 0.21806)
lakeside, lakeshore (score = 0.07117)
alp (score = 0.04764)
valley, vale (score = 0.04461)
balloon (score = 0.04059)</p>
<p>c_15.jpg <br />
orchid (score = 0.09552)
hay (score = 0.08442)
valley, vale (score = 0.08411)
alp (score = 0.07704)
balloon (score = 0.05038)</p>
<p>c_16.jpg <br />
geyser (score = 0.16401)
valley, vale (score = 0.05602)
balloon (score = 0.04555)
alp (score = 0.04436)
parachute, chute (score = 0.03637)</p>
<p>c_17.jpg <br />
wing (score = 0.59212)
mountain (score = 0.08956)
parachute, chute (score = 0.06842)
balloon (score = 0.01583)
cliff, drop, drop-off (score = 0.01041)</p>
<p>c_18.jpg <br />
storm (score = 0.11118)
quilt (score = 0.10776)
hay (score = 0.07224)
megalith, megalithic structure (score = 0.04500)
balloon (score = 0.03494)</p>
<p>c_19.jpg <br />
parachute, chute (score = 0.13171)
rapeseed (score = 0.09473)
hay (score = 0.06208)
geyser (score = 0.05794)
volcano (score = 0.03771)</p>
<p>c_20.jpg <br />
volcano (score = 0.73571)
alp (score = 0.04293)
cliff (score = 0.00817)
geyser (score = 0.00719)
parachute, chute (score = 0.00613)</p>
<p>c_21.jpg <br />
lily (score = 0.29654)
hay (score = 0.12165)
parachute, chute (score = 0.03291)
megalith, megalithic structure (score = 0.02794)
flagpole, flagstaff (score = 0.02236)</p>
<p>c_22.jpg <br />
parachute, chute (score = 0.37083)
geyser (score = 0.07544)
balloon (score = 0.04032)
cliff, drop, drop-off (score = 0.03828)
rapeseed (score = 0.03309)</p>
<p>c_23.jpg <br />
anthurium (score = 0.20880)
pole (score = 0.07667)
parachute, chute (score = 0.06739)
hay (score = 0.05628)
balloon (score = 0.04330)</p>
<p>c_24.jpg <br />
volcano (score = 0.09171)
rapeseed (score = 0.07913)
wing (score = 0.06180)
alp (score = 0.06117)
beacon, lighthouse, beacon light, pharos (score = 0.04849)</p>
<p>c_25.jpg <br />
mount, mountain (score = 0.22543)
orchid (score = 0.05927)
geyser (score = 0.03696)
alp (score = 0.02802)
pole (score = 0.02709)</p>
<p><img src="https://s3-us-west-1.amazonaws.com/zoeingramsite/c_53.jpg" alt="alt text" /></p>
<p>c_26.jpg <br />
gastropoda, snail (score = 0.70747)
hay (score = 0.06056)
parachute, chute (score = 0.02240)
balloon (score = 0.01566)
megalith, megalithic structure (score = 0.01025)</p>
<p>c_27.jpg <br />
parachute, chute (score = 0.39040)
rabbit (score = 0.05065)
geyser (score = 0.03941)
wing (score = 0.03940)
cliff, drop, drop-off (score = 0.03193)</p>
<p>c_28.jpg <br />
quilt (score = 0.23014)
sunset (score = 0.13462)
valley, vale (score = 0.13287)
hay (score = 0.10590)
balloon (score = 0.05697)</p>
<p>c_29.jpg <br />
brachyura, crab (score = 0.15854)
volcano (score = 0.13115)
wing (score = 0.08623)
rapeseed (score = 0.04961)
balloon (score = 0.04415)</p>
<p>c_30.jpg <br />
parachute, chute (score = 0.23460)
pumpkin (score = 0.10889)
geyser (score = 0.06706)
balloon (score = 0.03689)
flagpole, flagstaff (score = 0.03011)</p>
<p>c_31.jpg <br />
castle (score = 0.15552)
megalith, megalithic structure (score = 0.13805)
cliff, drop, drop-off (score = 0.07213)
hay (score = 0.06158)
alp (score = 0.05618)</p>
<p>c_32.jpg <br />
raindrop (score = 0.14712)
volcano (score = 0.11780)
parachute, chute (score = 0.06783)
rapeseed (score = 0.03271)
hay (score = 0.03215)</p>
<p>c_33.jpg <br />
fence (score = 0.12746)
wing (score = 0.12450)
volcano (score = 0.03628)
cliff, drop, drop-off (score = 0.01831)
parachute, chute (score = 0.01409)</p>
<p>c_34.jpg <br />
beacon, lighthouse, beacon light, pharos (score = 0.11266)
seashore, coast, seacoast, sea-coast (score = 0.06908)
drilling platform, offshore rig (score = 0.06787)
breakwater, groin, groyne, mole, bulwark, seawall, jetty (score = 0.04437)
megalith, megalithic structure (score = 0.03681)</p>
<p>c_35.jpg <br />
megalith, megalithic structure (score = 0.17392)
bed (score = 0.13818)
sunlight (score = 0.07871)
geyser (score = 0.03953)
obelisk (score = 0.02658)</p>
<p>c_36.jpg <br />
space shuttle (score = 0.82127)
parachute, chute (score = 0.05299)
volcano (score = 0.01788)
wing (score = 0.01409)
jellyfish (score = 0.00408)</p>
<p>c_37.jpg <br />
hair (score = 0.08996)
drilling platform, offshore rig (score = 0.07833)
valley, vale (score = 0.04206)
quill, quill pen (score = 0.04035)
alp (score = 0.02719)</p>
<p>c_38.jpg <br />
obelisk (score = 0.14075)
volcano (score = 0.07921)
rock (score = 0.06990)
parachute, chute (score = 0.05911)
flagpole, flagstaff (score = 0.05766)</p>
<p>c_39.jpg <br />
road (score = 0.25316)
hill (score = 0.24560)
valley, vale (score = 0.05874)
geyser (score = 0.05723)
parachute, chute (score = 0.01972)</p>
<p>c_40.jpg <br />
ocean (score = 0.16835)
sea (score = 0.05969)
buoy (score = 0.05616)
alp (score = 0.05289)
balloon (score = 0.03245)</p>
<p>c_41.jpg <br />
breakwater, groin, groyne, mole, bulwark, seawall, jetty (score = 0.26337)
sandbar, sand bar (score = 0.15033)
seashore, coast, seacoast, sea-coast (score = 0.14316)
beacon, lighthouse, beacon light, pharos (score = 0.06552)
lakeside, lakeshore (score = 0.03841)</p>
<p>c_42.jpg <br />
fish (score = 0.45181)
child (score = 0.03689)
rock (score = 0.03222)
flagpole, flagstaff (score = 0.02761)
rapeseed (score = 0.02313)</p>
<p>c_43.jpg <br />
wave (score = 0.18860)
line (score = 0.08288)
coast (score = 0.04823)
wing (score = 0.04552)
space shuttle (score = 0.02298)</p>
<p>c_44.jpg <br />
projectile, missile (score = 0.36566)
geyser (score = 0.30406)
mountain (score = 0.06263)
missile (score = 0.05371)
wing (score = 0.00785)</p>
<p>c_45.jpg <br />
cat, house cat (score = 0.52226)
space shuttle (score = 0.19230)
lion (score = 0.06225)
geyser (score = 0.04917)
cliff, drop, drop-off (score = 0.00696)</p>
<p>c_46.jpg <br />
seashore, coast, seacoast, sea-coast (score = 0.32422)
sandbar, sand bar (score = 0.10697)
promontory, headland, head, foreland (score = 0.08032)
breakwater, groin, groyne, mole, bulwark, seawall, jetty (score = 0.02668)
geyser (score = 0.02550)</p>
<p>c_47.jpg <br />
wing (score = 0.22465)
sandbar, sand bar (score = 0.06243)
parachute, chute (score = 0.06148)
hair (score = 0.03171)
beacon, lighthouse, beacon light, pharos (score = 0.03040)</p>
<p>c_48.jpg <br />
mountains (score = 0.60866)
alp (score = 0.15226)
space shuttle (score = 0.11528)
geyser (score = 0.00711)
cliff, drop, drop-off (score = 0.00367)</p>
<p>c_49.jpg <br />
obelisk (score = 0.10843)
parachute, chute (score = 0.09707)
flagpole, flagstaff (score = 0.09319)
balloon (score = 0.06839)
beacon, lighthouse, beacon light, pharos (score = 0.04846)</p>
<p>c_50.jpg <br />
field (score = 0.57045)
crops (score = 0.06548)
parachute, chute (score = 0.05370)
bubble (score = 0.02420)
rapeseed (score = 0.02297)</p>
<p>c_51.jpg <br />
radio telescope, radio reflector (score = 0.14726)
orchid (score = 0.10252)
beacon, lighthouse, beacon light, pharos (score = 0.08945)
megalith, megalithic structure (score = 0.07350)
hay (score = 0.03292)</p>
<p>c_52.jpg <br />
geyser (score = 0.70068)
cliff, drop, drop-off (score = 0.01486)
megalith, megalithic structure (score = 0.01234)
parachute, chute (score = 0.011…)
sunflower (score = 0.01143)</p>
<p>c_53.jpg <br />
lakeside, lakeshore (score = 0.27854)
alp (score = 0.27663)
valley (score = 0.04411)
volcano (score = 0.02797)
cliff, drop, drop-off (score = 0.02630)</p>
<p>c_54.jpg <br />
parachute, chute (score = 0.13246)
geyser (score = 0.11263)
hand (score = 0.07068)
finger, fingers (score = 0.06002)
balloon (score = 0.03517)</p>
<p>c_55.jpg <br />
steam locomotive (score = 0.88489)
volcano (score = 0.02729)
geyser (score = 0.00338)
space shuttle (score = 0.00180)
missile (score = 0.00114)</p>
<p>c_56.jpg <br />
volcano (score = 0.23397)
hay (score = 0.09533)
perennis (score = 0.05813)
geyser (score = 0.05189)
parachute, chute (score = 0.03820)</p>
<p>c_57.jpg <br />
seashore, coast (score = 0.32422)
sandbar, sand bar (score = 0.10697)
sunflower (score = 0.08032)
breakwater, seawall, jetty (score = 0.02668)
geyser (score = 0.02550)</p>
<p>c_58.jpg <br />
table (score = 0.14075)
obelisk (score = 0.07921)
rock (score = 0.06990)
plateau (score = 0.05911)
cliff (score = 0.05766)</p>
<p>c_59.jpg <br />
hair (score = 0.14075)
fur (score = 0.07921)
string, strings (score = 0.06990)
parachute, chute (score = 0.05911)
geyser (score = 0.05766)</p>
<p>c_60.jpg <br />
orchid (score = 0.09552)
alp (score = 0.08442)
valley, vale (score = 0.08411)
hay (score = 0.07704)
balloon (score = 0.05038)</p>
<h1 id="setting-up-jupyter-notebook-inside-a-google-cloud-compute-instance">Setting up Jupyter Notebook inside a Google Cloud Compute instance</h1>
<h2 id="create-cloud-compute-instance">Create cloud compute instance</h2>
<p>Try the following settings:</p>
<ul>
<li>zone: us-west1-b</li>
<li>machine: 8 vCPUs with 52 GB memory</li>
<li>boot disk: Ubuntu 14.04 with a 200 GB persistent disk</li>
</ul>
<p>(See overview / more detailed instructions on this <a href="https://publicityreform.github.io/findbyimage/create-compute-instance.html">here</a>)</p>
<h4 id="install-pip-and-python-dev-package">install pip, and python-dev package</h4>
<p><code class="highlighter-rouge">sudo apt-get install python-pip python-dev</code></p>
<p>(hit enter / type Y when prompted)</p>
<p><code class="highlighter-rouge">sudo pip install -U pip</code></p>
<h4 id="install-tensorflow">Install tensorflow</h4>
<p>Info on installing tensorflow <a href="https://www.tensorflow.org/install/">here</a></p>
<p>At this link, look for the section Ubuntu &gt; “Installing with native pip”.
We’re assuming Python 2.7, CPU-only
(if using a GPU, follow the GPU instructions instead).</p>
<p>Install the latest version of tensorflow by grabbing the wheel URL from <a href="https://www.tensorflow.org/install/install_linux#TF_PYTHON_URL">here</a>.</p>
<p><code class="highlighter-rouge">sudo pip install --upgrade longurlfromthearticleabove</code></p>
<h5 id="now-install-git-and-clone-the-tensorflow-repository">Now install git and clone the tensorflow repository</h5>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install git
</code></pre></div></div>
<p>Now clone tensorflow github repository</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/tensorflow/tensorflow.git
</code></pre></div></div>
<h5 id="install-jupyter-notebook">Install jupyter notebook</h5>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo pip install jupyter
</code></pre></div></div>
<p>Try running Jupyter notebook</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>jupyter notebook --no-browser --allow-root
</code></pre></div></div>
<h1 id="vision">Vision</h1>
<p>David Marr<br />
Sarah B, Alice C, and Eric F</p>
<p><a href="https://docs.google.com/presentation/d/1StIGnaEPHXWoFi7zeSSc7qcg9UEaR2PIZ2rf8dzmPk4/edit?usp=sharing">slides</a></p>
<h1 id="setup-magenta-in-google-cloud-engine">setup magenta in google cloud engine</h1>
<p>create a new instance using ubuntu trusty (14.04)</p>
<p>make sure to give it lots of memory (try 8 vCPUs and 52 GB of memory)</p>
<p>once the instance is created and running, launch an ssh terminal</p>
<p>install a compiler:</p>
<p><code class="highlighter-rouge">sudo apt-get install gcc</code></p>
<p>install git:</p>
<p><code class="highlighter-rouge">sudo apt-get install git</code></p>
<p>you can now clone in the magenta repository at any time using:</p>
<p><code class="highlighter-rouge">git clone https://github.com/tensorflow/magenta.git</code></p>
<p>now run the installer for the magenta environment (if this doesn’t work, try the manual instructions <a href="https://github.com/tensorflow/magenta#installation">here</a>):</p>
<p><code class="highlighter-rouge">curl https://raw.githubusercontent.com/tensorflow/magenta/master/magenta/tools/magenta-install.sh > magenta-install.sh</code></p>
<p><code class="highlighter-rouge">bash magenta-install.sh</code></p>
<p><code class="highlighter-rouge">source ~/.bashrc</code></p>
<p>this creates a virtual environment for magenta that you can now launch by typing</p>
<p><code class="highlighter-rouge">source activate magenta</code></p>
<p>you should see <code class="highlighter-rouge">(magenta)</code> in front of the terminal prompt now.</p>
<p>you can type <code class="highlighter-rouge">conda list</code> to see a list of installed packages.</p>
<p>for any dependencies required by magenta, make sure to activate the magenta environment before installing. for sketch_rnn, you need to install a specific version of svgwrite:</p>
<p><code class="highlighter-rouge">conda install -c omnia svgwrite=1.1.6</code></p>
<p>now, if you want to open a jupyter notebook, activate the magenta environment and then type:</p>
<p><code class="highlighter-rouge">jupyter notebook --no-browser --allow-root</code></p>
<p>which will give you a url with a token that you can paste in a browser…</p>
<p>…but first you need to open an ssh tunnel from your local terminal by typing:</p>
<p><code class="highlighter-rouge">ssh -i .ssh/google_compute_engine -L 8888:localhost:8888 yourusername@your-instance-external-ip</code></p>
<p>(you’ll need to have set up ssh keys for this to work properly - see <a href="https://publicityreform.github.io/findbyimage/create-compute-instance.html">how to create a cloud compute instance</a> for more info)</p>