A persistent simulation, visualized with P5.js
I’d like to continue working on my “eat/mate/die” sketch, expanding on three or four aspects of the project.
1) Add more simulated behaviors:
- Passing on traits and evolution
- Familial bonding
- Death, mourning
- Farming, resources
- Property relations
- Conflict, police
2) Use the behaviors to tell stories in text. I’m already logging important life events for each creature, so I would just need to expand on this feature.
3) If possible I’d also like to add the ability for users to interact with the world, likely by manipulating the environment to either help or hinder its inhabitants. Individual users would wield powers over the world in a kind of pantheistic sense – each in charge of, and able to manipulate, a different aspect of simulated life.
A library of important (to me) texts, illustrated with animated gifs.
I started this off already with this sketch of the Communist Manifesto. I’d like to continue the project by 1) porting my code to P5.js so that it can live online and 2) gif-ifying other texts, but with a different look and feel for each one. Possible texts by: Kafka, Spinoza, Foucault, Artaud, etc.
Port The Nature of Code to P5.js
It would be great to port over all the examples from Nature of Code to P5.js.
Bloomberg’s shamelessly racist policy of “Stop and Frisk” will be ending soon in New York. Or so we hope. I propose an aspirational memorial to its passing (of the “good riddance” variety), in the form of an interactive installation where suspicious participants are subjected to a semi-invasive robotic/mechanical frisking. The working title of this project is “RoboCop”.
The suspicious participant approaches a wall with markings indicating where to put her hands and feet.
When the suspect is in place, an audio track begins, explaining the procedure. The audio will be sourced from this recording of a police officer harassing a teenager during a stop.
Mechanical hands will rise, wrap themselves around the suspect, and vigorously frisk the legs, butt and torso. The frisking will only intensify if the suspect attempts to remove herself from the designated position.
Once the suspect has been thoroughly frisked and berated the arms will lower and she will be free to go. The system works!
Getting yelled at and felt up by this robot is exactly as effective as the measures currently in place, but far more efficient.
Here are some of my favorite signs in Greenpoint.
The message on this building is mysterious.
There is a kind of stacking approach at work here: the Statue of Liberty on top of the Twin Towers. And a mix-up of slogans: “united we stand” is followed by “united we run”, and then “run for life” is paired with “life begins at 70”.
A sign about cloning on a hearse.
I like the look of this restaurant awning.
My favorite, Princess Manor – a banquet hall with an aspirational name.
Sky Flower’s sign is fading and dripping away.
This is the best sign I’ve ever seen. It’s in San Francisco on Clement Street:
I did a quick redesign of the sign – I wanted to see what would happen if it looked really clean but kept the wordplay. The original is clearly better.
I love the look of “Steve’s Meat Market” but I thought I’d take a stab at redesigning it anyway.
Here are two ideas I came up with. The meat image comes from the Wikipedia entry on meat.
Here’s a horizontal version:
Here’s a little program that:
- splits up a text into individual words
- searches for an animated gif for each word
- displays the original text alongside the animated gifs
I’m using the Giphy API to source the images, and the Communist Manifesto as the text.
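The first two steps – splitting the text and looking up one gif per word – can be sketched roughly like this in Python (the actual scripts are in the GitHub repo linked below; the `API_KEY` value here is a placeholder, since Giphy issues real keys to registered apps):

```python
import csv
import json
import re
import urllib.parse
import urllib.request

GIPHY_SEARCH = "https://api.giphy.com/v1/gifs/search"
API_KEY = "YOUR_GIPHY_KEY"  # placeholder: substitute a real Giphy key

def split_words(text):
    """Split a text into lowercase words, dropping punctuation."""
    return re.findall(r"[a-z']+", text.lower())

def first_gif_url(word):
    """Ask the Giphy search endpoint for the first gif matching a word."""
    query = urllib.parse.urlencode({"q": word, "api_key": API_KEY, "limit": 1})
    with urllib.request.urlopen(GIPHY_SEARCH + "?" + query) as resp:
        data = json.load(resp)
    results = data.get("data", [])
    return results[0]["images"]["original"]["url"] if results else ""

def text_to_csv(text, path, lookup=first_gif_url):
    """Write one (word, gif url) row per word of the text."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for word in split_words(text):
            writer.writerow([word, lookup(word)])
```

The `lookup` parameter is injectable so the word-to-url mapping can be swapped out or cached; the resulting csv is what the display sketch reads back.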
[vimeo http://vimeo.com/78612357 w=640]
I broke the work up into two scripts: one that splits apart the text, finds the images, and then creates a csv file associating each word with an image url; the second reads the csv file and displays the images alongside the text. I wanted to use animated gifs rather than static images. To get this working in Processing I used the gifAnimation library. The only problem with this library is that the images load one at a time, which blocks the sketch from running smoothly – so I implemented some threading to make the library load the gifs asynchronously.
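The threading pattern isn’t Processing-specific. A rough sketch of the same idea in Python (rather than the Java of the actual sketch): a worker thread does the slow loads and pushes finished items onto a queue, and the draw loop polls that queue once per frame, so rendering never blocks:

```python
import queue
import threading

def load_async(urls, load_one, on_loaded):
    """Load items on a background thread so the main loop never blocks.

    load_one(url) does the slow work (here, fetching/decoding a gif);
    on_loaded(url, item) fires from the main thread as each item arrives.
    Returns a poll() function to call once per frame of the draw loop.
    """
    done = queue.Queue()

    def worker():
        for url in urls:
            done.put((url, load_one(url)))  # slow load, off the main thread

    threading.Thread(target=worker, daemon=True).start()

    loaded = {}

    def poll():
        """Drain whatever finished since the last frame; return all loaded."""
        while True:
            try:
                url, item = done.get_nowait()
            except queue.Empty:
                return loaded
            loaded[url] = item
            on_loaded(url, item)

    return poll
```

The queue is the only shared state, so no explicit locking is needed; the draw loop just shows whichever gifs have arrived so far.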
My code is here on github: https://github.com/antiboredom/gif-to-text
UPDATE: html version here.
Here’s some video documentation of my Memento Mori project. Pushing a button opens a box which then provides a highly specific fortune.
[vimeo http://vimeo.com/78491745 w=640]
I’m using a standard servo screwed to the bottom right side of the box to open the lid (I had to upgrade from a micro servo, which wasn’t powerful enough). Once the box opens, a fortune prints out using a thermal printer.
My initial goal was to have hundreds if not thousands of fortunes, stored on an SD card. Unfortunately I had some difficulty in getting the card reader working with Arduino, so I resorted to using the flash memory of the microcontroller itself – a technique I learned here.
My other challenge was hooking up enough power to the system. I wanted to power everything with batteries, but was having trouble providing enough current for the thermal printer to output anything. I tried connecting some 9V batteries in parallel, but it didn’t quite do the trick and also made the batteries dangerously hot. This is definitely an amateur issue… I ended up powering the printer with an adapter and the Arduino with a 9V.
I sourced the fortunes from New York Times obituaries. I wrote a little script using NodeBox’s Linguistics library that alters texts by switching sentences from third to second person and changing their tense from past to future. I edited the results to produce the fortunes:
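Roughly, the transformation works like this (a toy Python sketch, not the actual script: the tiny hand-written past-tense lexicon below stands in for the general verb conjugation that NodeBox’s Linguistics library provides, and the pronoun table ignores the ambiguity of words like “her”):

```python
# Toy sketch of the obituary-to-fortune transformation: third person becomes
# second person, past tense becomes future. The lexicons are deliberately tiny.
PRONOUNS = {"he": "you", "she": "you", "him": "you",
            "his": "your", "her": "your", "hers": "yours"}

PAST_TO_BASE = {"died": "die", "was": "be", "were": "be", "lived": "live",
                "wrote": "write", "won": "win", "became": "become"}

def obituary_to_fortune(sentence):
    """Switch a past-tense, third-person sentence into a future-tense,
    second-person fortune."""
    out = []
    for token in sentence.split():
        core = token.rstrip(".,;:!?")   # keep trailing punctuation aside
        punct = token[len(core):]
        low = core.lower()
        if low in PRONOUNS:
            core = PRONOUNS[low]
        elif low in PAST_TO_BASE:
            core = "will " + PAST_TO_BASE[low]
        out.append(core + punct)
    result = " ".join(out)
    return result[0].upper() + result[1:]

# obituary_to_fortune("She wrote plays in Brooklyn.")
# → "You will write plays in Brooklyn."
```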
The arduino code for the project is here on github: https://github.com/antiboredom/memento-mori-box