Going into this project, I was really invested in having an audio component in my interactive system, hopefully in addition to a visual one, and it would be even better if these two dimensions of output had some relation to each other. In the previous module, I worked solely in visuals, and I wanted to expand the sensory possibilities of what I might do in this (and future) projects.
Rather than making something with established directions to follow, I was also interested in making a system that felt exploratory, and playfully so. Working around the constraints of a limited set of physical inputs that map to a sensorily interesting and varied set of outputs reminded me of what it's like to use a musical instrument.
I always had a harder time being creative musically than visually, since it felt like there were too many constraints to navigate before you could make anything original. I spent a lot of time learning to reproduce other people's compositions, which just did not seem sustainable to me personally as an artistic practice. When learning digital art software like Photoshop, there was also a sense of learning to use a limited set of physical inputs to get a digital response, but I felt more free to mess around with the software and discover tools one by one, which I could then apply in any order and combination. (This might be because I was entirely self-taught in the latter.) There was also a physical mismatch between where input and output happened when drawing digitally: back when most tablets did not have screens (the kind of setup I had when I started), you could not draw directly on the image, so you put your hands in one place and looked for the effects of what you were doing in another, with one hand kept on control-z for undo. The system I was developing here, with all the different hardware components I was bringing together, reminded me of that setup.
The memory of using both kinds of systems, audio and visual, informed my ideas for this device, and given my differing feelings about the two, it makes sense that my project ended up with controls visualized immediately through image: more freeform on the visual side, and less so with the audio (that mapping is coded to be pretty direct).
I spent much more time working on technical details for this project than making aesthetic choices. Because I had never hooked up an HDMI display or worked with audio, let alone in the context of the physical hardware we were given, I thought it would be wise to start with a simple idea: a manipulable grid of visuals, each governed by the same rules. Each image would map to an individual audio output, and altogether this would create a pattern, or maybe a tiny melody.
I was wary at first of trying to use Processing, as I wasn't sure how long it would take to get it working on the HDMI display. My first idea was to create an ASCII-art editor, where someone could navigate through a grid (just a printed array, really) and toggle the joystick to change the character displayed at any cell in the grid. The characters would be sourced from this site ("Character representation of grey scale images", by Paul Bourke), and a user would be able to control the darkness of each cell in the grid. I would only have to work in Python to make this. However, we got an extension on this module, so I decided to try, and it only took me an hour to get a Processing program running on the HDMI display. This tutorial was really helpful, and there are details on how I got this to work in my README. I swapped out the ASCII character grid idea for a grid of shapes drawn in Processing.
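For reference, the core of that abandoned ASCII idea would have been very little Python. Here is a minimal sketch of what I had in mind (the grid size and the toggle() and render() helpers are just illustrative, and the ten-character ramp is the short one listed on Bourke's page):

RAMP = " .:-=+*#%@"  # ten darkness levels, lightest to darkest

ROWS, COLS = 4, 4
grid = [[0 for _ in range(COLS)] for _ in range(ROWS)]  # each cell holds a ramp index

def toggle(row, col, step=1):
    # a joystick toggle would bump the selected cell's darkness, wrapping around
    grid[row][col] = (grid[row][col] + step) % len(RAMP)

def render():
    # show the whole grid as characters, one row per line
    return "\n".join("".join(RAMP[level] for level in row) for row in grid)

toggle(1, 2)
toggle(1, 2)
print(render())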
Using Processing meant I'd have to get information from the ESP32 into Processing, and then get the information about the visual state of the image out of Processing and into Python (if I wanted to use a Python-based sound synthesis library) or some other synthesizer. I was too scared to learn SuperCollider for this project (that probably would have gone poorly in the span of four days), so I decided it would be easier to look for options in Python. I ended up finding a synthesizer called PySynth, written in Python 3, which was pretty easy to install (there are directions in the linked repo, as well as in my README, for setting it up). I could only use the first variant (there are nine, some of which use numpy, which is apparently incompatible with being run from a script executed through Processing; more on this in a moment).
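To give a sense of what the PySynth side can look like: below is a minimal sketch, assuming the Processing sketch launches a Python script and passes one note name per grid cell as command-line arguments. The tuple format and make_wav() come from PySynth's own examples; the script name, argument scheme, and output filename are just placeholders, not the exact code in my repo.

import sys
import pysynth  # the first PySynth variant, the only one I could run through Processing

def main():
    # e.g. "python3 play_grid.py c4 e4 g4 c5"; fall back to a test arpeggio if no args
    notes = sys.argv[1:] or ["c4", "e4", "g4", "c5"]
    song = [(n, 4) for n in notes]         # 4 means a quarter note in PySynth
    pysynth.make_wav(song, fn="grid.wav")  # render the pattern to a WAV file

if __name__ == "__main__":
    main()

The resulting WAV can then be played back on the Pi with something like aplay.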
In terms of getting information from the ESP32 into Processing, this just involved asking Processing to read from the right serial port at the right baud rate (similar to the Python script we wrote during lab to do this):
import processing.serial.*; // Processing's serial library

Serial myPort; // serial connection to the ESP32

void setup()
{
  String portName = "/dev/ttyUSB0"; // works with my Raspberry Pi
  myPort = new Serial(this, portName, 115200); // match the baud rate set on the ESP32
  //...
}
You can then see what data is available at the port for each run-through of draw():
import java.util.Arrays; // needed for Arrays.stream (this also goes at the top of the sketch)

void draw()
{
  if (myPort.available() > 0) // If data is available,
  {
    String val = myPort.readStringUntil('\n'); // read one line and store it in val
    if (val != null) // readStringUntil() returns null if a full line hasn't arrived yet
    {
      // parse the comma-separated values into an int array
      int[] input = Arrays.stream(val.trim().split(",")).mapToInt(Integer::parseInt).toArray();
      //... access members of input
    }
  }
}
That one line which creates input has the same effect as the following code, which I had to swap in for the version of this program that runs on the Pi. For some reason, the stream syntax above was not accepted by the version of Processing that can be downloaded to the Pi.