Audio representations of the Game of Life (Tick 5*)

The submissions received for Tick 5* are included below.

Ben Webb (bjw45)


The sound is created using the supplied AudioSequence class. The program creates a separate, distinct sound for each column of the world (from left to right), then leaves a gap before doing the same for the next generation. Each sound is generated by using the fraction of live cells to total cells in the column as the frequency, with a constant amplitude.

This is a lossy transform into sound (i.e. no information is maintained about which rows cells are in), but it illustrates quite nicely the decay of a pattern into a stable state.
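A minimal sketch of the column mapping described above, assuming a boolean[row][column] world layout (the AudioSequence call itself is omitted, since that class is course-supplied):

```java
// Fraction of live cells in a column, used directly as the frequency (0 to 1).
// The boolean[row][column] layout is an assumption for illustration.
public class ColumnFrequency {
    public static double columnFrequency(boolean[][] world, int col) {
        int live = 0;
        for (boolean[] row : world) {
            if (row[col]) live++;
        }
        return (double) live / world.length;
    }
}
```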

Competition entry created using:
java -cp world.jar:sound.jar:. "wing:Life lexicon:8:8:2:2:0110 1001 0101 0011" 25 competition.wav

Christopher Elsby (cjre3)


*** Sound generation algorithm ***

The sound is generated by direct use of AudioSequence and SineWaveSound.

For each cell (both living and dead) in the world, a frequency value (double, range 0 to 1) is kept.
Each cell's frequency value is initialised to 0.
For each generation to be played, the following operations are performed:

1. Add a SineWaveSound for each living cell:
The frequency parameter to SineWaveSound is given by (9 + (cell frequency value) * 23) / 32.
The amplitude parameter is given by 1 / (population + 1), equally weighting all cells while avoiding overflow.

2. Compute a new frequency value for each cell:
For dead cells: The new value is half of the previous one.
For living cells: The new value is a weighted average of:
* 1, with weight 16.
* The cell's previous value, with weight 8.
* The previous value of each living neighbour (not its new value!), with weight 1.
* 0 for each dead neighbour, with weight 1.
Cells beyond the edge of the world are treated as dead when considering neighbours of cells on the edge.

3. Advance the game world to the next generation, according to the rules of the Game of Life.

4. Advance the AudioSequence to the next timeslot.
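Step 2 above might be sketched as follows (the boolean[row][column] world layout and the class name are assumptions for illustration):

```java
// One step of the per-cell frequency update: dead cells halve their value;
// living cells take a weighted average of 1 (weight 16), their own previous
// value (weight 8), and each of the 8 neighbours' previous values (weight 1,
// with out-of-world and dead neighbours contributing 0). Total weight is 32.
public class FrequencyUpdate {
    public static double[][] update(double[][] freq, boolean[][] alive) {
        int h = freq.length, w = freq[0].length;
        double[][] next = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!alive[y][x]) {
                    next[y][x] = freq[y][x] / 2;  // dead: halve previous value
                    continue;
                }
                double sum = 16 * 1.0 + 8 * freq[y][x];
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0) continue;
                        int ny = y + dy, nx = x + dx;
                        boolean in = ny >= 0 && ny < h && nx >= 0 && nx < w;
                        if (in && alive[ny][nx]) sum += freq[ny][nx];
                    }
                }
                next[y][x] = sum / 32;
            }
        }
        return next;
    }
}
```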

*** Command line arguments ***

The command line arguments are as follows: PATTERN GENERATIONS OUTFILE [GENERATIONTIME]
PATTERN is a Life world pattern string representing the initial state of the game world.
GENERATIONS is a non-negative integer giving the number of generations to run the game for.
OUTFILE is a string giving the filename to output the sound data to, in WAV format.
GENERATIONTIME is a finite positive floating point number giving the time in seconds to allocate to each generation in the sound.

If GENERATIONTIME is omitted, it shall be taken to be 0.2.

Within the pattern string, CELLS consists of a space-separated sequence of rows, each containing a sequence of cells represented as '1' for living or '0' for dead.
This specifies a world of the given dimensions with a region starting from (STARTX, STARTY) filled with the given cells.
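For illustration, the pattern-string fields (NAME:AUTHOR:WIDTH:HEIGHT:STARTX:STARTY:CELLS, as inferred from the sample command lines in this document) might be parsed like this; the class name is hypothetical:

```java
// Parse a pattern string into a boolean[row][column] world, placing the
// CELLS block at offset (STARTX, STARTY). Field order is inferred from the
// examples in this document; no error handling is shown.
public class PatternParse {
    public static boolean[][] parse(String pattern) {
        String[] f = pattern.split(":");
        int width = Integer.parseInt(f[2]), height = Integer.parseInt(f[3]);
        int startX = Integer.parseInt(f[4]), startY = Integer.parseInt(f[5]);
        boolean[][] world = new boolean[height][width];
        String[] rows = f[6].split(" ");
        for (int y = 0; y < rows.length; y++)
            for (int x = 0; x < rows[y].length(); x++)
                world[startY + y][startX + x] = rows[y].charAt(x) == '1';
        return world;
    }
}
```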

*** Error handling ***

If invalid arguments are supplied, or an input/output error occurs, the program should print a message describing the error to standard error output, and then exit.

*** Sample output ***

The supplied competition.wav was generated with the following command line arguments:
"twin bees shuttle:Life lexicon:55:45:14:14:011000000000000000000000000 011000000000000000000000000 000000000000000000000000000 000000000000000100000000000 110000000000000110000000011 110000000000000011000000011 000000000001100110000000000 000000000000000000000000000 000000000000000000000000000 000000000000000000000000000 000000000001100110000000000 110000000000000011000000000 110000000000000110000000000 000000000000000100000000000 000000000000000000000000000 011000000000000000000000000 011000000000000000000000000" 100 competition.wav 0.2

Hanying Tang (ht290)


My SoundWorld class extends the AgingWorld class. It calculates a frequency for each cell based on its age; to be exact: f = 0.8 * cellAge / maxCellAge + 0.2

The amplitude is fixed at 0.5.
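A one-line sketch of this mapping (the class and method names are hypothetical):

```java
// Frequency from cell age: newborn cells map to 0.2, the oldest cells to 1.0.
public class AgeFrequency {
    public static double frequency(int cellAge, int maxCellAge) {
        return 0.8 * cellAge / maxCellAge + 0.2;
    }
}
```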

"Michael James" Sargent (mjs244)


The program takes a world in the Game of Life and outputs a WAV file built from the state of the world at every generation. Each generation provides 1 second of audio, comprising 4 tones that take their frequency and amplitude from the population and birth rate of that generation respectively.

The populations and birth rates are modified by an offset and a scaling factor. The offset is added to the input (be it a population or a birth rate), and the result is then mapped from the range 0 to a specified scale into the range 0-1. If the value lies outside this range, modulo operations are used to wrap it back in.

Each of the 4 tones produced for every generation uses different scaling factors and offsets for the population and birth rate, so that the tones are distinguishable even though the raw data is the same within each generation. The exact values of the scaling factors and offsets were arrived at simply by testing.
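The offset-then-scale-then-wrap mapping might look like this (names are hypothetical; the actual offsets and scales were chosen by testing, so none are given here):

```java
// Apply an offset, wrap into [0, scale) with modulo, then normalise to [0, 1).
public class ToneParams {
    public static double normalise(double value, double offset, double scale) {
        double v = (value + offset) % scale;
        if (v < 0) v += scale;  // Java's % can return negative values
        return v / scale;
    }
}
```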

Manfredas Zabarauskas (mz297)


1. Store the state of the game in a two dimensional array of booleans (re-use ArrayWorld and WorldImpl classes).
2. Provide facilities for world initialization from an input string (re-use Pattern).
3. Use the provided library for writing audio files (AudioSequence/SineWaveSound).
4. Use the distances of live cells from the center of the board and the mean distance to generate "melody" and background track.
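Step 4's mean-distance computation might be sketched as follows (the boolean[row][column] layout, the centre convention, and the class name are assumptions):

```java
// Mean Euclidean distance of live cells from the centre of the board.
public class CentreDistance {
    public static double meanDistance(boolean[][] world) {
        double cy = (world.length - 1) / 2.0;
        double cx = (world[0].length - 1) / 2.0;
        double total = 0;
        int live = 0;
        for (int y = 0; y < world.length; y++)
            for (int x = 0; x < world[y].length; x++)
                if (world[y][x]) {
                    total += Math.hypot(x - cx, y - cy);
                    live++;
                }
        return live == 0 ? 0 : total / live;
    }
}
```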

Philip Withnall (prw35)


The program should produce a WAV file representing the state of the world for a game of life as it progresses.
The program should take three arguments on the command line:
1. A string describing the initial world to use, in the format described on page 9 of Workbook 3.
2. The number of generations to iterate through when making the sound, or -1 to iterate up to the end of the first loop in the world state, inclusive.
3. The filename to output.
Each consecutive generation should be represented by a 0.1s time slot in the WAV file. Each time slot should comprise the sum of a single sine wave for each cell in the world, with frequency and amplitude determined by the following formulae:
* frequency = (cell X position / world width + cell Y position / world height) / 2.0
* amplitude = 1.0 / ((cell age + 1) * maximum population)
The frequency and amplitude are both on arbitrary scales in the range [0.0, 1.0], whose interpretation is left at the discretion of the programmer. The reference implementation uses them directly with SineWaveSound from
The cell age is defined as 0 for cells which are alive in the current generation, 1 for cells which are currently dead, but were alive in the previous state.
The maximum population is defined as the greatest number of cells alive in any one generation of the game, tested over all the generations up to the end of the first cycle in the game.
A cell's position is a pair of indices between (0, 0) and (width - 1, height - 1).
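The two formulae above can be transcribed directly (the class and method names are hypothetical):

```java
public class CellSound {
    /** frequency = (x / width + y / height) / 2.0, in the range [0, 1). */
    public static double frequency(int x, int y, int width, int height) {
        return ((double) x / width + (double) y / height) / 2.0;
    }

    /** amplitude = 1.0 / ((cell age + 1) * maximum population). */
    public static double amplitude(int cellAge, int maxPopulation) {
        return 1.0 / ((cellAge + 1) * maxPopulation);
    }
}
```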

The attached competition.wav file is a rendering of "Glider Gun" by Bill Gosper from

Simon Beaumont (sjb273)



SoundLife is an audible implementation of Conway's Game of life.

To use this program, call: SoundLife format_str #_generations out_file

This will loop through the generations of the world given in format_str
and compute a sine wave to represent each generation. The amplitude of
the wave is determined by the number of live cells in that generation
relative to the total number of cells on the specified game board.

The frequency of the wave is determined by a random number generator contained
in Java's Math library.

This is then written out to a .wav file with the file path specified as
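A sketch of the per-generation wave parameters described above (names are hypothetical):

```java
public class GenerationWave {
    /** Amplitude: live cells relative to the board's total capacity. */
    public static double amplitude(int liveCells, int width, int height) {
        return (double) liveCells / (width * height);
    }

    /** Frequency: a random value from Java's Math library, as described. */
    public static double frequency() {
        return Math.random();
    }
}
```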

Tejas Guruswamy (tg307)


Implementation details of tg307-tick6star (SoundLife)

This implementation iterates across a Game of Life grid of height h, width w and total population p; if the cell at (x, y) is alive, it adds a SineWaveSound to the current timeslot with frequency = (x/w + y/h)/2 and amplitude = p/(w*h) * 0.1.
Each new row and each new generation advances the sound timeslot.
The timeslot duration is calculated as 50.0/(w*h) seconds, so larger grids play faster.
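The three quantities above transcribe directly into code (the class and method names are hypothetical):

```java
public class SoundLifeParams {
    /** frequency = (x/w + y/h)/2 for the live cell at (x, y). */
    public static double frequency(int x, int y, int w, int h) {
        return ((double) x / w + (double) y / h) / 2.0;
    }

    /** amplitude = p/(w*h) * 0.1 for total population p. */
    public static double amplitude(int population, int w, int h) {
        return (double) population / (w * h) * 0.1;
    }

    /** Timeslot duration in seconds: larger grids play faster. */
    public static double timeslotSeconds(int w, int h) {
        return 50.0 / (w * h);
    }
}
```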

Xiang Long (xl296)


This program was created by taking advantage of MIDI. The supplied AudioSequence class was not used; instead, the program interacts directly with the standard Java MIDI and sound libraries. A software synthesizer instance is created and the default system soundbank is loaded. A MIDI sequencer is not needed, since MIDI control messages can be sent to the synthesizer on the fly during the analysis of Life patterns. The synthesizer caches the messages which correspond to musical notes and generates the appropriate sound waveforms from the samples in the soundbank. The waveform is written to an audio stream, which is captured and written to a file.

The algorithm used by this program is described below:

1) First decide the number of instruments available (i), the number of notes used in the scale (s), and the number of octaves in the available range per instrument (t). For example, this program used 6 instruments over 2 octaves of the C major pentatonic scale (s = 5).
2) Decide which instruments are used and store their corresponding MIDI program numbers into an int array with length i. Declare another int array with length s which has the MIDI key values of all the notes (in descending order) of the top octave in the range.
3) Send MIDI program change messages to the synthesizer so that channels 0 to (i - 1) are preset with the instruments you have chosen. Note that although channels 0 to 15 are available, channel 10 will only produce drum sounds.
4) Declare an array of Note objects with length i * s * t. A Note object simply stores the MIDI channel and key values of a single note in its fields; this way a single Note object uniquely identifies a note played by an instrument.
5) Assign a unique note-channel pair to each Note object in the array. There is a lot of freedom in the method used to assign the notes, but this program divided the array into i equal regions and iteratively assigned notes in each region in descending order over the full range of octaves available. Each region corresponds to a different instrument (MIDI channel). Half of the instruments are assigned to play high notes while the other half play low notes. The arrays declared in step 2 are consulted to aid note assignment.
6) When a world state arrives to be added to the MIDI sequence, the program scans the columns of the world from left to right. For each live cell in the column, its y position is translated to exactly one of the elements in the Note array. If the y position is larger than the length of the array, it is reduced modulo (i * s * t).
7) Using the note data stored in each of the translated Note objects, send the appropriate MIDI message to the synthesizer to play the desired note on the correct channel. A timestamp variable is kept track of throughout the program so that it remembers the current position in the sequence. Notes from the same world column are sent with the same timestamp so that they sound simultaneously. The timestamp is advanced only if live cells were present in the current column; this ensures that there are no long gaps in the music.
8) After processing all the columns in the world state, advance the timestamp so there is a small gap in the music and then wait for the next world state.
9) After all of the generations are processed, a tidying-up method is called. Its main purpose is to free the MIDI resources, capture the finished output audio stream produced by the synthesizer, and encode it to a file.

The competition.wav was produced by rendering 30 generations of "7 pyramid:Life lexicon:31:60:13:27:0001000 0000000 0011100 0000000 0111110 0000000 1111111".