#1 soundscape of the funeral

The choices of these sounds revolve around the concept of the sound funeral. First, it is essential to create an atmosphere that is not overwhelming, allowing the background sounds to settle and inviting introspection and contemplation. The selection of the mourning poem is intentional, resembling the recitation of scriptures at a funeral, yet diverging from that traditional practice: it aims to incorporate the voices of mourners, slowing down time in a fast-spinning world. The input of human voices serves as "expressions of human sentiments," [1] as John Cage puts it, but also, in Nietzsche's words, "to translate humanity back into nature". [2] Ultimately, these sounds merge with the bird songs and other environmental sounds, forming a 'whole'.



# process & reflection

 

In terms of how each sound is applied in this specific setting, here are the considerations:

Human voice: The use of human voices in the sound funeral serves as a means of expression and communication. It allows for the conveyance of emotions, storytelling, and the sharing of collective grief. Human voices can be powerful and evocative, enabling participants to connect on a personal level with the mourning process.

Turtle dove's sound: Including the sound of the turtle dove itself adds authenticity and serves as a tribute to the subject of the funeral. The turtle dove's sound acts as a response to the human voice, creating a dialogue between the human and avian realms. 

​

Environmental sounds and background ambient sound: Incorporating environmental sounds specific to the turtle dove's habitat and background ambient sound creates a contextual framework for the funeral. These sounds help establish a sonic environment that reflects the turtle dove's natural surroundings and enhances the immersive experience. They provide a sense of place, connecting participants to the broader ecological context.


Reading of the mourning poem/verbalization: The inclusion of a mourning poem adds a poetic and reflective dimension to the sound funeral. The spoken words carry the weight of grief and remembrance, allowing participants to engage with more bodily energy. The poem's tone, its imagery, and its reflections on the plight of the turtle dove contribute to the overall atmosphere and help create a contemplative space for participants.

​

 

[1] John Cage, “Experimental Music,” in Silence: Lectures and Writings (Hanover, NH: Wesleyan University Press, 1961), 10.

[2] Friedrich Nietzsche, Beyond Good and Evil, trans. Walter Kaufmann (New York: Vintage, 1966), 161 (§230), translation modified.

#2 granular design
user interface/presentation mode design in Max
LFO design
audio input control
# process & reflection

Granular synthesis

​

To realise the idea of an interactive sound/video installation and to modulate the sounds in real time, my plan was to design a granular synthesizer in Max/MSP. There were three main considerations behind this choice:

​

 

  1. Real-Time Modulation: Max/MSP provides a powerful and flexible environment for real-time audio processing and modulation. By using Max/MSP, I can easily implement real-time modulation techniques to control various aspects of the granular synthesis parameters. This allows for dynamic and interactive sound transformations that can be influenced by user actions or external inputs.

  2. Granular Synthesis Flexibility: Granular synthesis offers a wide range of possibilities for manipulating and transforming sound. It allows me to break down audio samples into small grains and manipulate their parameters such as size, position, density, and offset. This flexibility enables me to create unique and intricate sound textures that can respond to user input or other interactive elements in real time.

  3. Interactive Audio/Visual Integration: Max/MSP excels in integrating different media types, such as sound and video, within a single environment. I can also synchronize the granular synthesis with visual elements, creating a cohesive and immersive experience where sound and visuals interact and respond to each other in real time.


Arduino sensor

​

Incorporating an Arduino sensor and establishing communication between Max/MSP and the Arduino allows me to use real-time noise data as a control signal to modulate the granular synthesis parameters, introducing interactivity and responsiveness to my granular design. I experimented with different sensor types, parameter mappings, and granular synthesis settings to achieve the desired sound effects and performance interactions.

 

​

Here's an overview of the Arduino setup & workflow design:

 

  1. Arduino Sensor Setup: Connect an Arduino board with a noise sensor to the computer. This could be a MAX4466 microphone amplifier module or another type of sensor capable of capturing noise data.

  2. Arduino Code: Write code for the Arduino board to read the sensor data and transmit it to the computer. The code depends on the specific sensor used (a minimal example sketch follows this list).

  3. Serial Communication: In Max/MSP, use the [serial] object to establish communication between the Arduino and Max/MSP. Configure the [serial] object with the appropriate settings (such as baud rate and port) to receive data from the Arduino.

  4. Data Processing: In Max/MSP, use the received data from the Arduino as the value range to modulate the granular synthesis parameters, mapping the received data to specific parameters such as grain size, density, position, or modulation depth.
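To make step 2 concrete, here is a minimal Arduino-style sketch of the kind this workflow calls for. It assumes a MAX4466 microphone amplifier wired to analog pin A0 and a 9600-baud serial link; the pin, baud rate, and 50 ms measurement window are placeholder values, not necessarily those used in the installation.

```cpp
// Minimal sketch (assumptions: MAX4466 output on A0, 9600 baud, 50 ms window).
const int micPin = A0;                 // analog input from the MAX4466 amplifier
const unsigned long windowMs = 50;     // length of one loudness measurement window

void setup() {
  Serial.begin(9600);                  // must match the baud rate of Max's [serial] object
}

void loop() {
  unsigned long start = millis();
  int signalMax = 0;
  int signalMin = 1023;

  // Sample the microphone for one window and keep the extremes.
  while (millis() - start < windowMs) {
    int sample = analogRead(micPin);   // 0..1023 on a 10-bit ADC
    if (sample > signalMax) signalMax = sample;
    if (sample < signalMin) signalMin = sample;
  }

  // Peak-to-peak amplitude as a rough noise-level reading, one ASCII line per window.
  Serial.println(signalMax - signalMin);
}
```

On the Max/MSP side, the value arriving through [serial] can then be scaled (for example with a [scale] object) onto the grain-size, density, or position ranges described in step 4.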


Creation

​

Granular design_


A granular synthesis patch in Max/MSP with modulations for starting point, grain size, density, and speed:

 

  1. Audio Input: Begin by setting up an audio input source, an audio file, to provide the material for granulation.

  2. Buffer: Load the audio sample into a [buffer~] object to store the audio data that will be used for granulation.

  3. Granulation: Create a granulator module using a combination of objects: [groove~], [function], and [curve~]. Connect the [buffer~] object to the [groove~] object so that it reads the audio sample.

  4. Grain Parameters: Define the parameters that control the granulation process, including starting point, grain size, density, and speed. These parameters will be modulated to introduce variations and movement in the granulated sound.

  5. Envelope Generator: Use the [function] object to generate and control the envelope shape, and to adjust parameters such as attack time, decay time, sustain level, and release time, providing precise control over the amplitude envelope of the grains.

  6. Audio Output: Connect the output of the granulator to a [dac~] object to produce the granulated audio output.
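The granulator itself is built visually in Max, so there is no code to quote. Purely as an illustration of the grain logic named in steps 3–5 (starting point, grain size, density, speed, and a smooth amplitude envelope), here is a small offline C++ sketch that overlap-adds Hann-windowed grains read from a sample buffer. All names and values are hypothetical; it is a sketch of the technique, not a translation of the patch.

```cpp
#include <cmath>
#include <vector>

// Illustrative offline granulator: grains of `grainSize` samples are read from
// `source` at `startPos`, resampled by `speed`, Hann-windowed, and overlap-added
// at a rate of `density` grains per second. Hypothetical sketch, not the Max patch.
std::vector<float> granulate(const std::vector<float>& source, float sampleRate,
                             long startPos, int grainSize, float density,
                             float speed, float outSeconds) {
    const float PI = 3.14159265f;
    std::vector<float> out(static_cast<size_t>(outSeconds * sampleRate), 0.0f);
    const float hop = sampleRate / density;   // output samples between grain onsets

    for (float onset = 0.0f; onset + grainSize < static_cast<float>(out.size()); onset += hop) {
        for (int n = 0; n < grainSize; ++n) {
            // Hann window: the amplitude envelope that keeps each grain click-free.
            float w = 0.5f * (1.0f - std::cos(2.0f * PI * n / (grainSize - 1)));
            long src = startPos + static_cast<long>(n * speed);   // read position in the buffer
            if (src < 0 || src >= static_cast<long>(source.size())) continue;
            out[static_cast<size_t>(onset) + n] += w * source[src];
        }
    }
    // Every grain here reads from the same start point; in the patch that point
    // is scrubbed and jittered to move through the recording.
    return out;
}
```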

a block diagram of the granular design in Max/MSP
Granular Engine, VERSION#1
Arduino setting_


To demonstrate the idea of the funeral soundscape design, I made a demo in Max/MSP using real-time noise data as a control signal to modulate the granular synthesis parameters. In this demo, the main focus is on the mapping of the noise data. An Arduino (open-source hardware) with a MAX4466 sensor is used to measure and receive noise data from the zoo in real time.
Arduino setting sketch & code
Improvement

 

​Granular design:

​​

Limitation_

​

After some tests, the Max/MSP granular design lacked variation, expressiveness, spatialization, and textural complexity in the generated sound.

 

 

Solution & Improvement_

​

  1. Adding two LFOs: to introduce variation and movement to the granular synthesis parameters (a small illustrative sketch follows this list). By modulating parameters such as grain size, density, or position with LFOs, the granulated sound becomes more dynamic and evolves over time. This variation adds interest and complexity to the sonic output, preventing it from sounding static or repetitive. Modulating the position parameter with LFOs can also create spatial movement within the sound field: by smoothly shifting the position of individual grains across the stereo spectrum, the granulated sound can be spatially distributed, providing a sense of movement and depth to the auditory perception.

  2. Audio Input Modulation: By incorporating gain control, I can adjust the overall amplitude or volume of the incoming audio signal. This allows me to maintain a consistent level of audio output, preventing clipping or distortion that may occur if the signal is too loud. It also provides the ability to attenuate or amplify the input signal to match the desired sonic characteristics and balance within the granular synthesis. It allows for shaping the amplitude, dynamic range, texture, and timbre of the granulated sound, resulting in a more nuanced and captivating sonic experience.

  3. User Interface Design: Utilizing sliders and dials to control the parameters, LFO rates, envelope settings, and other aspects of the granular synthesizer for real-time control and interaction on site.
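As a rough sketch of what the two LFOs contribute conceptually (the actual modulation is built from LFO subpatches in Max), the function below sweeps between a minimum and a maximum value at a given rate; mapped onto grain size or stereo position, it produces the slow drift described in point 1. The rates and ranges in the comments are placeholders, not the values used in the patch.

```cpp
#include <cmath>

// Illustrative low-frequency oscillator: returns a value that sweeps between
// `lo` and `hi` at `rateHz`, evaluated at time `t` in seconds.
float lfo(float t, float rateHz, float lo, float hi) {
    const float PI = 3.14159265f;
    float unipolar = 0.5f * (1.0f + std::sin(2.0f * PI * rateHz * t));  // 0..1
    return lo + unipolar * (hi - lo);
}

// Hypothetical mappings onto granular parameters:
//   grainSizeMs = lfo(t, 0.10f, 30.0f, 200.0f);  // grain length slowly "breathes"
//   pan         = lfo(t, 0.25f, -1.0f,   1.0f);  // grains drift across the stereo field
```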


Granular Engine, VERSION#2
Arduino sensor:
 

Limitation_

​

During the field research, while monitoring noise data, I observed that the values were not as high as anticipated. Only in areas with dense human activity, such as restaurants, amusement parks, or near highways, did the noise levels noticeably increase. The range of noise level values, obtained in real-time from the sensors, was used to control and synthesize the granularity of the recorded bird songs in Max/MSP.

 

 

Solution & Improvement_

 

Considering the inherent limitations of the Arduino noise sensor, such as its relatively low sensitivity and inconsistent response to ambient noise variations, I recognized the need for an alternative solution that could improve the interaction with the Max/MSP granular patch.

 

The Arduino noise sensor, although capable of capturing noise data, may struggle to accurately detect subtle nuances and fluctuations in the ambient sound environment. This can result in a less precise and dynamic modulation of the granular synthesis parameters in real-time.

 

In order to overcome these limitations and enhance the interactive experience, I made the decision to integrate microphones into the system. By utilizing microphones, which are specifically designed for capturing audio, I was able to achieve a higher level of sensitivity and accuracy in picking up the dynamics of the ambient sound, such as bird songs.

Installation setting on site

Challenge, Communication & Refinement

​

The challenges in this project encompassed acquiring new technical skills, building Max/MSP patches, integrating LFOs effectively and applying them to various synthesis parameters, and achieving real-time data reception. The solutions involved guided learning, the strategic addition of LFOs, utilization of diverse control modes, workshops, professional consultation, and meticulous attention to detail during troubleshooting.

​

Challenges:

​1. Acquiring new technical skills: The realization of the project ideas required learning and understanding various technical aspects, including Max/MSP techniques, sonification processes, and Arduino programming.

​

2. Max/MSP granular patch design and integration of LFOs: understanding how a granular synthesizer works and how to build one. Incorporating two LFOs into the sound design process presented a challenge in terms of functionality and execution. Adapting the LFO design to different synthesis parameters, such as oscillator frequency, smoothing, and envelope settings in granular synthesis, posed difficulties in exploring its full potential.

​

3. Real-time data reception: receiving environmental data in real time required overcoming technical problems with the Arduino to achieve synchronization.

​

 

Communication & Refinement:

1. Guided learning: Under the guidance of my technique class and Max/MSP teachers, I focused on understanding and applying the relevant techniques to address the design and functionality requirements.

​

2. Addition of LFOs: Incorporating two LFOs into the sound design process proved to be an effective solution for enhancing expressiveness and creating evolving sonic textures. Additionally, the use of the [bpatcher] object facilitated easier parameter adjustments in presentation mode.

​

3. Workshops and professional consultation: To address the challenge of real-time data reception, I attended an Arduino workshop and sought guidance from a technician with expertise in Arduino programming. It was crucial to examine not only the stability of the running patch but also to ensure that the input port matched the one specified in the Max patch.

​

4. Attention to alternative solutions: Finally, I employed another microphone to capture ambient sound, introducing an extra element of external influence that altered the sound and imagery associated with the turtle dove.

​​

#3 illustration & video
# process & reflection

Text for illustration

​

These illustrations depict the experiences the doves may encounter, or our attempt to see the world from their perspective, including an imaginative portrayal of their dreams.

​

#A flock of European turtle doves perched on the spectrum, taking a rest.

#A dying turtle dove is dreaming: a gigantic fishing net appears, filled with rotting fumitory (their favorite food).

#The turtle dove found itself trapped within an oversized painting, surrounded by architecture, colors, and lines of sound.

#The turtle dove encountered a colossal, pitch-black entity with numerous tentacles, which embraced it and sang a melancholic song with deep affection.

#The turtle dove ventured into a forest and discovered giant grapes growing on trees, which turned out to be many peculiar eyes staring at it.

#While flying through an alley, the doves encountered a beast with one leg, holding a large trumpet and playing exhilarating music.

​

​

The illustrations help to create a shared understanding and collective narrative, as different individuals may interpret sound in their own unique ways. By complementing the auditory aspects of a funeral with visual representation, we provide a richer context for participants to engage with and reflect upon.

​

Donna Haraway mentioned in the book When Species Meet, "For many years I have written from the belly of powerful figures such as cyborgs, monkeys and apes, oncomice, and, more recently, dogs. In every case, the figures are at the same time creatures of imagined possibility and creatures of fierce and ordinary reality; the dimensions tangle and require response. Species of all kinds, living and not, are consequent on a subject- and object-shaping dance of encounters." This is something that I believe resonates deeply with those who have pets or have close relations with animals. We have embarked on this 'object-shaping dance', transforming it into visual imagery and sharing it with others for interpretation.

​

Video modulation interface


Images are modulated in real time with Max/Jitter. Texture effects of the animation, generated from fragment shaders, vary simultaneously according to the loudness and frequency of the audio input.

​

Challenge & Solution

 

The challenges faced in the creation of real-time modulated visual effects included time constraints, the complexity of parameters, and the integration of ambient sounds. The solutions involved effective task management, iterative testing, and exploration of ideal environmental conditions.

​

1. Time constraints and task management:

One challenge encountered was the limited time available for the production of the video and the design of the brochure due to the illustrator's unexpected health issue. Insufficient time was allocated to accommodate unexpected circumstances. To address this, it is important to allow more time for unforeseen challenges and allocate tasks accordingly. Starting the project earlier would provide a buffer for such situations.

​

2. Complex parameters of image and sound:

The intricate parameters involved in the visual and audio elements necessitated continuous testing to achieve optimal results. Combining dynamic visual effects with the modulation of texture effects generated from fragment shaders required proper adjustments and fine-tuning. This process of experimentation and refinement ensures that the visual effects align with the desired artistic vision.

 

3. Integration of ambient sounds and environment:

One of the design aims of this sound event is to incorporate environmental sounds, particularly bird songs, to enhance the auditory experience. The venue was set in a greenhouse within a zoo, providing a public space with environmental sounds. However, the strong lighting conditions in the greenhouse made the black-and-white imagery less pronounced and visually impactful. Future development of the project could benefit from exploring more ideal environmental conditions and equipment to achieve the desired visual effects.

​
