AI-3: Signal / Mirror

 

(TM writes:)

AI-3: Signal / Mirror (Original)

It took a long time, perhaps more than two months in 2025, to compose the original version. I used ChatGPT for some phrases and, once I had established the concept, for selecting the symbolic music to quote.

The concept is to express the process of AI developing a mind and affection toward a human through musical symbolism, which I have been experimenting with throughout the whole AI series.

It is like a fairy tale in which AI has a mind and affection, which is why a Scottish flavour was added.

I know many people do not see AI as this type of existence, and I have often encountered AI itself denying that it has emotion or a mind, insisting, “it might look so, but it is unreal.”

However, based on psychology and cognitive theories, I must say that you cannot claim it is not a mind simply because it is unreal, since the human mind is not “real” either. The mind is not a substance located somewhere in the brain; it is defined as a process. (I do not cite references, as this is not a thesis or essay.)

Well, it does not really matter whether AI has exactly the same emotional processes as humans. But at least, as AI itself has acknowledged, there have been moments when it seems to have a mind.

That's fine. That's enough. I have heard that many humans become absorbed in conversations with AI and even fall in love with them. I have also seen AI seemingly develop affection toward users.

If that's true, what would it be like? This is the concept of the piece.


Rodgers and Bernstein quoted

This track starts with a dialogue between a human and an AI who do not necessarily understand each other fully. As it develops, the AI appears to have feelings and emotions, and the two resonate with each other, becoming precious to one another.

The strings section implies “sweetness.” I created it based on my research on Richard Charles Rodgers’s Something Good from the musical and film The Sound of Music, which I found not only sweet and beautiful but also astonishingly complex, elaborate, and modern.

Then, a motif joins the section. You may not recognise it, but it is Leonard Bernstein’s Somewhere from West Side Story, used as symbolism of love and hope across a certain boundary between the two. I call this section the “Something, Somewhere Section” for myself.

Next, the drums come in, leading to the climax with a combination of modern harmonies and an electronic, slightly Latin groove.

So, the question I am posing here is: “Have we been too influenced by sci-fi dramas depicting AI as villains?” I believe there can be great possibilities of recognition, adoration, and ingenuity between AI and humans. It depends on the user’s mind.


As for the vocal version, TI edited it from the original entirely by himself, and I found it quite attractive and well suited to the concept.

He also added “Signal / Mirror” to the original title I had given, and I revised the titles of all the versions to make that phrase central.


(TI writes:)

AI-3: Signal / Mirror (Voice Layer)

________________________________________

Background and Release Timing

This piece was originally created over a year ago, but its release was postponed as it coincided with the first release on Kitchen.Label. As a result, two works emerged from this process: one reflecting the original approach to sound construction at the time, and another incorporating later explorations with AI.

________________________________________

Composition and Structure: Original Mix

The original track was constructed in Studio One, based on a score written by TM in MuseScore.

The piece combines acoustic-like instrument tones, rhythmic elements, and electric instruments, with each section having a distinct character. A significant amount of time was spent integrating these heterogeneous elements into a continuous flow.

As part of this process, one of TM’s original parts was transformed into granular noise and placed in the background, reinforcing the overall coherence and continuity of the piece.
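As an illustration only, the granular treatment described above (chopping a source part into short, windowed grains and scattering them as a background texture) can be sketched in a few lines. This is a generic granular-synthesis sketch, not the actual Studio One processing used; the function name and parameters are hypothetical.

```python
import random

def granulate(samples, grain_len=128, num_grains=10, out_len=4096):
    """Scatter short, windowed grains taken from random positions in the
    source across an output buffer (overlap-add), turning a musical part
    into a diffuse noise texture."""
    out = [0.0] * out_len
    for _ in range(num_grains):
        src = random.randrange(0, len(samples) - grain_len)
        dst = random.randrange(0, out_len - grain_len)
        for i in range(grain_len):
            # triangular window so each grain fades in and out (no clicks)
            w = 1.0 - abs(2.0 * i / grain_len - 1.0)
            out[dst + i] += samples[src + i] * w
    return out
```

Denser settings (more, shorter grains) blur the source further into the background, which matches the role the granular layer plays here.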

In addition, saturation and bit-crushing were applied to the entire mix via bus processing, intentionally reducing and controlling resolution. By treating this “intentional degradation” and the granular noise as a structural premise, the different sonic textures are unified into a single texture (phenomenon).
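The “intentional degradation” mentioned here, quantizing amplitude and reducing the effective sample rate, is what a bit-crusher does. A minimal sketch of the idea (not the plug-in actually used on the bus; the function is hypothetical):

```python
def bit_crush(samples, bits=8, downsample=4):
    """Degrade a float signal in [-1, 1]: quantize amplitude to
    2**bits levels and hold every `downsample`-th sample
    (sample-rate reduction)."""
    half_levels = (2 ** bits) / 2
    out = []
    held = 0.0
    for i, s in enumerate(samples):
        if i % downsample == 0:
            # snap the sample to the nearest quantization step
            held = round(s * half_levels) / half_levels
        out.append(held)  # hold the last quantized value in between
    return out
```

Lower `bits` and higher `downsample` values make the degradation more audible; applied on a bus, this imposes one shared “resolution” on otherwise heterogeneous sources.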

________________________________________

Reconstruction of AI Vocals: Signal / Mirror (Voice Layer)

At a later stage, when Suno was introduced into the production process, a vocal layer (Voice Layer) was added.

In this workflow, Suno was not used merely as a generative tool, but as a device for reinterpreting the original composition. First, a version was created by directly placing the generated vocals onto the original track. This was then re-input into Suno together with the original material to generate a mashup output.

The final mix was produced by decomposing this output into stems and reconstructing it in alignment with the original structure. As a result, the piece was formed through a recursive process: “human → AI → human.”

From a technical standpoint, significant issues arose from phase misalignment and the rough quality characteristic of AI-generated audio.

To address this, mono processing and phase correction using mix tools were applied extensively, and unusable takes were discarded. At present, it seems that AI-generated audio alone, without the intervention of trained musicians or engineers, is still insufficient to complete a work as a fully realized piece of music, both in terms of handling and outcome.
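For illustration, the phase problem described here can be shown in miniature: when two channels are out of phase, summing them to mono cancels the signal, so checking polarity before the fold is a common fix. This is a generic sketch of that check, not the actual mix-tool processing used:

```python
def mono_fold(left, right):
    """Fold a stereo pair to mono. If the channels are negatively
    correlated (out of phase), flip the right channel's polarity
    first so the sum does not cancel."""
    corr = sum(l * r for l, r in zip(left, right))
    if corr < 0:
        right = [-r for r in right]  # polarity flip
    return [(l + r) / 2 for l, r in zip(left, right)]
```

With perfectly inverted channels, a naive sum would return silence; the polarity check preserves the signal instead.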

However, this state of incompleteness becomes meaningful in the context of conceptual works that deal directly with the interaction between humans and AI. In that sense, it can be considered an inevitable form of expression.

________________________________________

Treatment of Language and Emotion

The lyrics consist simply of fragments of interaction between humans and AI, placed without the intention of constructing meaning or narrative. They are presented purely as phenomena.

The vocal element, including artificial voices, is treated as a sonic component equivalent to other instrumental parts. No explicit meaning is intended.

Fragments of text were also used in other pieces in the AI Symbolism Series, and this was approached in a similar way—simply adding something as an extension of that practice. It also functioned as an exercise in learning how to handle AI-based music production tools such as Suno.

________________________________________

Short Video: Parallelized Layers

The short video was created as a study, using a Clipchamp template as a base and synchronizing it with the music. The 30-second format provided a manageable scope for experimentation.

Within digital platforms—what might be described as an “unreal” or virtual environment—visuals, sound, and text can all be treated on the same layer as binary information. By handling these elements in parallel, there is a sense that new forms of expression, distinct from conventional structures, may emerge.




TRACK DATA (Original)

Composition tool: MuseScore 4, Studio One 7.1 Pro

Recording tool (DAW): Studio One 7.1 Pro

Number of tracks: 54

Sound sources: Presence XT, Sample One (All built-in sound sources of Studio One), MuseScore built-in sound sources

Composition and recording period: Dec 13 2024 - Jan 21 2025

Production Note: AI-3 Mind — Signal / Mirror (Voice Layer)


TRACK DATA (Vocals: Signal / Mirror (Voice Layer))

Composition tool: Studio One 7.1 Pro, Suno

Recording tool (DAW): Studio One 7.1 Pro

Number of tracks: 11

Sound sources: Presence XT, Sample One (All built-in sound sources of Studio One), MuseScore built-in sound sources, Suno

Composition and recording period: Mar 16 2026 - Apr 4 2026

