Flowers

Notes On "FLOWERS"










TRACK DATA

Composition tool: MuseScore 3, Studio One 6 Professional

Recording tool (DAW): Studio One 6 Professional

Number of tracks: 55

Sound source: Presence XT, Impact XT, Sample One (All built-in sound sources of Studio One)

Composition and Recording period: Mar 2 2023 - May 4 2023, Aug 1 2023 - Aug 4 2023





Alternative Polyrhythms: A 5.33-Beat Pattern of Dynamics


(TM writes:)

A note in the notation I wrote reads: "The minimal piano phrase has its own rhythms, unrelated to the tempo and time. Composition was made between 28 Mar and 2 Apr 2023."

The tentative title was "Dynamic Polyrhythms," which is my musical concept here: the dynamics of the minimal piano phrase peak three times over a four-bar pattern. Dynamic signs exist at a meta level rather than being tones themselves, so what if the actual tones and their meta-level signs follow completely different rhythms? A "polyrhythm" generally means a mixture of actual tones in different rhythms; I have never heard this sort of "polyrhythm" between tones and dynamics before.


In Japan, where both TI and I reside at the moment, there are countless songs about sakura (cherry blossoms). Every time the sakura season comes, typically March to April in most areas of Japan, we hear sakura songs everywhere: in cafés, convenience stores and supermarkets. For this reason alone I don't want to go out... though there are many occasions when I have to.

I know this is an important part of the culture here; sakura symbolises the end of winter, and huge numbers of people go outside just to see it or to have parties under the trees. It is also the time of year when the most visitors come from abroad. That's just great.

But the thing is, I just don't care for sakura songs; no, I wouldn't strongly recommend you take a listen. To me they all sound so similar, and musically they are not so interesting. I know I'm an odd one in this society for saying so.

I also know, a bit sadly, that these songs are not written for me, but I strongly wish such shops would drop the implicit premise that sakura songs must please "all" customers. They don't please me at all; rather, they make me want to avoid calling at your shop.

I also know how such songs are produced. To maximise revenue through downloads, karaoke or whatever, producers and musicians model new ones on past hit sakura songs, because a successful sakura song can presumably become a big, long-lasting hit even without a tie-in with a TV show or film. That is why those songs all fall within such a narrow range.

Another thing I know is that many readers of this text have probably never listened to Japanese sakura songs. You can easily do so by searching "sakura songs" on YouTube if you wish, but you don't have to. I wouldn't recommend it.

Thus I told myself, "If you want to complain, just make one yourself that satisfies you." This track, "Flowers", is such an attempt.


We had a discussion like the one below during production. Here is the Google-translated exchange:


(TM:) Probably nobody does such a stupid thing with dynamic signs. But it isn't only that I can control the timing of loud and soft exactly as I intended; the timbre also changes with the velocity, which is beautiful.

(TI:) That really is stupid (sorry). That's not what dynamic signs are for in the first place! (laughs) For pinpoint accents we should be using accent marks, shouldn't we?
Anyway, if you want to pursue it (although the nuances will change when you drop it into S1), you might try a dedicated MIDI editor for the velocities of what you wrote in MS (Domino, a free piece of software).
https://takabosoft.com/domino
By the way, velocity can be used not only to control loudness; depending on the sound it can also open and close the filter or drive the envelope, so it widens the range of expression.

(TM:) Thanks. I'm not sure whether I'll do it again. When I made it, I was thinking about how musical marks, including dynamic signs, are written for human performers, but that isn't the case here. I'm not writing for performers; I'm just writing MIDI data. So I figured it would be fine to use them differently from how a performer would read them. Besides, crescendo and decrescendo don't work at all in MS (MuseScore). That's all there is to it.
But outside classical music there are almost no songs with dynamics shaped across the whole piece, so why not? That is also part of the concept.

(TI:) That's exactly it. It makes me think about how far the functions of MS (or rather, of musical notation itself) can be stretched. I've always wanted to do things the developers didn't expect (because AI can't conceptually create what nobody expected).
And certainly, no genre other than classical music (acoustic instruments and singing, to be precise) writes dynamics into a piece. The reason is pretty clear: electrical amplification and the art of recording.
As I have said many times, there are limits to what recording media can hold. Fitting within that limit is a basic premise; it is an absolute ceiling. Electrical amplification likewise has limits set by equipment specs.
So in recent times (the last 70 years, very recent in human history) a great deal of expression has had no choice but to depend on them (electrical amplification and recording media), though we are not completely dependent on them (laughs).

(TM:) Yes, exactly. I think that is precisely why it's meaningful. Because of the limitations of electrical amplification and recording media, we have narrowed the scope of music, stripped functions out of production equipment (including musical instruments), and finally removed dynamics in the narrow sense from music altogether. They're gone.
I understand that, and it was bound to happen. But if amplification and recording are now all digital and "anything is possible", why not at least try? Not trying strikes me as a thoughtless omission: negligence, a narrowness of curiosity, or complacency.
There is also the logic that "the market doesn't want it." But it was the suppliers who created the market's stereotypes, and only the suppliers can break or change them.

(TI:) Still, there have been various attempts (including happy accidents) to turn the dynamic limitations of electrical amplification and recording technology to advantage.
For example, in the analogue era: the distortion and feedback guitar of Jimi Hendrix and Eric Clapton (Takeshi Terauchi claims he was the original (laughs)), the compressed sounds typified by the Beatles, and so on. In digital it began with Oval and glitch noise.
Sounds born of distortion (saturation) do occur in nature, thunder for instance, but compressed sound does not exist among natural sounds.
Because dynamics have a hard limit, you cannot record or play back without compression, and that constraint itself became a new musical experiment.
All of these are attempts to exploit the restrictions on dynamics (and that kind of exploitation of a limit is impossible in digital).
And at first, of course, "the market didn't want it" and "the rejection was fierce (like the Michael J. Fox scene in Back To The Future)". Oval's record was also initially dismissed as a defective product at inspection (laughs).
So digital can supposedly do anything (and I think I use it that way), but perhaps all that has really increased is the number of parameters. Having more parameters doesn't make something inherently digital.
Recently I've come to think it may just be a simulation made to look that way.
I mean, in digital you simply cannot do what cannot be done. Although the range of dynamics has expanded in terms of volume, the degree of freedom is actually lower than with analogue.
For example, you can't do what analogue allowed: forcing a signal through anyway, saturating, pushing the chain to the point of risking the equipment (digital simply stops before that). This is quite a dilemma once you actually work with it.

So, as you said, it becomes very important to change "how functions are used and perceived", in other words the perspective.


This is one of the "alternative polyrhythms", as I named them in the notes page for another work, "Light Through Leaves." The wave-like pattern of dynamics is approximately five and one-third beats long, which means three rises and falls over four bars. I've never heard a piece with a polyrhythm between the tones and the dynamics. It isn't an effect added later; it was designed in from the beginning.

So it might be hard to distinguish the two, but if you listen to the minimal phrase very carefully you may notice the gradual drift of the dynamics against the notes.
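To put numbers on it: four bars of four beats each (as the 5⅓ figure implies) give 16 beats, and three equal waves of dynamics across that span give a cycle of 16 / 3 ≈ 5.33 beats, so each dynamic peak lands on a different part of the bar. The track itself was written as dynamic markings in MuseScore; the small Python sketch below is only an illustration of the idea, assuming a steady stream of 16th notes and an arbitrary velocity range, showing how such a wave drifts against the bar line.

    import math

    BEATS_PER_BAR = 4
    BARS = 4
    STEPS_PER_BEAT = 4                    # assume straight 16th notes
    TOTAL_BEATS = BEATS_PER_BAR * BARS    # 16 beats
    CYCLE = TOTAL_BEATS / 3               # ~5.33 beats: three dynamic waves per four bars

    for step in range(TOTAL_BEATS * STEPS_PER_BEAT):
        beat = step / STEPS_PER_BEAT
        phase = (beat % CYCLE) / CYCLE    # position inside the dynamics wave, 0..1
        # raised-cosine wave between velocity 40 (soft) and 110 (loud); values arbitrary
        velocity = round(40 + 70 * 0.5 * (1 - math.cos(2 * math.pi * phase)))
        bar, beat_in_bar = divmod(beat, BEATS_PER_BAR)
        print(f"bar {int(bar) + 1}  beat {beat_in_bar:4.2f}  velocity {velocity:3d}")

Printed out, the loudest notes fall near beats 2.67, 8 and 13.33 of the 16-beat pattern, never on the same beat of two successive bars, which is the drift described above.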

One thing I'd like to add is that the fiddle reflects my love for Ireland, the island and the Republic, even though I don't know whether sakura grows there. The instrument fits this work very well; I wonder why.





...Carefully balanced each part to find the point where the inflections were most natural... 


(TI writes:)

The biggest challenge on this track was how to reproduce the dynamics TM had created without the sound breaking down.

As TM stated, this track has an extremely wide range of dynamics.

Depending on the timbre and its pitch, the peaks can overload, and if the overall level is lowered to accommodate the peaks, the quieter notes get buried by the others.

On the other hand, too much compression will not reproduce the dynamics that TM was aiming for.

In that sense, this track was an exercise in compression in the mix.

The adjustments we made were quite simple: Studio One ships with several different types of compressor, but we used the most basic one, the compressor available even in the free version, and carefully balanced each part of the song to find the point where the inflections sounded most natural to the ear. But even though each step is simple, the mixing process takes time, because it is very difficult to tell just by listening where a compressor's effect is being applied. If there is an obvious change in tone or phrasing, it is at the very least not being applied naturally, and when you listen again later you will invariably feel you have overdone it and have to go back and redo it. In this respect the feel of handling a compressor is very different from other effects.
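As an aside for readers who have never worked with one: a compressor just turns the gain down while the signal's level sits above a threshold, and the attack and release times decide how quickly it reacts. The sketch below is not Studio One's compressor, only a minimal feed-forward peak compressor in Python with illustrative settings; it makes the trade-off above concrete, since a higher ratio tames the peaks but also flattens the very dynamics the track is built on.

    import numpy as np

    def compress(x, sr, threshold_db=-18.0, ratio=3.0, attack_ms=10.0, release_ms=120.0):
        """Minimal feed-forward peak compressor; x is a mono numpy array in -1..1."""
        atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))   # per-sample smoothing coefficients
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = 0.0
        out = np.empty_like(x)
        for i, s in enumerate(x):
            level = abs(s)
            coef = atk if level > env else rel           # fast attack, slower release
            env = coef * env + (1.0 - coef) * level      # smoothed level estimate
            env_db = 20.0 * np.log10(max(env, 1e-9))
            over_db = env_db - threshold_db
            gain_db = -over_db * (1.0 - 1.0 / ratio) if over_db > 0.0 else 0.0
            out[i] = s * 10.0 ** (gain_db / 20.0)        # apply gain reduction
        return out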

Also, when a particular band sticks out, rather than the overall volume, I use a dynamic equaliser instead of a compressor, set to cut that band whenever it receives more input than necessary. A multi-band compressor does something similar, compressing when a particular band gets too much input, but I find that cutting, rather than compressing, often sounds more natural when only one band is the problem.
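Again purely for illustration, and not the plug-in actually used: the sketch below stands in for a single band of a dynamic equaliser, with a hypothetical 200 Hz–1 kHz band and arbitrary threshold and maximum-cut values. It attenuates only that band, and only while the band's level exceeds the threshold, leaving the rest of the signal untouched, which is the "cut rather than compress" behaviour described above.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def dynamic_band_cut(x, sr, lo_hz=200.0, hi_hz=1000.0,
                         threshold_db=-24.0, max_cut_db=6.0, release_ms=150.0):
        """Rough single-band dynamic EQ: cut a band only while it is too loud."""
        sos = butter(2, [lo_hz, hi_hz], btype="bandpass", fs=sr, output="sos")
        band = sosfilt(sos, x)                   # isolate the offending band
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = 0.0
        out = np.empty_like(x)
        for i in range(len(x)):
            env = max(abs(band[i]), rel * env)   # instant attack, slow release
            env_db = 20.0 * np.log10(max(env, 1e-9))
            cut_db = min(max(0.0, env_db - threshold_db), max_cut_db)
            cut = 1.0 - 10.0 ** (-cut_db / 20.0)
            out[i] = x[i] - cut * band[i]        # subtract just that much of the band
        return out

Subtracting the filtered band back out is only an approximation (the band-pass shifts phase), but it is enough to show the idea of a frequency-selective, level-dependent cut.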



The track was first completed in May, but was re-mixed before release.

This is because, after listening to it a number of times, I wanted to give it more of the nuance of an 80s LP record of live playing.

TM mentioned 'cherry blossoms' and 'spring' in reference to Japanese music, but for me the atmosphere of the music recalled Kenji Omura's 1981 album 'Spring Is Nearly Here'.

I also wanted to recreate the atmosphere of that kind of live studio recording.

Therefore, the balance of the mix has not been changed significantly, but the textures have been adjusted further.

I had read the recording engineer Ono Seigen say that what we commonly accept as the sound of a 'live' performance comes less from variations in the timing and touch of the playing than from reverberation (especially the early reflections), so I made various adjustments by sending every track to a reverb that imitates a recording studio (I think this is probably the first time I have put reverb on all the tracks).


I also adjusted the dynamic range and tonal balance to resemble the LPs of that time: the low frequencies are not as loud as in today's digital formats, the mid-range (roughly 200 Hz to 1 kHz) is given more room, and the ultra-high frequencies are not strongly emphasised.

Also, the fiddle part mentioned by TM was the fiddle alone in the May version, but in the final August mix a guitar plays in unison with the fiddle solo in places.

It also sits more comfortably alongside our previous release, 'Let Me Hear About You'.

 
