WHAT IS FMT'S MUSIC? An Essay by TI


There are still parts of it that I can’t verbalize well, and it’s hard to write in a language that isn’t my native tongue, but I’ll write it down.


"What is music" (written in March 2021)


Western Bokket and I exchange DMs from time to time, and a very interesting question has come up in them that I keep returning to, one closely related to the question "What is FMT's music?" So this is, in a sense, a continuation of that essay.
 
Here is a quote from his message.

> In many ways it seems to ask me at what point is music music? Is it music when its in its potential energy format before sound has been produced? or does it become so only when transferred to its kinetic energy format through the air waves?
 
> I suppose it really does not matter what music is defined as. What exists is as it is, regardless of what we call it.
 
I agree completely. That's exactly what it is.
“What exists is as it is, regardless of what we call it.”
 
In response to his question, here is my own definition of "music" and of a "work".
 
As I wrote in the Notes* on "Mood Of Five", I think that "music is music when it sounds in my head". However, even if that music is original, at that point it is not yet "a work of art".
 
To use the metaphor of the roasted chicken, it tastes like an egg, but it is not really an egg yet.
 
What makes it a work of art is when it can be shared with others in some universal way.

It can be a recording or a live performance, but it can also be a musical score or similar, or digital data.
 
I believe that a work becomes a musical work when it is made to sound as air vibrations. How that is done doesn't matter: a live performance on an instrument, or the playback of a recording or of digital data, equally counts as a "musical work".
 
To put it simply, the difference between a "work" and a "musical work" is whether or not there are "air vibrations changing along a time axis".
 
Strictly speaking, a recording or similar data is not a "musical work" at that moment; it is a "work". It becomes a "musical work" when it is played back as air vibrations.
 
As for whether it is "music" or not: recordings and digital data are not "music" at that point; they become "music" when they are "played back" or "performed" in some way.
 
So I don't accept it when my children claim to have done their homework by saying, "I've done most of it in my head while lying in bed, so it's almost done."
 



[What's the music we're making?]


As The FURICO Music Team we've created about 50 tracks so far, quite a few people have listened to them, and we're happy to have received some feedback. One word that stands out in that feedback is "unique".

Of course that makes me happy, and it's what we're aiming for, but what exactly is "unique" about our music?

I don't think it's purely a matter of the music itself, such as unusual phrases or harmonies. That kind of musical uniqueness is something other musicians have too (not everyone, of course); we're hardly the only ones.

So where does the difference between the music we make and other music lie, and what exactly is that difference?

And in conclusion, what kind of music do we create?

Here is what I've come to realize while asking myself these questions.




First of all


Our music is electronic data. (Its structure is very different from that of conventional music; depending on how you look at it, it might not even be called "music".)

To elaborate: the music we are creating is made not with sound but only with electronic data. In other words, we do no work that produces sound through physical movement, such as playing an instrument or singing.

To put it another way, conventional music uses the human body or a musical instrument as an interface to generate air vibrations, that is, "sound", and the music is created by assembling these "sounds" in chronological order, mainly with the body. We make music without going through any of that process.

In conventional music making, sound is produced at the very moment the "phrase" or "rhythm" that becomes the source of the "music" is born; when our music is born, no sound is produced at all. "Sound" and "air vibrations" appear only as the result, never during the process.

Of course, there are plenty of moments during production when we check the sound through speakers or headphones, but that is only for confirmation and is an incidental part of the process.

In fact, it's not uncommon for me to get tired of listening, switch off the speaker and headphone outputs, write the score in MuseScore, and then edit the waveform in Studio One purely by eye, or adjust the volume just by looking at the numbers and the slopes of the graphs. In many cases it is only at the stage of outputting the audio that we find out what sound, phrase or song it actually was.

To put it another way, music packages up to now (records, CDs, distribution data) have all converted sound into (electronic) data, whereas our music converts electronic data into sound, which is the complete opposite.
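To make that "data first, sound last" direction concrete, here is a minimal sketch in Python (purely an illustration with made-up note values; our actual tools are MuseScore and Studio One, not code). The phrase exists only as numbers, and only the very last step turns those numbers into a WAV file, i.e., into something a speaker can finally convert into air vibrations.

```python
import math, struct, wave

SAMPLE_RATE = 44100
BPM = 96

# The "composition" exists only as data: (MIDI pitch number, length in beats).
# These particular notes are invented for the example.
phrase = [(62, 1), (65, 1), (69, 2)]

def midi_to_hz(pitch):
    # Equal temperament, A4 (MIDI 69) = 440 Hz.
    return 440.0 * 2 ** ((pitch - 69) / 12)

samples = []
for pitch, beats in phrase:
    freq = midi_to_hz(pitch)
    length = int(SAMPLE_RATE * beats * 60 / BPM)
    for i in range(length):
        samples.append(0.3 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))

# Only here does the data become something a speaker can turn into air vibrations.
with wave.open("phrase.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)          # 16-bit
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```

Everything above the final block is just arithmetic on numbers; delete that block and no sound ever exists, which is exactly the state our works are in until they are rendered.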

To sum up, our music has a very different structure from conventional music. Perhaps it's this difference that makes our musicality unique and that influences every part of our music.

Incidentally, it was in working with other musicians that I became keenly aware of this difference.

Most of our collaborators' music-making (which is the traditional way of making music) was premised on capturing and fixing a performance (or a simulation of one).

Whether the data is MIDI or WAV is just a difference in how the performance is recorded; in most cases the data is essentially a human performance or a simulation of one. In other words, as I mentioned earlier, they are making music through physical expression or a simulation of it.

In that case the score is merely a memo for the performance, needed only to guide it or to ensure reproducibility. With that kind of creation the score really doesn't matter much, which, I realized, is why so many people say, "I don't use sheet music when I compose."

The same is true of the "new type of musician" who composes by combining sampled material. The source material is live playing or other everyday sounds, packaged and turned into waveforms from something that has already become "sound". They line these up side by side on a computer display to create music, and the reproducibility of the music is guaranteed by the DAW software.

What both have in common is that a score or music data cannot exist until the music itself is complete.


Our music, on the other hand, is about "creating a score, that is, composing". The process of creating the score is the process of creating the music.


To be specific, our music, and the sounds and phrases that make it up, are thoroughly abstract and thoroughly data. It is created simply by entering the sounds in our own heads as computer data.


It is not physical expression at all. At the very least, the phrases that I (TI) create are not the kind of phrases I could play, or would even want to play, on the piano. And as you can see, in order to assemble a work without going through physical expression, you need a musical score or its equivalent somewhere in the process of creation.

In this case the score is not just a guide for performance or a memo to ensure reproducibility; it is the canvas on which the painter paints the picture.

The same is true of tone. When music is created through physical expression, the vocalist's voice, the guitarist's guitar, the sounds they can produce, and the song itself are all of a piece, and it is very difficult to think about them separately.

For us, on the other hand, tone is simply a means of achieving the sound we imagine. We sometimes use the tone of an acoustic instrument, but only because we want to apply that sound; whether it actually is that instrument doesn't matter.

So when we create a song in MuseScore, we're not too concerned about the tone; it can be changed at any time later on. Whether a line is given a violin sound or a brass sound, the phrase itself doesn't change. In the end, all I care about is the phrases and the sound their combination creates.

So in the songs we create, we freely use notes outside an instrument's register, even for acoustic instruments, without worrying about it. This, too, is music that is only possible because it is electronic data.
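To illustrate how loosely tone is bound to the data (again only a sketch, using the Python library mido rather than our real MuseScore workflow, and with invented note values): the phrase below is nothing but integers. The "violin" is just a General MIDI program number that could be swapped for brass at any time, and a G2 below a real violin's lowest string is just another integer.

```python
import mido

mid = mido.MidiFile()              # defaults to 480 ticks per beat
track = mido.MidiTrack()
mid.tracks.append(track)

# Timbre is only a General MIDI program number attached to the data.
# Swap 40 (Violin) for 61 (Brass Section) and the phrase itself is untouched.
track.append(mido.Message('program_change', program=40, time=0))

# G2 (MIDI 43) lies below a real violin's lowest string (G3 = MIDI 55),
# but as data there is nothing to stop us from writing it.
for pitch in [43, 55, 62, 69]:
    track.append(mido.Message('note_on', note=pitch, velocity=80, time=0))
    track.append(mido.Message('note_off', note=pitch, velocity=0, time=480))

mid.save('phrase.mid')
```

In MuseScore the same thing happens through menus rather than code, but the principle is identical: pitch, rhythm and timbre are separate fields in the data, so any one of them can be rewritten without touching the others.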


And secondly.

Our music is the ultimate Market-In music (the customers are ourselves; that is, it is the pursuit of the music we want to listen to).

Another unique thing is that "the listeners are ourselves".

You might think, "No, it is not necessarily unique because there are other musicians and artists who think like that."

Of course, artists, not just musicians, are basically egoistic when it comes to creative work. Most of them would say, "I'm going to do what I want to do, and I'm going to do it the way I want to do it."

However, I think even such musicians are 100% the performer on stage when they play live, and when they record they probably write and play while imagining themselves on that stage. At that moment they are probably picturing a packed house of listeners.

At the very least, few people go into a recording imagining themselves out in the crowd at their own show, pumping a fist and riding along with the groove, or listening to their own music from the audience in a swoon.

(Creators of Ambient and the like might, admittedly, picture "myself relaxing in the comfort of my own home".)

We, in contrast, also create music with an "audience" in mind, but that audience is us (and only us). We always say we don't want to play live, and that's precisely because we would rather be in the audience hearing it live in the first place.

So when I'm making music, I'm not so much an artist as a listener. I create the sounds that, as a listener, I imagine I want to hear.

We're doing what we love, but that doesn't mean we're doing what we love as artists expressing ourselves. We're just listeners, creating the music we like and listening to it the way we like.

And often the desires of the listener, the recipient of the expression, are more "twisted" and “grotesque” than those of the creator.

For example, haven't you ever wanted "three burger patties, triple the cheese, no tomato or lettuce, and a large serving of fries"? Or even "pull the patties out of the hamburger and make it vegetarian"? Either of these might just about be possible at Burger King.

But that's not possible in music. We have no choice but to create it ourselves, so that is what we are doing.

However, what comes out of this sometimes turns out to be bizarre or grotesque. Maybe that's why people say our music is "unique" (or weird).

But even if it strikes the people around us as weird, we ourselves are quite happy to listen to every track we have ever released.

Our music is not an artist's Product-Out music; it is the ultimate Market-In music (made for the sole, and therefore largest, users in our own market: ourselves).

Our stance in presenting our music is to offer everyone else "music that I can say with confidence I want to listen to", and I'm happy if other people come to like the music that I like.

If our show ever did take place, we would be sitting in the audience together, facing the same direction as everyone else.





Thirdly, "our music is music made with electronic data, which is not necessarily the same as electronic music"
(A discussion of the difference between music made with electronic data and electronic music)


This is a different story from the "uniqueness" that has been described so far.

Although electronic music as a genre has been around for about a century, electronic music as we define it (music produced only with electronic data from beginning to end) has emerged only in the last 20 years.

This is because the great predecessors of conventional electronic music, such as Stockhausen, Isao Tomita, and Kraftwerk, all recorded sounds made by oscillating vacuum-tube or transistor circuits and amplifying them electrically. As I mentioned earlier, that is still music created by assembling "sounds", and in essence it is no different from combining acoustic instruments and singing. It is simply a question of whether electricity is involved in producing the sound. In fact, Isao Tomita himself claimed that electronic instruments are "instruments that produce sound electrically and are no different from acoustic instruments".

Electronic music and computers also have a relationship that cannot be considered separately. The computer has been used as a composition tool since 1960, but because computing performance was still immature, its use was limited to the automatic generation of restricted phrases and the control of performance information.

It is only in the last 10 years or so that the entire process, from composition to the completion of a work, has become something that can be carried out with electronic data, as we are doing now.

Even so, compared with the amount of music played by humans, I don't think there is yet much "music made only of electronic data" that stands up to appreciation in its own right.

Moreover, much of the music that is made only with electronic data is "commercial music" or a "substitute for live performance", made to exploit the advantage of being quick and cheap to produce, and in not a few cases no artistic value is sought in the music itself.

More broadly, there are still few artworks or forms of expression of any kind that are completed from the beginning (the start of creation) to the end (completion and presentation) using only electronic data; it remains largely limited to things like CG and literature published on the Internet.

There are also a small number of people who do create music entirely by typing it in, but who still aspire to music that simulates a live performance.

In this context, recently (in the last 10 years or so), so-called "Internet-Music" genres like Vocaloid P (Vocaloid), Dubstep, and Vapor Wave have emerged.

These, like ours, are types of music that turn electronic data into sound, music that would not be possible without computers and the Internet. In that sense, I think we are finally seeing music that can be clearly defined as "electronic music". Their musicality is in some ways very different from ours, but structurally our music is probably close to theirs.

We are influenced by both Stockhausen and Kraftwerk, but as a genre we think of ourselves not as that kind of "electronic music" (music of electricity) but as something closer to this newer "electronic music" (music made from electronic data).

So how should our music be categorized? One term that has come up is "Post-Internet-Music": someone once described our genre and our musical stance that way, and in that sense it may well be an apt expression.


Finally: electronic music, Internet music, performance as expression, and live activity


The way we make music can bring about a major change in how we think about "performance", which has always been so important in musical expression.

Music and performance have traditionally been closely related, in some cases almost synonymous, but for us the two are clearly separated. To be more precise, you could say that they cannot be united, or that the performance has had to be forcibly carved away.

This is because, as I've explained, our music is electronic data turned into sound, and performing it live would mean travelling in the opposite direction to the process by which it was made.

Conventional live music is, simply put, a "re-enactment" of how the music was produced, of the recording session and its situation. Our music cannot do that, because we would have to re-enact a situation that never existed in the first place.

This is very hard, and in some cases, impossible.

So will FMT never play the occasional live show? Basically, we don't expect to. I'm not sure myself whether it's that we choose not to assume a live setting, or that we simply cannot.

(As I mentioned earlier, what I have imagined is "sitting in the audience and enjoying my own songs along with them, through a proper sound system of a kind that would be very hard to set up at home".)

But quite apart from that, it's also true that with our music we don't feel the appeal or significance of live performance the way we used to.

In "conventional music", the relationship between the performer and the listener is a face-to-face one, and you can enjoy the performance of the performer, but as a rule, you have to go to the venue and face the performer.

I think this would be very difficult for both sides, especially because our listeners are not concentrated in one region; rather, there is a very small (but definite) number of them in every country in the world.

This is exactly where "electronic data to sound" music like ours has a very high affinity with the Internet: it can be listened to anywhere there is a connection and a way to play the sound back. You don't have to go out and face the performers, and you don't have to schedule a time.

We see enormous potential there, and also the possibility that it could lead to a very distinctive form of expression.
