There are a bunch of software environments in which you can make instruments: Max/MSP, Kyma, AudioMulch, Reaktor, Vaz Modular, Tassman, Plogue Bidule - and that's just the graphical ones I can think of, leaving out SuperCollider, ChucK, and so on. But not everyone has the ability, or the time, to think up a new instrument and then spend a year working out the details. How many times have you read that with tool X you are "limited only by your own imagination"? For players and composers, limitations are essential. With computer audio, there are so many possibilities available that I think the idea of an instrument is getting neglected.

A traditional musical instrument usually has a small number of controls that lead to many possibilities, because you can physically interact with it in different and subtle ways. Take a single string and stretch it over a box with a hole in it. You just made an instrument that you could spend years practicing and getting better at. There aren't many parts, but the variety of physical interactions they offer gives you lots of sonic possibilities. Contrast that with a computer, which has billions of parts that can go into more states than the universe has particles, most of which make no sounds at all. It's hard to configure these systems to make sound dependably - recall only ten years or so ago, when just a handful of brave musicians were willing to rely on computers for live performance, and crashes were pretty common.

Once you do make sound with a computer, you have entered an exciting world where it's arguably possible to make any sound we are capable of hearing. Where to begin? It's natural that the environments people have built for harnessing all this power look like the technology they are based on: component-based systems where simple building blocks are connected to make larger components, and so on. This kind of approach has been used by every music synthesis language or environment that has ever been made, as far as I am aware. It makes sense when you don't have much computing power, or you do want to express musical ideas in terms of algorithms. But physical instruments are not algorithms. We don't send commands to them, we play them, using vibrating surfaces to exchange information with them more subtly than symbols can. With the amount of computing power available today, it's possible to make fantastic new computer instruments that are more like physical objects than programs, expressive enough to be worth learning.

Thanks to M-Goldie on for asking the question.

Fantastic thing, the monome. By its very nature, endless possibilities.

I do think the future of music technology is going to be in the direction suggested by this essay, and it may prove visionary. Despite (because of?) how functionally limited it is, the personality and the intent of the user come shining through. A guitar has intrinsic boundaries, but everyone addresses those boundaries differently. This is cool - I'm not dissing the monome, it's brilliant - but it's not the same thing as a guitar. In fact, built into the design of the thing is the challenge: "Look at it! It's just a box of lights and buttons! Make it DO something!"

People are yearning for specificity, limitations, and intractable eccentricities instead of the spiritual agoraphobia brought on by "limited only by your own imagination!" I think the crest of this wave is beginning already. People will value things that do only one thing, things you have to kind of wrestle with.

Bolero was written to explore the variations of instruments and the impact it has when the instruments change. Sounds have always influenced musicians and composers equally - not to play presets, but to create the perfect sound to emphasize the intention of the melody. In my opinion, Mozart, Beethoven, and Bach would use synthesizers today.
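The "component-based" design the essay attributes to environments like Max/MSP and Reaktor - simple building blocks wired into larger components - can be sketched in a few lines. This is a toy illustration, not any real environment's API; the class names (SineOsc, Gain, Mixer) are invented for the example:

```python
import math

SAMPLE_RATE = 44100

class SineOsc:
    """A basic building block: generates one sine-wave sample per tick."""
    def __init__(self, freq):
        self.freq = freq
        self.phase = 0.0

    def tick(self):
        out = math.sin(self.phase)
        self.phase += 2 * math.pi * self.freq / SAMPLE_RATE
        return out

class Gain:
    """Scales the output of another unit generator."""
    def __init__(self, source, amount):
        self.source = source
        self.amount = amount

    def tick(self):
        return self.source.tick() * self.amount

class Mixer:
    """Sums several unit generators into one signal."""
    def __init__(self, *sources):
        self.sources = sources

    def tick(self):
        return sum(s.tick() for s in self.sources)

# Wire simple blocks into a larger component:
# two slightly detuned sines, each attenuated, summed into one signal.
patch = Mixer(Gain(SineOsc(440.0), 0.5), Gain(SineOsc(443.0), 0.5))
samples = [patch.tick() for _ in range(1024)]
```

The point of the essay's critique survives even in this sketch: the patch is an algorithm you configure and run, not an object you physically play.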