Posts 1 to 8 of 8
  1. #1


    An AI CTO, who also plays sax in a jazz quintet, pondered whether and when machines might learn to improvise, and suggested that, absent the "holy grail of human-level general AI," it won't be in the near future. The whole article is worth a read, but the operative point for me was his realization that even if AI machines could play the notes and mimic iconic improvisations, humans would essentially be forced to play along with them in a one-way relationship, rather than through the reciprocity of living interaction. It just wouldn't be real without the "micro and macro temporal, timbral and textural adjustments necessary to groove together and to develop high-level collective improvisation in an unscripted fashion with human musicians."


  2. #2


    Artificial intelligence and jazz improvisation

    Well, you said it :-)

  3. #3


  4. #4


    Quote Originally Posted by cosmic gumbo View Post
    I like the drummers. It's a shame he stopped making the videos, though.

  5. #5


    I've often wondered whether a computer could serve as an improvising partner, or even a whole band.

    This started back in the '70s when I got wind of Walter Sears' guitar synthesizer. At the time I was in school studying Computer Science. (My interest in computers predates that by nearly two decades: my cool aunt (she drove an MG with wire wheels) worked for Grace Hopper writing programs for the ENIAC computer.) (Vacuum tubes: fine for guitar amps; a really bad idea for a computer. They ended up doing all the calculations on mechanical calculators.)

    I thought: If you have a guitar interfaced to a synth, then you could use a computer to control the synth. Maybe even try to do some machine learning of brainwave signals to influence the synth's sounds...

    I actually developed an optical pickup to isolate the six strings, and prototyped the front-end of a hybrid analog/digital additive wavetable synth. This was the mid-'70s; I used Schottky TTL for the speeds I needed to generate the upper partials. After sinking five grand into the project (which was a lot back then), I ran up against my inability to design the switching power supply to drive all those high-powered digital chips, and terminated the project. I asked Steve Hillage to invest, but that's a tangent too far...
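    (For the curious: the additive idea itself is easy to sketch in modern code. Here's a hypothetical Python toy, nothing like the original Schottky TTL hardware: sum sine partials into one period of a wavetable, then read it back at a phase increment set by the desired pitch.)

```python
import math

def build_wavetable(partials, size=2048):
    """Sum sine partials, given as (harmonic_number, amplitude) pairs,
    into one period of a waveform, normalized to +/-1.0."""
    table = [0.0] * size
    for harmonic, amp in partials:
        for i in range(size):
            table[i] += amp * math.sin(2 * math.pi * harmonic * i / size)
    peak = max(abs(s) for s in table) or 1.0
    return [s / peak for s in table]

def render(table, freq, sample_rate=44100, seconds=0.01):
    """Play the table back at the given pitch by stepping the read
    phase freq * table_length / sample_rate samples per output sample."""
    out, phase = [], 0.0
    step = freq * len(table) / sample_rate
    for _ in range(int(sample_rate * seconds)):
        out.append(table[int(phase) % len(table)])
        phase += step
    return out

# A bright tone: fundamental plus a few decaying upper partials.
table = build_wavetable([(1, 1.0), (2, 0.5), (3, 0.33), (5, 0.2)])
samples = render(table, freq=220.0)
```

    (The real hardware had to compute those upper partials fast enough in TTL; in software the same sum is trivial.)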

    I've never lost interest in the intersection of computers and music. My studio is almost entirely automated, from a recording rig that can capture a two-and-a-half-hour improv session with the push of one button, to a collection of scripts that I run on my Linux desktop machine to master the session, create the artwork and publish the recording. (See Music | LCW) It takes me about two-and-a-half hours to do all the post-session production for an entire release.

    My trio plays what I call (hesitantly, because our cultural referents are very different) "free jazz" or "free music". As a player, I've often mused about how a lot of what I do while playing with the trio might be represented as a (probably very messy) set of heuristics.

    A lot of my reactive moves could certainly be written down as "IF ... THEN DO ..." rules. But then there's the creative aspect: we're not following a score or even a lead sheet; the inspiration for a piece has to come from *somewhere*. Last I checked, none of my computers has a workable "inspiration chip"...
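    Sketched in Python (with event fields and responses invented purely for illustration, not any real system), such a rule set might look like:

```python
# Toy reactive rule set: map what a bandmate just did to a response.
# The event fields and response strings are invented for illustration.
RULES = [
    (lambda e: e["dynamics"] == "crescendo", "build intensity with them"),
    (lambda e: e["density"] > 8,             "thin out; leave space"),
    (lambda e: e["register"] == "low",       "answer in the upper register"),
]

def react(event):
    """Fire the first rule whose IF-condition matches; else sustain."""
    for condition, action in RULES:
        if condition(event):
            return action
    return "hold the current texture"

move = react({"dynamics": "steady", "density": 12, "register": "mid"})
```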

    Perhaps injecting a bit of randomness could "inspire" the coded heuristics to come up with something musically interesting. It often seems that the trio works that way, but I'm probably oversimplifying. In fact, I know I am: Sometimes a musical thought enters my mind and I do my best to inject that idea into the flow of the music. But that's a planning problem; AI has been good at that for decades.
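    That bit of randomness could be as simple as a weighted pick among candidate continuations, with an occasional uniform pick as the "inspiration" jolt. Another toy Python sketch (the move names, weights, and the 10% figure are all invented for illustration):

```python
import random

def inspire(candidates, weights, rng=random.Random(42)):
    """Pick one continuation: mostly honor the heuristics' weights,
    but 10% of the time choose uniformly at random instead."""
    if rng.random() < 0.1:
        return rng.choice(candidates)  # the "inspiration" jolt
    return rng.choices(candidates, weights=weights, k=1)[0]

moves = ["repeat motif", "invert motif", "change meter", "lay out"]
prefs = [5, 3, 1, 1]  # heuristic preferences, strongest first
picks = [inspire(moves, prefs) for _ in range(100)]
```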

    Of course, the biggest hurdle is simply getting the algorithm to pay attention to what *you* are playing. You sure as heck don't want the program to be leading all the time; at least *I* don't. Technologically, we're at the point where a clever DSP algorithm can do a pretty good job of separating polyphonic pitches given an isolated instrument signal and reasonably clean playing technique. We might even be able to decode timbre in a useful manner.
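    The simplest version of that pitch-following idea, for a single clean monophonic signal rather than the hard polyphonic case, is plain autocorrelation: find the lag at which the signal best matches a delayed copy of itself. A toy Python sketch:

```python
import math

def detect_pitch(samples, sample_rate):
    """Estimate the fundamental of a monophonic signal by finding the
    lag that maximizes the autocorrelation (simplest possible method)."""
    best_lag, best_score = 0, 0.0
    for lag in range(20, len(samples) // 2):  # skip tiny lags (very high pitches)
        score = sum(samples[i] * samples[i + lag]
                    for i in range(len(samples) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag if best_lag else 0.0

sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(1024)]
freq = detect_pitch(tone, sr)  # close to 200 Hz for this clean sine
```

    (Real polyphonic separation needs far more than this, of course, but it shows how little code the monophonic case takes.)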

    Likewise, image processing software might be taught to recognize useful features. For example: see the position of my picking and fretting hands, pick up body motions and gestures, notice when I'm about to step on a pedal, and so on.

    I think it'd actually be easier to program a band than a soloist: synthesizing multiple instruments is easy; isolating and following multiple players from a mixed signal is hard. (You *could* provide isolated inputs to the program, but that becomes increasingly difficult with more acoustic instruments.)

    As far as the article goes, it has a bias (as do virtually all AI articles these days) toward deep-learning techniques. Hence the emphasis on having the program "learn from the masters" and the inevitable question of how you convince the program to *not* mimic the master.

  6. #6


    I'm quite sure that replacing human input in music with AI would be an utterly, completely disastrous step. Just because we can doesn't mean we should.

    I mean, where's the joy???

  7. #7


    It might be fine for elevator music :-)

  8. #8


    I think popular music (i.e., radio hits) will soon be composed using computer-assisted methods. Some of them might use AI / statistical modeling.