  1. #76


    @Christian, while I agree it's not intelligent, I think you have to agree ChatGPT has passed the Turing test.

    The Google music tool's results that I posted the other day were pretty poor, but I now think it's not just composing, but also attempting to synthesise the overall sound of its training set, so quite a complex task. I recently saw a video of somebody coaxing ChatGPT to create some Bill Evans-style chord progressions, though in that case just text describing chords, which the operator then played on a keyboard (see the rough sketch at the end of this post). It wasn't exactly Waltz For Debby, but pretty cool nonetheless. Of course, it has zero understanding of what it's doing. An interesting case of something that's learnt the idiom through lots of exposure to examples and is able to produce some sort of example of its own, and yet by definition didn't use 'music theory' to do so.

    As for this being plagiarism, I'm in two minds. I've seen concerns from graphic designers and artists that their work has been ripped off, and that recognisable portions of their work can be seen in generated content, and that's clearly a problem. Meanwhile, we're all listening to the musical greats and ripping off their licks and style in the hopes that it will all come together into something original.
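    For the curious, here's a rough sketch of how one might prompt for that kind of chord-progression text programmatically. It's purely illustrative and not what the video used; the model name and prompt are my own assumptions, using the openai Python client.

        # Illustrative sketch only: ask a chat model for Bill Evans-style changes as plain text.
        # Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
        from openai import OpenAI

        client = OpenAI()

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name; any chat-capable model would do
            messages=[
                {
                    "role": "user",
                    "content": (
                        "Write an 8-bar chord progression in the style of Bill Evans, "
                        "as plain text chord symbols, one bar per line."
                    ),
                }
            ],
        )

        # The reply is just text describing chords, to be played by a human at the keyboard.
        print(response.choices[0].message.content)

    The output is only text, of course; whether it sounds anything like Bill Evans is for the player at the keyboard to judge.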

  3. #77


    Quote Originally Posted by CliffR
    @Christian, while I agree it's not intelligent, I think you have to agree ChatGPT has passed the Turing test.
    More so than some humans TBH

    To me that just suggests the Turing test isn't a very good test, not that I can think of a better one.

  4. #78


    Quote Originally Posted by Christian Miller
    To me that just suggests the Turing test isn't a very good test, not that I can think of a better one.
    Agreed, but not so long ago it was considered an almost unattainable goal, and now ChatGPT has just breezed right past it, and people are instead worrying that sometimes it seems to bullshit when it doesn't know the right answer.

    ChatGPT can't go beyond its training set, while Google have just announced they're releasing their own chat bot that will be able to scrape the web as it goes. (Sorry, getting off topic here, but this stuff blows me away.)

  5. #79


    Quote Originally Posted by CliffR
    @Christian, while I agree it's not intelligent, I think you have to agree ChatGPT has passed the Turing test.
    The Turing test is a very good test, but ChatGPT didn't pass the Turing test at all. My background is in AI reasoning and common sense. ChatGPT has no reasoning or common-sense capabilities. The Turing test isn't passed when the system fools one human or some humans. The concept is it should never give answers that are impossible for a normal human. ChatGPT fails questions that involve reasoning and common-sense assumptions like object continuity, temporal order, etc.
    https://twitter.com/letsrebelagain/s...90565988118529

  6. #80


    Thanks for the correction. Do you have a source for that definition? It seems considerably more strict than the test as I understood it and, for that matter, as it's described in Wikipedia:

    "Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech.[3] If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give."

  7. #81


    Quote Originally Posted by CliffR
    Agreed, but not so long ago it was considered an almost unattainable goal, and now ChatGPT has just breezed right past it, and people are instead worrying that sometimes it seems to bullshit when it doesn't know the right answer.

    ChatGPT can't go beyond its training set, while Google have just announced they're releasing their own chat bot that will be able to scrape the web as it goes. (Sorry, getting off topic here, but this stuff blows me away.)
    Something something Roger Penrose rhubarb rhubarb... MICROTUBULES!!!!

  8. #82


    Anyway, bring on the Butlerian Jihad.

  9. #83


    Quote Originally Posted by CliffR
    Thanks for the correction. Do you have a source for that definition? It seems considerably more strict than the test as I understood it and, for that matter, as it's described in Wikipedia:

    "Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech.[3] If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give."
    Well, the tweet I posted shows it at least failed one judge, and it systematically fails on the types of questions that are easy for humans.
    The Turing test is a deceptively simple idea, but it's a very clever way of avoiding philosophical complications (and distractions) when evaluating AI. It's a pragmatic philosophical position: if it walks like a duck and quacks like a duck, then it's a duck. If we build a system that for all intents and purposes is indistinguishable from a human, then it is intelligent. We don't have to get bogged down with questions like "but does it really understand the meaning of things it's saying?", "Can it think?", "Does it have a mind?", "Can it do abstractions and build mental models?" etc.

    Obviously ChatGPT isn't indistinguishable from a human. For example, it's not employable as a remote software developer, even if you also have a human (non-developer) assistant.
    Last edited by Tal_175; 02-07-2023 at 08:27 AM.

  10. #84


    Quote Originally Posted by Tal_175
    Well, the tweet I posted shows it at least failed one judge.
    The Turing test is a deceptively simple idea, but it's a very clever way of avoiding philosophical complications (and distractions) when evaluating AI. It's a pragmatic philosophical position: if it walks like a duck and quacks like a duck, then it's a duck. If we build a system that for all intents and purposes is indistinguishable from a human, then it is intelligent. We don't have to get bogged down with questions like "but does it really understand the meaning of things it's saying?", "Does it have a mind?", "Can it do abstractions and build mental models?" etc.
    Sure, but the Wikipedia description doesn't say it has to convince all judges, which is what you previously claimed.

    @Christian - So Penrose is responsible for all that New Age quantum magical thinking?

  11. #85


    Quote Originally Posted by CliffR
    Sure, but the Wikipedia description doesn't say it has to convince all judges, which is what you previously claimed.
    I didn't claim that it has to convince all judges. It seems like you missed the point I was making.

  12. #86


    Quote Originally Posted by CliffR
    Sure, but the Wikipedia description doesn't say it has to convince all judges, which is what you previously claimed.

    @Christian - So Penrose is responsible for all that New Age quantum magical thinking?
    I wouldn't say he's responsible for it. I believe he proved that a computer (following Turing's work) is incapable of certain types of mathematical reasoning, and suggested that quantum phenomena of some kind may have something to do with consciousness. Later he pointed out that there are structures in the brain (microtubules) on the same scale, and suggested consciousness may have something to do with wave function collapse.

    I'm sure some people have taken this, added 2 and 2 together and got 5, but you can make the same statement about popular-culture Strong AI proponents, who are equally unscientific; they just think they are less so because their faith is secular.

    So most people in popular culture subscribe to either:
    Proposition 1 - there is a soul, and consciousness is beyond scientific inquiry
    Proposition 2 - the human brain is modellable by a sufficiently sophisticated and advanced computer.
    As if those were the only positions.

    But I go with what Penrose has said:
    Proposition 3 - human brains are not really like computers, even neural nets or advanced ML systems. This has become clearer from recent research. We require a deeper understanding of the brain.
    Furthermore:
    Proposition 4 - in order to understand consciousness we need to understand more about physics, especially the relationship between quantum theory and the observable macroscopic universe, which is part of that deeper understanding.
    Most AI people recognise proposition 3, from what I have heard. Proposition 4 seems reasonable to me, although I can't recall Penrose demonstrating it to my satisfaction, but I'm happy to go with him on it.

    Prop 2 may be true, if a computer can be built that is able to take account of these elements, but that's speculative given today's science and technology.

  13. #87


    You said "Turing test isn't passed when the system fools one human or some humans", from which I inferred you meant it was necessary to fool 'all',

    You then went on to say "The concept is it should never give answers that are impossible for a normal human", which is not part of the definition I found on Wikipedia. I asked you for a source for your more stringent definition, and you pointed out an instance where it had failed to fool one human, which lent weight to my inference that you'd only be happy if it convinced all humans. I guess I misunderstood you. But I do get your larger point that it's a useful test insofar as it avoids having to answer those difficult philosophical questions you mentioned.

  14. #88


    Quote Originally Posted by Christian Miller
    I wouldn't say he's responsible for it. I believe he proved that a computer (following Turing's work) is incapable of certain types of mathematical reasoning, and suggested that quantum phenomena of some kind may have something to do with consciousness. Later he pointed out that there are structures in the brain (microtubules) on the same scale, and suggested consciousness may have something to do with wave function collapse.

    I'm sure some people have taken this, added 2 and 2 together and got 5, but you can make the same statement about popular-culture Strong AI proponents, who are equally unscientific; they just think they are less so because their faith is secular.
    Thanks for the explanation. I wasn't seriously blaming Penrose for the New Age stuff, but it's interesting to know some more details of his thinking. Perhaps it's worth a read after I get through those jazz books and my big book of stats. (That mathematical reasoning wouldn't be the Halting Problem and/or Goedel's theorem, would it?)

  15. #89


    Quote Originally Posted by CliffR
    You said "Turing test isn't passed when the system fools one human or some humans", from which I inferred you meant it was necessary to fool 'all',
    No, that's not what I meant. I meant that the point of the Turing test is not that if you find three judges who get fooled by a hard-coded system (very possible), you can run out and yell "Eureka, AI is born!". That would make the Turing test a trivial and meaningless proposition.
    The initially described setup for the test is neither perfect nor sufficiently detailed, but it's intended to capture a philosophical point.

  16. #90


    In hindsight I see it was foolish of me to imagine, when we were talking about the Turing Test, that we were talking about the test as described by Turing. :P

  17. #91


    Quote Originally Posted by CliffR
    In hindsight I see it was foolish of me to imagine, when we were talking about the Turing Test, that we were talking about the test as described by Turing.
    We are talking about the test described by Turing. But you are talking about the specifics of the test setup; I'm talking about the concept that the test is intended to capture, which is also all in the Wikipedia article that you linked. You should read it if you want to understand what the Turing test really is. You'll notice that the point of the test isn't really the number of judges, how many questions they ask, etc. (which you seem to be focusing on), but the philosophical debate about what "thinking" means for a computer. The article hardly talks about the test setup.

    In that sense, passing the Turing test isn't about getting around some specific test configuration (which was never formally defined), but building a system that is, for all intents and purposes, indistinguishable from a human (in a conversation). Such a system, Turing claims, is intelligent. Some philosophers argue there is more to 'thinking' than imitation. But engineers don't care.
    Last edited by Tal_175; 02-07-2023 at 10:26 AM.

  18. #92


    Here is another article that's maybe more accessible than the Wikipedia page:
    The Turing test: AI still hasn't passed the "imitation game" - Big Think

  19. #93


    OP comment:
    Thread quickly shot down by smug, smartass, irrelevant BS. This is just a (talented) horn player talking about how he works on his lines, which may have been of interest to us guitarists, who often have maybe "too many" options for our own good when looking at chords. That's all.

    High point: Jimmy blue note's posts (..) and the video Reg posted.

  20. #94


    Improvising, always about the room
    (unless it is Searle's Chinese Room!)

  21. #95


    Quote Originally Posted by CliffR
    @Christian, while I agree it's not intelligent, I think you have to agree ChatGPT has passed the Turing test.
    Really? I don't think I would agree with that. After a few minutes of chatting with ChatGPT it is pretty obvious that it is not human, and it will give nonsensical answers to questions.

    If you look at a single answer out of context it appears human but not in longer conversations.

  22. #96


    I find all these theory arguments funny. I know a little bit of theory and it hasn't helped my playing that much. But I've also transcribed a fair amount and that hasn't helped me that much either.

    At the end of the day, I think you need to become familiar with jazz sounds and idioms and by being familiar, I mean being able to hear and play the repertoire.

    I looked at the original video as just some set of things to try to help you do that.

  23. #97


    ^ You need applied theory, not only base theory; you have to develop your ear, your musicality, and your experience with the music; and you need to develop technical skills. No topic is expendable. Hypothetically increasing your didactic knowledge in one area while not doing anything to improve your operational grasp doesn't mean it's the topic's fault; it means you're doing it wrong.
    Last edited by Jimmy Smith; 02-08-2023 at 12:30 AM.

  24. #98


    Let's recap the message:

    "You don't want to be that guy that struggles with the changes."

    We've all been there. And you all know the solution: do the homework.

    - Why is this so hard? People are looking for shortcuts. We always are... but there are no shortcuts. (You can have the computer do the playing for you, and you can have the computer write your forum posts too. Pointless use of AI.)

    Assume a person who has spent most of his life listening to hip-hop and other forms of contemporary vocal pop of his generation, and then for some reason wants to play jazz... but the changes and the rhythm are strange to him, so he looks for shortcuts. You have met him: the wannabe soloist. He doesn't want to do chords, usually because he struggles with the changes. Consequently he struggles with improvisation too.

    This is really not a question about theory. It's about homework: learning the songs. A fundamental part of learning a song is to learn the changes. In the process of learning the changes one will naturally expand on theory in due time.

    What John Raymond didn't say: "You don't have to learn the changes because you are supposed to play by ear." He didn't say that, because it makes no sense. He makes the point that as a soloist you'll become stiff if you try to keep up with every chord in real time, and there's no need for it. Instead you play the key centres. For this purpose one needs to do the homework, especially since many standards are deceptive and will trick your ears until you've learned the changes.

  25. #99


    Quote Originally Posted by CliffR
    Thanks for the explanation. I wasn't seriously blaming Penrose for the New Age stuff, but it's interesting to know some more details of his thinking. Perhaps it's worth a read after I get through those jazz books and my big book of stats. (That mathematical reasoning wouldn't be the Halting Problem and/or Goedel's theorem, would it?)
    Goedel’s in there, but tbh it’s been years since I read The Emperor’s New Mind.

  26. #100


    Weird thread, but I want to add my $0.02... So yet again we seem to be discussing theory vs ears, or at the very least, why a combination of the two is important. Well Duh... seems so obvious that it really shouldn't warrant mentioning it, at least not here on a Jazz guitar forum anyway.

    I've said it before I think, that when I "freewheel" it based on tonal centres, I don't sound like Sonny Rollins or Dexter Gordon (or guitar equivalent). When I try to spell out every change, I don't sound like Sonny Stitt (or guitar equivalent). Playing on the TC is unsatisfying for me (not good enough to be consistently interesting), and nailing the changes sounds too forced or contrived or something (not good enough to be consistently interesting).

    That said, I enjoy the challenge of making the changes more often than not, and trying to make it sound uncontrived. Noodling on key centres is fine, and works with players who have a ton of good language and a ton of good taste (but not too many guys around that can do that at a high level, in my opinion). In other words, I'd rather listen to intermediate Jazz soloing that is going for the changes, than intermediate Jazz soloing that "skates" over them. But that's just my taste...

    What I've learned from reading these kinds of threads though, is that I really should try to not waste my time reading the thoughts of other intermediate players (obviously not everyone here) where they seem to be insisting that the way they choose to do this Jazz caper is the right way. If we haven't figured out yet that there are many different pathways up the Jazz mountain, then we still have a lot to learn. I reckon....