The Jazz Guitar Chord Dictionary
  1. #76
    djg

    Quote Originally Posted by omphalopsychos
    No you don't "need" AI for it. But it's a test for AI to do it without guardrails/constraints.
    silicon charlie parker

    function nextNotes(chordTones, previousNote) {
      // 70% chance to use chord tones
      if (Math.random() < 0.7) {
        const note = chordTones[Math.floor(Math.random() * chordTones.length)];
        console.log('Using chord tone:', note);
        return [note];
      }

      // 30% chance for approach notes: a half step above or below the last note
      const approach = previousNote + (Math.random() < 0.5 ? 1 : -1);
      console.log('Using approach note:', approach);
      return [approach];
    }

  3. #77


    if it were that simple i'd be a great soloist!

  4. #78
    djg

    Quote Originally Posted by CliffR
    I've been using the China-hosted version online. So I did see the censorship issues that you did not.
    yes. i saw it too using the hosted version. but i tricked it

    [attached image: Using DeepSeek for Guitar Pedal Power Tech Question - taiwan.bmp]

  5. #79


    Quote Originally Posted by Cunamara
    If you update to IOS or iPadOS 18.3 or MacOS 15.3 (I think), Apple Intelligence is on by default (assuming your device can run it- my iPhone SE is too old, for example); previously you had to opt in.
    Apparently my SE2 is too old too - or AAI wasn't in iOS 17 yet.

    Last I checked DS is (was) already set to be banned in Italy...

  6. #80


    DeepSeek was asked to come up with one truly novel insight about humans:



  7. #81


    Its reasoning steps could also be seen:




  8. #82





  9. #83


    I don't think any one of these is novel. I was already familiar with the first one (its final answer). I looked up some of the others. For example, the insight about laughter is in a paper from 1998:

    After coming up with one insight it sort of admits that it's not novel and it should look deeper etc. It would have been really, really scary if it wasn't just regurgitating "novel insights" from the internet but actually inventing them.

  10. #84


    Quote Originally Posted by Tal_175
    It would have been really, really scary if it wasn't just regurgitating "novel insights" from the internet but actually inventing them.
    Sure, but at least it would have been appropriate to call it intelligent (caveat emptor: I haven't read the prose to see if it isn't just the brouhaha you'd hear a Ted talking about).

    Since some people here actually seem to understand the models behind these systems: to what extent do they display emergent properties or behaviour? That is, do things you wouldn't have expected but aren't obvious(ly the result of) errors?

  11. #85


    I keep wondering about the overall cost-benefit ratio of generating AI output versus putting this output into perspective.

  12. #86


    I also keep wondering what will become of discussions (or, indeed, places) like this, given a potentially increasing need to puzzle over "AI-in-disguise" contributions by individual posters.

  13. #87



  14. #88


    Quote Originally Posted by Litterick
    Right. Actually, the other day, when RJVB posted ChatGPT's mention of a Norwegian saxophonist "Niels-Nygaard Nilsen" and I questioned the existence of that person, AI should have immediately picked this up crawling the web, opened an account here by that name and posted a message protesting that he (NNN) was indeed a real person.

    Now THAT would have been funny!

  15. #89


    Quote Originally Posted by RJVB
    Since some people here actually seem to understand the models behind these systems: to what extent do they display emergent properties or behaviour? That is, do things you wouldn't have expected but aren't obvious(ly the result of) errors?
    I certainly don't know enough to comment. (If we do have an expert on the technicalities, I'd appreciate an answer to my question about distillation without access to source parameters and topology!) But I did find this article about emergence interesting and readable

    Emergent Abilities in Large Language Models: An Explainer | Center for Security and Emerging Technology

    Short answer: discussions of LLMs lean on the notion of 'emergence', but there's a more nuanced definition, where 'emergence' is a property exhibited by a larger model that was not present in a smaller model. Even that metric is controversial among researchers.
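That scale-based definition from the explainer can be sketched in a few lines. This is a toy illustration only, with invented data and an invented threshold; real studies also argue over whether the "jump" is an artifact of discontinuous metrics like exact-match accuracy:

```javascript
// Toy check for the quoted definition of emergence: an ability that sits at
// chance level in smaller models but is clearly present in larger ones.
// points: [{ params, accuracy }] sorted by parameter count. All data invented.
function isEmergent(points, chance, margin = 0.1) {
  const half = Math.floor(points.length / 2);
  const small = points.slice(0, half);
  const large = points.slice(half);
  const atChance = xs => xs.every(p => Math.abs(p.accuracy - chance) <= margin);
  const aboveChance = xs => xs.every(p => p.accuracy > chance + margin);
  return atChance(small) && aboveChance(large);
}

// Invented curve with a sharp jump around 10B parameters (chance = 0.25):
const curve = [
  { params: 1e8,  accuracy: 0.26 },
  { params: 1e9,  accuracy: 0.24 },
  { params: 1e10, accuracy: 0.70 },
  { params: 1e11, accuracy: 0.85 },
];
console.log(isEmergent(curve, 0.25)); // true for this invented data
```

A smoothly rising curve would fail this test, which is exactly why researchers disagree: whether an ability looks "emergent" can depend on how you measure it.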

  16. #90


    Quote Originally Posted by palindrome
    Right. Actually, the other day, when RJVB posted ChatGPT's mention of a Norwegian saxophonist "Niels-Nygaard Nilsen" and I questioned the existence of that person, AI should have immediately picked this up crawling the web, opened an account here by that name and posted a message protesting that he (NNN) was indeed a real person.

    Now THAT would have been funny!
    I seem to remember reading very recently that OpenAI have produced 'AI agents' that can perform that kind of task, ie operate programs on your computer for you.

    Then there's ChatGPT being used for nuclear security:

    cnbc.com: OpenAI partners with U.S. National Laboratories on scientific research, nuclear weapons security. "OpenAI on Thursday said it's signed a partnership allowing the U.S. National Laboratories to use its latest line of AI models."

    Meanwhile, McDonald's cancelled their scheme to use AI to take drive-thru orders because it was too unreliable.

  17. #91


    Quote Originally Posted by CliffR
    I seem to remember reading very recently that OpenAI have produced 'AI agents' that can perform that kind of task, ie operate programs on your computer for you.
    I was thinking more in terms of AIs conspiring to cover up each others' hallucinations.

    "Agents," huh? Comintern of AIs. Botintern.

    Just kidding. (I understand what you're saying.)

  18. #92


    Quote Originally Posted by RJVB
    Since some people here actually seem to understand the models behind these systems: to what extent do they display emergent properties or behaviour? That is, do things you wouldn't have expected but aren't obvious(ly the result of) errors?
    That’s exactly what makes this generation of AI different. When a model is big enough, is trained on enough data, and runs through enough iterations, it starts developing abilities that weren’t explicitly programmed in. It’s not randomness or error; it’s just that at a certain scale the model generalizes patterns so well that new capabilities emerge, like reasoning through problems or translating between languages it was never directly trained on.

    Quote Originally Posted by CliffR
    If we do have an expert on the technicalities, I'd appreciate an answer to my question about distillation without access to source parameters and topology!
    Think I responded to this above. Distillation means training a smaller or more efficient model using the full outputs of a bigger model, including its probability scores (logits), not just the final text. DeepSeek couldn’t have done that, because it didn’t have access to OpenAI’s weights or numerical model outputs. When people say it’s a distillation, they probably just mean it was trained on a ton of GPT-generated text: they passed a bunch of prompts into GPT, got the responses, and used those as training data. I don't know enough to say whether that's more than pure speculation, and I'm not sure how anyone would prove it.
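To make the logits point concrete, here's a toy sketch of the generic distillation objective: the student is trained to match the teacher's whole probability distribution, which is precisely what you can't do without numerical access to the teacher. This is the textbook recipe, not DeepSeek's or OpenAI's actual pipeline, and all names are invented:

```javascript
// Softmax with temperature T: higher T softens the distribution,
// exposing more of the teacher's "dark knowledge" about wrong answers.
function softmax(logits, T = 1.0) {
  const max = Math.max(...logits);                    // for numerical stability
  const exps = logits.map(z => Math.exp((z - max) / T));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Per-token distillation loss: KL(teacher || student) over softened logits.
// Requires the teacher's raw logits, not just its sampled text.
function distillLoss(teacherLogits, studentLogits, T = 2.0) {
  const p = softmax(teacherLogits, T);
  const q = softmax(studentLogits, T);
  return p.reduce((acc, pi, i) => acc + (pi > 0 ? pi * Math.log(pi / q[i]) : 0), 0);
}

const teacher = [3.0, 1.0, 0.2];
console.log(distillLoss(teacher, teacher));          // 0: identical logits, nothing to learn
console.log(distillLoss(teacher, [1.0, 3.0, 0.2]));  // positive: student disagrees
```

Training on GPT-generated text, by contrast, only ever sees the sampled tokens (a one-hot target per position), which is why the two things shouldn't be conflated.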

    Quote Originally Posted by CliffR
    I seem to remember reading very recently that OpenAI have produced 'AI agents' that can perform that kind of task, ie operate programs on your computer for you.
    There are open source agents available if you want to test one out. Here's one: codename goose. You can plug in your AI provider to power the back end. I don't know if it supports DeepSeek, but I remember hearing it's on the roadmap.

  19. #93


    Quote Originally Posted by omphalopsychos
    Think I responded to this above. Distillation means training a smaller or more efficient model using the full outputs of a bigger model, including its probability scores (logits), not just the final text. DeepSeek couldn’t have done that, because it didn’t have access to OpenAI’s weights or numerical model outputs. When people say it’s a distillation, they probably just mean it was trained on a ton of GPT-generated text: they passed a bunch of prompts into GPT, got the responses, and used those as training data. I don't know enough to say whether that's more than pure speculation, and I'm not sure how anyone would prove it.
    I see you did, and I went so far as to 'like' your comment, so I guess my bad for not paying sufficient attention. The people saying it's a distillation are of course OpenAI folks, or those briefed by them, who have a vested interest in belittling DeepSeek's achievements. As somebody else has pointed out, DeepSeek also displays its intermediate 'reasoning' steps, which ChatGPT's o-series models do not, so the claim seems improbable.

  20. #94


    Quote Originally Posted by CliffR
    I seem to remember reading very recently that OpenAI have produced 'AI agents' that can perform that kind of task, ie operate programs on your computer for you.

    Then there's ChatGPT being used for nuclear security:

    cnbc.com: OpenAI partners with U.S. National Laboratories on scientific research, nuclear weapons security. "OpenAI on Thursday said it's signed a partnership allowing the U.S. National Laboratories to use its latest line of AI models."

    Meanwhile, McDonald's cancelled their scheme to use AI to take drive-thru orders because it was too unreliable.
    how about a nice game of chess?

  21. #95


    Seems I recall lots of people claiming the Internet would not change things all that much.

    LLMs are going to be a big change. Are they there yet? In some areas, certainly. In others, not so much. I inquired about a 2-button looper that draws 100mA or less. Good list of loopers, but nothing within the parameters of my question. So we point to current imperfections as if that were a reason to dismiss the technology. Hardly. How we tap information and how that information is applied are going to change in big ways.

    Of course, there are more complex questions. Would you enjoy listening to jazz that was intuitive, interesting, and creative if you knew it was a machine? Not sure... I mean, other than to steal its licks.

  22. #96


    BusyBeaver(5) took 41 years to figure out. No one will ever come close to understanding what a network of billions of nodes with billions of parameters and billions of weights does after billions of cycles of updates. To what extent will it be able to do inference? Will it become conscious? What does that even mean?

    It's impossible for a human to comprehend even much simpler systems. You can write a few lines of code that generate random-looking numbers. It'd take Fields Medal-level mathematics to prove that the numbers generated by the algorithm don't satisfy all the properties of random sequences. And that would be relatively simple, linear logic. As the branching factor and state space of a system increase, it very quickly becomes intractable and unintuitive for humans to conceptualize.
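For a concrete case of "a few lines that generate random-looking numbers", here's a classic linear congruential generator (the constants are the widely published Numerical Recipes values). It's completely deterministic, yet characterizing the statistical properties of its output stream rigorously is genuinely hard work:

```javascript
// Linear congruential generator: state = (a * state + c) mod 2^32.
// Three lines of update logic, but a nontrivial object of mathematical study.
function makeLCG(seed) {
  let state = seed >>> 0; // force to unsigned 32-bit
  return function next() {
    // Math.imul gives correct 32-bit integer multiplication in JS
    state = (Math.imul(1664525, state) + 1013904223) >>> 0;
    return state / 4294967296; // scale to [0, 1)
  };
}

const rng = makeLCG(42);
console.log([rng(), rng(), rng()]); // looks random, fully reproducible from the seed
```

The punchline is the gap between how short the code is and how hard it is to reason about its behavior, which is the same gap, vastly magnified, with billion-parameter networks.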

  23. #97


    Quote Originally Posted by Tal_175
    It would have been really, really scary if it wasn't just regurgitating "novel insights" from the internet but actually inventing them.
    What's been bugging me ever since you posted these samples, aside from the scariness: it seems to me that we have a serious credit-where-credit-is-due problem here. Not to be taken lightly, as far as I am concerned.

  24. #98


    Ars Technica: AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt


    Aaron clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That's likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.

  25. #99


    One of the other shoes drops. This thing may have many feet...

    DeepSeek iOS app sends data unencrypted to ByteDance-controlled servers - Ars Technica

  26. #100


    Quote Originally Posted by palindrome
    Now THAT would have been funny!
    Now extrapolate that just a little and imagine how many dogs could have gotten eaten...

    Quote Originally Posted by Cunamara
    One of the other shoes drops. This thing may have many feet...
    That's just data entered into the app, right?

    I must say that as a private citizen I'm not certain if I should be more concerned about the Chinese government gaining access to "sensitive" data of mine as opposed to, say, the US or my own government.