-
Originally Posted by omphalopsychos
// A hypothetical wrapper (function name and parameters assumed) so the fragment runs as posted:
function nextNote(chordTones, previousNote) {
  // 70% chance to use a chord tone
  if (Math.random() < 0.7) {
    const note = chordTones[Math.floor(Math.random() * chordTones.length)];
    console.log('Using chord tone:', note);
    return [note];
  }
  // 30% chance for an approach note a half step above or below the previous note
  const approach = previousNote + (Math.random() < 0.5 ? 1 : -1);
  console.log('Using approach note:', approach);
  return [approach];
}
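For context, a hypothetical call site (the chord-tone array and MIDI note numbers below are illustrative assumptions, not from the original post):

// Cmaj7 chord tones as MIDI note numbers: C, E, G, B
const tones = [60, 64, 67, 71];
console.log(nextNote(tones, 62)); // previous note: D (MIDI 62)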
-
01-31-2025 08:26 PM
-
If it were that simple, I'd be a great soloist!
-
Originally Posted by CliffR
-
Originally Posted by Cunamara
Last I checked DS is (was) already set to be banned in Italy...
-
DeepSeek was asked to come up with one truly novel insight about humans:
-
Its reasoning steps could also be seen:
-
I don't think any of these is novel. I was already familiar with the first one (its final answer). I looked up some of the others; for example, the insight about laughter appears in a paper from 1998.
After coming up with an insight, it sort of admits that it's not novel and that it should look deeper, etc. It would have been really, really scary if it weren't just regurgitating "novel insights" from the internet but actually inventing them.
-
Originally Posted by Tal_175
Since some people here actually seem to understand the models behind these systems: to what extent do they display emergent properties or behaviour? That is, do they do things you wouldn't have expected, but that aren't obvious(ly the result of) errors?
-
I keep wondering about the overall cost-benefit ratio of generating AI output versus putting this output into perspective.
-
I also keep wondering what will become of discussions (or, indeed, places) like this, given a potentially increasing need to puzzle over "AI-in-disguise" contributions by individual posters.
-
Originally Posted by Litterick
Now THAT would have been funny!
-
Originally Posted by RJVB
Emergent Abilities in Large Language Models: An Explainer | Center for Security and Emerging Technology
Short answer: LLMs rely on the notion of 'emergence' in general, but there's a more nuanced definition where 'emergence' means a property exhibited by a larger model that was not present in a smaller model. Even that metric is controversial among researchers.
-
Originally Posted by palindrome
Then there's ChatGPT being used for nuclear security:
cnbc.com: OpenAI partners with U.S. National Laboratories on scientific research, nuclear weapons security. OpenAI on Thursday said it's signed a partnership allowing the U.S. National Laboratories to use its latest line of AI models.
Meanwhile, McDonald's cancelled their scheme to use AI to take drive-thru orders because it was too unreliable.
-
Originally Posted by CliffR
"Agents," huh? Comintern of AIs. Botintern.
Just kidding. (I understand what you're saying.)
-
Originally Posted by RJVB
Originally Posted by CliffR
-
Originally Posted by omphalopsychos
The people saying it's a distillation are of course OpenAI folks, or those briefed by them, who have a vested interest in belittling DeepSeek's achievements. As somebody else has pointed out, DeepSeek also displays its intermediate 'reasoning' steps, which ChatGPT's O models do not, so the claim seems improbable.
-
Originally Posted by CliffR
-
Seems I recall lots of people claiming the Internet would not change things all that much.
LLMs are going to be a big change. Are they there yet? In some areas, certainly. In others, not so much. I inquired about a 2-button looper that draws 100 mA or less; I got a good list of loopers, but nothing within the parameters of my question. So we point to current imperfections as if that were a reason to dismiss the technology. Hardly. How we tap information and how that information is applied are going to change in big ways.
Of course, there are more complex questions. Would you enjoy listening to jazz that was intuitive, interesting, and creative if you knew it was a machine? Not sure... I mean, other than to steal its licks.
-
BusyBeaver(5) took 41 years to figure out. No one will ever come close to understanding what a network of billions of nodes, with billions of parameters and weights, does after billions of cycles of updates. To what extent will it be able to do inference? Will it become conscious? What does that even mean?
It's impossible for a human to comprehend even much simpler systems. You can write a few lines of code that generate random-looking numbers, yet it would take Nobel-prize-level mathematics to prove whether the numbers the algorithm generates satisfy the properties of random sequences. And that's a relatively simple, linear piece of logic. As the branching factor and state space of a system increase, it very quickly becomes intractable and unintuitive for humans to conceptualize.
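To make that concrete, here's a minimal sketch of such a "few lines of code", a linear congruential generator in the same JavaScript as the snippet earlier in the thread; the multiplier and increment are the well-known Numerical Recipes constants, and the seed is an arbitrary illustrative choice:

// Linear congruential generator: state = (a * state + c) mod 2^32
function makeLCG(seed) {
  let state = seed >>> 0; // force unsigned 32-bit
  return function next() {
    state = (Math.imul(1664525, state) + 1013904223) >>> 0;
    return state / 4294967296; // scale to [0, 1)
  };
}

const rand = makeLCG(42);
console.log(rand(), rand(), rand()); // looks random; proving how random is the hard part

Writing it takes a minute; rigorously characterizing the statistical quality of its output stream is the part that keeps mathematicians busy.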
-
Originally Posted by Tal_175
-
Ars Technica: AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
Aaron clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That's likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
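"Markov babble" itself is easy to produce. A minimal sketch of the idea (not Nepenthes' actual code): build a word-level Markov chain from some seed text, then walk it to emit statistically plausible but meaningless word sequences:

// Build a word-level Markov chain: each word maps to its observed successors.
function buildChain(text) {
  const words = text.split(/\s+/);
  const chain = new Map();
  for (let i = 0; i < words.length - 1; i++) {
    if (!chain.has(words[i])) chain.set(words[i], []);
    chain.get(words[i]).push(words[i + 1]);
  }
  return chain;
}

// Walk the chain from a start word, picking a random successor each step.
function babble(chain, start, length) {
  let word = start;
  const out = [word];
  for (let i = 0; i < length; i++) {
    const nexts = chain.get(word);
    if (!nexts) break; // dead end: word never seen with a successor
    word = nexts[Math.floor(Math.random() * nexts.length)];
    out.push(word);
  }
  return out.join(' ');
}

const chain = buildChain('the cat sat on the mat and the dog sat on the log');
console.log(babble(chain, 'the', 12));

Scale the seed corpus up and the output stays locally fluent but globally meaningless, which is exactly what makes it useful for polluting a scraper's training data.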
-
One of the other shoes drops. This thing may have many feet...
DeepSeek iOS app sends data unencrypted to ByteDance-controlled servers | Ars Technica
-
Originally Posted by palindrome
Originally Posted by Cunamara
I must say that, as a private citizen, I'm not certain whether I should be more concerned about the Chinese government gaining access to "sensitive" data of mine than about, say, the US government or my own.