



  • I feel that “stupid” is too easy an answer, especially when the stupidity was manufactured, or at the very least the cure was withheld. There’s a deeper part of human nature at play. Greed is close, but it’s more nuanced; it’s the dynamic behind the “tragedy of the commons”: against the tide, even those who would act righteously in their own lives would be replaced and utterly destroyed by the clawing hands of everyone else.



  • I made my level of understanding pretty clear at the start. You say it’s not open source; most say it is, and they explained why. When I checked, all their points were true, and I tried to understand as best I could. The bottom line is that the disagreement comes down to this: you say the training data and the weights together are an inseparable part of the whole, and if any part of that is not open, then the project as a whole is not open. I don’t see how that tracks when the weights are open, and both they and the training data can be removed and swapped for something else. But I’ve come to believe the response would just boil down to “you can’t separate it.” There’s really nowhere else to go at this point.


  • The LLM is a machine that, simplified down, takes two inputs: a data set and weight variables. These two inputs are not the focus of the software; as long as the structure is valid, the machine will give an output. The input is not the machine, and the machine’s source code is open source. The machine IS what is revolutionary about this LLM. It’s not being praised because its weights are fine-tuned, and it didn’t sink Nvidia’s stock price by 700 billion because it has extra-special training data. It’s special because of its optimizations, and its novel method of using two halves to bounce ideas back and forth and to evaluate its answers. It’s the methodology of its function. And that is given to you, open, as source code.
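    To illustrate the separation (a toy sketch; every name below is made up, this is not DeepSeek’s actual code):

```python
# Toy sketch: the class below is the "machine"; the weights and the
# data set are its two interchangeable inputs. All names hypothetical.
import numpy as np

class TinyModel:
    """Architecture only -- this code is the open 'machine'."""

    def __init__(self, weights: np.ndarray):
        self.weights = weights  # swappable input #1: the weight variables

    def forward(self, tokens: np.ndarray) -> np.ndarray:
        # Stand-in for the real transformer math.
        return tokens @ self.weights

# Swappable input #2: the (tokenized) data. Random arrays stand in here
# so the sketch runs without any real weight or data files.
weights = np.random.rand(8, 8)   # stands in for a released weight file
dataset = np.random.rand(4, 8)   # stands in for anyone's tokenized data

model = TinyModel(weights)
print(model.forward(dataset).shape)  # the machine runs either way: (4, 8)
```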


  • So like an emulator. Or at least the PS2 ones, where you had to dump the BIOS from your own machine (or snatch someone else’s).

    But that’s my point! The data set is interchangeable. So it’s not what makes DeepSeek THE DeepSeek LLM. But without a data set it would be functionally useless, and there would be no way to satisfy your requirement for data-set openness. You said there is a line in the sand somewhere where you might be satisfied with some amount of the data, but your argument states that granularity must be absolute in order to justify calling it open source. You demand an impossible, unnecessary standard that is not applied to other open source projects.


  • Are you talking source as in source code? Or source as in the data the LLM uses? Because the source code is available. The weights are not the output; they are a function. The LLM response is the output.

    But the weights can be changed, and the input data can be changed. And if they are… it’s still DeepSeek. And if you can change them, they are not what makes DeepSeek, DeepSeek.

    I use boot.dev, and it has an AI. They changed the data set to only cover relevant topics, changed its weights, and gave it tone instruction. And while it plays a character, it’s still ChatGPT.



  • Source build dependency… so you don’t have a problem with the LLM at all! You have a problem with the data collection process, or the pre-training! So an emulator can’t be open source if the methodology the developers used to discover how to read Nintendo ROMs was not disclosed? Or which games were dissected in order to reverse-engineer that info? I don’t consider that a prerequisite to calling an emulator open.

    So if I, say, remove the data set from DeepSeek, would what remains be considered open source by you?


  • Would it? I’m not sure how that would be a better analogy. The argument is that it’s nearly all open… but it still doesn’t count because the data set, before it’s manipulated by the LLM (in my analogy, the Nintendo ROM the emulator is running), is not open. A data set that, if provided, would be so massive it would defeat the point of tokenization and be completely unusable by literally ANYONE without multiple data centers redlining for WEEKS. Under that standard of scrutiny, not only could there never be an LLM that would qualify, but projects that are considered open source today would not be, which makes the distinction meaningless.

    An emulator without a ROM mounted is still an emulator, even if not usable.




  • So it’s not how it tokenized the data you’re looking for, it’s not how the weights are applied you want, and it’s not how it functions to structure the output you want, because these are all open… it’s the entirety of the bulk unfiltered data you want. Which DeepSeek was provided from other AI projects for initial training, which can be changed to fit user needs, and which doesn’t touch at all on how this LLM is different from other LLMs. As I understand it, this would be like saying an open source game emulator can’t be open source because Nintendo’s games are proprietary. I don’t consider the training data to be the LLM. I consider the system that manipulates that data to be the LLM. Is that where the difference in opinion is?


  • So I am learning as much as I can here, bear with me. It accepts tokenized data and structures it via a transformer, as a JSON file or some such. The weights are a separate binary file that is used to, well, modify the tokenized data to generate outcomes. As long as you used a compatible tokenization structure and weights structure, you could create a new training set, but that can be done with any LLM. You can’t pull the original data back out of it, just as you can’t make wheat from dissecting bread. But they provide the tools to set your own data, and the way the LLM handles that data is novel, born of being hamstrung by US sanctions. “Necessity is the mother of invention” and all that. Running a comparable AI on inferior hardware and a much smaller budget is what makes this one stand out, not the training data.


    1. Well, that’s the argument.

    2. AI condensing AI is what’s being discussed here. From my understanding, DeepSeek is two parts: they start with known data sets already in use, and the two parts bounce ideas off each other and calculate fitness. So degrading recursive results is being directly tackled here. Training sets are tokenized gathered data; the gathering of those data sets is a rights issue, but that is not part of the conversation here.

    3. It could be that I don’t have a complete grasp of what open source is, but from looking into it, all the boxes are checked. The data set is not what is different; it’s just data. DeepSeek says its weights are available and open to be changed (https://api-docs.deepseek.com/news/news250120), but the processes that handle that data at unprecedented efficiency are what make it special. A rough sketch of how those pieces separate in practice is below.
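    For concreteness, this is roughly what “the weights are a separate, swappable artifact” looks like with the Hugging Face transformers API. A hedged sketch: the model ID is my assumption for illustration, not something taken from the linked announcement.

```python
# Rough sketch: the tokenizer, the architecture code, and the weight
# files are separate artifacts. The model ID is an assumption; point it
# at whatever checkpoint you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-7b-base"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)     # tokenization scheme
model = AutoModelForCausalLM.from_pretrained(model_id)  # weights (binaries)

# The open code consumes the weights; fine-tuning on your own data would
# change the weight values, not the code that runs them.
inputs = tokenizer("Necessity is the mother of invention", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```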


  • WraithGear@lemmy.world to memes@lemmy.world · Deepseek broke the kayfabe

    This is not the experience I get on my home-hosted version of deepseek 12. I asked it directly about the 1989 Tiananmen Square protests and massacre, and it was very open about the info being missing, and considering the circumstances, that tracks. What was your prompt, and what resource level are you using?
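    For comparison, this is roughly how a home-hosted model can be queried; a minimal sketch using the Ollama Python client, where the model tag is a guess (match it to whatever build you actually pulled):

```python
# Minimal sketch: querying a locally hosted model through the Ollama
# Python client. The model tag is an assumption -- substitute your own.
import ollama

response = ollama.chat(
    model="deepseek-r1:14b",  # hypothetical tag, not necessarily "deepseek 12"
    messages=[{
        "role": "user",
        "content": "What happened at Tiananmen Square in 1989?",
    }],
)
print(response["message"]["content"])
```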