  • But then it’s the tools to make the AI that are open source, not the model itself.

    I think that we can’t have a useful discussion on this if we don’t distinguish between the source code of the training framework and the “source code” of the model itself, which is the training data set. E.g., Mistral Nemo can’t be considered open source, because there is no Mistral Nemo without the training data set.

    It’s like with your Doom example - the Doom engine is open source, but Doom itself isn’t. Unfortunately, the analogy falls apart a bit here, because there is no logic in the art assets of Doom, whereas there is plenty of logic in the dataset for Mistral - enough that the devs said they don’t want to disclose it for fear of competition.

    This data set logic - incredibly valuable and important for the behavior of the AI, as confirmed by the devs - is why the model is not open source, even though the training framework might be.

    Edit:

    Another aspect is the spirit of open-source. One of the benefits of OSS is you can study the source code to determine whether the software is in compliance with various regulations - you can audit that software.

    How can we audit Mistral Nemo? How can we confirm that it doesn’t utilize copyrighted material to provide its answers?


  • You’re trying to change the definition of open source for AI models and your argument is that they’re magic so different rules should apply.

    No, they’re not fundamentally different from other software. Not by that much.

    The training data is the source of knowledge for the AI model. The tools to train the model are the compiler for that AI model. What makes an AI model different from another is both the source of knowledge and the compiler of that knowledge.
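
    To make that analogy concrete, here’s a toy sketch (everything below is hypothetical and purely illustrative, not Mistral’s actual pipeline): the training framework plays the role of the compiler, the dataset plays the role of the source code, and the published weights are the resulting binary.

    ```python
    # Hypothetical "compilation" of knowledge: framework code + data -> weights.
    from collections import Counter

    def train(dataset: list[str]) -> Counter:
        """The 'compiler': turns training data into model weights (bigram counts)."""
        weights: Counter = Counter()
        for text in dataset:
            words = text.split()
            weights.update(zip(words, words[1:]))  # no dataset, no model
        return weights

    # Publishing only the output of train() is shipping the compiled binary;
    # without the dataset, nobody else can reproduce the build.
    weights = train(["the cat sat", "the cat ran"])
    print(weights.most_common(1))  # [(('the', 'cat'), 2)]
    ```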

    AFAIK, only one of those things is open source for Mistral - the compiler of knowledge.

    You can make an argument that tools to make Mistral models are open source. You cannot make an argument that the model Mistral Nemo is open source, as what makes it specifically that model is the compiler and the training data used, and one of those is unavailable.

    Therefore, I can agree on the social network analogy if we’re talking about whether the tools to make Mistral models are open-source. I cannot agree if we’re talking about the models themselves, which is what everyone’s interested in when talking about AI.


  • That’s like saying the source code of a binary is a bunch of hexadecimal numbers. You can use a hex editor to look at the “source” of every binary, but it’s not human-readable.

    Yes, the model can be published without the dataset - that makes it, by definition, freeware (free to distribute). It can even be free for commercial use. That doesn’t make it open source.

    At best, the tools to generate a model may be open source, but, by definition, the model itself can never be considered open source unless the training data and the tools are both open source.



  • https://en.m.wikipedia.org/wiki/Open-source_software

    Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose.

    From Mistral’s FAQ:

    We do not communicate on our training datasets. We keep proprietary some intermediary assets (code and resources) required to produce both the Open-Source models and the Optimized models. Among others, this involves the training logic for models, and the datasets used in training.

    https://huggingface.co/mistralai/Mistral-7B-v0.1/discussions/8

    Unfortunately we’re unable to share details about the training and the datasets (extracted from the open Web) due to the highly competitive nature of the field.

    The training data set is a vital part of the source code because without it, the rest of it is useless. The model is the compiled binary, the software itself.

    If you can’t share part of your source code due to the “highly competitive nature of the field” (or whatever other reason), your software is not open source.

    I cannot look at Mistral’s source and see that, oh yes, it behaves this way because it was trained on this piece of data in particular - because I was not given access to this data.

    I cannot build Mistral from scratch, because I was not given a vital piece of the recipe.

    I cannot fork Mistral and create a competitor from it, because the devs specifically said they’re not providing the source because they don’t want me to.

    You can keep claiming that releasing the binary makes it open source, but that’s not going to make it correct.





  • I don’t think source-available licenses have any chance of outcompeting open source, or at least I hope developers won’t let them.

    Open source thrives on contributions. The moment you restrict what I can do with the software I’m supposed to contribute to is the moment I ask myself: “Am I being asked to work for free, solely for the benefit of someone else?”

    The incentive to contribute completely disappears (at least to me) when I’m asked to do it for a project which “belongs to someone in particular”.




  • It’s no more a risk than throwing more developers at it when they’re not needed.

    “Too many devs” can be, and often is, a significant bottleneck in and of itself. The codebase may simply not be big enough to fit more.

    Besides, I still don’t see what all those additional engineers would actually be doing. “Responding to incidents” presupposes a large number of incidents. In other words, the assumption is that the application will be buggy or insecure enough that 30 engineers will not be enough to apply the duct tape. I stand by the claim that an application adhering to modern standards and practices will not have as many bugs or security breaches, and therefore 30 engineers sounds like a completely reasonable number.


  • I have no idea why you’re even bringing up OT. We’re not talking about PLCs or scientific equipment here, we’re talking about glorified web apps.

    Web apps that need to be secure and highly available, for sure, but web apps all the same. It’s mainly just a messenger app, after all.

    So cool that you got to work with teams of devs that were able to do that.

    Just because, as I assume from this quote, you weren’t able to work with teams like that does not mean that no such teams exist, or that Telegram doesn’t operate that way. Following modern practices, complex projects can be successfully delivered by relatively small teams. Yes, a lot of projects are not run that way, but that just makes it all the more a valid point of pride for Telegram.



  • Even if you have a full-time role for continuously auditing the infrastructure (which I would say is the responsibility of either a security officer or a devops engineer), you still haven’t shown how that needs a 15-person team. An otherwise-untouched infrastructure should just keep on working (barring sabotage), unless someone really messed something up.

    If CI builds or deployments keep randomly failing at your place, that’s not an inescapable reality, that’s just a symptom of bad software development practices.




  • If you have separate developers for writing unit tests, instead of every developer writing them as they code, something is already very wrong in your project.

    Deployment and infra should also mostly be set-up-and-forget, by which I mean general devops, like setting up CI and infrastructure-as-code. Using modern practices, which lean towards continuous deployment, releasing a feature should just be a matter of toggling a feature flag, as sketched below. Any dev can do this.
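
    A minimal sketch of what flag-based releasing can look like (the flag name and checkout flows below are made up for illustration, not Telegram’s code):

    ```python
    # Hypothetical feature-flag release: the new code path is already deployed,
    # and "releasing" it is just flipping a flag in config, with no redeploy.
    import json

    # Hardcoded for the sketch; in practice this would come from a config service.
    FLAGS = json.loads('{"new_checkout": false}')

    def legacy_checkout(cart: list[str]) -> str:
        return f"legacy checkout: {len(cart)} item(s)"

    def new_checkout(cart: list[str]) -> str:
        return f"new checkout: {len(cart)} item(s)"

    def checkout(cart: list[str]) -> str:
        # Any dev can flip the flag; the rollout itself needs no extra engineers.
        if FLAGS.get("new_checkout", False):
            return new_checkout(cart)
        return legacy_checkout(cart)

    print(checkout(["book"]))  # "legacy checkout: 1 item(s)" until the flag flips
    ```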

    Finally, if your developers are ‘code monkeys’, you’re not ready for a project of this scale.


  • There are good reasons to dislike Telegram, but having “just” 30 engineers is not one of them. Software development is not a chair factory: adding more people does not mean more or better-quality work, just as nine women can’t deliver a baby in one month.

    Edit:

    Galperin told TechCrunch. “‘Thirty engineers’ means that there is no one to fight legal requests, there is no infrastructure for dealing with abuse and content moderation issues.”

    I don’t think fighting legal requests and content moderation is an engineer’s job. However, the article can’t seem to get it straight whether it’s 30 engineers, or 30 staff overall. In the latter case, the context changes dramatically and I don’t have the knowledge to tell if 30 staff is enough to deal with legal issues. I would imagine that Telegram would need a small army of lawyers and content moderators for that. Again, not engineers, though.