• 5 Posts
  • 88 Comments
Joined 11 months ago
Cake day: August 10th, 2023

  • Why is SSPL not considered FOSS while other restrictive licenses like AGPL and GPL v3 are?

    So I have an answer for this. Basically, all of the entities listed that relicensed their projects to the SSPL also did so under a dual-licensing scheme that includes a proprietary license. That’s important later.

    The SSPL’s intent is probably that the entire stack used to deploy and offer this software as a service must itself be open sourced. I like this intent, and I would consider it Free/Libre Software, but it should be noted that another license, the Open Watcom license, which requires you to open source the software if you merely deploy it (even privately), is not considered Free Software by the FSF. I don’t really understand this decision. I don’t count “you must share the source code you deploy” as a restriction on use cases. It seems the FSF only cares about user freedom, whoever the user happens to be, and views being forced to publish code that is only used privately as a restriction.

    Now, IANAL… but the SSPL’s actual wording is problematic. What counts as part of the deployment system? If I deploy the software on Windows, am I forced to open source Windows? If I deploy it on a server with the Intel Management Engine, am I forced to open source that? Because of the way it is worded, the SSPL is unusable.

    And a dual license where one option is proprietary and the other is unusable effectively means only one license: proprietary. There’s actually a possibility that this is intentional, and that the intent of the SSPL was never to be usable, but rather to let these companies pretend they are still Open Source while going fully proprietary.

    But, for the sake of discussion, let’s assume the SSPL’s intent was benevolent but misguided: not to be unusable, but to force companies to open source their deployment platforms.

    Of course, the OSI went and wrote an article about how the SSPL is not an open source license, but that’s all BS. All you need to do is look at who sponsors the OSI (Amazon, Google, and other big SaaS providers) to realize that the OSI is just protecting its sponsors’ corporate interests. They are terrified of an SSPL that actually works, so they seek to misrepresent the intent of the SSPL as too restrictive for Open Source, which is false. Being forced to open source your deployment platform still allows you to use the code in any way you desire; you just have to open source your deployment platform.

    Is there some hypothetical lesser version of SSPL that still captures the essence of it while still being more restrictive than AGPL that would prevent exploitation by SaaS providers?

    AGPL. There’s also Open Watcom, but it’s not considered a Free Software license by the FSF, meaning software written under that wouldn’t be included in any major Linux distros.

    I think in theory you could make an SSPL that works. But SSPL ain’t it.

    Of course, there are problems with designing an SSPL that works. For example, if you make it so that you don’t have to open source proprietary code from other vendors, what happens when companies split themselves up and one company makes and “sells” the proprietary programs to the other?





  • Putting something on GitHub is really inconsequential if you’re making your project open source since anyone can use it for anything anyway,

    Except for people in China (GitHub is blocked there) or people on IPv6-only networks, since GitHub hasn’t bothered to support IPv6, cutting out users in countries where IPv4 addresses are scarce.

    So yes, it does matter. GitLab and Codeberg, the two big alternatives, both support IPv6 (I don’t know whether they are blocked in China). They also support GitHub logins, so you don’t even need to make a new account.
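
    A quick way to check a host’s IPv6 support yourself, assuming dig (from bind-utils/dnsutils) is installed, is to ask for AAAA records; at the time of writing, github.com publishes none:

    # no output here means no IPv6 address is published for the domain
    dig +short AAAA github.com

    # Codeberg publishes an AAAA record, so IPv6-only clients can reach it
    dig +short AAAA codeberg.org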

    And it’s not black or white. Software freedom is a spectrum, not a binary. We should strive to use more open source, decentralized software, while recognizing that many parts are going to be out of our immediate control, like the backbone of the internet or little pieces like proprietary firmware.





  • https://forgejo.org/compare-to-gitea/

    I dunno, some of these are a pretty big deal, in particular:

    Gitea repeatedly makes choices that leave Gitea admins exposed to known vulnerabilities during extended periods of time. For instance Gitea spent resources to undergo a SOC2 security audit for its SaaS offering while critical vulnerabilities demanded a new release. Advance notice of security releases is for customers only.

    Gitea is developed on GitHub, whereas Forgejo is developed on and by Codeberg, which uses it as its main forge (also mentioned on that page). Someone dogfooding their own software gives me more confidence in it.


    The comparison isn’t quite right because you can use git with any provider (GitHub, GitLab, etc.), including multiple at once (see the sketch below).

    On the other hand, snap is hardcoded to only be able to use one store at a time, the snap store. To modify this behaviour, you would have to make changes to the snap client source code.

    It’s a crucial difference.
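
    To illustrate the git side, here’s a rough sketch (the remote URLs are just placeholders; any providers would do):

    # one repository, two independent providers
    git remote add github https://github.com/example/project.git
    git remote add codeberg https://codeberg.org/example/project.git

    # push the same branch to both
    git push github main
    git push codeberg main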



  • sn1per is not open source, according to the OSI’s definition

    The license for sn1per can be found here: https://github.com/1N3/Sn1per/blob/master/LICENSE.md

    It’s more of a EULA than an actual license. It prohibits a lot of things and is basically source-available.

    You agree not to create any product or service from any part of the Code from this Project, paid or free

    There is also:

    Sn1perSecurity LLC reserves the right to change the licensing terms at any time, without advance notice. Sn1perSecurity LLC reserves the right to terminate your license at any time.

    So yeah. I decided to test it out anyways… but what I see… is not promising.

    FROM docker.io/blackarchlinux/blackarch:latest
    
    # Upgrade system
    RUN pacman -Syu --noconfirm
    
    # Install sn1per from official repository
    RUN pacman -Sy sn1per --noconfirm
    
    CMD ["sn1per"]
    

    The two pacman commands are redundant. You only need to run pacman -Syu sn1per --noconfirm once. This also goes against Docker best practices, as it creates two layers where only one is necessary, and best practice also includes deleting cache files, which isn’t done here. The final Docker image is probably significantly larger than it needs to be.
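
    A combined version might look something like this (just a sketch, untested, keeping their base image):

    FROM docker.io/blackarchlinux/blackarch:latest

    # Upgrade the system and install sn1per in a single layer,
    # then remove the package cache so it does not bloat the image
    RUN pacman -Syu --noconfirm sn1per \
        && rm -rf /var/cache/pacman/pkg/*

    CMD ["sn1per"]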

    Their Kali image has similar issues: the upgrade and the metasploit-framework install happen in separate RUN layers, and the second one never cleans up after apt:

    RUN set -x \
            && apt -yqq update \
            && apt -yqq full-upgrade \
            && apt clean
    RUN apt install --yes metasploit-framework
    

    https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/
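
    Following those guidelines, a combined layer might look something like this (again, just a sketch):

    # single layer: update, upgrade, install, and clean up the apt caches
    RUN set -x \
            && apt -yqq update \
            && apt -yqq full-upgrade \
            && apt -yqq install metasploit-framework \
            && apt clean \
            && rm -rf /var/lib/apt/lists/*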

    It’s still building right now. I might edit this post with more info if it’s worth it. I really just want a command-line vulnerability scanner, and sn1per seems to offer that with greenbone/openvas as a backend.

    I could modify the Dockerfiles with something better, but I don’t know if I’m legally allowed to do so outside of their repo, and I don’t feel comfortable contributing to a repo that’s not FOSS.





    When Syncthing is configured to sync both ways (the default), it also syncs any deletions. You can somewhat get around this with something like a one-way (send-only) folder, but it’s still not really proper “backup” software.
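
    For reference, the folder type can be changed per folder in the web GUI, or in config.xml via the folder’s type attribute. A rough sketch (the id, label, and path are just placeholders):

    <!-- send local changes out, but never apply remote changes or deletions -->
    <folder id="docs" label="Documents" path="/home/user/Documents" type="sendonly">
        ...
    </folder>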

    Personally, I like to treat data synced by Syncthing, even across multiple machines, as one copy of the data when I’m following the 3-2-1 backup rule*, because Syncthing won’t save me from a buggy program deleting all my files, from user error, or anything like that.

    *See Wikipedia for info about the 3-2-1 backup rule.