• 0 Posts
  • 42 Comments
Joined 22 days ago
Cake day: January 7th, 2026


  • E2EE isn’t really relevant when the “ends” have built-in functionality to share data with Meta directly, as “reports”, “customer support”, or “assistance” (Meta AI); the only separation is a UI element.

    Edit: it turns out cloud backups aren’t E2E encrypted by default… meaning any backup data that passes through Meta’s servers to the cloud providers (like iCloud or a Google Account) is unobscured to Meta, unless E2EE is explicitly enabled. And even then, WhatsApp’s privacy policy states: “if you use a data backup service integrated with our Services (like iCloud or Google Account), they will receive information you share with them, such as your WhatsApp messages.” So the encryption happens on the server side, meaning Apple and Google still have full access to the content. It doesn’t matter if you, personally, refuse to use the “feature”: if the other end does, your interactions will be included in their backups.

    Cross-posting my comment from the cross-posted post
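    To make the backup point concrete: here’s a toy model (my own illustration, not WhatsApp’s actual protocol; the XOR “cipher” is deliberately fake and the keys are made up) of why an E2EE relay only ever sees ciphertext, while a backup encrypted with a key the provider holds stays readable to that provider.

```python
# Toy illustration only -- the "cipher" is an XOR keystream derived from
# SHA-256, which is NOT secure; it just models who holds which key.
import hashlib
import itertools

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR the data with a repeating keystream derived from the key.
    stream = itertools.cycle(hashlib.sha256(key).digest())
    return bytes(b ^ k for b, k in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

endpoint_key = b"shared only by the two endpoints"  # hypothetical E2EE key
message = b"meet at noon"

# E2EE transport: the relay (Meta) only ever handles this ciphertext.
ciphertext = toy_encrypt(endpoint_key, message)

# A backup encrypted with a key the backup provider holds (the non-E2EE
# default described above) is readable by that provider.
provider_key = b"key held by the backup provider"  # hypothetical
backup_blob = toy_encrypt(provider_key, message)
provider_view = toy_decrypt(provider_key, backup_blob)

assert toy_decrypt(endpoint_key, ciphertext) == message
assert provider_view == message  # provider can read the backup
assert ciphertext != message     # relay sees only ciphertext
```

    The point of the sketch: the security property depends entirely on who holds the key, not on whether “encryption” happens somewhere along the path.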



  • I still prefer mobile users adding features, even if they are of an unusual object type, effectively becoming another type of fixme for desktop users. But instead of another desktop user integrating these elements, I’d rather have mobile users on the desktop as well, so they can integrate their mobile changes when at home. If you’re sightseeing, these applications are very helpful for creating/editing POIs and effectively sketching out non-POI features; but the latter does require some work to integrate.

    Quoting another comment of mine. Your use of the tool is something I’m advocating for, really; I recognize its usefulness, but am not treating it as a substitute for desktop editors.


  • There are quite some changes by First World contributors in Africa, primarily from mapping events. Perhaps they could also play a role in integrating POI and line elements (which are traditionally areas); or maybe a more POI- and line-based standard could be allowed in Africa, not requiring areas for such objects. Or an intuitive UI supporting the editing of geometries could be added, despite gluing, complicated relationships, etc. I would love to be proven wrong in my skepticism.


  • Ah okay, now I get it; I wasn’t familiar with that. Satelliet Data Portaal provides both partial (more recent) and full mosaic (less recent) WMTS layers from multiple sources (Pleiades-NEO or SuperView-NEO), which might complicate things: having to load the right imagery based on the location being edited for the partial captures, and selecting the right source. The resolution, especially of the partial captures but also of the mosaics, doesn’t really hold up to something like PDOK or Esri. So making this source the default might not be desirable, but having it as an option (especially the mosaic) would be neat.


  • OSM is a community project, someone has to do the PR. It won’t show up automagically without human intervention.

    Is this referring to the “mass imports” part, which you would argue are done in batches by many contributors? If so, then yes, “mass import” might give the wrong idea, I agree. But even if imported by many over time, the result is still a mass import from these open databases (minus a few addresses maybe, drawn in by hand; or roads not yet aligned with the BGT, in the case of The Netherlands).

    Are you sure its license is compatible? E.g. The website says I can’t view it because I’m not in the Netherlands. There are a lot of frequent editors from there, it’s strange they haven’t added it yet.

    I can’t find the forum post regarding this, but I’m quite sure the conclusion was that it is compatible, despite viewing being restricted to Dutch citizens (because it’s a service provided by The Netherlands). It’s quite a common source here, especially for recent changes (which other imagery just doesn’t provide). And they provide WMTS directly; if they wanted to restrict its use for georeferencing, I don’t understand why they would do that.



  • Oh, you can add new things, that’s perfectly fine. I still prefer mobile users adding features, even if they are of an unusual object type, effectively becoming another type of fixme for desktop users. But instead of another desktop user integrating these elements, I’d rather have mobile users on the desktop as well, so they can integrate their mobile changes when at home. If you’re sightseeing, these applications are very helpful for creating/editing POIs and effectively sketching out non-POI features; but the latter does require some work to integrate.


  • If that’s the only way you’re going to contribute to OSM, by all means, go for it. But as a desktop OSM editor, I really dislike some of the incentives pushed by mobile applications. Primarily not adding objects as polygons (as those would be difficult to draw on such devices), but as POIs (parking, amenities, etc.) and paths (waterways for instance, where paths are often used just for naming, or as water“ways” for marine traffic). This often leads me to correct these changes, as they really stand out compared to the rest of the map. So generally, I view these tools as complementary rather than producing final changes; unless it’s changes to POIs or something, which is where these applications shine, in my opinion.


  • I personally quite like OsmAnd’s granular control, but understand how others might experience it as overwhelming; big tech’s restrictive… I mean “modern” user experience (UX) might be to blame for that. There are, however, quite some alternatives to pick from if you want a more minimalist approach to UX; which OsmAnd could also provide by default (while allowing advanced users to toggle additional “expert” settings).

    What makes Google “Maps” superior to OSM-based maps is not its inferior “map”, but rather the navigational aspect: businesses and other ‘points of interest’ (POIs) registering their location with Google, public transit data being supplied to it (allowing for trip planning), traffic statistics (through creepy location tracking, even in the background unless opted out), etc.; all bundled into a single, undeniably convenient application.

    I would argue OSM data is primarily mass imports from other permissive or open (government) databases, which strongly depend on region. For The Netherlands: the BAG (basic registration of addresses and buildings) and BGT (basic registration of large-scale topography) make up a large portion of the data presented (either directly imported or used as a reference). Although, relative to real-world changes, they might temporarily lag behind, and users add details based on satellite imagery.

    Regarding satellite imagery: editors don’t always have up-to-date imagery, leading some users to undo changes others have made. In The Netherlands, the government provides relatively recent satellite imagery, which can be imported into the alternative JOSM editor as a WMTS layer. You may also want to check the comments on the last change: in OSM’s own iD editor you can click the “last modified …” link, all the way at the bottom of the “Edit object” tab, for the selected object.

    Another thing I would really recommend is checking how other mappers have added certain features. This is sometimes easier to understand than OSM’s documentation, which doesn’t always correspond to practice (possibly depending on region). A very useful tool for this is Overpass Turbo, which you can use to search for certain elements and see how others have implemented them.
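    Such a query is quite compact; a rough sketch of how one might build and send it (the tag, bounding box, and public endpoint here are just illustrative assumptions, not from any specific case; in Overpass Turbo you would paste only the query text itself):

```python
# Sketch: build a minimal Overpass QL query for all nodes/ways/relations
# carrying a given tag inside a (south, west, north, east) bounding box.
import urllib.parse
import urllib.request

def build_query(key: str, value: str, bbox: tuple) -> str:
    south, west, north, east = bbox
    return (
        "[out:json][timeout:25];"
        f'nwr["{key}"="{value}"]({south},{west},{north},{east});'
        "out tags center;"  # return tags plus a center point per element
    )

# Example: drinking-water POIs in a small box over Amsterdam (illustrative).
query = build_query("amenity", "drinking_water", (52.35, 4.85, 52.40, 4.95))

# Uncomment to actually run it against the public Overpass endpoint:
# req = urllib.request.Request(
#     "https://overpass-api.de/api/interpreter",
#     data=urllib.parse.urlencode({"data": query}).encode(),
# )
# print(urllib.request.urlopen(req).read(500))
```

    Looking at the tags on the returned elements quickly shows how other mappers tag that feature in practice.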

    I know this might all feel a little overwhelming, but I wish I had known these things earlier in my mapping journey. I started doing it because I noticed things missing that I, as a mailman, knew existed. I just started with smaller changes to get my feet wet, and gradually worked my way up to larger changes. As long as you don’t start tearing up large roads (including their often many relationships), you’ll be just fine; and you might even become hooked (as it can be quite satisfying, having created another beautiful part of the map).




  • So the amended complaint alleges that Nvidia used/stored/copied/obtained/distributed copyrighted works (including the plaintiffs’), both through datasets available on Hugging Face (‘Books3’ is featured in both ‘The Pile’ and ‘SlimPajama’) and by pirating from shadow libraries (like Anna’s Archive), to train multiple LLMs (primarily their ‘NeMo Megatron’ series); and that it distributed the copyrighted data through the ‘NeMo Megatron Framework’, data which was ultimately sourced from shadow libraries.

    It’s quite an interesting read actually, especially the link to this Anna’s Archive blog post; which the complaint grossly pulls out of context, as the plaintiffs clearly despise the shadow libraries too: they have, after all, ultimately provided access to their copyrighted material.

    Especially the part “Most (but not all!) US-based companies reconsidered once they realized the illegal nature of our work. By contrast, Chinese firms have enthusiastically embraced our collection, apparently untroubled by its legality.” makes me wonder if that’s the reason models like Deepseek initially blew Western models out of the water.




  • Yeah, I think they employ a pretty sophisticated bot-detection algorithm. I vaguely remember there being this ‘make 5 friends’ objective, or something along those lines, which I had no intention of fulfilling. If a new account triggers the manual review process by not adhering to common usage patterns, they simply have it supply additional information. Any collateral damage simply means additional data to be appended to Facebook’s self-profiling platform… I mean, what else would one expect when Facebook’s first outside investor was Palantir’s Peter Thiel?



  • AI reviews don’t replace maintainer code review, nor do they relieve maintainers from their due diligence.

    I can’t help but be a bit skeptical when reading something like this. To me it’s akin to having to do calculations manually while there’s a calculator right beside you. For now, the technology might not yet be considered sufficiently trustworthy, but what if the clanker starts spitting out conclusions which equal a maintainer’s, like, 99% of the time? Wouldn’t (partial) automation of the process become extremely tempting, especially when the stack of pull requests starts piling up (because of vibecoding)?

    Such a policy would be near-impossible to enforce anyway. In fact, we’d rather have them transparently disclose the use of AI than hide it and submit the code against our terms. According to our policy, any significant use of AI in a pull request must be disclosed and labelled.

    And how exactly do you enforce that? It seems like you’re just shifting the problem.

    Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.

    I mean, there are hallucination concerns, and there are licensing conflicts. Sure, people can also copy code from other projects with incompatible licenses, but someone without programming experience is less likely to do so than when vibecoding with a tool directly trained on such material.

    Malicious and deceptive LLMs are absolutely conceivable, but that would bring us back to the saboteur.

    If Microsoft itself were the saboteur, you’d be fucked. They know the maintainers, because GitHub is Microsoft property, and so is the proprietary AI model directly implemented in the toolchain. A malicious version of Copilot could, hypothetically, be supplied to maintainers, specifically targeting this exploit. Microsoft is NOT your friend; it works closely with government organizations, which are increasingly interested in compromising consumer privacy.

    For now, I do believe this to be a sane approach to AI usage, and believe developers should have the freedom to choose their preferred environment. But the active usage of such tools does warrant a (healthy) dose of critique, especially with regard to privacy-oriented pieces of software; a field where AI has generally been rather invasive.