

This is how federation was sold to me
But did you pay anything for it?



Please use a test instance…


Man, I already have about 70% of the Lemmy API implemented, and I spent this whole weekend working on a browser extension that “pulls” the social graph locally and displays the data, as if it were a browser. I want to see if I can make posts via C2S before going to sleep. :)
All this to say: yes, I have plenty of opinions to offer on this matter…
He likes the process of working on greenfield ideas and gets bored once he ships the MVP, which is fine. What is not fine is that he makes a ton of hype around new projects, but after he gets tired of playing with it, he refuses to let go. It sucks the air out of the community for very little benefit.
I was hoping that the PixelFed kickstarter would force him to finally focus on the damn thing, but it seems he simply does not have the drive or interest to work at a steady pace in one single product.
I was excited about this. Went on to look at the website and was greeted with a “Coming Soon!” message. It all made sense when I saw it was yet-another project from Daniel “Overpromise and Underdeliver” Supernault.


Once you achieve any kind of scale, whoever your client is querying to get the book data for those kinds of queries is going to block you
You know that the whole of Wikidata can be copied in just a few hundred GB, right? There are plenty of examples of community-driven data providers (especially in the *arr space), so I’d bet that more people would set up RDF data servers (which is mostly read-heavy, public data sharing) than are willing to set up their own Mastodon/Lemmy/GoToSocial server - because that involves replicating data from everyone else, dealing with network partitions, etc…
Also, there are countless ways to make this less dependent on any big server: the client could pull specific subsets of the data and cache them locally, so the more they are used, the less it would need to fetch remote resources.
Think of it like this: a client-first application that understands linked data would work much like a traditional web browser; the main difference is that the client would consume JSON-LD instead of HTML.
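A minimal sketch of that idea (the class and function names here are hypothetical): a client that fetches JSON-LD documents and caches them locally, so repeatedly viewing the same resource never re-fetches it.

```python
import json
import urllib.request

class LinkedDataClient:
    """A toy 'linked-data browser': fetch JSON-LD documents, cache locally."""

    def __init__(self, fetch=None):
        self._cache = {}
        # Allow injecting a fetch function so the sketch can run offline.
        self._fetch = fetch or self._http_fetch

    @staticmethod
    def _http_fetch(url):
        req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def get(self, url):
        # Cache hit: no network round-trip at all.
        if url not in self._cache:
            self._cache[url] = self._fetch(url)
        return self._cache[url]
```

A real client would also need cache invalidation, `@context` expansion, and link following; the injectable fetch function is only there to keep the sketch testable.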


Or are all of the books objects stored on activitypub and I get the data from the social graph itself?
Not “stored on ActivityPub”, but each book could be represented with RDF (it could be something as sophisticated as using Dublin Core, or as simple as just using ISBNs to uniquely identify the books, e.g. urn:isbn:1234556789), and then “CombatWombatEsq read a book” would be an activity where you are the actor and the book is the object. It would then be up to the client to expand that information. Your client app could take the ISBN and query Wikidata, or Amazon, or nothing at all…
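For illustration, a minimal sketch of what such an activity could look like (the actor URL is made up, and the ISBN is the placeholder from above); AS2 does define a Read activity type:

```python
import json

# Build a minimal AS2 activity: "<actor> read <book>". The book is
# identified only by its ISBN URN; resolving title/author metadata
# (via Wikidata, Amazon, or nothing at all) is left to the client.
def read_activity(actor_uri, isbn):
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Read",  # AS2 vocabulary includes a Read activity type
        "actor": actor_uri,
        "object": f"urn:isbn:{isbn}",
    }

activity = read_activity("https://example.social/u/CombatWombatEsq", "1234556789")
print(json.dumps(activity, indent=2))
```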


and building an “everything server” that implements every message you might want to send is prohibitive just in terms of complexity and scope.
It is not. A server that “speaks” ActivityPub is not that difficult to build; I’ve done it. The complexity is in getting the data out of the social graph and creating a good UX for users who are too accustomed to the “app-centric” mentality.


It stores the complete data for any given user post in its databases
That is not fully correct. They index the data from the different personal data servers, and they host the largest personal data server out there, but you can have your own PDS and interact with other Bluesky users without having to rely on their data.
This means each one has its own data model, internal storage architecture, and streams/APIs.
Yeah, but why? ActivityPub already provides the “data model” and the API. Internal storage is an implementation detail. Why do we continue to accept this idea that each different mode of interaction with the social graph requires an entirely separate server?
Because they were built for different purposes, they support different features
Like OP said, on Bluesky it is possible to have different “shells” that interact with the network. Why wouldn’t that be possible on ActivityPub?


I still don’t understand how this is not akin to falsifying data. If we normalize servers copying data from other instances and just rewriting the URL, there is little standing in the way of malicious actors creating PieFed instances to scam others by pretending to be someone they are not.


lemmy-federate is the wrong solution to this problem. It duplicates data on all instances, even those with no subscribers, and it increases the load and storage requirements for small instances.
What we need is a system where admins can set up a separate discovery service, and include that in search results. Mastodon is finally doing something in this direction, and Lemmy/PieFed/mbin would benefit a lot to adopt it.


Unless you are in a frantic hurry to make this change, I might be able to help. You’ll need to migrate to Wagtail, and I have done some work on integration between Wagtail and the Fediverse via the Django ActivityPub Toolkit. But if you do consider this, you’d have to keep in mind that the ActivityPub side of things would be an ongoing experiment.


projects like Garage allow to do so in a distributed way.
Really? Does this mean that different instances could share a bucket? Do they have to trust each other? I’ll have to jump off the minio train anyway, so if you have any tips to get this going, that would be nice.


It could be an optional feature.
By default, users and communities share the namespace, so they cannot have the same name. But if you as an admin want to allow users and communities with the same handle (the “as:preferredUsername”), then you need to add two CNAMEs that point to the same domain as the backend, and add these to lemmy.hjson, so that the backend knows how to generate actor ids.
Of course, this still wouldn’t let Mastodon users find the actors by querying “username@myserver”, but at least they would know they can find “@username@people.myserver” and “@username@groups.myserver”.
But we are not going to get “niche” users if we don’t get large numbers of users. Niche interests will only show up here when the population is so large that even the long tail reaches critical mass.
Those defending “quality over quantity” miss this exact point.


I think this is yet-another reason to have a separation between users and communities at the instance/domain level.
Setting up a server should require one top-level domain and two subdomains:
https://myserver.com/ would be for webfinger and the actual backend.
https://groups.myserver.com/ would be the subdomain for the AS2.Group actors.
https://people.myserver.com/ would be the subdomain for the AS2.Person actors.
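To make the scheme concrete, here is a sketch (the domains follow the hypothetical layout above, and the helper function is made up) of how actor ids could be generated so that a Person and a Group can share a preferred username without colliding:

```python
# Map each actor type to its dedicated subdomain; webfinger stays on the apex domain.
ACTOR_SUBDOMAINS = {"Person": "people", "Group": "groups"}

def actor_id(domain, actor_type, preferred_username):
    subdomain = ACTOR_SUBDOMAINS[actor_type]
    return f"https://{subdomain}.{domain}/{preferred_username}"

# The same name resolves to two distinct origins, so there is no collision:
print(actor_id("myserver.com", "Person", "vinyl"))  # → https://people.myserver.com/vinyl
print(actor_id("myserver.com", "Group", "vinyl"))   # → https://groups.myserver.com/vinyl
```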

I sound like a broken record, but none of this would happen if the devs took a good look at RDF before throwing everything into objects/classes and ORMs.
I’m working on something that aims to be compatible with Lemmy’s API, and my models are based on the context definitions first. This means that it becomes impossible to have communities and users with the same preferred_username, because they are both actors.


That might work, but it’s never a good idea to write your code against a specific implementation. Plus, it seems that in this case the Lemmy devs shot themselves in the foot: why allow the creation of two different types of actors with the same name?!


I am not so sure Mastodon is at fault, here. Going to https://lemmy.world/.well-known/webfinger?resource=acct%3Avinyl%40lemmy.world, this is the result:
{
  "subject": "acct:vinyl@lemmy.world",
  "links": [
    {
      "rel": "http://webfinger.net/rel/profile-page",
      "type": "text/html",
      "href": "https://lemmy.world/u/vinyl",
      "template": null
    },
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://lemmy.world/u/vinyl",
      "template": null,
      "properties": {
        "https://www.w3.org/ns/activitystreams#type": "Person"
      }
    },
    {
      "rel": "http://ostatus.org/schema/1.0/subscribe",
      "type": null,
      "href": null,
      "template": "https://lemmy.world/activitypub/externalInteraction?uri=%7Buri%7D"
    },
    {
      "rel": "http://webfinger.net/rel/profile-page",
      "type": "text/html",
      "href": "https://lemmy.world/c/vinyl",
      "template": null
    },
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://lemmy.world/c/vinyl",
      "template": null,
      "properties": {
        "https://www.w3.org/ns/activitystreams#type": "Group"
      }
    }
  ]
}
So, Lemmy is just providing two different actors for the same subject name and saying they refer to the same account.
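To see the ambiguity mechanically, here is a small sketch that filters the response above (abbreviated to its two rel="self" entries) the way a client would:

```python
# The two "self" links from the lemmy.world webfinger response above.
webfinger = {
    "subject": "acct:vinyl@lemmy.world",
    "links": [
        {"rel": "self", "type": "application/activity+json",
         "href": "https://lemmy.world/u/vinyl",
         "properties": {"https://www.w3.org/ns/activitystreams#type": "Person"}},
        {"rel": "self", "type": "application/activity+json",
         "href": "https://lemmy.world/c/vinyl",
         "properties": {"https://www.w3.org/ns/activitystreams#type": "Group"}},
    ],
}

# A client looking for "the" canonical actor finds two candidates and has
# no standard rule for choosing between them.
self_links = [link for link in webfinger["links"] if link["rel"] == "self"]
print(len(self_links))  # → 2
```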
Yes, but if you want a real one-to-one, private chat system use Matrix or XMPP. Treat anything you write on Lemmy as public information.