Hey everyone,
I’m reaching out because I could use some help or guidance here. I ended up being the go-to person for collecting and storing data in a project I’m involved in with my team. We’ve got a lot of data, tens of thousands of files, and we’re using Nextcloud to make it accessible to our users. The thing is, we’re hitting a challenge with Nextcloud’s search function when a share is accessed through a public link: there isn’t one. While we really appreciate the features it offers from an administrative standpoint, it’s not working that well for our users.
I was wondering if anyone has suggestions or resources that could point us in the right direction for this project? It would be super awesome if it’s something open-source and privacy-preserving. 😄
Thanks a bunch in advance!
If they’re well-named files, just spin up a WebDAV server via rclone and search by file name in the browser. You could also use davfs2 to mount the server locally in a directory and then filter through the content with fd | fzf.
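Roughly, that could look like this (assuming you already have an rclone remote configured, here called `data:` as a placeholder; ports and mount paths are examples):

```shell
# Serve the files read-only over WebDAV; browse and search by
# filename at http://localhost:8080
rclone serve webdav data: --addr :8080 --read-only

# Alternatively, mount the WebDAV share locally with davfs2
# and fuzzy-filter the file listing interactively:
sudo mount -t davfs http://localhost:8080/ /mnt/data
fd . /mnt/data | fzf
```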
If they’re text files, spin up a Docker container with Forgejo (a fork of Gitea) and enable the bleve search indexer.
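A minimal sketch of that setup (the image tag, port, and volume name are examples, not requirements; the config path follows the Forgejo Docker layout):

```shell
# Run Forgejo with a persistent data volume
docker run -d --name forgejo \
  -p 3000:3000 \
  -v forgejo-data:/data \
  codeberg.org/forgejo/forgejo:9

# Then enable code search in the app.ini inside the volume
# (at /data/gitea/conf/app.ini) and restart the container:
#   [indexer]
#   REPO_INDEXER_ENABLED = true
#   REPO_INDEXER_TYPE    = bleve
```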
If you wanted to get really fancy, you could run Wiki.js in the same Docker setup, use Git as a backend, and get a wiki that’s easy to fork and distribute among the team.
Would the rclone method work with a public website? I only have a vague familiarity with rclone from the .edu google drive days.
Of course, it’s just an HTTP server. All you have to do is port-forward.
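For anything public-facing you’d probably want it bound to all interfaces and gated behind at least basic auth (the `--user`/`--pass` flags are real rclone options; the credentials here are placeholders):

```shell
# Expose the WebDAV server on all interfaces with basic auth,
# then forward the port on your router / firewall
rclone serve webdav data: --addr 0.0.0.0:8080 \
  --user team --pass changeme --read-only
```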