Looks like I missed this and now have 2.0.12 installed. Dangers of auto-update, I guess. What are the chances that this version is compromised? If I installed it, then I've potentially already compromised my Android. When will we know that all is good to go, and why have the F-Droid people not blocked the app until there is clarity?
I second drive mirroring with ZFS. TrueNAS Scale has been a quantum leap for me. I have two very old Dell T110s with 32GB RAM each. One, the main one, has 4x 4TB Western Digital Gold drives, which cost me a fortune at the time. I think they are in RAID5 but can't remember. The other T110 has cheaper WD Reds. I turn on the slave machine on Saturdays to complete replication tasks. I don't have a robust backup model yet besides replicating to an external HD on a third machine, but I will need to work on that.
trilobite@lemmy.ml OP to
Self Hosted - Self-hosting your services.@lemmy.ml • Running docker compose the right way
1 · 3 months ago
Of course, I forgot to mention that the user is in the docker group. I was just thinking that maybe, as the data folders/volumes for the containers are saved in the user's home directory, there may be read/write issues for the various containers.
Likewise, I was worried that installing/running a sensitive service like Vaultwarden with sudo exposed me to risks.
trilobite@lemmy.ml OP to
Selfhosted@lemmy.world • Getting the right setup for Vaultwarden compose.yaml (English)
2 · 3 months ago
I'm picking up on this because I'm getting a bit confused. I've run this through docker compose using the yaml below. I've done it as a normal user, "fred" (added to the docker group), rather than root (using sudo makes no difference, as I get the same outcome). I normally have a "docker" folder in /home/fred/, so it's /home/fred/docker/vaultwarden in this instance (i.e. my data folder is in there).
I get the same issue highlighted here, which is all about SSL_ERROR_RX_RECORD_TOO_LONG when trying to connect via https, whereas when I try to connect via http, I get a white page with the Vaultwarden logo in the top left corner and a spinning wheel in the centre. I've got no proxy enabled, and I'm still not clear why I need one if I'm only accessing this via LAN. Is this a "you must use this through a proxy or it won't work" thing? Although that's not what I understood from the guidance. I'm clearly missing something, although I'm not sure what exactly it is…
```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    environment:
      # DOMAIN: "https://vw.home.home/"
      SIGNUPS_ALLOWED: "true"
    volumes:
      - ./vw-data/:/data/
    ports:
      - 11001:80
```
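For what it's worth, the vaultwarden container serves plain HTTP on its internal port 80 by default, which is why pointing the browser at an https:// URL on that port fails with SSL_ERROR_RX_RECORD_TOO_LONG. If a reverse proxy really isn't wanted on a LAN, the built-in Rocket server can terminate TLS itself; a minimal sketch, assuming a self-signed cert/key pair in ./ssl (the filenames and paths here are my own placeholders, not from the compose above):

```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    environment:
      SIGNUPS_ALLOWED: "true"
      # ROCKET_TLS makes the container itself speak HTTPS on its internal port
      ROCKET_TLS: '{certs="/ssl/vw.crt",key="/ssl/vw.key"}'
    volumes:
      - ./vw-data/:/data/
      - ./ssl/:/ssl/:ro   # placeholder directory holding the cert/key pair
    ports:
      - 11001:80
```

The browser would then reach it at https://<host>:11001 and would need to trust the self-signed certificate.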
trilobite@lemmy.ml OP to
Selfhosted@lemmy.world • Getting the right setup for Vaultwarden compose.yaml (English)
2 · 3 months ago
I don't have any proxy.
Nice one. I missed this
And I can't find clients on F-Droid. Any recommended variants that don't come from the Play Store?
Another key feature will be Keepass data import.
That is another problem I face when I have the app open on the desktop and the phone at the same time. It's a nightmare.
trilobite@lemmy.ml to
Asklemmy@lemmy.ml • What happened with Syncthing-Fork and is it safe to use now?
0 · 3 months ago
Too bad I read this only now. I may have already updated the original app to v1.28.1. I've just seen that on Android you only have access to Syncthing-Fork now. I moved over to Syncthing-Fork and realised you can't import the config: it's expecting a zip file, yet v1.28.1 exported loads of individual files. :-(
trilobite@lemmy.ml OP to
Selfhosted@lemmy.world • Linkwarden downloaded the whole flipping Internet ... (English)
1 · 4 months ago
Readeck looks similar to Wallabag?
trilobite@lemmy.ml OP to
Selfhosted@lemmy.world • Linkwarden downloaded the whole flipping Internet ... (English)
1 · 4 months ago
This is a good comment! I just discovered, after your comment, that Floccus has a setting to link up with Linkwarden so that together they achieve most of my desired outcomes. It just becomes more involved to manage, as you now end up with two components rather than one ;-)
trilobite@lemmy.ml to
Asklemmy@lemmy.ml • What's something you own that has truly paid for itself?
0 · 5 months ago
When you say "I close city water", it sounds like you are also drinking that water? Sounds like a cool idea that I too have been thinking about. That water needs disinfection, though.
trilobite@lemmy.ml OP to
Selfhosted@lemmy.world • Truenas Scale replication of docker apps (English)
1 · 5 months ago
OK, so maybe I didn't explain myself. What I meant was that I would like resilience, so that if one server goes down I've got the other to quickly fire up. The only problem is that the slave server has a smaller pool, so I can't replicate the whole pool of the master server.
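For anyone landing here later: ZFS replication doesn't have to be whole-pool. TrueNAS replication tasks can target individual datasets, and under the hood that is just snapshot send/receive, so only the datasets that fit on the smaller pool need to go across. A rough sketch of the mechanism (the pool, dataset, and host names here are invented):

```
# Recursive snapshot of just the datasets worth replicating
zfs snapshot -r tank/apps@weekly

# Send them to the smaller pool on the other machine
# (-F rolls the target back if needed, -d strips the source pool name, -u leaves it unmounted)
zfs send -R tank/apps@weekly | ssh slave zfs recv -Fdu backup
```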
trilobite@lemmy.ml OP to
Selfhosted@lemmy.world • Linkwarden downloaded the whole flipping Internet ... (English)
1 · 5 months ago
I was using Floccus, but what is the point of saving bookmarks twice, once in Linkwarden and once in the browser?
trilobite@lemmy.ml OP to
Selfhosted@lemmy.world • Getting old and would like a better way to track health the self hosted way (English)
5 · 5 months ago
Looks very interesting. But as others noted, it's still too young: only two releases in three months, and one person behind it. Certainly one to keep an eye on. The MIT licence worries me too; I always include the licence in my criteria ;-)
trilobite@lemmy.ml OP to
Selfhosted@lemmy.world • Getting old and would like a better way to track health the self hosted way (English)
4 · 5 months ago
Absolutely, none of that is going past my router.
trilobite@lemmy.ml OP to
Self Hosted - Self-hosting your services.@lemmy.ml • Moving docker image data between VMs
1 · 5 months ago
Interestingly, I did something similar with Linkwarden, where I installed the datasets in /home/user/linkwarden/data. The damn thing caused my VM to run out of space, because it started downloading pages for the 4000 bookmarks I had. It went into crisis mode, so I stopped it. I then created a dataset on my TrueNAS Scale machine and NFS-exported it to the VM on the same server. I simply did a cp -R to the new NFS mountpoint, edited the yml file with the new paths, and voilà! It seems to be working. I know that some docker containers don't like working off an NFS share, so we'll see.
I wonder how well this will work when the VM is on a different machine, as there is then a network cable, a switch, etc. in between. If for any reason the NAS goes down, the docker containers on the Proxmox VM will be crying, as they'll lose the link to their volumes? Can anything be done about this? I guess it can never be as resilient as having the VM and NAS on the same machine.
trilobite@lemmy.ml OP to
Self Hosted - Self-hosting your services.@lemmy.ml • Moving docker image data between VMs
2 · 5 months ago
> The first rule of containers is that you do not store any data in containers.
Do you mean they should be bind mounts? From here, a bind mount should look like this:
```yaml
version: '3.8'
services:
  my_container:
    image: my_image:latest
    volumes:
      - /path/on/host:/path/in/container
```
So referring to my Firefly compose above, I should simply be able to copy over /var/www/html/storage/upload for the main app data, and the database stored in /var/lib/mysql can just be copied over? But then why does my local folder not have any storage/upload folders?

user@vm101:/var/www/html$ ls
index.html
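As I understand it, that empty folder is expected: firefly_iii_upload and firefly_iii_db in the compose file are named volumes, so the data never appears under /var/www/html on the host; Docker keeps it in its own volume store. A sketch of locating and copying it out (compose usually prefixes the volume name with the project name, so check `docker volume ls` first; the destination path is just an example):

```
# Where does Docker keep the named volume on the host?
docker volume inspect firefly_iii_upload --format '{{ .Mountpoint }}'
# typically /var/lib/docker/volumes/<volume-name>/_data

# With the stack stopped, copy the contents out
docker compose down
sudo cp -a /var/lib/docker/volumes/firefly_iii_upload/_data/. /home/user/firefly/upload/
```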
trilobite@lemmy.ml OP to
Self Hosted - Self-hosting your services.@lemmy.ml • Moving docker image data between VMs
1 · 5 months ago
Here is my docker compose file below. I think I used the standard file the developer ships, simply because I was keen to get Firefly going without fully understanding the complexity of docker storage in volumes.
```yaml
# The Firefly III Data Importer will ask you for the Firefly III URL and a "Client ID".
# You can generate the Client ID at http://localhost/profile (after registering)
# The Firefly III URL is: http://app:8080/
#
# Other URL's will give 500 | Server Error
#
services:
  app:
    image: fireflyiii/core:latest
    hostname: app
    container_name: firefly_iii_core
    networks:
      - firefly_iii
    restart: always
    volumes:
      - firefly_iii_upload:/var/www/html/storage/upload
    env_file: .env
    ports:
      - '84:8080'
    depends_on:
      - db
  db:
    image: mariadb:lts
    hostname: db
    container_name: firefly_iii_db
    networks:
      - firefly_iii
    restart: always
    env_file: .db.env
    volumes:
      - firefly_iii_db:/var/lib/mysql
  importer:
    image: fireflyiii/data-importer:latest
    hostname: importer
    restart: always
    container_name: firefly_iii_importer
    networks:
      - firefly_iii
    ports:
      - '81:8080'
    depends_on:
      - app
    env_file: .importer.env
  cron:
    #
    # To make this work, set STATIC_CRON_TOKEN in your .env file or as an environment variable and replace REPLACEME below
    # The STATIC_CRON_TOKEN must be *exactly* 32 characters long
    #
    image: alpine
    container_name: firefly_iii_cron
    restart: always
    command: sh -c "echo \"0 3 * * * wget -qO- http://app:8080/api/v1/cron/XTrhfJh9crQGfGst0OxoU7BCRD9VepYb;echo\" | crontab - && crond -f -L /dev/stdout"
    networks:
      - firefly_iii
volumes:
  firefly_iii_upload:
  firefly_iii_db:
networks:
  firefly_iii:
    driver: bridge
```



I think human laziness, as you say, is the biggest obstacle. My kids' school sent out a Google form the other day to collect family stats. A Google form requires a Google email, and I had de-Googled the family 5-6 years ago. My wife was concerned that not providing the info would single out my son, but I was determined not to fill in that form and pass my data to Google. So I called up the school and sought a second method. The school secretary eventually agreed to record the info over the phone. I lost 30 minutes of my day. Basically, if we don't have the will to make change, change will never come. But then we have no right to complain, do we?