  • My unpopular opinion is that I like ads: some are well thought out, funny, and memorable.
    Ads in video games that give you a small boost are also amazing: I don’t have to spend money, I just leave my phone for 30~60 seconds and get a bit of premium currency while supporting the devs.

    The annoying/worrisome part is all the tracking that comes with ads, and the very invasive ones that take up half the screen.
    If we could go back to something like TV ads, where everyone sees the same ads without individual targeting, with current technology to protect against hacking, and with ads placed sensibly so they don’t hide the content, I would add an exception for them in my uBlock and pihole.


    In that case I’d recommend you use immich-go to upload them and still back up only immich instead of your original folder, since if something happens to your immich library you’d have to recreate it manually because immich doesn’t update its DB from the file system.
    There was a discussion on GitHub about worries of the data being compressed in immich, but it was clarified that uploaded files are saved as they are and only copies are modified, so you can safely back up its library.

    I’m not familiar with RAID, but yeah, I’ve also read it’s mostly about uptime.

    I’d also recommend you look at restic and duplicati.
    Both are backup tools; restic is a CLI and duplicati is a service with a UI.
    So if you want to create the cron jobs yourself, go for restic.
    Though if you want to be able to read your backups manually, check how the data is stored: I’m using duplicati and it saves everything in files that need to be read by duplicati, so I’m not sure I could just go and open them easily, unlike the data copied with rsync.
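
    As a rough idea of what the restic route looks like, here’s a minimal sketch (the repo path, password file, and the backed-up folder are placeholders for whatever you use):

    # create the repository once (it asks for / reads a password)
    restic -r /mnt/backup/restic-repo init

    # back up the library; this is the line you'd put in a cron job
    restic -r /mnt/backup/restic-repo --password-file ~/.restic-pass backup /srv/immich/library

    # keep 7 daily and 4 weekly snapshots, prune the rest
    restic -r /mnt/backup/restic-repo --password-file ~/.restic-pass forget --keep-daily 7 --keep-weekly 4 --prune

    And unlike duplicati’s block files, you can browse any snapshot with restic mount or pull files back with restic restore.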


  • pe1uca@lemmy.pe1uca.dev to Fediverse@lemmy.world · *Permanently Deleted* · 20 days ago

    Unless they’ve changed how it works, I can confirm it.
    Some months ago I was testing Lemmy on my local instance and used the same URL to create a new post; it never showed up in the UI because Lemmy treated it as a crosspost and hid it under the older one.
    At that time it only counted as a crosspost if the URL was the same; I’m not so sure about the title, but the body could be different.

    The thing to do would be to verify whether this grouping is done by the UI or by the server, which might explain why some UIs show duplicated posts.


  • For local backups I use this command

    $ rsync --update -ahr --no-i-r --info=progress2 /source /dest
    

    You could first compress them, but since I have the space for the important stuff, this is the only command I need.

    Recently I also made a migration similar to yours.

    I’ve read jellyfin is hard to migrate, so I just reinstalled it and manually recreated the libraries; I didn’t mind losing the watch history and other stuff.
    IIRC there’s a post or github repo with a script to try to migrate jellyfin.

    For immich you just have to copy the database files with the same command above and that’s it (of course with the stack down, you don’t want to copy DB files while the database is running).
    For the library I already had it on an external drive with a symlink, so I just had to mount it on the new machine and create a similar symlink.

    I don’t run any *arr so I don’t know how they’d be handled.
    But I did do the migration of syncthing and duplicati.
    For syncthing I just had to find the config path and copy it with the same command above.
    (You might need to run chown on the new machine.)
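
    In case it helps, roughly what that looked like for syncthing (a sketch; the config path and user are assumptions, on Linux it’s usually ~/.config/syncthing or ~/.local/state/syncthing depending on the version):

    # stop syncthing on both machines first (if you run it as a user service)
    systemctl --user stop syncthing

    # copy the config dir over with the same rsync command
    rsync --update -ahr --no-i-r --info=progress2 ~/.config/syncthing/ newbox:/home/youruser/.config/syncthing/

    # on the new machine, make sure the files belong to the user that runs syncthing
    chown -R youruser:youruser /home/youruser/.config/syncthing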

    For duplicati it was easier since it provides a way to export and import the configurations.

    So depending on how the *arr programs handle their files it can be as easy as finding their root directory and rsyncing it.
    Maybe this could also be done for jellyfin.
    Of course, be sure to look for all the config folders they need; some programs might split them between their working directory, ~/.config, ~/.local, /etc, or any other custom path.

    EDIT: for jellyfin data, evaluate how hard it would be to find again. It might be difficult, but if it’s possible, it doesn’t require the same level of backups as your immich data, because immich normally holds data you created that can’t be found anywhere else.

    Most of my series live only on the main jellyfin drive.
    But immich is backed up with 3-2-1: 3 copies of the data (I actually have 4), on at least 2 types of media (HDD and SSD), with 1 being offsite (rclone encrypted into an e2 drive).
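
    For the offsite copy, a minimal sketch of the rclone side (the remote names are made up; the crypt remote wraps the e2 S3 remote you define with rclone config):

    # one time: define "e2" (S3 / iDrive e2) and "e2crypt" (a crypt remote on top of e2:bucket)
    rclone config

    # push the local backup to the encrypted remote; only encrypted blobs land on the cloud side
    rclone sync /mnt/backup/immich e2crypt:immich --progress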


  • I can share a bit of my journey and setup so maybe you can make a better decision.

    About point 1:

    On Vultr, with the second smallest shared CPU (1 vCPU, 2GB RAM), several of my services have been running fine for years now:
    invidious, squid proxy, TODO app (vikunja), bookmarks (grimoire), key-value storage (kinto), git forge (forgejo) with CI/CD (forgejo actions), freshrss, archival (archive-box), GPS tracker (traccar), notes (trilium), authentication (authelia), monitoring (munin).
    The thing is, since I’m the only one using them, usually only one or two services receive considerable usage at a time, and I’m kind of patient, so if something takes 1 minute instead of 10 seconds I’m fine with it. That rarely happens anyway, maybe only with forgejo actions or the archival.

    On my main PC I was hosting some stuff too: immich, jellyfin, syncthing, and duplicati.

    I just recently bought this minipc https://aoostar.com/products/aoostar-r7-2-bay-nas-amd-ryzen-7-5700u-mini-pc8c-16t-up-to-4-3ghz-with-w11-pro-ddr4-16gb-ram-512gb-nvme-ssd
    (Although I bought it from Amazon so I didn’t have to handle the import.)

    I haven’t moved anything off of the VPS yet, but given the VPS’s specs I think this will be enough for a lot of the stuff I have there.
    What I’ve moved so far are the services from my main PC.
    Transcoding for jellyfin is not an issue since I already preprocessed my library into the formats my devices accept, so only immich could cause issues when uploading my photos.

    Right now the VPS is around 0.3 CPU, 1.1/1.92GB RAM, 2.26/4.8GB swap.
    The minipc is around 2.0CPU (most likely because duplicati is running right now), 3/16GB RAM, no swap.

    There are several options for minipcs, including ones with the potential to upgrade RAM and storage like the one I bought.
    Here’s a spreadsheet I found with very good data on different options so you can easily compare them and find something that matches your needs https://docs.google.com/spreadsheets/d/1SWqLJ6tGmYHzqGaa4RZs54iw7C1uLcTU_rLTRHTOzaA/edit
    (Here’s the original post where I found it https://www.reddit.com/r/MiniPCs/comments/1afzkt5/2024_general_mini_pc_guide_usa/ )

    For storage I don’t have any comments since I’m still using a 512GB NVMe and a 1TB external HDD; the minipc is basically my starter setup for a NAS, which I plan to fill with drives when I find any on sale (I even bought it without RAM and storage since I had spares).

    But I do have some huge files around; they are in https://www.idrive.com/s3-storage-e2/
    Using rclone I can easily have it mounted like any other drive, and there’s no need to worry about being on the cloud since rclone has an encryption option.
    Of course this is a temporary solution, since it’s cheaper to buy a drive for the long term (I also use it for my backups tho).
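
    Mounting it is basically a one-liner (sketch; "e2crypt" is just whatever you named the crypt remote):

    # mount the encrypted remote like a regular drive
    rclone mount e2crypt: /mnt/e2 --vfs-cache-mode writes --daemon

    # and to unmount it
    fusermount -u /mnt/e2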

    About point 2:

    If you go the route of using only Linux, sshfs is very easy to use: I can easily connect from the Files app or mount it via fstab. And for permissions you can easily manage everything with a dedicated user and ACLs.
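
    For reference, a minimal sketch of that setup (hostnames, paths, and users are placeholders):

    # ad-hoc mount over ssh
    sshfs nasuser@minipc:/srv/data /mnt/nas -o reconnect,idmap=user

    # or the equivalent fstab entry so it mounts on demand
    # nasuser@minipc:/srv/data  /mnt/nas  fuse.sshfs  noauto,x-systemd.automount,reconnect,idmap=user  0  0

    # give another user read-only access to one folder via ACLs
    setfacl -R -m u:guest:rX /srv/data/shared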

    If you need to access it from Windows, I think your best bet will be Samba; there are several services for this. I was using OpenMediaVault since it was the only one compatible with ARM when I was on a Raspberry Pi, but when you install it, it takes over all your network interfaces and disables Wi-Fi, so you have to connect via ethernet to re-enable it.

    About point 3:

    On the VPS I also had pihole and searxng, but I had to move those to a separate instance, since whenever something was eating up the resources, browsing the internet was a pain hehe.

    My most critical services will probably remain on the VPS (pihole, searxng, authelia, squid proxy, GPS tracker), since there I don’t have to worry about my power or internet going down, or about something preventing me from fixing stuff, or about my minipc being so overloaded with tasks that browsing the internet slows to a crawl (especially since I also run stuff like whisper.cpp and llama.cpp, which basically make the CPU unusable for a bit :P ).

    About point 4:

    To access everything I use tailscale; I was able to close all my ports while still easily accessing everything on my main or mini PC without changing anything in my router.

    If you need to give access to someone, I’d advise sharing your pihole node and the machine running the service.
    Then in their account a split DNS can be set up so that only your domains are resolved by your pihole; everything else can still go through their own DNS.

    If this is not possible and you need your service open to the internet, I’d suggest having a VPS with a reverse proxy running tailscale, so it can communicate with your service when it receives the requests while still not opening your LAN to the internet.
    Another option is tailscale funnel, but I think you’re bound to the domain they give you. I haven’t tried it, so you’d need to confirm.
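
    As a sketch of that VPS idea, assuming caddy as the reverse proxy (the domain and the tailnet IP of the home machine are placeholders):

    # on the VPS, with tailscale already up, the home service is reachable over the tailnet
    caddy reverse-proxy --from service.example.com --to http://100.101.102.103:8080

    Caddy grabs the certificate for the public domain by itself, and the only thing exposed to the internet is the VPS.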




  • Found that out myself trying to do the same thing haha. I did the same process as OP: gparted took 2.5 hours on my 1TB HDD to create a new partition, and then copying the data from the old to the new partition was painfully slow, so I copied it to another drive and from there into the new partition.
    Afterwards I deleted the old partition and grew the new one, which took a bit more than 1.5 hours.

    If I had had the space, I would have copied all the data off the drive, formatted it, and copied everything back in. It would have been quicker.



  • “it just seems to redirect to an otherwise Internet accessible page.”

    I’m using authelia with caddy, but I’m guessing it’s similar: you need to configure the reverse proxy to expect the token the authentication service adds to each request, and to redirect to the sign-in page if it’s missing. This way all requests to the site are protected (of course, you’ll need to be aware of APIs or similar non-UI requests).
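
    In caddy that boils down to putting a forward_auth block in front of the site. Rough sketch of mine (hostnames, ports, and the authelia container name are placeholders, and the exact verify endpoint depends on your Authelia version):

    # /etc/caddy/Caddyfile (excerpt)
    app.example.com {
        # every request must carry a valid Authelia session, otherwise it gets redirected to the portal
        forward_auth authelia:9091 {
            uri /api/verify?rd=https://auth.example.com
            copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
        }
        reverse_proxy app:8080
    }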

    “I have to make an Internet accessible subdomain.”

    That’s true, but you don’t have to expose the actual services you’re running. An easy solution would be to name it something else, especially if the people using it trust you.
    Another would be to create a wildcard certificate; this way only you and those you share your site with will know the actual subdomain being used.

    My advice comes from my personal setup, which is still all internal while being remotely accessible via tailscale, so: do you really need to make your site public to the internet?
    It’s only worth having it public if you need to share it with multiple people; for just you or a few people it’s not worth the hassle.






  • I’d say it depends on your threat model; it could be a valid option.
    Still, how are you going to manage them? A password manager? You’d still be asking the same question: should I keep my accounts in a single password manager?

    Maybe what you can do is use aliases; that way you never expose the actual account used to see your inbox, only the addresses used to send you emails.
    But I tried this, and some service providers don’t handle custom email domains well (especially government and banking, which are slow to adopt new technology).


  • I sort of did this for some movies I had, to lessen the burden of on-the-fly encoding, since I already know which formats my devices support.
    Just something to keep in mind: my devices only support HD, so I had a lot of wiggle room on the quality.

    Here’s the command jellyfin was running, which helped me start figuring out what I needed.

    /usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -f matroska,webm -autorotate 0 -canvas_size 1920x1080 -i file:"/mnt/peliculas/Harry-Potter/3.hp.mkv" -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:0 -codec:v:0 libx264 -preset veryfast -crf 23 -maxrate 5605745 -bufsize 11211490 -x264opts:0 subme=0:me_range=4:rc_lookahead=10:me=dia:no_chroma_me:8x8dct=0:partitions=none -force_key_frames:0 "expr:gte(t,0+n_forced*3)" -sc_threshold:v:0 0 -filter_complex "[0:3]scale=s=1920x1080:flags=fast_bilinear[sub];[0:0]setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale=trunc(min(max(iw\,ih*a)\,min(1920\,1080*a))/2)*2:trunc(min(max(iw/a\,ih)\,min(1920/a\,1080))/2)*2,format=yuv420p[main];[main][sub]overlay=eof_action=endall:shortest=1:repeatlast=0" -start_at_zero -codec:a:0 libfdk_aac -ac 2 -ab 384000 -af "volume=2" -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693%d.ts" -hls_playlist_type vod -hls_list_size 0 -y "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693.m3u8"
    

    From there I played around with several options and ended up with this command (This has several map options since I was actually combining several files into one)

    ffmpeg -y -threads 4 \
    -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda \
    -i './Harry Potter/3.hp.mkv' \
    -map 0:v:0 -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 \
    -map 0:a:0 -map 0:a:1 \
    -fps_mode passthrough -f mp4 ./hp-output/3.hp.mix.mp4
    

    If you want to know other values for each option you can run ffmpeg -h encoder=h264_nvenc.

    I don’t have at hand all the sources where I learnt what each option does, but here’s what to keep in mind, to the best of my memory.
    All of these comments are from the point of view of h264 with nvenc.
    I assume you know how the video and stream number selectors work in ffmpeg.

    • Using GPU hardware acceleration produces a lower quality image at the same sizes/presets; it just helps take less time to process.
    • You need to modify the -preset, -profile and -level options to your quality and time processing needs.
    • -vf was to change the data format my original files had to a more common one.
    • The combination of -rc and -cq options is what controls the variable rate (you have to set -b:v to zero, otherwise this one is used as a constant bitrate)

    Try different combinations with small chunks of your files.
    IIRC the options you need to use are -ss, -t and/or -to to just process a chunk of the file and not have to wait for hours processing a full movie.
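
    For example, this takes the command above but only encodes 60 seconds starting at the 10 minute mark (audio dropped, since it’s just a quality/size test):

    ffmpeg -y -ss 00:10:00 -t 60 \
    -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda \
    -i './Harry Potter/3.hp.mkv' \
    -map 0:v:0 -c:v h264_nvenc -preset:v p7 -vf "hwupload_cuda,scale_cuda=format=yuv420p" \
    -rc:v vbr -cq:v 26 -b:v 0 \
    -an ./test-chunk.mp4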


    “Assuming that I have the hardware necessary to do the initial encoding, and my server will be powerful enough for transcoding in that format”

    There’s no need to have a GPU or a big CPU to run these commands; the only cost is time.
    Since we’re talking about preprocessing the library, you don’t need real-time encoding: your hardware can take one or two hours to process a 30-minute video and you’ll still have the result, so you only need patience.

    You can see jellyfin uses -preset veryfast while I use -preset p7, which the documentation marks as slowest (best quality).
    This is because jellyfin only processes the video while you’re watching it, so it needs to produce frames faster than your devices display them.
    But my command doesn’t have that constraint: I just run it, and whenever it finishes I’ll have the files ready for when I want to watch them, without the need for an additional transcode.


  • I think you have two options:

    1. Use a reverse proxy so you can even have two different domains, one for each, instead of a path. The configuration for this depends on your reverse proxy (see the sketch after this list).
    2. You can change the config of your pihole in /etc/lighttpd/conf-available/15-pihole-admin.conf. In there you can see the base URL being used and the other redirects it has. Just remember to check this file after each update, since it warns you it can be overwritten by that process.
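
    For option 1, a rough sketch of the reverse proxy config (I use caddy; the hostnames are placeholders and it assumes pihole’s web UI was moved to port 8081 so it doesn’t clash with the proxy):

    # /etc/caddy/Caddyfile (excerpt)
    pihole.home.example {
        reverse_proxy localhost:8081
    }
    other-service.home.example {
        reverse_proxy localhost:8080
    }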

  • Just did the upgrade. I only went and copied the docker folder for the volume:

    # docker inspect immich_pgdata | jq -r ".[0].Mountpoint"
    /var/lib/docker/volumes/immich_pgdata/_data
    

    Inside that folder were all the DB files, so I just copied that into the new folder I created for ./postgres
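
    In command form it was basically this (assuming the new compose file expects a ./postgres bind mount next to it):

    # with the immich stack stopped
    docker compose down

    # copy the old named volume's contents into the new bind-mount folder, keeping permissions
    mkdir -p ./postgres
    sudo cp -a /var/lib/docker/volumes/immich_pgdata/_data/. ./postgres/

    docker compose up -d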

    I thought there would be issues with the file permissions, but no, everything went smoothly and I can’t see any data loss.
    (This was even a migration from 1.94 to 1.102, so I also did the pgvecto-rs upgrade.)


  • I’ve been using https://github.com/mbnuqw/sidebery
    It also suggests a way to hide the top bar; it can be dynamic or permanent depending on how you configure your userChrome.css.

    It provides a way to set up snapshots, although I haven’t tested the restore functionality hehe.
    I’m not sure how you can export them and back them up.

    The one I know works for restoring your tabs is https://github.com/sienori/Tab-Session-Manager
    But if you use sidebery for your trees, panels, and groups, this one won’t restore them; you’ll get back one long list of tabs in a single panel, with no groups or trees. I already had to restore a session with it because I changed computers. It has a way to back up your sessions to the cloud.


  • You can use GPSLogger to record it locally or send it to whatever service you want.
    If you’re into selfhosting you can use traccar, which is focused on fleet management, so it’s easy to get reports on the trips made.
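
    If you’re curious how they talk to each other: GPSLogger can send to a custom URL, and traccar listens for the OsmAnd protocol on port 5055, so a position report is just an HTTP request like this (server, device id, and coordinates are made up):

    curl "http://traccar.example.com:5055/?id=my-phone&lat=19.4326&lon=-99.1332&speed=0&timestamp=1718000000"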

    As for your second point, I wouldn’t trust the GPS for this: it can say you weren’t moving because it only checks every so often to record the data, or it might say you were actually speeding because the two points it used to calculate the speed weren’t the actual points you were at at those times.
    A dashcam would be better suited for this. I’m not sure how they work, but most probably they can be connected to read data from your car, which would be more trustworthy to whoever has to decide whether you were actually speeding.