Fair. I will try NFS if anything else fails. Thanks :)
I strongly disagree that this would not be beneficial. Could you expand?
do you need them all at the same time?
I need to access all the files conveniently and transparently, depending on what I need at work at any given moment.
are they mostly the same size and type?
Hard no.
What would be the performance implications? Isn’t virtiofs theoretically faster?
Every other solution looks more elegant on paper but has lots of pitfalls.
A very sane and fair comment.
Why not NFS? Regardless, wouldn’t it be slower anyway compared to virtiofs?
I think NFS would be a better choice if I decide to go that route. Isn’t Samba slower and older than NFS?
strace can be very verbose and requires a lot of knowledge that I doubt I can share through comments back and forth.
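That said, as a bare-minimum starting point, something like this would log what the binary does at the syscall level (the PID is a placeholder for the cloud binary’s process):

```sh
# Attach to the running cloud binary (1234 is a placeholder PID),
# follow forked children, and log file- and network-related
# syscalls to a file for later inspection.
sudo strace -f -o /tmp/cloud.trace -e trace=file,network -p 1234
```

The trace would at least show whether it speaks plain HTTP(S) or something WebDAV-like underneath, but interpreting it is the hard part.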
No worries. Thanks a lot nonetheless.
is creating an intermediary, like others have commented in this post, an option?
What do you mean by intermediary? Do you mean syncing the files with the VM and then sharing the synced copy with the host? That wouldn’t work since my drive is smaller than the cloud drive and I need all the files on demand.
It does not, hence my question.
do you have to provide a username/password or token when you try to access the drive now?
I do, but it’s through the proprietary GUI of the binary, which has no CLI or API I can use.
I just checked and it is mounted as a FUSE drive.
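For the record, this is roughly how I checked (the path is a placeholder for the actual mount point):

```sh
# Show the filesystem backing the mount point; FUSE mounts report
# an FSTYPE like "fuse" or "fuse.<driver>".
findmnt -T /path/to/cloud-drive

# Alternatively, list all mounts and filter:
mount | grep -i fuse
```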
do you know how to use strace?
A very confident NO :)
Then I will try NFS and get back to you. Thanks :)
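If I understand the suggestion correctly, the setup would be roughly this (the paths, fsid, and libvirt default subnet are assumptions on my part):

```sh
# /etc/exports on the guest: export the directory holding the FUSE
# mount. "crossmnt" lets the server descend into submounts, and an
# explicit "fsid" is needed because FUSE filesystems lack a stable
# device ID.
/mnt/cloud  192.168.122.1(ro,crossmnt,fsid=1001,no_subtree_check)

# Reload the exports on the guest, then mount on the host:
#   sudo exportfs -ra
#   sudo mount -t nfs 192.168.122.10:/mnt/cloud /mnt/cloud
```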
The cloud binary is proprietary and it’s not supported by rclone, unless I find out how the binary works, but I doubt it uses something standardized like WebDAV underneath.
I can try, but I might end up in the same situation as with virtiofs: the cloud drive will get unmounted and I will end up with an empty folder when I try to access it from the host.
The cloud drive is mounted on the guest, yes, but once I mount it with virtiofs in order to share it with the host, it gets unmounted and I end up with an empty folder. A bind mount doesn’t work either.
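To be precise, what I tried was a plain bind; from what I have read, a recursive bind with shared propagation might behave differently, something like this (paths are illustrative):

```sh
# Recursively bind the directory, including any FUSE submounts,
# and mark the result shared so mounts created afterwards by the
# cloud binary propagate into the bound copy as well.
sudo mount --rbind /mnt/cloud /srv/cloud-export
sudo mount --make-rshared /srv/cloud-export
```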
That would be impossible since the cloud drive is 2TB and my physical storage is under 500GB.
I have no idea how it is mounted (how can I find out?) because the binary is proprietary. This is why it is contained inside a virtual machine.
The cloud drive is mounted inside a virtual machine for security purposes, as the binary is proprietary and I do not want to mount it on the host (bwrap and the like introduce a whole lot of problems: the drive doesn’t sync anymore and I have to re-login each time). I do not use the virtual machine per se; I just start it and leave it be.
Because the executable is proprietary (and a bit legacy, I would say), full of telemetry, and undocumented, and the cloud service has no CLI, WebDAV, or rclone support. I do not want to run something like that on my personal computer, and I do not know how to use bwrap properly and don’t want to risk it. I have since switched over to a podman container, but I encounter the same problem: the folder is empty on the host (see my post here: https://lemmy.ml/post/22215540).
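For reference, the container setup looks roughly like this (the image name and paths are placeholders); I suspect the mount-propagation flag is the part that matters:

```sh
# Give the container FUSE access and use recursive-shared
# propagation so a mount created inside the container can become
# visible on the host (the host-side directory may itself need to
# be a shared mount, e.g. via "mount --make-rshared").
podman run -d --name cloud-client \
  --device /dev/fuse --cap-add SYS_ADMIN \
  -v /mnt/cloud:/mnt/cloud:rshared \
  cloud-client-image
```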