I just spun up a Nextcloud VM, and I’m trying to decide the best way to manage the data storage.
For context, I’m running it on Proxmox and installed it with this script.
Ideally I’d like to keep most of my storage on my NAS. I’m trying to figure out if I should keep the data directory local and add a NAS NFS share as external storage, or just move the whole data directory to an NFS share.
How are you guys handling your Nextcloud storage?
Like some others, I have separate storage and compute servers.
The data directory is an NFS share on my storage server and I run Nextcloud in docker on my compute server.
I have the NFS share defined as a volume of type nfs in the docker compose, mounted to /var/www/html/data. Nextcloud itself just treats it like a local directory.
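For anyone curious what that looks like, here’s a minimal sketch of an NFS-backed volume in a compose file. The NAS address and export path are placeholders; adjust them for your setup.

```yaml
# Sketch: NFS share as a docker compose volume mounted at the
# Nextcloud data directory. 192.168.1.10 and /mnt/tank/nextcloud
# are placeholder values -- use your NAS IP and export path.
services:
  nextcloud:
    image: nextcloud:stable
    volumes:
      - nc-data:/var/www/html/data

volumes:
  nc-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.10,rw,nfsvers=4
      device: ":/mnt/tank/nextcloud"
```

Docker mounts the export on container start, so Nextcloud never knows the difference.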
I just run Nextcloud on my NAS, which is just my old desktop PC. Nextcloud is just a docker container and it points to where I mount the NAS partition.
Currently my NAS runs TrueNAS and pretty much just serves files. I guess I could run a Nextcloud container on TrueNAS, but I’m thinking I may get better performance with it running on a more robust machine.
What sort of performance are you looking for? I run NCs from Raspberry Pis, no problem.
It’s been a while since I’ve run Nextcloud, but when I ran it on a Pi the interface was slow and the instance itself was unreliable.
I have my storage mounted from my NAS using NFS and this is added to Nextcloud using the External Storage plugin. Works great.
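Roughly, that setup comes down to mounting the share on the Nextcloud host and then registering the mount point via the External Storage app. A sketch, assuming a NAS at 192.168.1.10 exporting /mnt/tank/share and Nextcloud’s web user being www-data (all placeholders):

```shell
# Mount the NFS export on the Nextcloud host
# (or make it permanent with an /etc/fstab entry):
mount -t nfs 192.168.1.10:/mnt/tank/share /mnt/nas

# Enable the External Storage app:
sudo -u www-data php occ app:enable files_external

# Expose the mount inside Nextcloud as a "local" external storage
# named /NAS (no extra authentication needed for a local path):
sudo -u www-data php occ files_external:create /NAS local null::null -c datadir=/mnt/nas
```

The same thing can be done through the admin UI under Settings → External storage.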
Is the storage shared with other software, or are the NAS and Proxmox two different machines? Or why bother to set up NFS at all?
I keep most of my usual data local. But I’ve also added some external storage to Nextcloud: a large hard disk that contains some music and TV recordings, Linux ISOs and rarely used stuff. It’s mounted read-only most of the time and spins down unless I need to access my archived stuff. That’s the main reason why I keep it separate.
Yea, Proxmox on one machine and a separate machine running TrueNAS and serving NFS.
Sure, that’d be a valid use case. I don’t think I can recommend anything here; both should work fine. And you can always run into some unforeseen consequences in a few years, especially once you decide to change something about your setup. But these things are hard to factor in. I often tend to prefer the easier solution over the more complex one, which helps with maintainability. But that approach doesn’t always apply.
Edit: If you want everything stored on the NAS, just do it.
why bother to set up NFS at all?
It’s a NAS…
I meant the other side. If you use 200GB of your Proxmox, you don’t need to transfer it to the NAS. Which is the question here. I don’t do it, because it’s mostly calendars, contacts and like 10GB of data on Nextcloud, which I’m currently working on, or sharing with friends. And the “NAS” is sleeping most of the day. But if OP wants all their data stored on a NAS, they might very well configure Nextcloud to use NFS and do it that way.
My two main boxes are split into storage vs. compute. The NAS box has a minimal CPU and a pile of 3.5" drives for low-cost storage, and the hypervisor has all the CPU and fairly small storage.
Basically the goal in my setup is all the working data is in one place and the handling in another with various snapshots and RAID mixed in to avoid the risk of “oh shit did I just…?” situations.
So anything that could be called bulk data gets offloaded to the NAS directly via mounts, and any cache/working data is held locally. If your lab grows to a notable size over time, you eventually need to consider disk I/O as part of the design, and having the bulk data on another box lets you effectively trade some network load for disk load.
I’ve put the data dir on an NFS mount - didn’t have any problems with it. I’m pretty active with it too - hundreds of gigs, daily updates, running for 5ish years.
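For reference, that kind of setup usually boils down to a single fstab entry on the Nextcloud host, plus making sure the web server user owns the data dir. A sketch with placeholder addresses and paths:

```shell
# Example /etc/fstab line -- 192.168.1.10 and the export path are
# placeholders; hard + nfsvers=4.2 are common choices for data dirs:
# 192.168.1.10:/mnt/tank/nextcloud  /var/www/html/data  nfs  rw,hard,nfsvers=4.2,noatime  0  0

# Nextcloud needs the data directory owned by the web server user:
chown -R www-data:www-data /var/www/html/data
```

With `hard` mounts, Nextcloud just blocks instead of corrupting state if the NAS drops briefly.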