And it failed spectacularly.

We only needed a simple form, but we wanted to be fancy, so we used “Nextcloud Forms”.

The Docker image automatically updated the install to Nextcloud 30, but the Forms app requires Nextcloud 29 or lower. No warning whatsoever. It’s an official app, so couldn’t they have waited until it was ready for NC 30 before launching NC 30? The newsletter boasts that “NC Hub 9 is the best thing since sliced bread”, yet I don’t see any difference in either visuals or performance compared to NC Hub 2.

Conclusion: we made our business rely on Nextcloud Forms as a signup form, but the one thing we were actually using it for was disabled who knows how many weeks ago.

  • ShortN0te@lemmy.ml
    4 months ago

    The Docker image automatically updated the install to Nextcloud 30, but the Forms app requires Nextcloud 29 or lower.

    Lol. Do not blame others for your incompetence. If you have automatic updates enabled, then it is your fault when they break things. Just pin the major version with a tag like nextcloud:29 or something. Upgrading major versions automatically in production is a terrible decision.
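
    A minimal sketch of what that pinning could look like in a Compose file (the service layout, port, and volume name here are placeholders, not anything official):

    services:
      nextcloud:
        # Pinned major version: repulling this tag only picks up 29.x point
        # releases, never an automatic jump to Nextcloud 30
        image: nextcloud:29
        ports:
          - "8080:80"
        volumes:
          - nextcloud_data:/var/www/html

    volumes:
      nextcloud_data: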

    • Moonrise2473@feddit.it (OP)
      4 months ago

      They’re releasing a new version every two months or so and dropping them from support rapidly; pinning it to a tag means that in 12 months the install would be exploitable.

      Now, I did go straight to production because this is low-priority stuff, but it would have happened even with a testing stage. I would never have noticed that the Forms app was disabled; the system disabled it without any notification.

      You would expect an official app to support the latest release, no?

      This isn’t an app released by a nobody in their free time; it’s a main feature heavily advertised on their blog. See for yourself:

      https://nextcloud.com/blog/nextcloud-forms-to-keep-your-surveys-private/

      It’s not unreasonable to get pissed when 6 months after that blog post it doesn’t support the latest release anymore.

    • Scrubbles@poptalk.scrubbles.tech
      4 months ago

      Docker images should never self-update - that’s an anti-pattern. They should be static code. The only time I would expect a Docker image to “auto-update” is if I were using the “latest” or “stable” tag and Compose/Kubernetes/I repull the image - but the image should never update itself.

      Yes, OP bit off more than they could chew. Nextcloud, however, is breaking the entire purpose of Docker images by having an auto-updater at all.

      • GBU_28@lemm.ee
        4 months ago

        If you say

        Thing:latest
        

        and then redeploy your Compose file or whatnot, well, you’re getting the latest!
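
        As a rough sketch, assuming a Compose setup (names made up):

        services:
          nextcloud:
            # Floating tag: every `docker compose pull` followed by
            # `docker compose up -d` moves you to whatever "latest"
            # currently points at, including new major versions
            image: nextcloud:latest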

          • Possibly linux@lemmy.zip
            3 months ago

            That is a very bad idea. Use the stable tag instead. Better yet, create an Ansible playbook that updates the containers in bulk and then manually run it when you have time.
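
            Roughly what such a playbook could look like (the host group and stack directories are placeholders):

            - name: Pull and restart Compose stacks in bulk
              hosts: docker_hosts
              vars:
                compose_dirs:
                  - /opt/stacks/nextcloud
                  - /opt/stacks/plex
              tasks:
                - name: Pull updated images
                  ansible.builtin.command:
                    cmd: docker compose pull
                    chdir: "{{ item }}"
                  loop: "{{ compose_dirs }}"

                - name: Recreate containers with the new images
                  ansible.builtin.command:
                    cmd: docker compose up -d
                    chdir: "{{ item }}"
                  loop: "{{ compose_dirs }}"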

            • Scott@sh.itjust.works
              3 months ago

              Naw, I mostly do it for my own personal shit; can’t be fucked to update Plex 3 times a week, and the same goes for other homelab stuff. Everything in production is tagged with GitOps version-managed Kubernetes manifests.

              Edit: should also mention I build quite a bit of the software being deployed
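
              Roughly what one of those pinned manifests looks like (the names and tag are made-up examples, not the actual setup):

              apiVersion: apps/v1
              kind: Deployment
              metadata:
                name: example-app
              spec:
                replicas: 1
                selector:
                  matchLabels:
                    app: example-app
                template:
                  metadata:
                    labels:
                      app: example-app
                  spec:
                    containers:
                      - name: example-app
                        # Exact tag committed to git; nothing upgrades until this
                        # line is changed and the change is merged
                        image: registry.example.com/example-app:1.4.2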