I’m talking not only about trusting the distribution chain, but about the situation where some services don’t rebuild their images on updated bases unless they have a new release of their own.

So, for example, if a particular service’s latest tag is a year old, they keep distributing it with a year-old Alpine base…
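
One quick way to spot this is to compare when an image was built against the current upstream base. A hedged sketch (`someservice` is a placeholder image name, not from the thread):

```shell
# When was this image (and therefore its baked-in base layers) built?
docker image inspect someservice:latest --format '{{ .Created }}'

# Pull the current upstream base and compare layer digests: if the
# service image's lowest layers don't match, its base is stale.
docker pull alpine:latest
docker image inspect alpine:latest --format '{{ .RootFS.Layers }}'
docker image inspect someservice:latest --format '{{ .RootFS.Layers }}'
```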

  • B0rax@feddit.org · 23 days ago

    No. I only have a limited amount of time for maintaining my home infrastructure. I choose my battles.

    • jimmy90@lemmy.world · 22 days ago

      I do look out for new images that could be a drop-in replacement.

      The new distroless container builds are very interesting.

      • femtek@lemmy.blahaj.zone · 15 hours ago

        Yeah, I saw that another person forked NPM and used that for a while before moving on to something else. At work this is handled by someone else, and I don’t do it at home, but I did learn how so I could get an understanding of it.

  • Not a newt@piefed.ca · 23 days ago

    Rebuild: no. If the software itself is unmaintained, it gets replaced.

    Patch: yes. If the base image contains vulnerabilities that can be fixed with a package update, then that gets applied. The patch size and side effects can be minimized by using Copacetic, which can ingest Trivy scan results to identify the vulnerabilities.

    There are also repos like Chainguard and Docker Hardened Images, which are handy for getting up-to-date images of commonly used tools.
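
    The Trivy-to-Copacetic flow mentioned above can be sketched roughly like this (`myapp` is a placeholder image name; flags are from the tools’ standard CLIs, but check your installed versions):

    ```shell
    # Scan the image with Trivy and write a JSON report of its CVEs.
    trivy image --format json --output report.json myapp:latest

    # Feed the report to Copacetic, which applies package updates to the
    # affected OS layers and writes out a patched image tag.
    copa patch --image myapp:latest --report report.json --tag myapp:latest-patched
    ```

    The nice part of this approach is that only the OS packages change; the application layers are untouched, so the patched image stays a drop-in replacement.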

  • hperrin@lemmy.ca · 23 days ago

    I don’t think a year old base is bad. Unless there’s an absolutely devastating CVE in something like the network stack or a particular shared library, any vulnerabilities in it will probably be just privilege escalations that wouldn’t have any effect unless you were allowing people shell access to the container. Obviously, the application itself can have a vulnerability, but that would be the case regardless of base image.

  • HotDog7@feddit.online · 23 days ago

    I don’t know enough about code to verify things myself. And I assume this applies for a lot of us here. So I just pray that nothing’s fucked in the distribution chain.

    • fizzle@quokk.au · 23 days ago

      I’m also in this category, but OP is talking about something else.

      Like if you use container-x, which has an Alpine base: if it hasn’t released a new version in several years, then you’re running a several-year-old Alpine distro.

      I didn’t really realise this was a thing.

      • HotDog7@feddit.online · 23 days ago

        Ah, I have no idea what that is. I thought OP meant building stuff directly from Github (e.g. Ungoogled Chromium). Thanks for the clarification! :)

        • fizzle@quokk.au · 22 days ago

          Containers have layers. So if you create an instance of, say, a Syncthing container, whoever built that container started from some other image. Alpine Linux is a very popular base layer, just used as an example in this discussion.

          When you download an image, all the layers underlying the application you actually wanted will only be as fresh as the last time the maintainer built that image. So a bug in the Alpine base might have been fixed upstream in Alpine, but wouldn’t be pushed through to whatever you downloaded.
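
          If you have the Dockerfile, the fix is simply rebuilding. A minimal sketch (the tag name is a placeholder):

          ```shell
          # --pull re-fetches the FROM base instead of reusing the cached
          # copy, and --no-cache forces every layer to be rebuilt, so the
          # result picks up any base-image fixes published since the
          # maintainer's last build.
          docker build --pull --no-cache -t syncthing:rebuilt .
          ```

          This only helps if you rebuild yourself, of course; pulling the maintainer’s published image still gives you whatever base they last baked in.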