One important part of keeping a server secure is ensuring updates are applied. With modern stacks built from so many interdependent packages, containers, and frameworks, keeping everything current can be a cumbersome process.
Recently, simply checking for a list of available updates returned an error message of the kind we pay very close attention to:
Err:5 https://dl.yarnpkg.com/debian stable InRelease
The following signatures were invalid: EXPKEYSIG 23E7166788B63E1E Yarn Packaging
This included the following warning as well:
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://dl.yarnpkg.com/debian stable InRelease: The following signatures were invalid: EXPKEYSIG 23E7166788B63E1E Yarn Packaging email@example.com
W: Failed to fetch https://dl.yarnpkg.com/debian/dists/stable/InRelease The following signatures were invalid: EXPKEYSIG 23E7166788B63E1E Yarn Packaging firstname.lastname@example.org
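For context, these messages surfaced during a routine package index refresh. One way to confirm that the key in question has actually expired, rather than been tampered with, is to look it up in apt's trusted keys. This is a sketch assuming the key was added via the legacy apt-key mechanism; keyring layout varies by system:

```shell
# Refresh package indexes; this is where the EXPKEYSIG error appears
sudo apt-get update

# List apt's trusted keys and look for ones marked as expired
# (an expired key shows an "[expired: ...]" annotation in the listing)
apt-key list | grep -B 2 -A 2 "expired"
```

An EXPKEYSIG error specifically means the signature was made with a key that has passed its expiration date, which points at key maintenance rather than a compromised repository.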
Any time a signature fails to verify, it is a red flag that should be looked into. The cause of this particular warning and error makes the case for why it is important to investigate first and panic later.
A little investigation revealed that the Yarn repository we use signs each release with a GPG key, and that renewing that key, or extending its expiration date, is apparently not yet automated on their end. Luckily, the key had already been updated upstream; it just required a manual step on our side to add the new key. Once added, everything worked as intended.
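In our case, that manual step was a one-time refresh of the repository's signing key. This is a sketch of the apt-key approach, using the public key URL from Yarn's Debian installation instructions; newer Debian and Ubuntu releases may prefer a dedicated keyring file under /etc/apt/ instead of apt-key:

```shell
# Fetch Yarn's current signing key and add it to apt's trusted keys
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -

# Re-run the index refresh; the EXPKEYSIG error should no longer appear
sudo apt-get update
```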
This raises a question about updates, be they keys, code, packages, or anything else: should they be automated? As with all things relating to security, the answer is always going to be "it depends." There are whole solutions dedicated to building, testing, staging, and then deploying updates, even ones that seem trivial, to ensure that the platform, app, or website using them does not fall over or lose functionality.
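For teams that do want partial automation on Debian-based systems, the unattended-upgrades package is one such solution; it can be scoped so that only security updates are applied automatically while everything else stays manual. A minimal sketch of the relevant configuration (file paths and origin patterns vary by release, so treat this as illustrative):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
// Only install packages from the security origin automatically
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security";
};

// /etc/apt/apt.conf.d/20auto-upgrades
// Refresh package indexes and run the upgrader daily
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```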
In the tech world, there is a running joke about how developers and sysadmins spend a good portion of their time looking for ways to automate their tasks; Bash scripts are a classic example. Or take the now slightly famous story of the build engineer who wrote a script that connected to the coffee machine and started brewing a cup timed to the exact walk from his desk to the machine, among other things.
Internally, we have automated several workflows while keeping others 100% manual, which lets us be a bit more picky about when we apply an update to a given platform.
Yet I feel it's important to be critical when judging whether to "automate all the things". After all, had that server's update process been automated, we would not have immediately caught the error and warning. And while we could push errors and warnings into our monitoring solution, our infrastructure is not yet at a point where we need that level of abstraction.
Still, the power of automation is what allows us to move at the ever-increasing speed our work demands. Clearly there is a time and a place for it; perhaps just be a bit selective.