CentOS Linux Server Hardening in 2016: Patching Intro

Probably the single most important thing you can do to secure your server is to keep it up to date.

This goes whether you are working for a large multinational corporation, or a small business, or even if you are a home user.
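
On the CentOS side, the software half of this is cheap to do. Here is a minimal sketch, assuming a CentOS 6 or 7 machine patched with yum; note that the stock CentOS repositories do not ship security errata metadata, so the security-only commands below mostly pay off against RHEL or third-party repositories that provide it.

```
# See what updates are pending (exits with status 100 if any are available)
yum check-update

# With the security plugin (yum-plugin-security on CentOS 6; built in on 7),
# list and apply only security errata where the repo metadata supports it
yum updateinfo list security
yum update --security

# Or simply apply everything that is pending
yum update
```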

This also applies not just to the operating system packages, but to the firmware that lives on the hardware inside your server: things like your BIOS (now replaced in most cases by UEFI), your SATA and SAS controller(s), some backplanes, your hard drives and SSDs, your network card, the storage arrays (both direct-attached external arrays and SAN-attached arrays), and even the switch or router that your server is connected to, or in the case of VMs, the hypervisor those VMs run on.
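
You cannot chase firmware you do not know you have, so the first step is simply taking inventory of what is currently running. The sketch below uses a few standard tools (dmidecode, ethtool, smartmontools, lspci); the device names eth0 and /dev/sda are placeholders for this example, so substitute whatever your server actually has.

```
# BIOS/UEFI vendor, version, and release date
dmidecode -s bios-vendor
dmidecode -s bios-version
dmidecode -s bios-release-date

# NIC driver and firmware version (eth0 is a placeholder; list interfaces with "ip link")
ethtool -i eth0

# Disk model and firmware revision (from the smartmontools package; /dev/sda is a placeholder)
smartctl -i /dev/sda

# RAID/SAS controllers and other PCI devices, with the kernel driver in use
lspci -nn -k | grep -iA3 'raid\|sas'
```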

I must apologize for the length of this post; it is something I have been meaning to write about at length for a while now.

Something I have noticed as a disturbing trend in businesses is that they will buy servers and then never update them after the initial setup because of the old adage “if it ain’t broke, don’t fix it.”

This belief seems to come from a fear that a software update will break whatever applications they are using. In the case of software that comes with vendor support, the vendor has often only tested the software on a specific configuration and will only certify it (that is, provide support) when it is running in that configuration. Companies that are customers of these software vendors need to push them to test and update their software more regularly so that it can be certified on updated configurations. It is no longer good enough to pump out one new release every few years and then sit back collecting support checks. Microsoft has even acknowledged this by making Windows 10 a rolling-release OS.

In the case of firmware, my experience has been that it is often plain ignorance that leaves firmware out of date. This can be attributed to many things, but I suspect it is largely due to a lack of collaboration between hardware vendors and OS vendors. You see, most outfits don’t have the resources to hunt down firmware updates. It is generally left up to the IT guy to seek them out when and if time permits. Of course, as is so often the case, the IT guy doesn’t have the time with everything else they are juggling, or they can’t get the change management approvals because it isn’t seen as a critical update, and so the firmware doesn’t get updated until some issue comes along that requires the vendor to be called out to run hardware diagnostics, at which point the technician points out that it’s not up to date.

This is largely a problem in two types of environments. The first is shared web hosting environments, where bringing down a server for updates takes down hundreds of sites at a time and causes customers who missed the memo to become furious that their site is down. Of course this could be mitigated if the host were to have a secondary server stood up behind a load balancer, but that adds cost and complexity that the shared providers generally don’t want to deal with.

The second environment is at large businesses where the bulk of the network sits behind a firewall and they have only a very small web presence. They have deemed the internal network “safe” simply because the corporate firewall provides no access to hostile entities. They fail to see that the firewall itself is a single point of failure from a security standpoint. Unless they have completely isolated the internal network from the internet (meaning no proxies, no NAT, no jump hosts), someone can find a way in, given enough resources.

That’s not to say that proxies, NAT, and jump hosts can’t be used in a way that will mitigate the risk of having a non-airgapped network, but it takes a carefully laid out plan to ensure that it actually happens.

Another problem I’ve seen, at least in web hosting environments, is that they tend to take physical security and password security for granted. They use default passwords for DRAC and iLO interfaces (when they have those), and generally only update whatever shared root password they have when an employee leaves. Additionally, they never update the firmware on switches, routers, filers, and other such network-connected devices.
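
If the DRAC or iLO exposes a standard IPMI interface, you do not even need the vendor’s web UI to rotate those default passwords. The sketch below uses ipmitool against the local BMC; the user ID of 2 is only an assumption, so go by whatever the list command actually shows on your hardware.

```
# Load the IPMI kernel modules so ipmitool can reach the local BMC
modprobe ipmi_devintf
modprobe ipmi_si

# List the users defined on channel 1 and note the ID of the default admin account
ipmitool user list 1

# Set a new password for that user (ID 2 is an assumption; use the ID from the list above)
ipmitool user set password 2 'SomeLongUniquePassphrase'

# While you are in there, check the BMC firmware revision too
ipmitool mc info
```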

The one exception to this, at least for switch and router updates, is some “managed” hosting companies, where system administrators act as the IT department for their customers. Having worked for one for a time, I can confirm that they did do firmware updates on their switches and routers.

Now, I can understand hosting companies, including the managed hosting companies, not doing the firmware upgrades for servers, because the servers are being used by customers who might have change management policies in place. However, the firmware should be the first thing upgraded after a server comes off the truck, and it should be upgraded again every time the server is assigned to a new customer, as part of the build process; that is the bare minimum I would recommend. Never updating it at all is just asking for trouble.

Ideally, the senior systems engineers at any given company would be given a monthly dump from the asset database listing all of the hardware in use, and they would track down the corresponding firmware and drop it into a custom repository for installation by the OS package manager, so that the firmware could be updated whenever the OS is patched. I know of at least one enterprise that does something along those lines; however, I cannot name them, as I am still working for them.
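
As a rough sketch of what that could look like on CentOS, assume the vendors ship their firmware as RPMs (Dell and HP both do for much of their gear); the path /srv/repos/firmware and the repo name local-firmware are made-up names for this example.

```
# Collect the vendor-supplied firmware RPMs and generate yum metadata for them
yum install -y createrepo
mkdir -p /srv/repos/firmware
cp /path/to/vendor-firmware/*.rpm /srv/repos/firmware/
createrepo /srv/repos/firmware

# Point yum at the new repository
cat > /etc/yum.repos.d/local-firmware.repo <<'EOF'
[local-firmware]
name=Local firmware packages
baseurl=file:///srv/repos/firmware
enabled=1
gpgcheck=0
EOF
# (gpgcheck=0 only because this sketch skips signing; sign and verify in production)

# From here on, firmware packages ride along with normal patching
yum update
```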

Anyway, the point here is that you need to be updating not only your software, but also the firmware on the hardware inside and connected to your server, so the first part of this series is going to cover both software and firmware updates.