"New release '16.04.1 LTS' available. Run 'do-release-upgrade' to upgrade to it."


(Daniel Hollands) #1

In the wake of the Dirty Cow vulnerability, should I run the do-release-upgrade on the server which runs this forum?

Currently, the server is running Ubuntu 14.04.5 LTS. Hosted inside it is a Docker container which runs the actual forum and all the other related bits. This is pretty good at keeping itself updated via a magic button that I press when told to do so, so I’m not too concerned about the Docker container, but the host server is no doubt vulnerable?

I’ve already run apt-get update && apt-get upgrade, which I understand should be enough to patch Dirty Cow… but in doing so, the server started inviting me to run do-release-upgrade to upgrade to 16.04.1 LTS, and I’m not sure if I should.
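
For what it’s worth, this is roughly what I ran and how I’m checking it took effect (the exact fixed kernel version is listed in the Ubuntu security notice for CVE-2016-5195, so don’t take my word for it):

```bash
# Roughly what I ran, plus how I'm checking it took effect
# (compare versions against the Ubuntu security notice for CVE-2016-5195):
sudo apt-get update && sudo apt-get upgrade

uname -r                              # kernel currently running
dpkg -l 'linux-image-*' | grep ^ii    # kernel packages actually installed

# The patched kernel only takes effect after a reboot:
sudo reboot
```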

I’m concerned that I don’t know enough about the server to fix any issues that the upgrade might cause, but equally concerned that at some point the LTS for 14.04 will expire, and I don’t really want to leave everything until the last minute.

I’d welcome feedback from anyone who actually understands what they’re doing (unlike me).

Thanks.


(Marc Cooper) #2

14.04, being an LTS, won’t be EOLd until 2019-04. So, no rush.

Personally, I like to get onto the next LTS once the 0.1 release is out. That’s the trigger for the message you’re now seeing, I believe.

I upgraded a server from 14.04 to 16.04 last week and had some weirdness related to the systemd/upstart changes and some old scripts I’d left in /etc/init. There was also a post-install script that blocked the upgrade due to bad calls to the hardware clock. Those were pretty tricky to work around. I’m no Linux guru, more intermediate, but there were some nervy moments.
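
If it helps, this is roughly the sort of pre-flight check I’d do next time before running the upgrade (my own checklist, nothing official):

```bash
# Rough pre-flight checks before do-release-upgrade (my own list, nothing official):
ls /etc/init/*.conf          # leftover upstart jobs that may clash with systemd
ls /etc/init.d/              # old init scripts / custom additions
dpkg -l | grep '^rc'         # removed-but-not-purged packages with stale configs
sudo apt-get update && sudo apt-get dist-upgrade   # be fully up to date first

# Then, ideally inside screen or tmux in case the SSH session drops:
sudo do-release-upgrade
```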

If you’ve done very little to the server since it’s been installed, then I doubt you’ll run into anything, though.

I’d be happy to do it if none of the gurus here can. Else, I’m happy to sit on Slack while you do it and offer advice and calming words if the :poop: hits the fan.


(Daniel Hollands) #3

I’ve since realised that I’m running two servers, the one hosting the forum, and the one hosting my (Ghost) maker blog, both of which are making the same offer of an upgrade.

TBH, in both instances I’ve simply followed the instructions provided by the relevant software; I don’t think any of it was particularly out of the ordinary.

I don’t mind doing it myself, I just want to know what I’m letting myself in for before I do. The offer of support via Slack would be kindly received.

Tomorrow, mid-day ish?


#4

Is the database attached to the instance, or is it hosted separately? It might be possible to provision a new instance on an updated Ubuntu, check it works, then switch over to it via load balancing / DNS / however you’ve got it set up.


(Daniel Hollands) #5

It’s all on the same box on both servers.

While I have no doubt that what you’re suggesting would work, I think it’s probably overkill (or, at least, more than I really want to have to worry about).


(Marc Cooper) #6

Tomorrow 12:00ish is fine. Probably obvious, but suggest you do a db backup beforehand.
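
Something along these lines, depending on what each app is sitting on (the database names and paths here are just placeholders):

```bash
# Placeholder names/paths; adjust for whatever each app actually uses.
pg_dump -U discourse discourse | gzip > /tmp/discourse-$(date +%F).sql.gz
# ...or, if it's MySQL underneath:
# mysqldump -u root -p ghost | gzip > /tmp/ghost-$(date +%F).sql.gz

# Copy it off the box before starting the upgrade:
scp /tmp/*-$(date +%F).sql.gz you@another-machine:backups/
```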


(Andy Wootton) #8

The LTS releases are pretty well tested, with the exception of Nvidia drivers, Java and Flash, in my experience :slight_smile:

Talking of Flash, I’m just updating Raspbian and it’s adding Flash to Chromium.


(Peter Oliver) #9

Or at least check that your regularly scheduled backups have been happening and are good. You have regularly scheduled backups, right? :wink:


(Daniel Hollands) #10

I’m not too concerned with the software itself, more the changes introduced between versions which may need fixing.

Hahaha, yes I have.

Up until just now, the forum was using its built-in backup feature, which was dumping a copy of itself into an S3 bucket on a weekly basis, with the past 5 dumps kept in the archive before being destroyed. I’ve just gone in and changed this to run daily, keeping the past 7. Maybe that’s overkill (the forum isn’t exactly bursting at the seams these days), but better safe than sorry, I guess.

The other server is using DigitalOcean’s backup feature, with a snapshot of the server taken weekly and stored somewhere else. I have given some thought to using tarsnap along with an automated backup script which saves a dumped copy of the database along with all the files, but decided that, even though this would be cheaper, it’s a lot more work, both in setting it up and in restoring it should things go wrong.
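
For reference, the sort of script I had in mind was something like this (completely untested, and all the names and paths are made up):

```bash
#!/bin/bash
# Hypothetical nightly job: dump the database, push everything to tarsnap,
# then clean up the local dump. Names and paths are invented.
set -euo pipefail

STAMP=$(date +%F)
DUMP="/var/backups/ghost-$STAMP.sql"

mysqldump ghost > "$DUMP"               # or pg_dump, depending on the app

tarsnap -c -f "ghost-$STAMP" \
    /var/www/ghost \
    "$DUMP"

rm "$DUMP"
```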


(Greg Robson) #11

My approach (as I’m confident, but no genius with Linux) was to roll out a new fresh server with Laravel Forge (using Linode servers), get everything working, and then switch DNS.


(Andy Wootton) #12

I’ve also seen people who are better at Linux than me separate their system and user partitions, install a new version in a different partition, then flip the symbolic links over to the original /user. If I ever get a PC with a system disk big enough, I might work out how to do that :slight_smile: but I’m currently using an 8GB SSD which is getting tight for one copy. I’m facing having to dump Unity and GNOME and use LXDE for Pi compatibility (ish). The pet dinosaur may have to go too.


(Peter Oliver) #13

Is that safer? If you go on holiday and something bad happens, every good copy could be overwritten before you get back.


(Stuart Langridge) #14

Yeah, a sort of spaced out thing may be better; keep some going back into the past, increasingly spaced out as you go further back…?


(Daniel Hollands) #15

Hmm… TBH, I don’t know. My reasoning is that if it’s broken, then it won’t be making backups, because the software powering the backups is the same software powering the site.

I figured a daily backup was better because you’d lose less content should the worst happen… but I’m happy to be told I’m wrong.


(Andy Wootton) #16

The most common need for recovery from backups is human error, and you don’t always realise within a few days. On tapes, we used to do at least a 3-day tape cycle, then weekly, monthly, maybe six-monthly and annual: a kind of exponential scale. If something matters and hasn’t been missed after a certain period, it probably existed for at least that period, so one of the longer-cycle backups will still have a copy.

Also: check that you can recover from your backups occasionally. I’ve worked at two places that couldn’t.


(Daniel Hollands) #17

I’ve had a little bit of experience with AutoMySQLBackup, which gives you a daily backup (kept for 7 days), then a weekly one (which I think it keeps for about a month) and finally a monthly one (which it keeps for however long). I like this staggered approach.

In my instance, I’m just using what Discourse offers, which is a backup every X days, with the last Y backups kept. Maybe I should go back to the default (once per week, with the last 5 kept). What do you think?


(Andy Wootton) #18

Default Job 1 - a backup every day, kept for 8 days.

Then a separate mechanism to back up the backups? I’d have done it with ‘script’ files.
Job 2 - a backup every 7 days, kept for 30 days (it might be, e.g., a rename of the nth daily, which I imagine are just files now).
Job 3 - a backup (or rename) every 28 days, kept for a year.
etc.
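
Sketched out as a script over plain backup files, it might look something like this (purely illustrative; the paths and names are invented, and cron would decide how often each job actually runs):

```bash
#!/bin/bash
# Illustrative rotation over plain backup files: dailies kept 8 days,
# a promoted weekly kept 30 days, a promoted monthly kept a year.
set -euo pipefail
DIR=/var/backups/forum    # wherever the daily dumps land
TODAY=$(date +%F)

# Job 2 (run weekly from cron): promote today's daily to a weekly copy
cp "$DIR/daily-$TODAY.tar.gz" "$DIR/weekly-$TODAY.tar.gz"

# Job 3 (run every 28 days from cron): promote today's daily to a monthly copy
cp "$DIR/daily-$TODAY.tar.gz" "$DIR/monthly-$TODAY.tar.gz"

# Expire each tier
find "$DIR" -name 'daily-*'   -mtime +8   -delete
find "$DIR" -name 'weekly-*'  -mtime +30  -delete
find "$DIR" -name 'monthly-*' -mtime +365 -delete
```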


(Marc Cooper) #19

I live in Postgres world and usually install autopostgresqlbackup from the get-go. I see there’s an automysqlbackup package, which might be similar.
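
Getting it going is pretty much just this (the config path is from memory, so double-check):

```bash
# Debian/Ubuntu package; once installed it runs daily/weekly/monthly dumps
# from cron on its own.
sudo apt-get install autopostgresqlbackup

# Retention, destination directory, and which databases to dump live in the
# config file (from memory, /etc/default/autopostgresqlbackup; double-check).
```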


(Daniel Hollands) #20

I would be happy to use this (if it works in the same way as automysqlbackup) along with a tarsnap backup… but I’d have to do it inside the Docker container (I think), and I simply don’t want to mess about inside there.