Duke’s Website has Gone Docker


I was excited to see Tony Hirst retweet the news that Duke University’s website is being run in a Docker environment, and that it can even be served through Amazon Web Services. Chris Collins, senior Linux admin at Duke, wrote about “Using Docker and AWS to Survive an Outage,” the one they suffered as a result of DDoS attacks on their main site back in January. I love the way he tells the story:

While folks were bouncing ideas around on how to bring the site up again while still struggling with the outage, I mentioned that I could pretty quickly migrate the site over to Amazon Web Services and run it in Docker containers there. The higher-ups gave me the go-ahead and a credit card (very important, heh) and told me to get it setup.  The idea was to have it there so we could fail over to the cloud if we were unable to resolve the outage in a reasonable time.

TL;DR – I did, it was easy, and we failed over all external traffic to the cloud. Details below.

He goes on to describe his process in some detail, and it struck me how quickly IT infrastructure is shifting, and it also made me wonder how many IT organizations in higher ed are truly rethinking their architecture along these lines. It’s one thing to push your services to a third-party vendor that hosts all your stuff; it’s altogether different to bring in a team that understands and is prepared to move a university’s infrastructure into a container-based model that can be hosted in the cloud. Not to mention what this might soon mean for personal options, and a robust menu of teaching and learning applications heretofore unimaginable. This would make the LAMP environment options Domain of One’s Own offers look like Chucky from Child’s Play 🙂
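
To make the failover concrete, here is a minimal sketch of the pattern Chris describes: bake the site into an image, run it on a cloud instance, and repoint DNS when needed. The image name and flags below are hypothetical stand-ins, not Duke’s actual setup:

    # on the AWS instance (Docker already installed)
    # "duke/www" is a hypothetical image name for the site
    docker pull duke/www:latest

    # run the site, publishing container port 80 on the host,
    # restarting automatically if the container dies
    docker run -d --name www -p 80:80 --restart=always duke/www:latest

    # failing over is then a DNS change: point the public record
    # at the cloud host (done in the DNS console, not shown here)

The appeal is that the same image runs identically on campus hardware or a rented cloud VM, which is what makes this kind of emergency migration so quick.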

I know Tim and I are looking forward to thinking about what such a container-based architecture might mean for an educational hosting environment that is simple, personalized, and expansive. Tim turned me on to Tutum recently, which starts to get at the idea of a personalized cloud across various providers—something Tim Klapdor gets at brilliantly:

MYOS is very much the model that Jon Udell laid out as “hosted life bits” – a number of interconnected services that provide specific functionality, access and affordances across a variety of contexts. Each fits together in a way that allows data to be controlled, managed, connected, shared, published and syndicated. The idea isn’t new, Jon wrote about life bits in 2007, but I think the technology has finally caught up to the idea and it’s now possible to make this a reality in a very practical way.

His post on the topic deserves a close reading; it’s the best conceptual mapping of what we might build that I have read yet. I wanna help realize this vision, and I guess I am writing about Duke University’s move to Docker because it suggests this is the route Higher Ed IT will be moving toward anyway (sooner or later—which could be a long later for some 🙂 ). Seems we might have an opportunity to inform what it might look like for teaching and learning from the ground floor. It’s not a given it will be better; that will depend on us imagining what exactly a teaching and learning infrastructure might look like. Tim Klapdor has provided one of the most compelling visions to date, building on Jon Udell’s thinking, but that’s just the beginning.


2 Responses to Duke’s Website has Gone Docker

  1. Tony Hirst says:

    We’re currently working on a new OU course on data wrangling that requires giving OUr distance ed students s/w bundled up in a VM (MongoDB & PostgreSQL accessed via IPython/Jupyter notebooks). The original plan was a headless monolithic VM (the notebooks are exposed via http, they can talk to the DBMS purely within the VM), but we’ve decided to go instead with a VM in which each app is in its own container, and the containers wired together via Docker Compose (see the sketch after this comment).

    Though current thinking is a direct evolution of the original monolithic VM idea – we’ll distribute a VM with images inside that are then fired up as containers from Vagrant calling on a docker-compose script – I’m hopeful we may also be able to come up with a way for students to launch the machine using Kitematic (in its raw form, this might just be to call the docker-compose script via the docker CLI, though I’m hoping that Kitematic gets a more graphical UI for compositions at some point).

    I’m also hopeful that we might be able to find a more atomic way of working with Kitematic (eg students just launch a notebook container on its own when they don’t need the DBs); and I’d like for us to be able to provide students on tablets, netbooks or locked-down public access machines the ability to easily fire up containers themselves on a third party host.

    (I’ve been quite taken by Tutum too for running atomic apps – eg http://blog.ouseful.info/2015/06/24/running-rstudio-on-digital-ocean-aws-etc-using-tutum-and-docker-containers/ – it’s also got me out of a hole doing some training once where participants were supposed to have OpenRefine running on their machines but didn’t (I’d preemptively launched a couple of cloud instances, just in case…) though I’ve yet to try container compositions there.)

    What I’d really like to see is Kitematic support plugins for third party remote hosting launches – so e.g. you could create/configure a panel for AWS, or Digital Ocean, and then drag an image from the Kitematic palette and drop it into an AWS – or Reclaim?! – host… (I could imagine a tutum style middle layer that might do the same? So e.g. tutum as part of Kitematic providing access to third party hosts.)

    Exciting times…:-)
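
To picture the arrangement Tony describes, here is a minimal docker-compose sketch of the three-container wiring (one notebook container linked to two database containers). The image tags, port, and password are illustrative assumptions, not the OU course’s actual configuration:

    # docker-compose.yml -- hypothetical sketch of the setup described above
    notebook:
      image: jupyter/notebook     # IPython/Jupyter, exposed over http
      ports:
        - "8888:8888"             # only the notebook is published to the host
      links:                      # databases reachable by service name
        - mongodb
        - postgres
    mongodb:
      image: mongo:3.0
    postgres:
      image: postgres:9.4
      environment:
        POSTGRES_PASSWORD: example   # placeholder credential

With that file in place, running docker-compose up -d starts all three containers, with the databases kept private to the composition; the “atomic” case Tony mentions is just the notebook service on its own, e.g. docker run -d -p 8888:8888 jupyter/notebook.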

    • Brian Christner says:

      Hi Tony, I’m also hooked on using Tutum and Kitematic. Some sort of mashup between the two would be a huge help with some of our projects. Congrats on migrating to a Docker environment.
