Containers 101
Hello and welcome.
My name is Matt McSpirit,
technical evangelist
for our data center technologies
here at Microsoft,
and today we're going to be talking
all about Containers.
I'm joined by Taylor Brown
from Microsoft
and Chad Metcalf from Docker.
- Welcome guys.
- Thank you.
And as I said, today, it's all about Containers.
So guys, the first obvious question for people
is what are Containers all about?
What is this thing called Containers?
Yeah, I mean this has been a question
that we're getting a lot right now
as people are starting to—
especially the Microsoft faithful,
you folks are starting to look at.
What is this thing of a Container?
They're hearing the buzz about it.
I think it's really useful to actually think about it
in two different ways.
You've got to break it down
into the operating system primitives.
There is a set of core things
that are just built into the operating system
that make Containers work.
And if we look at Linux
as kind of the proxy for this,
they've had Container technology
for 6-7 years,
started with IBM, added to by Google.
But they weren't really heavily utilized
until tool sets like Docker came along.
And that's kind of the second part of Containers
is that tool set.
And what my team is working on right now
is building the core primitives
into the Windows operating system
such that we can create Containers
and have containerization as well
and then working with folks like Docker
on that as well.
OK, so Chad, bring us up to speed,
what is Docker?
What does Docker provide
as Taylor was mentioning?
Well, there's really two sides to Docker.
Docker originated as an open source project.
So Solomon Hykes is the founder of our project
and basically realized that development
was still too difficult
and started a project
and over the course of time
just very quickly, that project took off.
So then there's also Docker Inc.,
the company that now
is leading that charge,
and that's building an enterprise-grade tool set
around the open source project.
But if you look at, like, Docker,
just the open source project,
what it is is it's a set of tools and APIs
to help our customers or users
build, ship, and run their applications.
So from a developer's laptop,
whatever application they're developing,
put that into a Container.
And in this case, a Container
is really just a format
to describe an application
and all of its dependencies.
And then ship that to, say, test,
ship that to stage,
and ship that to prod,
but sort of the magic of Docker
is the thing you're shipping
isn't just the application code.
It's the application and all of its dependencies
down to the bottom.
So as it runs on your laptop,
this is the same way it will run in test,
the same way it will run
in stage and prod.
So these sorts of "worked in dev,
failed in prod" problems are oftentimes mitigated.
- You've got that consistency across—
- All the way across.
—with the Container.
So what are some of the—
explain to us what makes up a Container.
What is it?
You touched on a few different terms there,
so help us understand
what makes up a Container.
You've got the run time, dependencies,
but what are they?
Have you got a good example
you could perhaps give us for people watching?
Yeah, I think there are a couple of key components
to what makes the entire
Container ecosystem work.
There's that of run time,
the operating system primitives,
the things that allow us to easily separate out
a set of processes and make them pretend
like they're in their own operating system
which is ultimately what Containers are about.
It's actually virtualizing the operating system layer
so that every Container thinks that it's running
on its own copy of the operating system.
It's got its own file system, its own registry,
its own name spaces, its own network address.
It thinks it's on its own copy
of the operating system
which is kind of a lie.
It's actually being virtualized
and there's potentially many Containers
running next to it.
And that's kind of the core run time.
And then there's Container images.
And as Chad described,
these images are what you actually move around.
They're the transportable
kind of view of the Container.
There are all the files
and registry keys and settings
and descriptors of kind of
what this Container looks like
so that I can do something
like docker pull MySQL
and have that run on my laptop
just the same way it ran on Chad's laptop
when he built it.
It's that common format that we exchange.
And what's really neat about the way
that Docker has built that
is that these images actually
build on top of each other.
So as Chad built MySQL,
now I can build my database
on top of Chad's MySQL layer.
You can build your layer on top of that.
And when we share those around,
we only have to share what's changed.
You only have to share that top layer.
So it becomes a very, very
efficient means to share these things.
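That pull-and-share flow can be sketched as a few engine commands. This is a hedged sketch, not a transcript of what was shown: the image name and tag are illustrative, and a running Docker engine is assumed, so the commands are saved to a script rather than executed here.

```shell
# Save the sketch to a script; running it requires a Docker engine.
cat > pull-demo.sh <<'EOF'
# Fetch the image; only layers not already on this machine are downloaded.
docker pull mysql:8.0
# Run it the same way it ran wherever it was built.
docker run -d --name db -e MYSQL_ROOT_PASSWORD=example mysql:8.0
# Show the stacked layers that make up the image.
docker history mysql:8.0
EOF
chmod +x pull-demo.sh
```

The `docker history` output is where the layering described above becomes visible: one row per layer, shared rows shipped only once.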
So is that efficient from a capacity
perspective exclusively?
because I think I relate
some of this to virtual machines.
In my mind, I create lots of VMs, each one,
let's say, a ten-gigabyte footprint.
Even though I want them to be more or less the same,
you're saying that we can be
a lot more efficient with that through Containers.
Yeah, it could be more efficient on the transport.
So if I have a base Container
which is maybe the base operating system
and on top of that I might have Redis;
I might have MySQL;
I might have an application stack like a JVM.
When I ship those around,
I will only ship the base once
because I already have it.
They all depend on the base.
And then each of those applications
ships its own layer,
and as that keeps going, I only have to ship those
if something has changed.
So you get a very, very quick time
to actually transport those.
And also it does reduce storage
on disk requirement.
Right. So when you say
you've already got the base,
what do you mean?
Is that on my laptop;
is that your laptop; is it on a server?
What is the base?
So the base Container is the place where
oftentimes it's a very minimal
operating system image
that you can start from.
It doesn't have to be.
It turns out the only thing
that needs to be in a Docker image
is everything that that application needs to run.
So if you have a statically linked binary,
you could just put that in there.
But oftentimes, what we see is it's
a very minimal distribution of an operating system.
Once I have that, what happens
is I just use any tool
that I already use to build the next piece, right?
So in Linux-land, I might install
additional packages
which represent some service
that I would like that Container to become.
So the resulting image or the result—
yeah the resulting image now has two pieces.
It has that common base,
and it has a new thing on top
which is the new application.
So that image is defined by two layers.
And when I ship those around,
I only have to ship them once.
And that's from your laptop to his laptop,
you would basically copy two layers.
Let's say you made a change to that top application,
you updated the version.
When you ship that to his laptop,
you would only ship the top layer
- because he's already got the base.
- Right. That makes sense.
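The base-plus-one-layer build just described can be sketched as a minimal Dockerfile. The base image, package, and repository names here are illustrative assumptions, not anything named in the conversation.

```shell
# Write a two-layer Dockerfile sketch; building it requires a Docker engine.
cat > Dockerfile <<'EOF'
# The shared base layer: stored and shipped only once per machine.
FROM ubuntu:22.04
# The new layer on top: just this application's own changes.
RUN apt-get update && apt-get install -y redis-server
CMD ["redis-server"]
EOF
# With an engine available you would then build and push it; on push,
# only changed layers are uploaded:
#   docker build -t myteam/redis-demo .
#   docker push myteam/redis-demo
```

Updating only the `RUN` line later means only that top layer travels on the next push, which is the transport efficiency being described.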
And for Windows Containers,
what we're doing is we're shipping
that base layer as Server Core
for the Technical Preview 3 which you all have.
And then over time we'll get Nano Server as well,
so we'll actually have a Nano Server base layer as well.
And we'll update those
and distribute those as necessary
with updates and whatnot.
So anybody can get those for free
through our distribution
of the images.
So that's a layering approach
that will provide that base.
They could then introduce
their own layers on top
and, as you say, construct the applications
that they're looking to build.
I think there's another part
of the efficiency story that's really—
that has kind of been touched on
that's really powerful
is that as things change, it's the person
who's affecting that layer
that makes that change.
So if I'm the guy who's responsible
for creating the SQL install—
everybody's going to use
the same SQL install on our team—
when I make an update to that, that SQL layer,
which is what I know how to do,
then you all pull the latest version of that
and you build on top of that each time.
It's not me going around and saying,
"Oh, I need to install an update
on top of Matt's layer
that has all of his databases
and everything on it,
and I don't know anything about it."
So I'm coming in kind of black-boxed
just hoping and praying that it's going to work.
Well, we find that oftentimes doesn't work very well.
And that's how we end up
with these production outages, right?
So how does containerization
help with density?
So we mentioned it's more efficient.
If I'm used to, again, deploying lots of applications
and workloads in VMs
and maybe I want to deploy now with Containers,
do I see greater levels of density
for my applications and workloads?
You can certainly.
I mean as we've kind of got this image in the background,
we've got the Windows Kernel down here,
and this is really where the Container APIs,
the meat of Container run time, works,
and then Containers layer on top of that.
Unlike a virtual machine where I have to have
a new copy of the Kernel
for each and every instance I'm running,
with a Container, I've got the same Kernel
and actually most of the same operating system.
I'm just isolating the pieces of that application
that I need in their own Container.
- So that unique Container that you're making.
- Right, right.
So I do get a pretty significant benefit
in terms of density.
OK, just density, or can we boot them faster
or get them going quicker,
that kind of stuff beneficial as well—
obviously beneficial.
Very much so, Containers—
because I don't have to boot a whole operating system,
all I have to do is start the processes.
So Containers start considerably faster
on the order of seconds
as opposed to virtual machines
that are on the order of potentially minutes.
Yeah, exactly. So Chad, explain some
of your components in this graphic here
because we've got the Docker client;
we've got the Docker engine.
What are some of those building blocks
that you've got there?
So the Docker engine
is really the core of Docker.
So it's a service that runs on basically every host
that you want to run Containers on.
So in TP3, you'll be able to do that in Windows,
on Linux,
basically any modern Linux Kernel,
so 3.10 or later you'll be able to run an engine.
And then once you have an engine,
you can run those Containers into that engine.
And you do that by using an API
that the Docker engine presents.
So you can either use that restful API directly
or you can use it through a client.
And so the average—
sort of the average developer
will often interact with an engine through a client
which is just a command line set of tools
or maybe potentially integration
into Visual Studio
or another IDE to get that up and running.
One of the cool things about Docker
at least on the—
actually on the Linux side
and certainly on the Windows side—
it doesn't matter
on the Linux side what the underlying
operating system is.
Wherever I have a Docker engine,
I can run a Container.
So in Azure,
if I have an Ubuntu engine up and running,
I can run whatever Containers
I want in there
regardless of what's inside that Container
because in the end, it's all Linux.
And so the same thing will be—
same thing for Windows
is it doesn't matter what the underlying
Windows operating system looks like
so long as I have a Docker engine there,
I'll be able to run Windows Containers in that.
And so from the client, I'm executing commands
to start Containers, stop Containers.
What else can I do from the client?
What else is typical?
So you can attach to log output
of that Container
to see what logs it's casting off.
You can look at stats to understand
what its CPU and memory
and other statistics are.
One of the cool features is
you can actually attach other Containers.
So if you need to debug a Container,
say you have a network issue,
you can run another Container
that shares that Container's network.
And you can debug it without ever touching
the running Container.
So there's lots of new ways
when you have this technology
that you get to start to think differently
about how to interact with Containers.
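The client interactions Chad lists can be sketched roughly as follows. The container name "web" and the third-party debug image are illustrative assumptions, and a running engine is required, so the commands are written to a script rather than run here.

```shell
# Common docker client commands, saved as a sketch (assumes a Docker engine
# and a running container named "web").
cat > inspect-demo.sh <<'EOF'
# Attach to the container's log output.
docker logs web
# One-shot CPU and memory statistics.
docker stats --no-stream web
# Debug a network issue without touching the running container: start a
# second container that shares "web"'s network namespace.
docker run -it --rm --network container:web nicolaka/netshoot
EOF
```

The last command is the debugging pattern described above: the debug container sees the same network interfaces as "web" while "web" itself is left untouched.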
So speaking of thinking differently,
how is Docker enabling developers
to think differently generally speaking?
Well, I think part of it is just the ability
to rapidly self-serve.
So I have—I can have
a full stacked development environment
on my laptop.
I can pull down all the pieces and parts.
Maybe I don't know
how those pieces and parts work,
but I know that my application consists
of these five Containers
with these versions, and they're wired up this way.
Docker takes care of that for me,
so I can actually do what I want to do
which is just work on the feature and the service.
That feature—or that service oftentimes
doesn't exist in a bubble, right?
So I need other pieces to make that happen,
and I can do that on a laptop very quickly.
It's very trivial to spin up a new Docker environment.
Play around with it.
If you break it, throw it away and try again, right?
So you get that sort of quick iterative
development experience
that everybody really wants.
So when we think about—
you mentioned using the client
to deploy new Containers.
Where are they coming from?
So where are they originating?
Are they originating in the cloud?
If you're an enterprise,
are they somewhere located on premises?
- What's the flexibility you've got there?
- Yeah, so I think the answer is all of the above.
So they might generate from my own environment.
I might build my own Container then run it, test it.
But that's only interesting for my laptop, right?
So then where else can I get Containers?
First would be the hub, which is our SaaS offering,
so hub.docker.com.
Right now in the hub,
there's about 80,000 different Containers
that are just publicly facing.
So you can also have private repositories as well,
but everything you could ever want
from databases
through middleware,
through messaging systems,
- you can just search and find what you want—
- But people have already built those—
- Absolutely.
- —loaded them.
Yeah. And we have an interesting program.
We have the Official Images program,
and there's about eighty official images.
And what these are, the upstream IP holder
has built the Container, right?
So Oracle has stepped up
to do the MySQL Container,
Mongo folks have done MongoDB,
and they represent the best practice way
of building this.
So if it's my IP, generally speaking,
I should probably know really well how to run it.
So then you can just consume those
as just Lego bricks
that you want to snap together
and I just need a Redis,
I need a Python stack,
I need a Go stack
- and just consume them down.
- And combine them together
and build out the application.
Oftentimes though, enterprises want
maybe a little more control than that.
So from Docker, there's Docker Trusted Registry
which is sort of the hub but in a network that I control.
So whether that's in a cloud or in a data center,
it doesn't really matter.
It's a version of the registry
that is behind something that I control,
that I can decide what goes into it,
what comes out of it,
that integrates into the company's existing directories
for access control and all those pieces.
But these things aren't one or the other.
You can use any combination.
OK. So you mentioned before
around having an application
that's made up of lots of Containers.
Is that the same as microservices?
Tell us a little bit about that
and microservices,
how they relate to Containers.
We've been having this conversation
a lot with folks
is that they come in and say,
"OK, yeah, Containers,
they look interesting,
but I'm not really into microservices,
so I guess I don't need them."
And really, microservices is a design pattern.
It's a way to design an application.
It existed before Containers were really a big thing.
It doesn't have to—
it's not one or the other.
It's a design pattern, and then Containers
happen to be a tool
that can enable that design pattern.
So we see people do microservices
with platforms and service offerings.
We've got Azure Service Fabric
which is a microservice architecture.
And ultimately that's going to use Containers
as an efficient way to run the service,
but they really are kind of a separate entity.
And I think Chad was mentioning earlier
that he's been talking to a lot of customers
that are using Containers,
and ultimately
they end up getting to a microservice, right?
So if you think about at least
what Docker wants to present
as two key values to our customers,
it's choice and portability.
So what do you want to do with the technology
and where do you want to run it?
So what we see oftentimes with our customers
when they start, they just put their large applications
into Containers
because it makes it easy to move them
between their environments,
so between dev and test and stage and prod
and potentially on-premise to a cloud.
And it's still the old monolithic application
or just maybe just a very large application
that they would like to migrate
to more of a services architecture.
So what do you classify
as a large application as an example?
A full-on MySQL database or—
Well, I think it's even bigger than that, right?
It might be—
it might be a number of components,
so it might be, say,
a very large Java application
that has an authentication service built in,
- it has a logging service—
- You get a lot of business applications?
Yeah, exactly, a lot of business services.
So maybe I even have, like,
seven or eight of those
that all have their own services
that are duplicated across.
So we have one customer in mind
that had seven of these big monoliths,
and they each had duplicate services
that they were maintaining.
And they wanted to move
to a different architecture
but they needed to be able
to move those pieces first.
So if I can't ship to production,
I can't even—I don't have time
to do anything else.
So once they had that working in Docker,
they were able to go back and identify
these five of the seven
are using different authentication services,
so let's rip that out into a new authentication service
that they all can share.
And now we only have to maintain
one authentication service.
- In its own Container.
- In its own Container.
So now I have eight Containers.
So I have the original seven
plus the new authentication service
and we'll just migrate that through.
And over time, rinse and repeat that same piece,
and maybe those seven Containers
end up being
tens or hundreds of Containers at the end.
OK. So does the Docker technology help me
with the packaging of the application
into a Container, or is just about the running
and the deploying
and the managing and starting and stopping?
Yeah, so Docker gives you basically
what we call a Dockerfile
which is a manifest to describe
how you want to build your application.
And so that basically lives in parallel
with the application itself,
so it's probably in the same version control
and you use that
to say, OK, now build this application
and it just describes through.
So whatever—if you have a scripting service,
if you're using existing PowerShell scripts
or other tools,
you can just use those, and the Dockerfile
will walk you through building the application.
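A Windows Dockerfile that reuses an existing PowerShell script might look roughly like this. The base image path and script names are assumptions for illustration (the base image naming has changed since the era of this discussion), not something stated in the conversation.

```shell
# Sketch of a Windows Dockerfile driven by existing PowerShell tooling.
cat > Dockerfile.windows <<'EOF'
FROM mcr.microsoft.com/windows/servercore:ltsc2022
# Use PowerShell for subsequent RUN instructions.
SHELL ["powershell", "-Command"]
# Reuse an existing install script as the build step.
COPY install-app.ps1 C:/setup/install-app.ps1
RUN C:/setup/install-app.ps1
CMD ["powershell", "C:/app/run.ps1"]
EOF
```

The point is the one made above: the Dockerfile only sequences tools you already have; it does not replace them.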
And something we showed at DockerCon
that I think was pretty neat around the same thing
is we showed actually building the application
in Visual Studio,
right click Deploy into a Docker Container,
but then we followed that up
into Visual Studio Online
where we can actually automatically
have a Docker build kick off
every time a check-in is made.
So that Docker image
actually gets updated automatically
every time someone does a check-in
and then could get pulled back down
into QA, into other developer scenarios
or even all the way into production, right?
So you've got some automation
around your build process there
which sounds to me a bit like DevOps.
And are these two technologies—
so Containers and DevOps
being a process as well as people
and other technology—
are they—
do they have to exist together
or are one reliant on the other?
I think it's the same as kind of the microservices.
DevOps is a great thing, and people
are doing that without Containers.
They have been doing it
since before Containers were around.
Containers are a really,
really helpful tool in it.
It's something we can all talk about
in the same way, right?
I can say, "Hey Matt, I'm giving you a Container,"
and you know what that means.
- You'd hope so.
- Well, yeah.
Or I can at least explain it and say,
"Here's a Container.
You can start it; you can stop it;
you can deploy it;
you can move around,"
and you can understand,
- "OK, I know how to do that now."
- Yeah.
And whereas before it's "OK, here's a script,
and if the script doesn't work,
I don't know, your environment's messed up
because it worked on my machine."
- Lots of finger pointing and all that.
- Yeah. And the 2 am phone calls
and the 3 am phone calls
and the escalations the next day
of why was this down?
All those can go away
when we actually have a common language
to talk about,
when we all understand each other
and can work together.
I think that's really what Containers have helped
is providing some of that common language.
But they are not mutually exclusive.
Right. I think the—
so the common language
is a big piece to this.
So if you have sort of
a very forward DevOps team
where you have Ops embedded in Dev,
that's great and everyone
can understand that.
We also see that in, say,
financial or government sector,
customers that have a mandated bright line
between the two teams,
it still helps them because it gives them
a common interface
and a common language to do this piece.
And if you think about, like an example,
an additional tool from
in the Docker ecosystem
is called Docker Compose,
and it basically
gives you a manifest
to describe an entire application,
where that application
might be many different Containers.
So I might have fifty different Containers
at these various versions,
and they might have these dependencies between them
and they might have these links
that use these ports.
And if you can imagine when we talk
to folks on the Dev side,
they can just say "Docker Compose up"
and it spins up their entire application,
and they can test it and build it.
Then hand that same manifest to the Ops side
and ask them, "When was the last time you got
a machine-readable specification of an application,
all its dependencies, all of its networks,
all of its ports?"
Basically never.
And you can use that same tooling
to spin up the application, right?
There's no more—it's in the code,
it lives with the application itself,
and I understand I can read it,
I can reason about it.
And I can take a laptop,
run it the same way that I would in my data center
which might be very different,
but Docker helps abstract away that difference.
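A Docker Compose manifest of the kind Chad describes might look roughly like this for a two-container application. Service names, image names, versions, and ports are all illustrative assumptions.

```shell
# Sketch of a Compose manifest; "docker-compose up" requires an engine.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  web:
    image: myteam/web:1.4        # pinned application version
    ports:
      - "8080:80"                # the ports the app exposes
    depends_on:
      - db                       # dependency between containers
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF
```

This file is the machine-readable specification being described: it lives in version control next to the application, and both Dev and Ops spin up the whole stack from it with `docker-compose up`.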
OK, so it's a real enabler for IT
and that relationship with Dev
and speeding up the rate
that they can deploy applications
and provide in that standardized platform.
Now when we think about Containers,
we started talking briefly
about Windows Server Containers initially,
and it's a new technology.
You'll be able to play with it in TP3.
The graphic depicts architecturally
some of those key components
from a Windows Server perspective.
But we announced a couple of months back
a different type of Container
within the Microsoft offering
and that's a Hyper-V Container.
Tell us a bit more about Hyper-V Containers,
how they differ,
what their intended purpose is.
And then we can talk about
how Docker provides value
in that space as well.
Yeah, and some of you
might have already read a blog post
that we did on kind of
what is a Hyper-V Container,
and I think that's a really helpful post to read
and we'll have a link to it at the end if you haven't.
But the easiest way for me to explain
what a Hyper-V Container is
is we basically took a virtual machine,
we stripped it down to its very,
very, very minimal surface,
just enough for it to run a Container,
and then we took Windows and we did the same thing.
We made Windows just enough to run a Container,
and then we run a Container
inside that virtual machine.
And so what that gives us is it gives us
all of the same kind of isolation of benefits
that a virtual machine would have.
We can run it across different kernel versions;
we can check all the security boxes
for this needs to be HIPAA compliant
and it has to run in this.
Get all of those check boxes checked.
But we still have the same process
and the same tool
and the same commands that we would have
with a Windows Server Container.
In fact, it's the same image format.
So if you build a Container
using a Hyper-V Container,
it will run as Windows Server Container,
or if you build it as a Windows Server Container,
it will run as a Hyper-V Container.
So it's just a run-time option
between the two.
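In Docker terms, that run-time option is the `--isolation` flag on `docker run` on Windows hosts. The image tag below is illustrative, and the commands require a Windows Docker engine, so they are saved as a sketch rather than executed.

```shell
# Same image, two container types, chosen at run time.
cat > isolation-demo.ps1 <<'EOF'
# Windows Server Container: shares the host kernel.
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2022 cmd
# Hyper-V Container: same image, wrapped in a minimal utility VM.
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd
EOF
```

Nothing about the image changes between the two lines; only the isolation level does, which is exactly the "flip a flag" choice discussed below.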
And we'll have a lot more to talk about
with Hyper-V Containers
as we get those into a preview later this year.
But we really think they're
a pretty neat technology.
We built them largely because we saw
a need for our own workloads.
As we looked around Azure
and how we were running things in Azure,
we saw a need for it.
And as soon as we identified that,
started talking to customers,
we identified that they also had a similar need.
So we're really excited about them.
One of the interesting pieces is that
the customers that we see are very different, right?
So in the retail space or even in sort of Silicon Valley,
those customers and their requirements
look very different
than say the financial space or government sector.
And one of the pieces I like the best
is it's a run-time option, right?
So if I have a business case
that requires me to do that,
I can flip a flag.
But if I don't, I don't have to do it.
I can still use just
the standard Windows Container
and I can still get that density,
I can still have all those pieces
which goes back to sort of
one of Docker's core missions,
which was choice
which is where do you want to run it
and what are your requirements
around running it?
- And we should run it there.
- And do you think that's the biggest,
most significant impact Docker has had
on both the Linux development community
and, going forward, on Windows: just that flexibility?
I think the first one was just reducing Dev friction.
The second one is portability,
so my data center to Azure,
and I think the third is choice.
So if you look at large enterprises,
it's not going to be one answer.
They're going to have many different environments,
and we don't think that they should have to re-tool
just because they want to run it in a different place.
So my on-prem story shouldn't have to be different
than my Azure story.
And I think one of the things
when we started first talking to Docker
one of the key kind of turning points
and kind of visionary moments in that
was when we talked about our vision,
our Microsoft vision,
of an application that spanned both Linux Containers
and Windows Containers.
And we demonstrated that kind of realization
at DockerCon this year
where we spun up the first application
that used components
in both Windows and Linux, right?
And we think for our customers,
that's really valuable
because there's some great technology
that's available in Linux today
that enables them to build really,
really great apps.
Same thing for Windows.
We've got great technology
in Windows for building applications.
And when we can marry those together
and let customers use both of them together,
they win, right?
And Docker's been a great enabler
for that vision to come to fruition.
I think it's exactly because
you can stop having conversations
about Linux or Windows
or on-prem or a cloud,
and you can start having conversations about
what's the best place to run my workload.
Yeah, so thinking about the application,
financial, where it's better in the cloud or on-prem,
security, whatever it may be.
And if your business requirements change tomorrow,
Docker's going to help you move.
Correct, and the flexibility to move the Container.
And we were already talking to a lot of customers
that are in this world today.
They've got an investment
in Docker and in Containers,
and they're using that where they can,
but they have a huge investment in software
that they've written for Windows
that their choices were either to rewrite it
so that they could continue to use Docker
or forego Containers.
And that was a really bad choice for them,
so they're really excited about what we're doing
because it removes that kind of painful choice
and just says,
"Well, I can continue to use the best tool I've got."
So if people want to get started,
how can they get started
with the Windows Server Containers?
- What advice would you give people?
- Well, the first thing I would do
is take a look at our documentation site.
The short link is aka.ms/Windowscontainers.
So we've actually spent a bunch of time
making better documentation,
I think better documentation
than typically we've seen in Windows.
And so we're trying really hard to make sure
that that documentation is really good
and that we can get feedback from you on it.
So there's also links on there to the forums
where you can ask questions and have Q&A.
So that's definitely the first place to look.
- People can make use of it in TP3.
- Yeah, the Technical Preview 3, yep, absolutely.
And for Docker, where can people get started there?
Well, the easiest way to get started with Docker
is just go to Docker.com/tryit
and just get the technology.
What we see is that usually you put Docker
in the hands of Dev
and then it just spreads like wildfire.
OK. Great. That's a good thing.
OK. Well, thank you guys.
- Thank you.
- Thanks for coming along.
And hopefully that's been useful
for you at home as well in this Containers 101, if you will.
We've gone through a lot:
What are Containers,
how Hyper-V
and Windows Server Containers differ,
the impact that Docker has made
in the Container ecosystem.
And it's all great stuff,
and there's lots more resources
out there the guys have provided,
so get your hands on TP3,
read the Windows Server documentation
written by Taylor himself,
and his team obviously as well.
- No, my team.
- Get onto the Docker website as well, /tryit.
Go to Docker.com/tryit
and download their bits as well.
And obviously use the hub, the images,
to pull down some Containers
that you can use for your apps
and try it all out.
And with that, that's Matt McSpirit signing off.
Thanks for joining us again,
and we'll see you again.
- Thank you very much.
- Thanks.