
Take Control of the Cloud with the Windows Azure PowerShell Cmdlets

[TechEd 2013] Let's get started. [Take Control of the Cloud: Windows Azure PowerShell] [Michael Washam, Principal Cloud Architect, Aditi] My name is Michael Washam. I'm a Principal Cloud Architect at Aditi. About a week and a half ago I was a program manager with Microsoft, so I was actually the PM for the PowerShell cmdlets. I got tired of working with those guys. Just kidding. What we're going to talk about today is a deep dive, from getting started all the way to how you can use our PowerShell cmdlets for some more advanced scenarios. How many of you have used the PowerShell cmdlets for Windows Azure before? We do start at the beginning, but we go deep. It's not that big of a subject, so we can get a lot of coverage in one session. I always like to kick off the session with a quick overview: what can you do with the Windows Azure PowerShell cmdlets? Obviously automation. The goal of PowerShell is to let you script and automate the things that normally take you a long time, so you can do them once and make them repeatable. From PowerShell you can query, manage, and configure Windows Azure VMs. You can manage our PaaS Cloud Services, Windows Azure Web Sites, storage, queues, databases, etc. I think Service Bus is even in there now; I need to update my slides. And the key is you can manage all these different resources in Windows Azure across multiple subscriptions and multiple data centers. You can provision fully composed virtual machines. If you're getting started with Windows Azure today and you go to create a virtual machine, and then you realize that you have to go back to the portal to create an endpoint, then you have to go back to the portal to create a data disk, and then you realize that you need to do that for another 5 machines, it's a really long, repetitive process. PowerShell allows you to create a virtual machine with your endpoints, your disks, and your network configuration all at once.
It makes it nice and fast to deploy your VMs. The second thing is if you're deploying anything in AD, you can actually have your virtual machines boot up and join the domain, so you don't have to wait for them, log in, and then manually do a domain join. We just enabled Remote PowerShell a few weeks ago. Whenever a VM is booted up, whether from PowerShell or the portal, by default Remote PowerShell is turned on. This gives you the capability, once the VM is actually up and running in the cloud, to log into it remotely from your client, manage it, check disks, check PerfMon counters, install software, whatever you want to do, all from your remote client. Another feature we've added is a networking stack. This was just a few days ago; this came out in the Service Management API. It has the ability to lock down public endpoints via access control lists, so I can open up SSH or RDP, which is what you get today with Windows or Linux, and then I can set an access control list on it so only a certain IP or a set of IP addresses can actually get to it. Finally, you can manage storage: managing your virtual hard disks, uploading them from on-premises or downloading them to on-premises, and copying them between data centers. You can do all of that with PowerShell. I'm going to talk a little bit about getting started. My big complaint, even though I was the PM for these guys, is they are relatively hard to get up and running, because there is so much Windows Azure subscription configuration involved. I'm going to give you the crash course on how to get these cmdlets configured. The first thing you need to do is go download the cmdlets themselves. WindowsAzure.com, there is going to be a downloads link. Click Downloads, Command Line Tools, and PowerShell. Once you install them, there is a trick here too. I'm skipping something on my slide. You first have to import the Azure module.
If you install the PowerShell cmdlets and you immediately run Import-Module Azure, it won't work, because they do something as part of the setup that registers Azure as a known module, and it only happens after you reboot once. Either reboot, or you have to type in the full path to the module, and I'll show you how to do that. Once you get the module imported, you need to run another command, and you only have to do this one time for your subscription. Get-AzurePublishSettingsFile is really powerful. What it does is it pops up IE and takes you to a website, so it really doesn't do a whole lot, but it does take you to a URL that you won't know about otherwise. You go to this website and download your publish settings file. This has your subscription ID, and it actually generates your management certificate for you that you can then download to your machine. And from there, you use another command, Import-AzurePublishSettingsFile, and this command actually configures PowerShell to use your subscription. The first two, the Get-AzurePublishSettingsFile and the Import, you only have to do once. We actually persist everything in your user profile, so the next time you launch your PowerShell session they're already there. You don't have to do that every time. The next thing you need to do, especially if you have multiple subscriptions, is configure your subscription to use the right storage account. In Windows Azure, I can store the VHDs in pretty much any data center I want. I can put them on the East Coast, the West Coast. I can put them in Asia. I can put them in Europe. I can put them in Australia. You have all kinds of power to specify storage, and all the PowerShell cmdlets are really asking here is which data center you want to put your VHDs in. And finally, since we are multi-subscription aware, you have to call Select-AzureSubscription, if you have more than one, to tell it which subscription you want to work on.
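Pieced together, the one-time setup he describes looks roughly like this. This is a sketch: the module path shown is the default install location and may differ on your machine, and the .publishsettings file name is a placeholder.

```powershell
# Load the module by its full path; this works without rebooting.
# (Default install location; yours may differ.)
Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1'

# One-time: opens the browser to the download page for your publish settings file
Get-AzurePublishSettingsFile

# One-time: imports the subscription ID and management certificate into your user profile
Import-AzurePublishSettingsFile 'C:\downloads\MySubscription.publishsettings'
```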
And then once you do all that magic, you can call the Azure cmdlets. Let's walk through and show you how that works. If I was just getting started, the first thing I'd have to do is go to WindowsAzure.com, hit Downloads, Command Line Tools, and Windows Azure PowerShell, and this is going to launch the WebPI installer and put everything it needs on my machine, and luckily I've already done that. It's going to spin for a second and say it's already installed. Now, one thing to know: the PowerShell cmdlets are updated about every 3 to 4 weeks. If you're an avid PowerShell user, go check that download link often, because usually there is new functionality literally every 3 weeks; the ship cycle before I left was every 3 weeks. I think they were extending that out a little bit because it was pretty hectic, but these rev very fast, so any time we have new functionality it's going to show up there first. Once the PowerShell cmdlets are installed (I'm a big fan of the PowerShell ISE, but feel free to pick whatever your favorite PowerShell editor is), this is a path you'll memorize eventually. You memorize that, or you install, you reboot, and you type Import-Module Azure. Take your pick. This is the one that will work without rebooting. You do that, and everything is set. Now that you've got the module loaded up, all these Azure cmdlets will show up in your dropdown. Basically how we did the naming is verb, then the noun prefix Azure, and then whatever resource we happen to be working on. The one we'll want to look at for getting started, of course, is Get-AzurePublishSettingsFile. Hit that, and it's going to automate IE for me and take me to this random URL that you don't have to remember, and the first thing it's going to do is pop up this download file. I'm already logged in with my Windows Live ID; otherwise it would ask me to log in. I can take this really long file and put it in a directory.
I'll put it in this directory, and the final step I have to do to actually get my subscription configured is import that file I just downloaded. It's kind of long. Now, this is actually installing this in an XML file on my local machine, so I don't need this file ever again. I can actually delete it if I want to, unless you want to copy it to another machine and import it there. Now I can call Get-AzureSubscription, and it will dump out a ton of information. These are all the subscriptions that my account has admin rights on. I have Aditi demos, cloud practice sales demos. You can tell I do a lot of demos. That's the gist of it. But now let's get back to the point about selecting your storage location for whenever you start creating virtual machines or cloud services or whatever. To do that, you first need to know your location. Get-AzureLocation. This cmdlet will basically enumerate all the data centers that you have in your subscription. I can scroll down, and I can see North Europe, North-Central US, East Asia, etc., and one thing to note is not all data centers are equipped for virtual machines. If I look at North Central or South Central, they only say the available services are compute and storage, where the one right above it lists compute, storage, PersistentVMRole, and HighMemory. HighMemory is the new virtual machine SKUs that were announced a few weeks ago. If I was a brand-new user, I would pick one of these locations, say East US, and I would create a new storage account. Just give it some name. It does have to be unique, so make it random, and this will actually create the storage account that you'll use to create your virtual machines. While that's waiting, actually, I already have a storage account waiting. It's already created, and I'll show you the subscription I'm going to use. If I do this, you can see down at the bottom my current storage account is set to smiaasdemo1. It's some random storage account.
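The location and storage-account steps from the demo might be sketched like this. The subscription and account names are made up, and note that early builds of the module took -CurrentStorageAccount where later ones use -CurrentStorageAccountName:

```powershell
# Enumerate data centers and which services each supports
Get-AzureLocation

# Create a storage account for your VHDs; the name must be globally unique
New-AzureStorageAccount -StorageAccountName 'mystorage48151623' -Location 'East US'

# Make it the current storage account for this subscription, then select the subscription
Set-AzureSubscription -SubscriptionName 'My Subscription' -CurrentStorageAccount 'mystorage48151623'
Select-AzureSubscription 'My Subscription'
```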
But as long as you have a current storage account set, that's how you know where your VHDs are going to be stored. That's pretty much all it takes to get set up on your subscription. Again, when I call Select-AzureSubscription, that's going to set the context, so anything I create is going to execute in that context. Now let's talk about actually creating something. Say I wanted to create a virtual machine. The first thing I need to know to create a virtual machine is the image name. To do that, I'm going to call another cmdlet, and I'm going to filter it so I only see the image name; otherwise it returns back a ton of data. Get-AzureVMImage will return all of the images that you can see in the portal, and actually there are quite a few that you don't see in the portal as well. You won't see any of the RightScale virtual machine images, for example; I think those are the only ones you won't see. But in the future there will probably be other vendors and things we'll put out there that aren't designed for showing up in the portal. For creating a virtual machine from PowerShell, I'm going to select the VM image name I want and save that guy. Now I don't need this code anymore, and to create a new Azure virtual machine from PowerShell you need to call the New-AzureVMConfig cmdlet, because how the PowerShell cmdlets work is we actually create a virtual machine configuration, we use other cmdlets to modify that config, and once everything is set on the config, we use it to post up to the Windows Azure Service Management API to do whatever we want to do, whether that's a create, a delete, or an update. When I create this object, I'll say "myvm1," specify the image name and the instance size. I'm actually not creating a VM. I'm creating an object, and I'm going to show you what that looks like. If I dump that object out, you can see that it's really just an object.
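A sketch of finding an image and building the configuration object. The wildcard filter is my assumption for picking a Windows Server 2012 image; real image names are long generated strings.

```powershell
# List just the image names; the full objects are verbose
Get-AzureVMImage | Select-Object ImageName

# Grab a Windows Server 2012 image name and build a config object; nothing is created yet
$img = (Get-AzureVMImage |
        Where-Object { $_.ImageName -like '*Windows-Server-2012*' } |
        Select-Object -First 1).ImageName
$vm1 = New-AzureVMConfig -Name 'myvm1' -InstanceSize 'Small' -ImageName $img
$vm1   # dumping it shows AvailabilitySetName, ConfigurationSets, DataVirtualHardDisks, ...
```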
You see the availability set name, configuration sets, data hard disks, etc. What I'll want to do is modify that object to have everything I want. If I wanted to have this virtual machine boot up with a data disk, I can say create new, say 500, and a label. Now when I update this object (actually you can't see it unless I dump it out), you can see that I have a data disk attached to this object as well, and you can do the same thing again. You can attach multiple data disks here, or you can open up network endpoints, so I'll open up an endpoint on this guy. Now we've created a VM configuration that's Windows Server 2012, instance size Small, and it will have a 500 GB data disk and port 80 open on the web server. If I wanted to add multiple endpoints I can easily copy and paste that. If I have a service that's going to run on port 8080, now I have an object that can do both. The beauty of PowerShell is once I figure out exactly what I want, if I wanted to create 2 of them, I could literally copy and paste that code, change a couple of parameters, and now I have 2 configurations for 2 VMs. Now, the code to actually create the virtual machines uses the New-AzureVM cmdlet. Here I need to pick the data center location, so I'll pick West US, and then I need to pass my configurations. I did forget a crucial piece of this, which is the provisioning config. We need to tell the Service Management API whether you want to create a Windows machine or a Linux machine and also what the user name and the password are. This is where copy and paste gets tricky. Whenever you screw up the source and then you copy it, you've got it screwed up twice. Now we're going to go ahead and create 2 virtual machines, both with 2 network endpoints each and two 500 GB data disks. Even accounting for explaining it and typing it, it's still faster than going through and setting all this up in the portal. Let's jump back over to our slides for a little bit and talk about how you update it.
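The full create from the demo, reconstructed as a sketch. The service name, credentials, and the second VM's config are placeholders, and very early builds of Add-AzureProvisioningConfig took only -Password, with -AdminUsername added later:

```powershell
$vm1 = New-AzureVMConfig -Name 'myvm1' -InstanceSize 'Small' -ImageName $img |
    Add-AzureProvisioningConfig -Windows -AdminUsername 'clouduser' -Password 'S0meP@ssw0rd!' |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 500 -DiskLabel 'data' -LUN 0 |
    Add-AzureEndpoint -Name 'web' -Protocol tcp -LocalPort 80 -PublicPort 80 |
    Add-AzureEndpoint -Name 'svc' -Protocol tcp -LocalPort 8080 -PublicPort 8080

# $vm2 is the same few lines copied and pasted with new names;
# both configs are then posted in one call
New-AzureVM -ServiceName 'myvmsvc1' -Location 'West US' -VMs $vm1, $vm2
```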
We saw how you can use a few lines of code to create a couple of virtual machines, but how do you actually go off and modify something that's up and running? Again, we're working with this concept of a configuration, which is really a big blob of XML. If you really want to know the truth of what we're passing back and forth via PowerShell, it's just a big blob of XML. When I call Get-AzureVM, it's returning a big blob of XML. I then use other cmdlets, just like I did during creation, to modify that XML, and then I pass it back to the API via Update-AzureVM. This is a pretty concise pipeline version of how you'd do an update. Get-AzureVM returns the VM configuration, Add-AzureDataDisk modifies it by adding a new disk, Add-AzureEndpoint modifies it by adding a new endpoint, and then Update-AzureVM actually posts it. Now, I've been told this format looks more readable: I return the VM config, I then call Add-AzureDataDisk and pass the config as a parameter, Add-AzureEndpoint to modify, and then update. Either way. I hope that makes the concept a little clearer. Let me switch back over, and we'll update a virtual machine too. You can see the first virtual machines are still in the creation phase, so I'm going to open up another tab here, and I'm going to show you a really quick cmdlet that comes in very handy. Get-AzureVM has a couple of use cases. If I call it with no parameters whatsoever, it returns the service name, the VM name, and the current status of every virtual machine. It can be a little slow, because basically for every cloud service you have there is a separate API call. It's going to get the cloud service, and it's going to enumerate all the VMs inside it. If you have 100 cloud services and it takes a while, that's why. Here is how we could update a VM. Let's take this virtual machine. It's just a Windows server. I'm going to go ahead and call Get-AzureVM with the service name and the name. Does everyone know why I'm specifying service name?
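Both update styles he describes, sketched with placeholder service and VM names:

```powershell
# Pipeline form: get the config, modify it, post it back
Get-AzureVM -ServiceName 'myvmsvc1' -Name 'myvm1' |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 500 -DiskLabel 'data2' -LUN 1 |
    Add-AzureEndpoint -Name 'app' -Protocol tcp -LocalPort 8080 -PublicPort 8080 |
    Update-AzureVM

# The "more readable" form: same steps, config passed explicitly
$vm = Get-AzureVM -ServiceName 'myvmsvc1' -Name 'myvm1'
$vm = Add-AzureDataDisk -VM $vm -CreateNew -DiskSizeInGB 500 -DiskLabel 'data2' -LUN 1
$vm = Add-AzureEndpoint -VM $vm -Name 'app' -Protocol tcp -LocalPort 8080 -PublicPort 8080
$vm | Update-AzureVM
```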
Did you watch Mark Russinovich's talk where he talked about cloud services and their container? I'll point that out really quick. Anytime you're dealing with a virtual machine in Windows Azure, how you reference it is via the public DNS name and then the actual virtual machine name inside the cloud service. If you want to know what that cloud service name is, it's down here at the bottom. The DNS name, the host name, is the actual cloud service name, and architecture-wise it works so that the cloud service is the public DNS name, and all your virtual machines are contained inside of it. I hope that makes a little bit of sense. Just think: anytime you want to reference a VM from PowerShell, it's cloud service name, VM name, and you should be good to go. And I had a typo on mine, I think. Notice it's the same cmdlet, but you get a different output. Whenever I specify an actual service name and the name, I get the live, running configuration of that virtual machine. You can see we get the instance status, the power state, what fault domain it's in. You can see the public DNS name, etc. I can also pipe this out to other cmdlets to get more in-depth information, so I can say Get-AzureEndpoint, and this will actually dump out the network configuration of that virtual machine. You can see I have port 5986 for Remote PowerShell open and 3389. But now we're talking about update. Let's go ahead and add another endpoint. We'll open up web. What I'm doing is taking that returned configuration and passing it to Add-AzureEndpoint. It's implicitly passed; that's why you don't see the VM parameter. It comes as part of the PowerShell pipeline. It modifies that configuration by adding that endpoint to it, and then I hit Update-AzureVM to actually update it. And you can do the same thing with data disks or whatever you want to do. If you want to change your disk cache settings it's the same way, and it always takes a few minutes to actually do the update.
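Inspecting a live VM and opening the web endpoint, as in the demo. Service and VM names are placeholders:

```powershell
# Live configuration of one VM: instance status, power state, fault domain, DNS name, ...
$vm = Get-AzureVM -ServiceName 'myvmsvc1' -Name 'myvm1'

# ...and its network configuration; 3389 (RDP) and 5986 (Remote PowerShell) are there by default
$vm | Get-AzureEndpoint

# Add port 80 and post the modified config back
$vm | Add-AzureEndpoint -Name 'web' -Protocol tcp -LocalPort 80 -PublicPort 80 |
      Update-AzureVM
```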
Let's talk a little bit about remote management with PowerShell. This is brand-new. I think it literally was added maybe a month and a half ago. By default it's turned on, so whether you go through the portal or create it from PowerShell, you're going to have Remote PowerShell turned on. There is a flag on the Add-AzureProvisioningConfig cmdlet to turn it off if you don't want it, and there is a checkbox in the portal to turn it off if you don't want it. It's very secure, though. We generate an HTTPS cert on our own, so whenever you create the VM there is code that runs inside the VM at provisioning time that generates a self-signed certificate and installs it as the WinRM certificate, and then you can download that certificate and install it on your local machine. That way you can validate who you're talking to. Now, by default, we don't turn on HTTP. It is optional, though, so you can turn it on and have VM-to-VM communication within the cloud, but we don't turn it on by default. If you want to launch a script on a VM and then have it remote into other VMs inside your cloud service, you'd want to turn on HTTP to make it simpler, so you don't need to manage certs. This is really useful for in-guest customization, basically customizing your virtual machine when it boots up, or for monitoring. You could have a script that enumerates all your virtual machines out in the cloud, checks the disk space, checks PerfMon counters, whatever you want it to do. You could completely automate your management that way. Among the more detailed parameters that you get when you're using the PowerShell cmdlets, we have a -WaitForBoot flag. This comes in really handy whenever you want to provision, just like we saw with creating 2 virtual machines, and then once they're booted up do something useful with them over Remote PowerShell. What it does is basically poll.
At the end of creating the virtual machine it actually goes off and polls your virtual machine to see if it's in the role-ready state. If it is, it knows you can log in and do something useful with Remote PowerShell, so it's not assuming that you're going to sit around and do this manually. This is really for automated processes. Setting up the connection: one of the tricks to using Remote PowerShell in the cloud is that architecture I just described. You have one public DNS name, which means you have one IP address for multiple virtual machines. How do you address each individual virtual machine behind that IP? They certainly can't all listen on port 5986, right? Because you can only have one listener per port on one IP. What happens is whenever we generate the WinRM endpoint, the public port is randomized, so the public port is actually 50002 or 50063, whatever, and we added a cmdlet called Get-AzureWinRMUri that will go off and figure out what the WinRM URI is for you, so you don't have to go log in and figure it out first, because it breaks automation if you have to go to the portal and figure out what port to use first. We have a helper function that's published. It's up on my blog and in the Git repo; it will download your WinRM cert for you so you don't have to worry about that, and then from there you can invoke a script block using Invoke-Command or Enter-PSSession, which puts you into a Remote PowerShell console. How many of you are familiar with using Remote PowerShell? A little bit. How many of you have ever deployed SharePoint? Have ever wanted to? Just to give you an example of what this can be used for, we've written 2 scripts in the past couple of months. One of them I demoed earlier in the automated deployment session; from a single config and a single script, you don't have to write any additional code.
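Resolving the randomized WinRM port and opening a session might look like this. The service and VM names are placeholders, and the cert-installing helper function from his blog is assumed to have run first (it is not shown here):

```powershell
# Resolve the full connection URI, including the randomized public port
$uri = Get-AzureWinRMUri -ServiceName 'mwrdpclientservice1' -Name 'myvm1'
$uri.AbsoluteUri   # something like https://mwrdpclientservice1.cloudapp.net:60074/

# After the helper has installed the VM's WinRM cert locally, open a remote console
$cred = Get-Credential
Enter-PSSession -ConnectionUri $uri -Credential $cred
```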
It will deploy 2 web servers or multiple web servers, any number of servers, and it will then install IIS, and it will actually install Web Deploy so it will do content synchronization. Basically from a single script you can deploy a full web farm. And that's a pretty cool demo. I think I would actually use that. I used it for setting up my demo here. But another set of scripts that we wrote is a little bit deeper. This one will actually do a full-blown SharePoint farm deployment, and not just one big, giant virtual machine with everything on it, which is pretty much useless. It will do a full n-tier SharePoint farm, and even at the SQL database layer it configures SQL Server AlwaysOn. Literally you go in, modify a script to tell it which subscription you want to use, hit F5, and wait 5 hours, and it goes off and makes a farm for you. Considering that doing it manually would probably take you a week and a half, it's a pretty good plus. Let's jump over, and I'll show you a little down-scaled version of what you can do with Remote PowerShell. Let's use this server as an example. This is the function I was just telling you about that tells you what the PowerShell URL should look like. I'm going to print that out to the screen, and you can see it gives you the full URL plus :60074. And the only way you'd know that this is 60074 is if you went and looked up the WinRM endpoint and got the public port for it. This cmdlet saves you from doing that. What I can do with it now is enter a PSSession. One thing before I actually get this attached: it will not work until I use that helper function which installs the WinRM cert locally. This is an include file I have here in a PowerShell script. Let me open that really quick to show you what it looks like. This is the entire function. You don't have to memorize this. Thankfully, that's what's nice about having it posted on a blog.
You just have to copy and paste it, put it in your code, and it magically fixes everything. What it does is it finds that generated cert on that virtual machine, downloads it from your certificate store in the cloud, and installs it in your local certificate store. One thing to keep in mind: if you try this, you do have to have the PowerShell ISE elevated, because it does install stuff in your certificate store. Now I can put in my service name and my VM name. One thing is, you only have to do this one time per machine. If you routinely use Remote PowerShell, keep in mind, you only have to do it once. It will go ahead and install that WinRM cert. Then it's going to prompt me for credentials so I can actually log into the machine over Remote PowerShell. I don't know if you can see it, but down at the bottom you can see the prompt is actually at the cloud service, mwrdpclientservice1.cloudapp.net. This code is actually running on that remote server up in the cloud. That is a remote DIR. Not that impressive right now. Now that you know you can log in to your machine remotely and do something with it, let's take it to the next step and execute something on it. Instead of Enter-PSSession we can do Invoke-Command, which takes the same parameters, that URI and the credential, and it also takes a file path. This is where I could pass in a script, just some random PowerShell script that does something interesting on the server, and it will go execute it for me. If you were in my earlier session, sorry, I'm going to show it again. A really quick script on how to install IIS on this machine. This script, all it does is use the Server Manager roles and features cmdlets, and it installs IIS. I didn't need to do that again, but that's okay. Invoke-Command. We'll use the same credential prompt. And if you're wondering how you automate that prompt, yes, you can build the credential silently.
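Script execution and the silent credential he alludes to, sketched. The script path and credentials are placeholders:

```powershell
# Execute a local script file on the remote VM
Invoke-Command -ConnectionUri $uri -Credential $cred -FilePath '.\InstallIIS.ps1'

# To automate the prompt, build the PSCredential from parameters instead
$pass = ConvertTo-SecureString 'S0meP@ssw0rd!' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('clouduser', $pass)
```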
It can read the password and user name from a parameter. You can see it actually executed, and it said no change needed, which I think is because I ran this on the same machine earlier, but it did execute that script remotely. This is really all you need to do to jump in and do a customization on a virtual machine after you've deployed it. Let's talk a little bit about image and disk mobility. I know this sounds buzzwordy, but really all it means is the ability to get a VHD from your local Hyper-V system and move it up to the cloud, or vice versa, take a VHD that's running in the cloud and move it down to your local Hyper-V system. We have 2 cmdlets to help you with this. The first one is Add-AzureVhd, and the second one is Save-AzureVhd. This literally gives you the ability to fully script the migration of a virtual machine. If you have a virtual machine that has 2 or 3 disks in it, you can use Add-AzureVhd to upload them, and one thing really nice about Add-AzureVhd is that, I wouldn't call it super intelligent, but it's fairly intelligent. It only uploads bytes that actually have data in them. If you have a 120-some GB VHD and you only have 30 GB of data, that means there is 90 GB of empty zeros on that disk. Add-AzureVhd does not upload those 90 GB of zeros. It's optimized to the point where it only uploads the stuff you are going to need on the other end. The disk will still be 120 GB; it just doesn't actually upload all the stuff in between. Add-AzureVhd does the upload, and then Add-AzureDisk is different. It doesn't do an upload. What it does is take the VHD file you uploaded and register it so Windows Azure knows it's a disk, because whenever you upload a VHD to storage, it doesn't know a VHD from an image at that point. You have to tell it by registering some metadata around it. That's what Add-AzureDisk does.
A path you could use to migrate a full VM is Add-AzureVhd, Add-AzureDisk, and then pass the disk name to New-AzureVMConfig instead of an image name, and the VM will boot off that disk instead of provisioning from an image. Let's do a really quick demo on VHD mobility. What I want to show you is how we can upload a VHD. I have a really simple script here. Let me make that font size a little bigger. I always forget I can do that. Before I do that, I'm going to go ahead and create a VHD. I'm going to Administrative Tools, Computer Management. And I'm going to create a really simple VHD called My Data Disk, and I'm going to make it 50 MB so we don't have to sit here all day and wait for it to upload. And now that it's there, I'm going to go ahead and initialize the disk, create a simple volume off of it, format it, and then we'll copy something over to it. This is a super efficient way of transferring files up to your VM. Not really. Just kidding. My PowerPoint is on the disk, and now I'm going to detach this guy, because we're going to upload it. Now I'm going to use this script to upload it to the storage account. You can see the source VHD is pointing to the directory where I just saved mydatadisk.vhd, and the target is a storage account that I have on the West Coast, mwwestus1.blob.core.windows.net, and I'm going to put it in the uploads/mydatadisk directory. I'm going to use a non-command-line tool to show you what that looks like. This is the mwwestus/uploads folder. As you can see, there is nothing in it, so let's go ahead and kick off the upload. It's calculating how much data to upload. Obviously not a lot. Detecting the empty data blocks, and it's uploaded. We now have a VHD. The next command it runs is Add-AzureDisk. Add-AzureDisk is the one actually registering it in Windows Azure so it knows that it's a disk. It just registered as well. Now, if I wanted to, I could call Get-AzureDisk. Not the right code.
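The upload-and-register sequence from the demo, as a sketch. The local path and URLs follow the demo's mwwestus1 account; adjust for your own:

```powershell
# Upload the VHD; only blocks that contain data are transferred
Add-AzureVhd -LocalFilePath 'C:\vhds\mydatadisk.vhd' `
             -Destination 'https://mwwestus1.blob.core.windows.net/uploads/mydatadisk.vhd'

# Register the blob so Windows Azure knows it's a disk (a data disk, since there's no -OS flag)
Add-AzureDisk -DiskName 'mydatadisk' `
              -MediaLocation 'https://mwwestus1.blob.core.windows.net/uploads/mydatadisk.vhd'

# Confirm: AttachedTo is null until a VM imports it
Get-AzureDisk | Where-Object { $_.DiskName -eq 'mydatadisk' }
```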
And mydatadisk, and it will give you all the properties of that disk, because it is registered with the service. You can see AttachedTo is null; I haven't attached it to a virtual machine yet. It's on the West Coast. Here's the location that you can download it from, etc. Now let's go ahead and attach it to a VM. We saw how to do this earlier with the update. I will reuse those parameters, and I'll put it on our RDP client. Something failed there. I'll have to take a look and see what that was. When I add an existing data disk to my VM, I'm going to use the import flag. The name is going to be mydatadisk, and I do have to tell it which LUN to put it on, so I'm going to attach it to LUN 0, and then call Update-AzureVM. And while that's running, let's RDP into that VM so we can see it. This is the VM that we're uploading that data disk to. It's not there yet. By default we only have the C and the D drives. I don't have any data disks attached to it, but assuming the command line works we should see, I think, an F drive pop up shortly. There it is. Here's our F drive. And you can see the PowerPoint. Now that we've established that we're going to use VHDs to transfer basic small text files and PowerPoints, let's go ahead and send something back. I can't have a typo in my demo. There we go. Now we want to do something similar. We want to take this back. What I want to do now is detach this from that VM, and you guessed it, use Remove-AzureDataDisk: pass LUN 0 and Update-AzureVM. Now this is not going to be attached to that virtual machine anymore. Very similar to Hyper-V except it's not a pretty GUI. No right click and add disk. You have to actually type in some stuff, but it's cooler, because it's in the cloud. That should be removed here momentarily, and while we're waiting, let's set up our download code. The code is going to be very similar to the add except the source and destination are reversed.
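Attach and detach, sketched against placeholder service and VM names:

```powershell
# Attach the registered disk at LUN 0
Get-AzureVM -ServiceName 'myrdpsvc1' -Name 'myvm1' |
    Add-AzureDataDisk -Import -DiskName 'mydatadisk' -LUN 0 |
    Update-AzureVM

# Detach it again by LUN
Get-AzureVM -ServiceName 'myrdpsvc1' -Name 'myvm1' |
    Remove-AzureDataDisk -LUN 0 |
    Update-AzureVM
```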
The source is going to be the URL that we uploaded it to earlier, so mwwestus1 uploads/mydatadisk. And the destination is going to be the same directory I uploaded it from, except I appended .download to it so I don't overwrite the existing one. Let's go ahead and download that. It's still running. Hold on. This should download the VHD to the directory once the detach has completed. There we go. Here is our freshly downloaded data disk, and you can see that we have successfully copied a 1 KB text file in only 20 minutes. This is actually very useful for moving VHDs with real data in them, and you can upload bootable disks too. When we showed this, it was Add-AzureDisk, and that made a data disk. There is actually a flag on there, if you want to take a look really quick. Here, I'll show you. If you wanted to upload an OS disk you'd use the exact same command, Add-AzureDisk, except the difference is you'd specify the OS. It would either be Windows or Linux. That marks it as a bootable disk, so whenever you go to create a virtual machine in the portal, or even from PowerShell, the Service Management API knows that this is a bootable OS disk and not a random data disk. Let's jump back to our slides really quick. This is another really powerful feature that was recently added by the Windows Azure Storage Team. There is a service called Asynchronous Blob Copy, and what it is really useful for is getting blobs (they can be VHDs, they can be images, PDFs, whatever you want) from data center A to data center B. If you think about the traditional pattern, if you've set up a SharePoint farm or a virtual machine on the West Coast or in someone else's subscription and now you want to get it on the East Coast, how do you do that? If it's 150 GB of VHD files, do you download it to your local machine first and then upload it? You can certainly do that. It's painful, but you can do it. This is why the Asynchronous Blob Copy service was created.
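The download, plus the OS-disk variant of registration. Paths and URLs follow the demo's naming:

```powershell
# Download the VHD; like the upload, only occupied bytes come down
Save-AzureVhd -Source 'https://mwwestus1.blob.core.windows.net/uploads/mydatadisk.vhd' `
              -LocalFilePath 'C:\vhds\download\mydatadisk.vhd'

# Registering a bootable OS disk is the same call with the -OS flag
Add-AzureDisk -DiskName 'myosdisk' -OS Windows `
              -MediaLocation 'https://mwwestus1.blob.core.windows.net/uploads/myosdisk.vhd'
```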
The general idea is you call Start-AzureStorageBlobCopy, and it makes a request out to the destination server that tells it to download a blob from the source server. This source can be anywhere. It could actually be in Amazon. It could be on an FTP server. Anywhere that's publicly addressable, so as long as the storage service can get to it without passing authentication then it can actually copy it. Let me show you a couple examples of how to copy VHDs between data centers. Let's jump into this one. This one is a little bit of code, but I'll explain to you what it does. The first thing to note is I have a source URL. This is mwwestus testcopy1. It's a storage account that's sitting in the West Coast data center, and the target storage account does have to have authentication. I'm pointing this to mwwestus2. That's another storage account in the same data center location, and this is the actual storage account key that the storage service is going to use to do the writes, because the write is authenticated. The read is not, but the write is. I'm going to create a storage context that tells the cmdlets how to authenticate to the target storage account, and from there I'm actually creating a container, which if you're new to Windows Azure Storage you can think of as a directory, called copyvhds, and then I'm going to call Start-AzureStorageBlobCopy, and this cmdlet returns the blob that I'm copying so I can check the status of its progress. The beauty of this design is I can call Start-AzureStorageBlobCopy on numerous VHDs at the same time, and they're all going to kick off asynchronously. It's not going to wait on one to finish before it starts the next one. This is how you can use the Windows Azure Storage Service to copy a lot of data quickly between data centers. Let me show you these 2 storage accounts in Cloud Explorer. Here's my testcopy1. This is a 133 GB VHD. I don't have zoom on here for some reason. Let me fix that. I'm really annoyed that I can't zoom. 
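The copy script just walked through looks roughly like this. Account names, keys, and container names are placeholders, and parameter names are from the 2013-era storage cmdlets, so verify them with Get-Help before relying on this.

```powershell
# Sketch of the cross-account asynchronous blob copy described above.
# The source blob must be readable without authentication; the destination write
# is authenticated via the storage context.

$srcUri  = "https://mwwestus1.blob.core.windows.net/uploads/testcopy1.vhd"
$destKey = (Get-AzureStorageKey -StorageAccountName "mwwestus2").Primary
$destCtx = New-AzureStorageContext -StorageAccountName "mwwestus2" `
               -StorageAccountKey $destKey

# Container on the destination account (roughly, a directory)
New-AzureStorageContainer -Name "copyvhds" -Context $destCtx

# Kicks off the copy server-side and returns immediately with a blob object
$blob = Start-AzureStorageBlobCopy -AbsoluteUri $srcUri `
    -DestContainer "copyvhds" -DestBlob "testcopy1.vhd" -DestContext $destCtx

# Poll (or block) until the storage service finishes the copy
$blob | Get-AzureStorageBlobCopyState -WaitForComplete
```

Because the copy runs inside the storage service, you can start it for many VHDs in a loop and poll them all; nothing streams through your machine.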
I hope I don't make everyone sick from zooming too much now. Okay, so testcopy1.vhd, 133 GB. It's a pretty big disk, so I'm going to copy this from storage account A to storage account B, or realistically, this storage account to this storage account, both on the West Coast. I'll go ahead and hit F5, and it should print out the status, and it's done, which is awesome. How many of you can copy 133 GB instantly in the cloud? That's just cool. That doesn't always work, and let me tell you why. Within Windows Azure Storage if you have 2 storage accounts on the West Coast and you think, "Hey, I'm going to go do this." "It only took Michael a fraction of a second." "It was awesome. It saved me a lot of time." And then go start your copying and it takes 45 minutes, you're going, "What the hell?" What happened is inside of that region there are multiple storage stamps. There is a whole separate container of where your storage actually goes, and you don't have control over which storage stamp your stuff goes into. It just so happens that those 2 storage accounts I created are both in the same stamp, and how you know they're in the same stamp is because the storage account IP address will always be the same. Mwwestus1 and mwwestus2, those are actually the same IP address. Different storage accounts, but the same IP. If I create another storage account that happens to be in the West US region but has a different IP address, it's going to be in a different stamp, and it's not going to be a shadow copy, which is what we just showed. It will take a little bit longer. It's still going to be remarkably fast, because we're copying it over high-speed links, but it's certainly not going to be done by the time I can scroll up to show you the status. I'm going to show a longer version, which is probably more what you'll use this tool for, and this is copying across regions. This one is going to copy the same 133 GB disk from West US to East US, so all the way across the country. Let's go ahead and kick that off. 
And as you can see, it's not successful. It's not done. That would be really cool, though, wouldn't it? You've got to admit. But it's not. It's only marginally cool. What we can do, though, is take the blob returned from Start-AzureStorageBlobCopy. We can pass it back over and over and get the status of it until the copy is done. You can see we've already started copying bytes, so every time you run this you'll get an updated version of it until the copy has been completed. And of course, I use this all the time to copy multiple VHDs between data centers or even subscriptions. Note there is no subscription context here. It's just storage key context. A lot of the use cases you'll see for this are if you created a virtual machine in one subscription over here and you really want to move it to a different one, you can use these cmdlets to move those disks around. Okay, moving forward. Access control lists. This is a brand-new feature. It literally came out last Wednesday, I think. The portal doesn't support it yet. It's only available in PowerShell. But what it's really useful for is restricting access. Say you create a virtual machine, mysqlserver1.cloudapp.net, and you open up port 1433, so say a Windows Azure website or a service running in Amazon or whatever can access it, because you're not always going to have everything on the same virtual network. Sometimes you're going to have to expose these public endpoints. What you can do to lock this down, because I've actually had a customer say, "Hey, I did this architecture, "and I'm getting all these errors in the event log from people typing in bad passwords." "Am I getting hacked?" You're getting hack attempts. You're not hacked yet, but you need this. This will keep those people from even attempting. They can't brute force. They can't do anything. You add an endpoint ACL. It basically allows you to set up to 50 allow or deny rules, and for your permit rules you give it a remote subnet. 
You can give it a description and an order for rule processing. What this allows you to do is allow your friendly website to have access but everyone else not so lucky. The code to do this is not overly complex. It's a little complex for updating existing endpoints. It's really simple for creating new endpoints. The general idea is we create a New-AzureAclConfig. This is the access control list object, and then we have a second cmdlet called Set-AzureAclConfig that allows you to modify it, to basically add or remove rules from that ACL. And then you modify the actual endpoints on the virtual machine via Set-AzureEndpoint. This really simple script will lock down an endpoint for SSH. Let's walk through and show you how that works really quick. There we go. I don't think I ever hit enter here. There we go. It's still copying. On ACLs, this is the exact same code I just showed you in the PowerPoint slide. I think the IP address might be different. Let me see what my IP address is really quick. I'm going to use Google. Sorry. They have a nice functionality. When I type in "What is my IP?" it actually tells me. It comes in real handy. I can take the IP address of whatever proxy server I'm currently going out from and put this in the script, and it takes CIDR notation. I don't know if this proxy is going to come from multiple IP addresses. It actually could be a class B, and this might break eventually too, but I'm going to take my chances and do a class C, which hopefully means I'm not coming from more than one of these IP addresses. But before I do this, I'm going to verify that I actually can connect to this machine. I have a Linux server out here that I'm going to try to SSH into. I have an SSH endpoint open on it. As you can see, I have network access to this endpoint. I can log in, and I don't know my password, of course. But that's not a networking problem. That's a user problem. 
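The three-cmdlet ACL pattern described above looks roughly like this. The service name, VM name, and subnet are placeholders, and since the feature had only just shipped at the time, the exact parameter spelling is worth checking with Get-Help.

```powershell
# Sketch of locking down an SSH endpoint with an ACL, as described above.

$acl = New-AzureAclConfig

# Permit only one subnet (CIDR notation); everything outside the permit list is denied
Set-AzureAclConfig -AddRule -ACL $acl -Action Permit -Order 1 `
    -RemoteSubnet "203.0.113.0/24" -Description "Allow SSH from the office proxy"

# Apply the ACL to the endpoint and push the update to the VM
Get-AzureVM -ServiceName "mylinuxsvc" -Name "linuxvm1" |
    Set-AzureEndpoint -Name "ssh" -Protocol tcp -LocalPort 22 -PublicPort 22 -ACL $acl |
    Update-AzureVM
```

Up to 50 rules can be attached to one endpoint, processed in order, which is how you mix narrower deny rules with broader permits.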
I can SSH into this machine, and I'm going to go to another machine out in the cloud not on the same proxy here to verify I can get in there as well. This is a VM running in Windows Azure. You can see I can log in here as well. Let's go ahead and apply this ACL. Theoretically, whenever this update is complete, the only ones that should be able to access this Linux server are the people behind this proxy server. The last time I ran it, it took about 3-5 minutes to update, so let's come back and test this one here in a few minutes. Let's see if we can get through a little bit more content. Let's talk about designing for scale and availability. Question. The question is can you do ACLs for PaaS, web and worker roles, and the answer is unfortunately not yet. It's in the works, but this access control only applies to IaaS virtual machines at the moment. Were you asking the same thing? Sorry. I was actually the PM for Cloud Services before I came over, and I was requesting this very strongly as well, like what's up with that? How many of you are familiar with availability sets? What an availability set does is it allows you to take a group of virtual machines, say you have 2 web servers or 2 SQL servers or any number of servers that do the exact same thing, and it splits them up across multiple data centers, or sorry, racks in the data center. What this gives you is if you have 4 web servers, it could potentially split them up across maybe 2 to 3 racks. What it gives you is redundant power supply. It gives you redundant switching and load balancing, so if we lose an entire rack in a data center, if any of those single points of failure fail, half of your application is still up and running or maybe more depending on how many virtual machines are in that availability set. 
The other thing it does is it tells the Windows Azure Fabric Controller how your application is structured so whenever it has to do updates at the actual host, which means it's going to reboot your machines, it doesn't reboot all your web servers at once. Windows Azure, the hypervisor, is running Windows Server 2012 Hyper-V. Just like Windows Server 2012, it has to be patched. It has to be updated, and whenever it's patched and updated, it occasionally does require a reboot, and whenever that happens, they're going to take down the virtual machines that are running on that host. How you avoid downtime and get that 99.95% SLA is by grouping machines that do the same tasks, like domain controllers or web servers or SQL servers, into an availability set. It's really simple. It's just a property set at VM creation time, and what this does is whenever a host update rolls through, the fabric controller won't bounce all your servers in that availability set at once because it knows how your application is structured. The second thing I want to talk about that increases your high availability is load balanced endpoint sets. Everyone knows what a load balancer is and why they're valuable—it distributes load— but it can also be used for high availability. One of the things that you can do in Windows Azure is define health probes. If you have a web application that—say you have 2 or 3 web servers— if you have a web application and something bad happens on one of those virtual machines, say it doesn't go down. Say it starts to return a 500 error, a misconfiguration on one of the VMs. You certainly don't want that virtual machine in the load balancer. You can actually tell Windows Azure to probe a specific URL to detect the health of your application, and if it returns anything other than a "Hey, I'm good" 200 response, then it's going to yank it out of the load balancer. 
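Setting that availability set property at creation time can be sketched like this; the image name, password, and service name are placeholders.

```powershell
# Sketch: placing two web VMs in the same availability set at creation time.

$img      = "MSFT__Windows-Server-2012"   # hypothetical image name
$password = "MyS3cretPassw0rd!"

$web1 = New-AzureVMConfig -Name "web1" -InstanceSize Small -ImageName $img `
            -AvailabilitySetName "webavset" |
        Add-AzureProvisioningConfig -Windows -Password $password
$web2 = New-AzureVMConfig -Name "web2" -InstanceSize Small -ImageName $img `
            -AvailabilitySetName "webavset" |
        Add-AzureProvisioningConfig -Windows -Password $password

New-AzureVM -ServiceName "mywebsvc" -Location "West US" -VMs $web1,$web2

# An existing VM can be moved into a set later:
# Get-AzureVM ... | Set-AzureAvailabilitySet -AvailabilitySetName "webavset" | Update-AzureVM
```

Keep one availability set per tier, matching the guidance in the session: all VMs in a set should be interchangeable, since the fabric controller treats the set as a unit it must keep partially online.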
How you can design a highly available solution is by having availability sets at each tier. You definitely don't want availability sets to span tiers. You want them to encapsulate a tier. You define availability sets for your web front end, your data back end, and then you're going to end up having a load balancer at the public front. We don't have internal load balancing yet, so it's only the public load balancer that you have to worry about at the moment. How the custom probes work is a little bit more detailed. There are 2 types of probes. There is a TCP probe, and then there is an HTTP probe. The TCP probes are really useful. They're kind of the dumb probes. What they'll do is poll every 15 seconds, and instead of actually looking at an HTTP request (they're not that smart) they do a socket connect, and if they get an acknowledgement back, then that machine is good to the TCP load balancer probe. If it doesn't get an ACK, then it's going to stop traffic on that port, and it's going to take the node out. The good news is it does continue polling, so if you went to the machine and you were doing some kind of maintenance on it, you shut down IIS or whatever service you're working on, do whatever maintenance you want, turn it back on, it's going to automatically be added back to the load balancer. There is nothing else you have to do. Now, what this doesn't do is detect 500 errors. It's just detecting a socket connect. If your website is crashing because you've accidentally applied something goofy in your config files and you didn't realize it, that's going to be seen by your users. It's not going to stop that. That is where the HTTP load balancer custom probes come in. They're a little bit smarter, so instead of looking at a specific port they're actually looking for a specific URL. You can pass it /healthcheck.aspx, or you can point it at the root of your site if it's anonymous, which does lead me to point something out. 
This does not work on an authenticated URL. The first thing you'll notice if you ever try to deploy SharePoint or something that requires authentication, for instance, and use probes, is it's not going to show up in the load balancer, because the load balancer doesn't have any credentials to hit your site with. It's going in completely blind, completely anonymous, and it's going to get back a 401, which to the load balancer says your site is not up, sorry. What you do in that case is—or what I've told people to do with SharePoint, for instance, is create a virtual directory in IIS or whatever you're monitoring and enable that for anonymous access, and you can specify the anonymous path whenever you create the probe. Again, it polls every 15 seconds. This time it actually cracks open the HTTP response. If it gets anything other than a 200, then it's going to stop traffic, but it does continue polling. Let's walk through a quick demo of configuring the load balancer. But before we do, let's check and see if our ACL is happy. The update did work. I should be able to go into—actually shut this one off. Get that URL back, and let me try to connect via PuTTY again. I can still log in here, slowly, but I can. Here we go. Now can I log in from my Windows Azure client? And life is good. I've locked it down to only the TechEd conference. A minimal number of people that know my password that can hack me. Only 6,000, or I don't know how many people are here. A lot. Definitely going forward you should absolutely use ACLs to lock down your public endpoints. Okay, let's take a look at configuring the load balancer. To do that, I'm going to show you the code on how to create a load balanced endpoint. But I already have one, so I'm going to show you what it would look like if you were doing this yourself. Again, we're going to create a new Azure VM config. We're going to have a provisioning config. We're going to deploy Windows. 
It's going to use washam and my hackable password. And now if I wanted to make a load balanced endpoint I would call Add-AzureEndpoint, name web, protocol TCP, local port 80, public port 80, probe port. This is the probe port I was telling you about, because one thing to keep in mind if you do enable probing is it doesn't have to be on the same port as your web application. You can have your web app on port 80, and you could have an anonymous health endpoint listening on 8080 or whatever you want it to do. Those don't have to be the same. Probe protocol is HTTP or TCP. This is how you flip the option I was showing you in the slides. And the probe path, and there are a couple other things. You notice the 15 seconds I was mentioning. Those are actually configurable. You can configure them to be longer or shorter, but I don't think it gets a lot shorter than 15. This could be the root of your site, or it could be healthcheck.aspx, whatever you feel like doing, and then this would be a New-AzureVM, and now whenever this VM is booted up it's going to have a load balanced endpoint. The magic is whenever you create other VMs they'll have the exact same load balanced endpoint. If they have this setting, lbweb, and it matches, whenever they all boot up they're all going to be load balanced on the same port. Just to show you that, I have a set of load balanced VMs up and running. Get the cloud service name, and I'm going to pipe those 2 VMs out to Get-AzureEndpoint so I can dump out the endpoint configuration. You can see I have 5986. I have 8080 open, because I have Web Deploy configured on this, and I do Web Deploy publishing over 8080, and here is the load balanced endpoint probe, lbweb, and it's going to /home/healthcheck. I have an MVC page that listens on that request, and it can optionally return 200 or 500 to test this out. And then the other endpoint for the other virtual machine is up here as well. Let's test this theory out. 
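The endpoint dictated above can be sketched end to end like this. The lbweb set name and /home/healthcheck path match the demo; the image name, password, and service name are placeholders.

```powershell
# Sketch of a VM with a load balanced endpoint and an HTTP health probe.

$img      = "MSFT__Windows-Server-2012"   # hypothetical image name
$password = "MyS3cretPassw0rd!"

New-AzureVMConfig -Name "iiswfe-0" -InstanceSize Small -ImageName $img |
    Add-AzureProvisioningConfig -Windows -Password $password |
    Add-AzureEndpoint -Name "web" -Protocol tcp -LocalPort 80 -PublicPort 80 `
        -LBSetName "lbweb" -ProbeProtocol http -ProbePort 80 `
        -ProbePath "/home/healthcheck" |
    New-AzureVM -ServiceName "mylbsvc" -Location "West US"
```

Any additional VM created in the same cloud service with -LBSetName "lbweb" joins the same load balanced set on the same public port, which is the "magic" described above; the probe interval is adjustable via a parameter on Add-AzureEndpoint if 15 seconds doesn't suit you.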
The first thing I want to show you is browsing to this site, and you can see "Currently being served from IISWFE-0." You can see that right there. Now if I hit F5 a couple of times, there we go. Now we're served from IISWFE-1. We're getting load balanced. What I want to do is modify my health checker. I have a little bit of code here in the health check controller method that basically says if the current VM is in my configuration return 200. If it's not, throw a 500, so the load balancer will take me out. In my web config, I have a little section here, ActiveServers, with those VM names. If I take out iiswfe-1, it will take it out of the load balancer. Before we get started on doing that really quick— actually let me go ahead and do that. I'll go ahead and take out iiswfe-1. Hit publish. It's going to copy out via Web Deploy, and Web Deploy does a scheduled task that copies it between all the other VMs. As you can see, it landed on IISWFE-1. Obviously it's not that fast. It's going to take 15 seconds for the load balancer to come back and check, get the 500 error, and then it's going to take a few seconds to kick it out. Plus I also have a minute of content synchronization delay, so this demo takes anywhere from a minute to 2. Let's go ahead and log in to that VM quick and take a look at what it's doing. Here are the log files for this website. You can see the only thing that's really been hitting it is I got a bunch of 404s earlier from when I turned this on and I didn't have any of my website set up. But then once I got the website turned on where it was working I'm getting a bunch of 200s. You can see the 200 responses, which means everyone is good, and that's when I was in the load balancer. Then if I scroll way down, I don't know if it's hit 500 yet. Not yet. Here in a moment I should see a 500 error at the end of that log. There we go. Now the load balancer is getting 500 errors. 
Instead of a happy 200 I'm throwing a "Hey, I crashed, and I don't know what's going on" error. Now when I go back to the load balancer in theory I should only ever see IISWFE-0 again, at least until I go fix my load balanced endpoint. This can be really useful for actually writing some code to detect that you can do transactions or that your database is happy. It doesn't have to be just that IIS is up and running. You can add some serious logic to this to make sure that your application is up and running. Okay, so you heard a couple of people that do web and worker roles. How many are doing Cloud Services PaaS? Cool, a good number. It usually beats the number that I see when I talk about IaaS. From a Cloud Services perspective, we've done quite a few things recently. I know most of the work in PowerShell has been around IaaS, but we've started going back and adding some functionality to PaaS as well. Just the core stuff that you can do with Cloud Services. You can deploy .cspkg and .cscfg files. You can deploy PaaS services to Windows Azure Virtual Networks. You know if you go to Visual Studio you can't deploy to a VNET. They didn't know that either until a few weeks ago, which is shocking, and the reason is because when you deploy to a VNET, you always have to deploy to an affinity group, and with a cloud service the only thing you can pick from Visual Studio is a data center location, not an affinity group. Anyway, you can do that from PowerShell, which is awesome. You can restart and reimage. You can change your role instance count. You can now dynamically turn on RDP and diagnostics. This was a really huge pain point before, especially for our support people in Windows Azure. You'd deploy your application. A few weeks later something would go wrong with it, and you want to RDP into it, and you forgot that when you created your deployment you have to do that at compile time. It's actually a build option in Visual Studio to enable RDP. 
It's not something you could go turn on via the portal, or it wasn't until now. Now we've added functionality to dynamically enable RDP and even diagnostics. The only thing that will get you here is you can only turn these on and off dynamically if they haven't ever been enabled in the first place. The new functionality to turn on RDP and diagnostics is not compatible with the existing functionality to turn on RDP and diagnostics. What I'm going to show you from PowerShell, if you want the ability to use it, you won't be able to deploy it on cloud services that already have RDP turned on. I know it's confusing, but that's the way it is. The second thing is you can enable change configuration. We added a new upgrade mode called simultaneous. Has anyone tried simultaneous upgrade yet? I'll show you what that is really quick, and we also have a bunch of developer-based cmdlets that allow you to create scaffolding for new Cloud Service apps for Node, PHP, Python, and even .NET now if you want. Let's walk through a little bit of Cloud Services management. The first thing I'm going to show is how to do a deployment. How many of you have done a command line Cloud Services deployment? Anyone? Awesome. This is how you can deploy a cloud service without having to go to the portal or through Visual Studio. I have a cloud service here. I'm going to go ahead and package it up, and you can package this from the command line too using CSPack if you want. But I'm lazy, so I'm going to use Visual Studio to do this. It's packaging up my web app, and I need this path to do this. I'm going to create another PowerShell tab, and I'm going to say New-AzureService. Hopefully that's not too long. I don't think it is. I need to create a cloud service first, and then I'm going to create a new deployment inside of that cloud service. It has to be the exact same name, mytechedcloudsvc1a. 
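The dynamic RDP and diagnostics enablement mentioned above shipped as service extensions with their own cmdlets. A minimal sketch, assuming the service name from the demo and remembering the caveat that this only works on deployments that never had RDP baked in at build time:

```powershell
# Sketch: dynamically enabling RDP on a running cloud service via the new
# extension model. "paasdeploydemos1" matches the demo; the credential is
# the remote desktop account to provision.

$cred = Get-Credential

Set-AzureServiceRemoteDesktopExtension -ServiceName "paasdeploydemos1" -Credential $cred

# Diagnostics can be switched on the same way; the config path here is hypothetical:
# Set-AzureServiceDiagnosticsExtension -ServiceName "paasdeploydemos1" `
#     -StorageContext $storageCtx -DiagnosticsConfigurationPath ".\diag.wadcfg"
```

Both extensions can later be removed with their corresponding Remove-* cmdlets, which is what makes this reversible without a redeploy.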
And I need to specify the location of the package I created and then the location of the configuration file and then the slot to put it in. This could be production or staging, and then I hit F5. And assuming I didn't forget something, this will take your .cspkg file and your .cscfg, upload them to the current storage account that we set earlier, and then deploy that out to the cloud. That's pretty straightforward for an automated deployment. How come you guys don't do this? Is it a lack of awareness? This is how you create a new deployment. Now let's talk about how you do an upgrade. How do you do simultaneous? The reason you want to use simultaneous— well, a couple of caveats. Simultaneous upgrade, the internal name for it is blast upgrade, and why it's called blast is instead of walking the upgrade domains and doing everything in a nice, orderly fashion it obliterates everything and copies the new files over. It's really fast, but it also gives you downtime. You don't want to do a simultaneous upgrade on production deployments. It's really only useful in your staging or dev slots unless you don't mind a few minutes of "Hey, what was that weird error I saw?" How we can do an upgrade is— let's see. I want to use an existing service so it doesn't take so long. That's not going to work. I have an existing service, paasdeploydemos1. Actually, let's pull it up in a browser really quick. This is my cloud service. It's a web role. "My ASP.NET Web Application Deployed with PowerShell." Super fancy. I spent a lot of time building that demo. What I'm going to do now is I'm going to completely screw it up and change it around. "Updated with PowerShell." Now it's completely different, and hopefully all my customers will be able to recognize it. But now I'm going to go ahead and package this up just like before. And I will warn you, I demoed this at MMS a few months ago, and for the life of me I couldn't figure it out. It would run, and it wouldn't update. 
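The two-step command line deployment just described boils down to this. The service name matches the demo; the package and configuration paths are placeholders from a local Visual Studio (or CSPack) build.

```powershell
# Sketch of a scripted Cloud Services deployment, as shown above.

New-AzureService -ServiceName "mytechedcloudsvc1a" -Location "West US"

New-AzureDeployment -ServiceName "mytechedcloudsvc1a" `
    -Package ".\app.publish\mycloudapp.cspkg" `
    -Configuration ".\app.publish\ServiceConfiguration.Cloud.cscfg" `
    -Slot Production
```

The cmdlet uploads the .cspkg to the subscription's current storage account behind the scenes, which is why Set-AzureSubscription with a storage account has to have been run first.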
Thankfully, it was the end of the session, so I just said, "Get out." No, but I did try it earlier, and it worked like a champ, which means I'm sure it won't work now. Let's go ahead and try this upgrade. I'm going to say "Set-AzureDeployment, upgrade, mode," and this will be simultaneous, and I need to specify the service name, which is paasdeploydemos1. The same thing. I need that configuration file. I don't know why it didn't pull it up for me. And I need the configuration. I remember that file name. And of course it wants the slot. I'm going to go ahead and run this deployment, and instead of walking each upgrade domain— I think this web and worker role service has 2 roles, so it should at least be in 2 upgrade domains— it's going to basically take them down and upgrade them at the same time. It should be faster compared to the standard automatic mode, which does it one upgrade domain at a time. And of course, it does have to upload files, so it's going to take a few minutes for it to actually work. While that's working—actually, let's wait for it. I'm really hoping that it works this time so I can redeem myself from MMS. Question. [inaudible audience question] Right, and you're doing automatic deployment right now? [inaudible audience question] It's going to do the same thing. TFS basically runs CSPack and calls the same APIs. It's not going to buy you a whole lot. A couple of things you can do to speed up deployments is you can try simultaneous mode and deploy to staging. It should be faster. As you can see, the deployment is already done, hopefully. Before I talk so highly about it, let's see if it actually worked. In theory, simultaneous will be faster, but you should only use it whenever you're deploying to your staging slot. Once you're deployed to your staging slot, you can do a VIP swap to swap those around. That should buy you some time. It all depends on what the reason for the delay is. 
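The simultaneous upgrade plus the recommended VIP swap can be sketched like this; the package and configuration paths are placeholders, and Mode can also be Auto or Manual for the ordinary upgrade-domain walk.

```powershell
# Sketch: "blast" upgrade into staging, then swap into production.

Set-AzureDeployment -Upgrade -Mode Simultaneous -ServiceName "paasdeploydemos1" `
    -Package ".\app.publish\mycloudapp.cspkg" `
    -Configuration ".\app.publish\ServiceConfiguration.Cloud.cscfg" `
    -Slot Staging

# After verifying the staging slot, promote it with a VIP swap (no re-upload)
Move-AzureDeployment -ServiceName "paasdeploydemos1"
```

Targeting staging keeps the blast upgrade's downtime out of sight of users, and the swap itself is just a routing change, so it completes in seconds.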
Usually the cause is that it has to deploy a bunch of virtual machines and do the upgrade. All right, moment of truth. Maybe. It's taking a long time to load again. But another option is—that's a little too simultaneous. Maybe we're having network problems. Let's see. Another thing it could be is how big is the package size that you're deploying? Yeah, that's usually the case, because it does have to upload that blob file. One of the things that I've seen customers do is they'll take that package, and a lot of that content, if it's not changing a whole lot, they'll stage it somewhere, like in a cloud storage account, and then they have a start-up task that downloads it at deployment time. You're not going through the multiple upload process. [inaudible audience question] Yeah, for libraries there is nothing you can do, because those are actually in your BIN directory, right? Actually, I think it's even possible to do that. I'm very disturbed that my deployment stopped, though. There it is. Okay, remember what I said about having downtime? There you have it. This is why you only do that in your staging slot. We were able to upgrade, but I would certainly look into doing simultaneous on your staging slot and then doing a VIP swap after the deployment is complete. And obviously, you're probably going to want to time it, because even after the cmdlet says it's complete, it's still going through the upgrade, and I remember that whenever we went through this there was no solid way of knowing whether everything is up and running except writing some code [inaudible] roles. What you saw is a lot of powerful cloud automation focused on PowerShell, and I hope that everyone was here for virtual machines and cloud services and not, "Hey, why didn't you automate Windows Azure websites?" I'm going to do that later, but right now it's all focused on compute. 
If you guys have any questions, I'll stick around for a few more minutes, and also feel free to ping me on Twitter, or you can hit me at my blog. I'll be happy to answer any questions. When you actually go try it is usually when the questions come. Thank you very much. [applause]

Video Details

Duration: 1 hour, 15 minutes and 17 seconds
Country: United States
Language: English
Genre: None
Views: 6
Posted by: asoboleva99 on Jul 9, 2013

http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/WAD-B305#fbid=kG7OLm6xV3l

