
Lessons Learned: Real-World Architectures and Examples

Welcome. This session is about real-world architectures and examples. But before we go there I want to introduce myself. Go there—no. Okay—so—the easy version of my name is Panos. Do we have any people from Russia in the room? That is a good thing because my name doesn't sound good in Russian. It means something really bad. No—seriously—it is not a joke. So I am a Windows Azure MVP and Microsoft v-TSP. I work as a Principal Cloud Architect in the Devoteam Group, which is an enterprise in Europe with a presence in Europe and the Middle East. You can also find a link to my blog and my Twitter handle in case you want to follow or ask any questions after this session. And just to give you an idea about what I am going to talk about today: I'll talk about the problems that we have with existing architectures at customers. I'll also talk about the challenges that we have with those customers and what they want to do when we migrate to the cloud. Also I will show you real-world architectures from those customers. And you will see considerable differences in the visual diagrams. I cannot tell you exactly which customers they are, but I can give you a hint, because my legal team doesn't like all that stuff. So I had to anonymize all of the architectures—although I had the agreement from the customers to use them, they have to be anonymous. I also have to say that they might not fit your scenarios, so do not hold me accountable—neither me nor my company—in case these do not work, because each case is unique, and I express personal views in this presentation. And to end with the boring stuff: I tackle the problems both from a technical and from a human perspective because they are both important. Just to set the expectations, I do not have demos—all right? So you are either going to die during the presentation or you are going to like it. 
So—the problems that we have with current architectures and what we have seen at customers before the migration to the cloud. There are different sources of those problems. One is legacy, and I don't know if it's the same in the U.S., but in Europe legacy is a serious problem. We find a lot of old technologies and a lot of old practices in architectures at customers. Interoperability is another source of problems—systems that cannot talk with each other because they are just not meant to. And that doesn't mean only between languages like Java or .NET; it also means between different products existing at the customer, where they created specific integration points so those two different products can talk with each other. Third-party vendor software—software that the customers have and rely upon for everyday tasks that is not ready for the cloud, so it cannot really be used as it is in the cloud. Custom software that the customer might have—internal applications from internal development teams, or even products created by another company but customized only for that specific customer, so it's like a combination of third-party vendor software and custom software. And the last one is company policies and the human factor. So—talking about legacy—we've seen .NET 1.1 at customers literally about three months ago. And honestly I was shocked. I was like, "1.1? Seriously? That thing has been out of rotation for I don't know how many years." We have seen Windows 2000. We have seen Windows 2003 and Exchange 2003. We have also seen people who do not really think about scalable applications in the cloud; they are just thinking, "I have this box, and I want to take this box and move it to the cloud," which is not the case because we're not talking about boxes anymore. We are talking about applications. 
We do talk—up to a point—about boxes in infrastructure as a service, but that is a completely different story, and that is a completely different set of architectures. So legacy mindset is another source of problems that we encounter at customers. They don't think about being cloud-ready. They just take the typical approach: "I have a problem." "What is the problem?" "I don't have memory." "Okay—let's add more memory." And so it goes on. They don't care about what the source of the problem is. But if you try to do the same thing in the cloud that is not going to work—right? It will cost you a lot of money. And we have also seen legacy deployment models and tools. That means outdated tools like SourceSafe. Microsoft people might not like me saying that, but it's true. I don't like it, and customers don't like it anymore because there are better solutions. We have seen problems with continuous integration versus manual deploys. And this was a huge problem at a bank—a worldwide bank actually, with a presence primarily in Europe—that had a lot of problems with deployments. When they saw Windows Azure and the way we can do deployments—which is explained a bit later—it greatly helped them meet their financial targets: it reduced their operating cost by 25% each year for three consecutive years—which means 75% within three years. It is kind of insane, but anyway. And paperwork—it might sound funny, but there are still customers that need signatures to be able to deploy. So developers need signatures—like physical signatures—from managers to be able to deploy to the cloud, which is—again—a bit crazy. Interoperability—existing integrations between systems that are already present at the customer but are tightly integrated with each other. So you cannot really take one component, put it into the cloud, and then just make the other one listen to it. 
They just have to be there, and they're tightly coupled. You cannot really take one or the other—you have to take both. Or there are compliance reasons that don't allow you to do that—legal reasons that do not allow you to do that. Bad implementations and practices on the Enterprise Service Bus, in cases where it provides an integration layer between different applications. Bad implementations like sending a message onto the bus and all the interceptors getting the message, each deciding whether it is going to process it or not—which is completely the reverse of the purpose of the bus. It is the bus that should choose where to deliver the message, not the interceptors themselves deciding whether to accept it. And also bad implementations using custom code instead of standard XSLT transformations—using custom code to do transformations and then put the message on the bus so the other system can listen and receive the message. So outdated technology—not only in tools, like using old tools and .NET 1.1 and all those things—but also outdated implementations: achieving high availability just by doing clustering, not re-architecting the applications to be highly available but just adding a high-availability layer at the infrastructure level, and no cloud support. So for a platform-as-a-service model—which is the one that we try to push customers to—they cannot really go there. Now—third-party vendor software—no cloud readiness. We have almost the same problem as with integration, so some of these things repeat themselves. Unattended installs, needed for platform-as-a-service support, are not there for some of the products. Even when they are there, it is not always easy to automate the install so that every time a new instance comes up in Windows Azure we can do an unattended install and prepare that software to work with the rest. 
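The interceptor anti-pattern described above (every consumer receives every message and decides whether to process it) inverts the bus's job. A minimal sketch, assuming nothing about the customer's actual ESB product, of routing done on the bus side—subscriptions declare a filter, and the bus delivers only matching messages:

```python
# Sketch of bus-side routing: the bus applies each subscription's filter
# and delivers only matching messages, instead of broadcasting everything
# and letting every interceptor decide. All names are illustrative.

class Bus:
    def __init__(self):
        self.subscriptions = []  # list of (filter_fn, handler) pairs

    def subscribe(self, filter_fn, handler):
        self.subscriptions.append((filter_fn, handler))

    def publish(self, message):
        delivered = 0
        for filter_fn, handler in self.subscriptions:
            if filter_fn(message):   # routing decision made by the bus
                handler(message)
                delivered += 1
        return delivered

bus = Bus()
invoices = []
bus.subscribe(lambda m: m["type"] == "invoice", invoices.append)

bus.publish({"type": "invoice", "amount": 120})
bus.publish({"type": "heartbeat"})  # never reaches the invoice handler
```

The consumer only ever sees messages it asked for, which is the behavior the bus is supposed to guarantee.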
Sticky sessions—there are workarounds, but if the product is not designed to use workarounds you still cannot use them. So using IIS ARR—if the product doesn't know that it's there, then it's not going to work either. In-memory state and caching—that is a really bad practice anyway. You shouldn't do that on-premises either, but it goes hand in hand with sticky sessions: since they have sticky sessions they didn't care about keeping state in memory, because with sticky sessions the memory is already there—so the state is already there. File-system persistence—old CMS systems, described a bit later, are something we've seen, and they do the persistence on the file system instead of in a combination of the database and the file system. We've also seen that those vendors are not willing to actually fix those issues. They're just willing to sell consultancy and new products to those customers—which makes sense when you want to do business, but it doesn't make sense when, as a consultancy, we want to build a business case for them. Custom software—again—same problems. Not cloud-ready—same problems as the vendors; everything described for the vendors could be part of custom software. Developers are not really trained for that stuff. We have seen that a lot at customers, and Microsoft is doing a great job trying to provide as many resources as possible to help up-skill developers, but this is an ongoing problem, and we've seen it everywhere. People are not really trained in de-coupled architectures. People do not really know how to build and implement de-coupled architectures. They are not used to asynchronous models; they are used to "I send something and I get something back"—they are not used to the whole idea of asynchronous operations. And they're not really embracing failure—which is something really common in cloud architecture. 
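The fix for the sticky-session/in-memory-state pairing is to externalize the state, so any instance can serve any request. A minimal sketch, with a plain dict standing in for a distributed cache such as Windows Azure Caching (all names are illustrative):

```python
# Session state kept in a shared store (a dict standing in for a
# distributed cache) so no instance needs sticky sessions.

shared_cache = {}  # stands in for the distributed cache service

class WebInstance:
    """One web role instance; holds no session state of its own."""
    def __init__(self, name):
        self.name = name

    def handle(self, session_id, item):
        # read-modify-write against the shared store, not local memory
        cart = shared_cache.setdefault(session_id, [])
        cart.append(item)
        return len(cart)

# The load balancer may send each request to a different instance:
a, b = WebInstance("a"), WebInstance("b")
a.handle("sess-1", "bulb")
count = b.handle("sess-1", "meter")  # instance b still sees the session
```

Because the session lives in the shared store, re-imaging or adding instances loses nothing.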
So if something goes down the rest should keep working—you have to find a way to make them work. And developers get offended: if there is a significant mind change from what they do every day, they feel offended because they feel they're not up-to-date, so when somebody comes in and tries to explain the new stuff they're in denial. They don't like that somebody is trying to change the way they work every day. So that causes denial. And policies and the human factor—this might be the most important of all, rather than the technical part; I don't know how many of you actually encounter those things, but this is sometimes more important than the technical challenges themselves, because customers do have preferred vendors. And Microsoft is not always one of them—right? There is competition out there, and having to convince them to use this is not the easiest thing. And preferred vendors come from preferred people. So personal relations with key people in different companies also dictate what's going to happen. There is no governance at all, so finding who is responsible for something is not easy. We've been at a customer—as I will explain a bit later, it's a car manufacturer, a worldwide car manufacturer that uses Windows Azure—where it wasn't exactly the easiest thing to find who is responsible for something. There were like three or four different people responsible for the same thing. And they had to be synchronized to take a decision, which is kind of crazy because there was not really one person who could say this is going to happen and the rest would just listen. There were like four different people for the same thing. And they only realized it when we were there just looking for answers. It was like, "Who is responsible for that?" "That guy, that guy, and that guy." "All right—who takes the final decision?" "Everybody." So—it wasn't really easy. We have seen integration within the same company. 
So—it sounds crazy, but we've seen integration within the same company. Different departments don't talk with each other to set some standards. Everybody does whatever he wants, and then they do integration projects within the same company just to talk with another department. And we've also seen marketing-driven budget. That means the marketing department decides what is going to happen, not the CIO, and this is also common in companies that are completely consumer-facing, like car manufacturers. So—we had a couple of challenges in that case. In all of them we've seen some things that just have to be there before moving to the cloud. Developers—and people there generally—have to accept the new era. They really have to accept that they want to go to the cloud, and they know the risks, and they know why they have to go there. We have to up-skill people. That's a fact pretty much everywhere. If people are not trained to do that stuff you have to up-skill them. There is no other way. We had to find software that's compatible with the cloud and the cloud principles and paradigms. Avoid integration but promote interoperability. Simplify deployments and also simplify everyday operations. And not only that, but we want cloud-based software—software that runs in the cloud with similar functionality. We don't care if it's software as a service or platform as a service or something like that, but we need those customers to have something that does what their on-premise software does right now. Find an equivalent for the cloud. Don't lose functionality, except if the cost justifies the difference. And that was the case with Office 365 and competitive products—right? We've seen CIOs who were considering going to other solutions that do like half of what Office 365 does, just because it was cheaper to go with the other one, and maybe they didn't need the whole functionality anyway. 
And we wanted to remove any legacy boundaries—meaning, if there was anything like a .NET 1.1 component that cannot be rebuilt, at least build a wrapper around it and expose it somehow to the cloud so people can consume it. And start isolating all those bits within the architecture, then progressively remove them and move them to the cloud as well. But as a first step we had to find solutions to those challenges. So—to get to the real-world architectures, I am going to start with a customer which is an electricity company in northern Europe, with a presence in six different countries. And they are using Windows Azure for a couple of reasons. They are using Windows Azure to host customer portals. They are using Windows Azure to collect information from smart meters that they have installed in different houses and different companies—so both for home and for enterprise. And they are also using Windows Azure for their own internal applications—not only Windows Azure itself but also Office 365 and stuff like that. The overview is that this customer is just .NET, so that was a bit easier for us. They have Exchange 2010 with an archiving extension from a very big company. And why do I say that? Because we had to find a way to migrate those archives from their local repository to the cloud—which eventually didn't happen, because they realized that it just doesn't make sense. But the numbers are huge—they are slightly insane. We are talking about 60 terabytes of backup data every week, and that data has to be retained for four years because of regulations. So if you do the math this is a lot of data. And there are about 20 terabytes of personal storage for their employees. And this personal storage had to be migrated to the cloud and somehow be available to everybody from everywhere in a secure way. 
So we were looking at solutions like mounting Windows Azure storage, with products out there, as actual drives on their PCs. We were looking at SkyDrive. We were looking at SkyDrive Pro. Eventually they went for SkyDrive Pro—that lives on top of Office 365—but it's quite a lot of data. They had very optimized virtualization; it's the only customer that, when we requested the actual cost per machine, was able to give us the pinpoint number of exactly how much money they spent for one virtualized machine. And based on that we did the calculations to find out how we could do the migration. They had a lot of applications—about 487 internal applications, all for different things. Some of them were critical. Some of them were not critical. About two weeks ago they migrated their first critical application into the cloud, so this is a big win for them. What was always a good thing is that operations in that company loved the idea of platform as a service, so they were always in favor of creating packages and using those packages. And if I want to add more instances, I can just scale—you know, turn the dial and add more instances, and so it goes on. There should be something with—. Yeah—I have to face that way. Okay. So—one of the legacy things we found in there was AzMan. Do you know AzMan? All right. So that's how they were finding the identity of their consumers—I'm sorry—their employees; I said consumers because they consume the software internally. So their employees existed as identities in AzMan; they were using AzMan for that. The good thing is that, in the custom framework they built on top of .NET, the architect who designed it was really ahead in his mindset in how he thought about things, and he was a Microsoft fan as well. So he tried to go with the flow. 
So the framework was pretty pluggable, so it was pretty easy to support Windows Azure Active Directory and ACS. Switching from AzMan and going there was pretty easy. It took about a month, so the change was fairly easy. And also, because operations wanted to be in control, what they did is move all the configuration variables to the portal, and they were exposed through the configuration API. They had WCF services in there, and they were using the discovery part of WCF services. Has anyone worked with WCF Discovery? No one? Seriously? Okay. All right. So that customer did. The problem with WCF service discovery is that it uses multicast to find the other services. It sends a UDP packet, and then it tries to find the necessary service to consume. That doesn't work on Windows Azure because multicast is not supported on Windows Azure. So we had to create a proxy that actually goes and finds the necessary WCF service within the network and provides the URL back to the service that wants to consume it. They were using AppFabric for one reason only, which is warm-up. AppFabric has a feature for WCF that just warms up the WCF service. They were using AppFabric for only that reason, and they didn't want to drop it. So that's actually installed on startup. State is persisted; they needed state, so we were persisting that for some applications in Windows Azure storage and for some others in SQL Database. Caching was another thing they were using, so we proposed the equivalent, Windows Azure Caching. And dependencies on services—they created generic facades on top of them. Basically we abstracted all the dependencies of those services with facades, so even if the implementation changed they didn't care. So—and as I said—you will see a considerable difference between the two enterprise architectures and how we did those things. 
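The discovery proxy just described replaces the UDP multicast probe with a lookup: services register their endpoint, and consumers ask the proxy for a URL instead of probing the network. A minimal sketch under that assumption (contract names and endpoints are made up for illustration):

```python
# Sketch of replacing multicast discovery with a registry lookup.
# Services register their endpoint; consumers resolve by contract name
# instead of sending a multicast probe, which the cloud network blocks.

class DiscoveryRegistry:
    def __init__(self):
        self._endpoints = {}  # contract name -> list of URLs

    def register(self, contract, url):
        self._endpoints.setdefault(contract, []).append(url)

    def resolve(self, contract):
        urls = self._endpoints.get(contract)
        if not urls:
            raise LookupError("no endpoint for " + contract)
        return urls[0]  # a real proxy might load-balance or health-check

registry = DiscoveryRegistry()
registry.register("IBillingService", "http://10.0.1.4:8080/billing")
url = registry.resolve("IBillingService")
```

The consuming service then calls the returned URL directly, exactly as it would have called the endpoint found by multicast.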
That's how the external—not external—the overview of the whole architecture looks: they have some external users that are trying to consume internal applications. They go through the cloud, then through the load balancer and the firewall servers, and then basically they reach the service that they want to consume. We also needed a read-only Active Directory in the cloud because, for their internal applications, they didn't want to use Windows Azure Active Directory. They still wanted to use the normal Windows Server Active Directory. So we created a read-only node in the cloud, and it was enabled for the applications through VPN. That used to be site-to-site VPN, but with the recent releases they changed that to point-to-site, so basically the server just goes and gets added to the VNET that is in the cloud. And then it serves their applications directly. The second one was a typical web app architecture. So we have a couple of customers there. They are trying to access—in this case this comes from their billing system—they want to know how much electricity they used. So they use this architecture to expose the web application. In this case the deployment is just three instances—three medium instances that handle the load. That is just for one country—right? There is a distributed cache that serves the state between all of the nodes, and then we have dependencies on SQL DB and Windows Azure Storage. But that system also has to send emails, and a couple of other services—they just recently added phone notifications and SMS notifications from their billing system. So what happens is that we are consuming external services through the cloud—again. Office 365 is what they use; their old architecture was using Exchange internally, so we are using Office 365 web services to do that. The Exchange web services are used to send emails and process incoming emails, and so it goes on. 
And we still have the read-only Active Directory from the previous slide, if you remember—which is a different deployment, but they still connect to find identity for single sign-on for their internal employees. So if employees want to go to that Web site, it's the read-only Active Directory that feeds the identity to that system. This is the metering system. This is how they collect data from different countries. They have smart meters running in different countries, and that information goes to a local server, and the local server pushes the information to the service bus. There is also another implementation where the meter is posting directly to the service bus. So through the REST API they are putting messages directly on the bus. And then the service bus filters that to specific topics based on which country the message came from. Then there is their cloud deployment, which does the whole message processing, creates the billing, and connects with the billing system. So it takes all that information from the Windows Azure service bus topics, processes all that info, keeps that information there, and then—again—we're creating messages on the service bus that push information back to the billing system. So we have the processing in the cloud, but the billing is still in their own data center. We're using the service bus again to push those messages from the cloud to the customer—to on-premise—and then there is an integration point from the service bus to their internal billing system that does a transformation and feeds that information directly to the billing system. The result of this was that we increased the throughput of the messages a lot, because in the past they were using a custom Java implementation—it was not really a bus; it was just a custom Java implementation that was creating messages. We increased interoperability because of the service bus. 
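The per-country filtering just described is, conceptually, a message property driving a subscription rule (in Service Bus terms, something like a SqlFilter such as `country = 'DE'` on each subscription). A minimal sketch of that idea; the message shape and country codes are illustrative, not taken from the customer's system:

```python
# Sketch of routing meter readings to per-country subscriptions by a
# message property, in the spirit of Service Bus subscription filters.

class Topic:
    def __init__(self):
        self.subscriptions = {}  # country code -> delivered messages

    def add_subscription(self, country):
        self.subscriptions[country] = []

    def send(self, message):
        sub = self.subscriptions.get(message["country"])
        if sub is not None:        # unmatched messages are dropped here;
            sub.append(message)    # a real setup might dead-letter them

topic = Topic()
for c in ("DE", "IT"):
    topic.add_subscription(c)

topic.send({"country": "DE", "meter": "m-17", "kwh": 4.2})
topic.send({"country": "IT", "meter": "m-90", "kwh": 1.1})
german = topic.subscriptions["DE"]
```

Each country's processing deployment then reads only its own subscription, which is why scaling one country doesn't touch the others.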
It's way easier to consume and receive messages from the bus as it is right now, instead of only the Java part, which was running JMS. Does anyone have any Java background here, or at least know some Java stuff? Or am I just too evil to share that? Okay. So it was using JMS, which means only Java applications could get the information from there—now with the service bus it's easier to do that. We lowered the cost because they don't have to maintain all of that infrastructure internally anymore, and they don't have to maintain the bandwidth for all the countries to process all of that information, because now they can even collect more information from the meters just by changing the message that the meters send and then processing that information in their cloud infrastructure. And we enabled continuous integration with TFS. Their internal system was TFS for all of the source management. So we enabled—through TFS—continuous integration with the cloud deployment. Whenever they are releasing a new version, TFS is pushing the new version to staging, then operations goes to the portal and verifies the new settings that are exposed through the configuration API. And once everything is okay they change the configuration, they do a VIP swap, and they do the deployment like that. Now—the second customer, which is the most—let's say—prestigious one—not that the other one is not, but this is a car manufacturer, and it's a worldwide car manufacturer. And they sell a lot of cars. So if you can guess which one it is, good for you. So—those guys were planning for their next moves in 2015 and 2016, where all of the cars will be cloud-enabled, if you want. 
So what you could do is take all of the settings you have in a car, like radio settings and stuff like that, and save them to a cloud-enabled service; then if you go to another car and you log on with those credentials—for example, you go somewhere and you rent a car and you log on with those credentials—you have all the data, either from GPS, from your radio, from whatever, in that car. Of course we asked them what happens if I go from Europe to the States. The answer was like, "Yeah—probably nothing will work, but that's another thing." So the architecture as it is right now is isolated to the big continents: there is one for the U.S., one for Europe, and one for Asia. So what you will see is per continent—right? There is no concept of between continents right now, but this is something they're looking into. So the overview of that customer is that they had outdated components on .NET 1.1; they were using that for the system that does the car configuration, so when you want to order a new car the whole configuration was using a component relying on 1.1. They have custom software that is based on .NET, but they also have a strong Java environment in the company. They had a file-based CMS system, which causes a lot of problems when we're talking about cloud. And this is a third-party vendor system, not something they created. They also had optimized virtualization, like the previous customer, so from the cost perspective it was not a case of "do virtualization and gain that money"—it was already heavily optimized. They had a lot of applications as well—about 300 and something. I don't remember the exact number, but it was 300 and something. And the enterprise architects in that case loved the idea of platform as a service. They considered other vendors as well. 
They are even starting with other vendors as well, but at the same time they are starting to do platform as a service on Windows Azure, because they just like the idea; so if it really works out for them in the end, they will do the whole implementation here instead of going to another vendor. But—as you understand—all those big companies have to try different vendors until they come up with a solution, and platform as a service gives Windows Azure a significant advantage in this case. Now—there were also some problems with how they did business in the past. Local countries had a different approach to how they go to customers—right? Customers in Belgium are different than customers in Italy. And customers in the U.S. are completely different than the customers in Greece. So the way they were approaching customers was how their system was built. Each country had its own Web site, but all those Web sites were running on the same servers, hosted locally in Belgium, which is where their headquarters is. They wanted to change that, because when they are releasing a campaign for a new car in Germany it doesn't make sense to scale the whole thing: Italy and the rest—although they are big countries—just don't need to be moved along in the scale units, because it is only Germany that runs the promotion at that time. Their very basic disaster plan is only for the public sites, which are their face, and that makes sense: if their public site goes down they're losing money. That's how they actually generate money. Internal systems can be down for 10-15 minutes, 20 minutes—they don't care. But their public Web site should never go down. The file-based CMS system was really a problem, because if we re-image or add new instances, the re-imaged instance loses the content, and new instances that come up don't have the content at all. 
So if a request ended up at an instance that didn't have the content, the customer would just get a 404, and nothing would be there. We had to do something with .NET 1.1, and that was quite a challenge because they didn't want to rewrite the component; they just needed to put something in front of it and then consume it somehow, because they had other priorities—right?—and this is what I mean about marketing-driven budget. So marketing was like, "Yeah—why should we change it? It already works—right? Why would you do something with it?" "Yeah, because we cannot move it there." "Yeah—but do something." The "do something" was: create something in front of it. And their deployment system has nothing to do with Microsoft technologies. They are using Bamboo + Stash—which is pure Atlassian—nothing to do with Windows in general or Microsoft in general. So as that thing runs on Linux, we had to find a way to create the Windows Azure packages, because they need Windows. What we did in the end is add another machine which is just Windows and creates the package—that's the only thing it does—which is kind of weird, but that's how we tackled it. If you want—thank you. Their Java applications are running on Jetty, which is an application server just like we have IIS. This is one of the many application servers in the Java world. They have a lightweight, stateless approach in whatever they do. So they don't care about state; they don't care about that stuff. They use SQL Server as the database—I was expecting MySQL in there, but actually they are using SQL Server because that is what the whole company uses. So they are really using existing resources. All the services will be exposed through REST APIs. Whatever they do in the new architecture, they want to expose it through REST APIs. We needed solutions to work around dependencies. 
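The "create something in front of it" approach for the .NET 1.1 component is essentially a facade: a thin layer with a modern interface that adapts calls to the legacy component, so cloud consumers never depend on it directly. A hedged sketch—`LegacyConfigurator` and its call shape are invented stand-ins, not the customer's actual code:

```python
# Sketch of "wrap, don't rewrite": a thin facade exposes a legacy
# component through a stable interface. LegacyConfigurator is a
# hypothetical stand-in for a component that cannot be changed.

class LegacyConfigurator:
    """Stand-in for the old component; its interface is fixed."""
    def BuildCfg(self, model, opts):
        return model + ":" + ",".join(opts)

class ConfiguratorFacade:
    """Modern interface the cloud consumes; hides the legacy call shape."""
    def __init__(self, legacy):
        self._legacy = legacy

    def configure(self, request):
        # adapt a dict-based request to the legacy positional call
        return {"configuration": self._legacy.BuildCfg(
            request["model"], sorted(request["options"]))}

facade = ConfiguratorFacade(LegacyConfigurator())
result = facade.configure({"model": "X1", "options": ["sunroof", "gps"]})
```

Once consumers only talk to the facade, the legacy component can later be replaced behind it without touching them.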
They are using ElasticSearch, which is search software that you can use for searching internally. But the problem with ElasticSearch was that it uses multicasting. So it is exactly the same problem we had with WCF at the previous customer. And a lot of content has to go through a CDN network, because that is the way it can be closer to customers. And you will see a considerable difference in how those guys design stuff. So this is the overview of their architecture. I am not going to spend too much time on this slide. I just want to show you that basically they have different kinds of users and consumers. They have their CMS system that is creating content. That content actually goes to MS SQL—so it goes to SQL Server 2012. And what they wanted is to run the public part of that system on medium instances. And they want to run SQL Server either on SQL DB—but the performance was slightly a concern here, and they weren't sure if they would get the performance they need from Windows Azure SQL Database—so they are also considering infrastructure as a service with the AlwaysOn feature. Then everything that comes from the CMS system goes to Windows Azure storage. They are provisioning about two terabytes of storage because of all the videos, images, and all of that stuff that they have there. They also have the actual content system, so this is where they create it, and this is where they are going to host it publicly to the world. Now—there are two components in that architecture. One is the one hosting the content and only the content—that doesn't mean what is being served to the customer, but the actual content: the actual text and images and so on. And they also have a part which is the public face. And the public face is running Node.js. 
What they're doing is using Node.js with Handlebars as the templating engine. They take all of the information they need from the CMS system and merge it in Node.js with the information that comes from the code, which is JavaScript and HTML. They merge that in Node.js, which runs on Windows Azure Web Sites, and then they serve it to their customers. So they bring two completely different technologies together, merge them, and serve that content to their customers. We also need search, so Elasticsearch is what they're going to use for their Web site search. Lastly, there is the car configurator, the thing that is being hosted internally. Now, as you can see, everything is exposed through REST APIs: there is one here, one here, another one here, and so on. They are abstracting everything away with REST APIs. This is the overview of what they have. There is also this slide, which shows how the CMS system is going to work. We have Windows Azure Traffic Manager, which is what the customers will hit. We have a load balancer, and then we have the internal systems, and by internal systems I mean systems that the public Web sites do not see. So customers like you and me, when we go to the Web site, do not see that; we actually see this part here. This part here is the one that creates the content to be served, because the actual content is served through Web Sites. There is the publisher, and there is the deployer. They just have different roles; those are two different deployments. This is one deployment, and this is a second deployment. They communicate internally, and they use a database.
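The server-side merge described above, CMS content combined with a template before being served, can be sketched in Python with the standard library's `string.Template` standing in for Handlebars. The template markup and the CMS payload here are hypothetical, purely for illustration.

```python
from string import Template

# Hypothetical template playing the role of a Handlebars view:
# the public-facing markup lives in code, the content comes from the CMS.
page_template = Template("<h1>$title</h1><p>$body</p>")

def render_page(cms_content: dict) -> str:
    """Merge CMS-sourced content into the markup template."""
    return page_template.substitute(cms_content)

# Content as it might arrive from the CMS database over a REST API.
content = {"title": "New Model X", "body": "Now available in Europe."}
html = render_page(content)
print(html)  # <h1>New Model X</h1><p>Now available in Europe.</p>
```

The same idea scales to any template engine: the public site holds only markup and code, while every piece of text or media is fetched from the content system at render time.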
They also have another database where the content created internally by the company lives, and they export that using BACPACs to do the backups, which go to blob storage. Also in blob storage they have the images, videos, and all the other assets that need to be served through the CDN. Now, whenever you see this thing here, that greenish shape, they want auto-scaling on it, and we were actually proposing they use SCOM to drive the auto-scaling part. Now, this slide is about serving the actual content. If someone wants to access that content, we have Traffic Manager with a failover policy. They are running in the two different data centers that you can see, and Traffic Manager is configured with a failover policy, so if something happens to one data center you just go to the other one. The idea is, again, we have Node.js on Windows Azure Web Sites. There is the public endpoint on the worker role that has a CMS broker running Tomcat, which is another application server for Java. It gets the content from a database, and we synchronize the two databases between the two data centers using the SQL Data Sync service. We also have the CDN to serve all the content publicly to those customers. So that is the public Web sites part. Search for the Web sites looks pretty similar to the other one; the only difference is that we are running Elasticsearch. With the third customer I will show you how we solved the multicast problem and how we find the nodes used by Elasticsearch. And this is how they make the connection from their on-premises environment to the cloud.
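The Traffic Manager failover policy mentioned above boils down to a simple rule: always send traffic to the highest-priority endpoint that is healthy. A minimal sketch, with made-up endpoint names and a health check passed in as a function (the real service probes endpoints itself):

```python
def pick_endpoint(endpoints, is_healthy):
    """Failover policy: prefer the first healthy endpoint in priority order."""
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    raise RuntimeError("no healthy endpoint")

# Two hypothetical data-center endpoints, primary listed first.
endpoints = ["dc-west.example.com", "dc-east.example.com"]

# Simulate the primary data center going down.
down = {"dc-west.example.com"}
chosen = pick_endpoint(endpoints, lambda ep: ep not in down)
print(chosen)  # dc-east.example.com
```

While both data centers are up, all traffic stays on the primary; only when its probe fails does the secondary start receiving requests.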
So these are the cloud services, and they also have a VPN that goes directly to their internal router, and then they have their own load-balancing system and so on. OpenAM is a system that provides and transfers identities; just another Java application they have in there. If you download the presentation you will see all the explanation down here; I am just not spending a lot of time on it. You can really dive into this, and if you have any questions afterwards I will be happy to answer them. The beauty of using Node.js is that we can run whatever version of Node.js we want on Windows Azure Web Sites. There is an actual tool released by Glenn Block, the program manager for Node.js on Web Sites, that lets you download whatever version of Node.js you want to run there, and they wanted the latest version to be there. Configuration is exposed through the portal. And we bundled whatever software they needed into the deployment package. That was needed because the CMS publisher has its own custom agents that connect to their CMS system to deploy content on the instances deployed on Windows Azure. We needed a way to get those agents there, and we made them part of the deployment package. We download any dependencies the Java applications have dynamically with scripts; on startup we download everything. And we dynamically adapt the JVM size, which is necessary for Java applications; we try to leverage everything that is available in the resources. The result is that each country can now scale independently, because each has its own Web site; they can just add or remove capacity as needed. We support both Java and .NET in the same architecture because everything is abstracted behind REST APIs, so we don't care where it comes from.
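Dynamically adapting the JVM size, as described above, is essentially a startup-time calculation against the instance's available memory. A sketch of that idea follows; the fraction and floor values are made-up defaults for illustration, not the ones the customer actually used.

```python
def jvm_heap_mb(total_memory_mb: int, fraction: float = 0.6, floor_mb: int = 256) -> int:
    """Pick a heap size as a fraction of the instance's memory, with a minimum floor."""
    return max(floor_mb, int(total_memory_mb * fraction))

def jvm_args(total_memory_mb: int) -> str:
    """Build the -Xms/-Xmx flags a startup script would pass to the JVM."""
    heap = jvm_heap_mb(total_memory_mb)
    return f"-Xms{heap}m -Xmx{heap}m"

# Different instance sizes yield different heaps from the same script.
print(jvm_args(3584))  # -Xms2150m -Xmx2150m
print(jvm_args(1792))  # -Xms1075m -Xmx1075m
```

Because the flags are computed at startup rather than hard-coded, the same deployment package works unchanged when the customer switches instance sizes.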
We can change services easily for the same reason. Content is close to customers, and we also have interoperability between the different systems. Monitoring is through SCOM, and basic monitoring happens through the portal; developers see the portal, operations see SCOM. That's how it goes. We lowered their cost because they are not wasting resources by scaling everything along with the rest of the sites. We have continuous integration with Bamboo, their source management and continuous integration system. And if they want to create a new Web site, a mini-site for a campaign or whatever, it is really easy now with Windows Azure Web Sites. Now, the last one is an architecture we applied at a bank; it's a banking group actually, a worldwide banking group. We applied an architecture that uses JBoss, which is a Java application server. I've already mentioned three today: Jetty, Tomcat, JBoss; they have a lot. This customer was doing internal outsourcing of applications. Basically they had their own departments, two different companies but under the same umbrella, under the same group. One was saying, "We want that," and the other company was actually building it. The problem is that they had to go through lengthy procurement periods; it takes a lot of time to get new hardware, which is not good, and they were not able to respond fast to the market. The banking market changes a lot, and they had to follow. It is a mixed environment of Java and .NET. Again, the enterprise architects loved the idea of platform as a service, and we could get some major wins with JBoss, which in this case is Java, but the principle is the same for whatever technology you have. So what we are doing is that during startup we run scripts and automate the whole deployment, so the actual deployment package is, I think, about 20K or something.
But it has a lot of scripts that start downloading whatever we need from storage. What we do is generate a shared access signature, which is a secure way to access storage, and we pass that information to the instances along with the SAS token. We download what we need: the JDK, JBoss, and whatever else their applications need. You can apply the same principle with .NET; it doesn't matter. We unzip and install whatever is necessary, and then we continue with whatever has to happen in the application. So, it looks good. We chose to go with worker roles because that's the easiest way to customize what we want to do for Java. And this is how it looks: when a new instance comes up we run the scripts and download whatever we need into the worker roles. We have two worker roles. One has the proxy, because to handle session stickiness and state we needed a reverse proxy behind the Windows Azure load balancer. So we have the Windows Azure load balancer, then this proxy, and then the JBoss deployment. Whatever comes from the load balancer goes to the proxy, and the proxy decides which instance it has to go to, depending on the load, depending on where the state is, and so on. Basically it is just reading a cookie, and the actual proxy is part of the HTTP modules. It is not something crazy, but it is part of what was needed. Now, the problem with moving this application server was the same one we had with WCF. JGroups is a discovery protocol that uses unicast and multicast to find the other nodes. We cannot use that on Windows Azure because multicast is not supported. So what we did, actually what my equivalent in the Java world at my company did, was develop a custom protocol based on JGroups called AZURE_PING, and what it does is use Windows Azure storage to find the other nodes.
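Stepping back to the shared access signature mentioned above: generating one is essentially HMAC-SHA256-signing a "string to sign" with the storage account key and appending the signature to the URL as query parameters. The sketch below is a simplified illustration of that mechanism, not the exact Windows Azure SAS format; the account name, blob path, and field layout are made up.

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def make_sas_token(account_key_b64: str, resource: str, permissions: str, expiry: str) -> str:
    """Sign a simplified string-to-sign with the storage account key (SAS sketch)."""
    string_to_sign = "\n".join([permissions, expiry, resource])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode("utf-8")
    return urlencode({"sp": permissions, "se": expiry, "sig": sig})

# Hypothetical account key and blob path; a startup script would append this
# token to the blob URL to download the JDK and JBoss without the account key.
token = make_sas_token(
    base64.b64encode(b"demo-account-key").decode(),
    "/demoaccount/deploy/jboss.zip",
    "r",
    "2013-07-01T00:00:00Z",
)
print(token)
```

The point of the scheme is that the instances only ever see a short-lived, read-only token, never the storage account key itself.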
So it puts whatever information it needs into a blob container in storage, everybody reads that information from there, and then each node knows how to join the cluster. To put that in a picture, imagine that we have three proxies and two JBoss application servers, and we want to add more. What happens is that when we add another server, it reads some metadata from Windows Azure storage and discovers the proxies. It goes to the proxy and says, "Hi, I am a new server. Please add me to your list." The other way around, when we remove a proxy, the same thing happens: when a proxy goes away it notifies everybody, "I just went away. I am not there. Please remove me from your list." All that interaction and all that flow happens with custom code, but this is an example of how you can solve some of these problems just by thinking a bit outside the box. If we want to scale, and this is the reason the proxies and the application servers are in different worker roles, then we have two independent scale units: one is the worker role that has the proxy, and the other is the worker role that has the JBoss application server. So if we want to add more application servers, we just add more application servers without scaling the proxies along with them, because the only thing the proxy does is route requests; we do not need to scale them at the same rate. We either scale out or scale up; it really depends on the customer. We either add more resources by changing the instance size, or we add more servers of the same instance size. So with that customer we ended up with a highly scalable JBoss cluster solution which follows all of the standards and requirements to be Java compliant, which sounds weird to .NET developers, but it doesn't matter; it's weird for me as well.
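The AZURE_PING idea described above, each node writing its address into a shared blob container and listing the container to discover everyone else, can be simulated with an in-memory dictionary standing in for the container. The class and node names below are illustrative; the real protocol talks to Windows Azure blob storage over its REST API.

```python
class SharedContainer:
    """Stand-in for a blob container: one 'blob' per registered node."""
    def __init__(self):
        self.blobs = {}

    def put(self, name, data):
        self.blobs[name] = data

    def delete(self, name):
        self.blobs.pop(name, None)

    def list(self):
        return dict(self.blobs)

class Node:
    def __init__(self, name, address, container):
        self.name, self.address, self.container = name, address, container

    def join(self):
        # Advertise ourselves, then discover everyone already registered.
        self.container.put(self.name, self.address)
        return self.container.list()

    def leave(self):
        # Remove our entry so the others drop us from their member lists.
        self.container.delete(self.name)

container = SharedContainer()
a = Node("jboss-1", "10.0.0.4:7800", container)
b = Node("jboss-2", "10.0.0.5:7800", container)
a.join()
members = b.join()
print(sorted(members))  # ['jboss-1', 'jboss-2']
a.leave()
print(sorted(container.list()))  # ['jboss-2']
```

Because the container is the single source of truth, no multicast is needed: any node that can read storage can reconstruct the current cluster membership.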
We have state if we need it, and we can dynamically add and remove instances, so there is no problem there. We also solved a problem they had with deployments, which used to take forever because they were done manually. They would start on Friday night and finish on Sunday morning, just to verify that everything was okay. Now, using Windows Azure, they basically deploy to staging, run all the tests they need for as long as they want, and when they're done they just do an update deployment on their production system. The way they do the update deployment is to upload the new packages into storage and restart the instances, because the startup scripts do the rest: they pick up the latest information, download the latest version, and they are done. So, I don't know if you have any questions. There are microphones in the middle of the room. If you have any specific questions, or if you want to find me later to talk about Windows Azure, I will be at the "Ask the Experts" booth along with a couple of other Windows Azure MVPs who are actually sitting there and not paying attention to me. I know there was quite some slideware, but this was purely architecture and what we did for customers from a highly abstracted view. If you want to contact me for some reason, those are my details; that is also my email there. I will be happy to answer any questions you might have. Otherwise, thank you very much, and thank you for being here.

Video Details

Duration: 52 minutes
Country: United States
Language: English
Genre: None
Views: 6
Posted by: asoboleva99 on Jul 9, 2013

http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/WAD-B345#fbid=kG7OLm6xV3l
