Early in my web development career, I always tried to avoid deployment work. It made me uneasy to watch other developers constantly bang their heads against their desks, frustrated with getting our app deployed to whatever cloud service we were using at the time. Deployment work became the “short straw” assignment because it was always a long, unpredictable and thankless task. It wasn’t until I advanced in my tech career that I realized why I felt this way.
My experience with deployment activities, up to that point, had always involved a manual process. I thought the time it took to set up an automated deployment mechanism was unnecessary overhead – I’d much rather spend my time developing the actual application and devote a few hours every so often to a manual deployment when I was ready. But as I worked with more and more experienced developers, I began to understand that a manual deployment process is slow, unreliable, unrepeatable, and rarely consistent across environments. It also requires detailed documentation that is hard to follow and in constant need of updating.
As a result, the deployment process becomes a mysterious beast that only a few experts on your development team can tame. That isolates those experts from the rest of the team and eats time everyone could be spending building features or fixing bugs in your application. Although creating a fully automated deployment pipeline involves some initial overhead, subsequent deployments of the same infrastructure can be done in a matter of seconds. And since validation is baked into the automated process, your developers only have to devote time to a deployment when something actually fails.
This three-part blog series provides a general set of instructions for building an automated deployment pipeline using Azure cloud services and Octopus Deploy, a user-friendly automation tool that integrates well with Azure. It won’t detail every step you need, but it will point you in the right direction and show you the value of automated deployment mechanisms. Let’s get started. Read More…
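As a taste of where the series is headed, here is a minimal sketch of triggering a deployment through Octopus Deploy’s REST API from Python. The server URL, API key, and release/environment IDs are placeholders for illustration only.

```python
import requests

# Placeholder values -- substitute your Octopus server, API key, and IDs.
OCTOPUS_SERVER = "https://octopus.example.com"
API_KEY = "API-XXXXXXXXXXXXXXXX"

def trigger_deployment(release_id: str, environment_id: str) -> dict:
    """Ask the Octopus server to deploy an existing release to an environment."""
    response = requests.post(
        f"{OCTOPUS_SERVER}/api/deployments",
        headers={"X-Octopus-ApiKey": API_KEY},
        json={"ReleaseId": release_id, "EnvironmentId": environment_id},
        timeout=30,
    )
    response.raise_for_status()  # fail loudly rather than deploy silently
    return response.json()

if __name__ == "__main__":
    deployment = trigger_deployment("Releases-123", "Environments-42")
    print(f"Deployment queued: {deployment['Id']}")
```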
The recent #AWS and #Azure outages over the past two weeks are a good reminder of how seemingly simple problems (a failed power source, an incorrect script parameter) can have a wide impact on application availability.
Look, the cloud debate is largely over, and customers (commercial enterprises, government agencies, and startups) are moving the majority of their systems to the cloud. These recent outages are not going to slow that momentum down.
That said, all the talk of 3-4-5 9s of availability and financially backed SLAs has lulled many customers into expecting utility-grade availability for their cloud-hosted applications out of the box. This expectation is unrealistic given the ever-growing number of moving parts in a connected global infrastructure, dependence on third-party applications, multi-tenancy, commodity hardware, transient faults due to shared infrastructure, and so on.
Unfortunately, we cannot eliminate such cloud failures. So what can we do to protect our apps from them? The answer is to conduct a systematic analysis of the different failure modes and have a recovery action for each failure type. This is exactly the technique – Failure Mode and Effects Analysis (FMEA) – that other engineering disciplines (like civil engineering) have long used for failure planning. FMEA is a systematic, proactive method for evaluating a process to identify where and how it might fail, and to assess the relative impact of different failures, in order to identify the parts of the process most in need of change. Read More…
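To make the idea concrete, here is a small sketch of how failure modes might be scored with FMEA’s classic Risk Priority Number (severity × occurrence × detection). The failure modes, scores, and recovery actions below are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1-10: impact if the failure occurs
    occurrence: int  # 1-10: how likely the failure is
    detection: int   # 1-10: how hard it is to detect (10 = hardest)
    recovery: str    # planned recovery action

    @property
    def rpn(self) -> int:
        """Risk Priority Number: higher scores get addressed first."""
        return self.severity * self.occurrence * self.detection

# Illustrative entries only -- real scores come from your own analysis.
modes = [
    FailureMode("Region-wide outage", 9, 2, 2, "Fail over to a secondary region"),
    FailureMode("Transient storage fault", 4, 7, 5, "Retry with exponential backoff"),
    FailureMode("Dependency API throttling", 5, 6, 6, "Queue the work and back off"),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.rpn:4d}  {m.name}: {m.recovery}")
```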
Lift & Shift is an approach to migrating a legacy business application hosted in an on-premises data center to the cloud. The goal is to move the application “as-is,” with little to no change to the business functions it performs. One common lift and shift scenario is the migration of applications that were not originally developed for distributed cloud environments but, once moved, can take advantage of some of the benefits of cloud computing, such as increased availability and/or a reduced total cost of ownership (TCO).
This blog details some important considerations and challenges associated with the lift and shift method, based on our real-world experiences moving both custom and packaged (commercial) legacy applications to Microsoft Azure. Read More…
Transient exception handling and retry logic are considered important defensive programming practices, especially in the public cloud. But how good is your exception handling? Unfortunately, it’s not always easy to simulate transient exceptions.
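One workaround is to inject transient faults yourself. Here is a minimal Python sketch – the exception type, failure rate, and backoff parameters are made up for the example – that randomly injects a fault so retry logic can actually be exercised in tests.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient fault (throttling, timeout, dropped connection)."""

def flaky(failure_rate: float):
    """Decorator that randomly injects TransientError -- a toy fault injector."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                raise TransientError(f"injected fault in {fn.__name__}")
            return fn(*args, **kwargs)
        return inner
    return wrap

def with_retries(fn, attempts=5, base_delay=0.1):
    """Retry a callable with exponential backoff on TransientError."""
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the fault to the caller
            time.sleep(base_delay * 2 ** attempt)

@flaky(failure_rate=0.5)
def call_cloud_service():
    return "OK"

print(with_retries(call_cloud_service))
```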
It’s 2017 and it’s official: Government agencies want to move to the cloud. But they are often unprepared for the transition, or stuck in the middle of a confusing process. So this week, AIS and Microsoft kicked off the new year with a terrific AzureGov Meetup full of valuable information, training resources and demos on exactly where and how to start a successful government cloud journey.
With the explosion of new sensors and service offerings producing geospatial telemetry, there’s an ever-increasing need for tools to gain business insights from this data. One of the premier tools for this in the geospatial domain is GeoServer.
Fully open source and free to use, GeoServer provides Open Geospatial Consortium (OGC) web service interfaces for rendering map images or returning the underlying data in most common geospatial interchange formats. In a consulting capacity, Applied Information Sciences has leveraged GeoServer with great success, allowing us to deploy a complete software stack in minutes instead of days or weeks. In this post I’ll give an overview of the DevOps practices we’ve applied to enable this capability, as well as a brief overview of the supporting technologies. Read More…
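For a flavor of those OGC interfaces, here is a minimal sketch of requesting a rendered map image from a GeoServer Web Map Service (WMS) endpoint. The server URL is a placeholder, and the layer shown is from GeoServer’s bundled sample data.

```python
import requests

# Placeholder URL -- GeoServer's WMS endpoint is typically /geoserver/wms.
WMS_URL = "http://localhost:8080/geoserver/wms"

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "topp:states",   # workspace:layer from GeoServer's sample data
    "bbox": "-125,24,-66,50",  # minx,miny,maxx,maxy
    "srs": "EPSG:4326",
    "width": 768,
    "height": 384,
    "format": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=30)
response.raise_for_status()

with open("map.png", "wb") as f:
    f.write(response.content)  # a rendered PNG of the requested layer
```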
Companies are adopting Docker containers at a remarkable pace, and for good reason – Docker containers are turning out to be key enablers of microservices-based architectures.
As a quick recap, Docker containers are:
Encapsulated, deployable components that can run as isolated instances
Small in size with a fast boot-up time
Packaged with tooling that lets containerized application images be easily moved between the public cloud and on-premises environments
Capable of applying limits on the physical resources consumed by any given application (see the sketch below)
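As a quick illustration of the first and last points – isolated instances with hard resource caps – here is a minimal sketch using the Docker SDK for Python (the docker package); the image and limits are arbitrary.

```python
import docker  # pip install docker

client = docker.from_env()

# Run an isolated instance with hard caps on memory and CPU,
# roughly equivalent to `docker run --memory 256m --cpus 0.5`.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    mem_limit="256m",       # cap memory at 256 MB
    nano_cpus=500_000_000,  # cap CPU at 0.5 cores
)

print(container.short_id, container.status)

# Teardown: stop and remove the instance.
container.stop()
container.remove()
```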
Given the popularity of Docker containers, it should come as no surprise that the Azure platform already provides first-class support for container hosting in the form of Azure Container Service (ACS). ACS makes it simple to create a cluster of virtual machines that can run containerized applications. ACS relies on popular open-source tools – Docker as the container format, with a choice of DC/OS (with Marathon), Docker Swarm, or Kubernetes for orchestration and scheduling. All this makes it possible to easily run containerized workloads on Azure in a portable manner.
But the Docker containerization story on Azure does not stop here.
Docker is also being woven more and more deeply into existing PaaS offerings, including Azure Batch, Azure App Service, and Azure Service Fabric. Let’s briefly review the latest developments to see how Docker integrates with Azure PaaS: Read More…
This is an overview of a solution built by AIS with Microsoft for a federal client in the DC area. The client’s goal was to automate the setup and teardown of virtual machine sandboxes on the fly. These sandboxes are used by the client’s developers for security testing of their applications.
The first step of this project was to help the federal client provision their own Azure Government subscription, with some assistance from Microsoft. We then documented the client’s on-premises environment so that it could be accurately replicated within Azure. The next step was to build and deploy the Azure services and scripts in the cloud environment. Lastly, we defined and implemented automation use cases, such as provisioning an entire sandbox or just specific machines within it. Read More…
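For a sense of what the provisioning automation might look like, here is a minimal sketch that treats each sandbox as its own Azure resource group, so setup and teardown become single operations. It assumes the azure-identity and azure-mgmt-resource packages; the subscription ID, region, and naming convention are placeholders, and an Azure Government subscription additionally requires pointing the SDK at the government cloud endpoints.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholders -- the real values come from your Azure Government subscription.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
LOCATION = "usgovvirginia"

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def create_sandbox(name: str):
    """Provision an isolated resource group to hold one sandbox's machines."""
    return client.resource_groups.create_or_update(
        f"sandbox-{name}", {"location": LOCATION}
    )

def tear_down_sandbox(name: str):
    """Delete the resource group -- and everything inside it -- in one call."""
    client.resource_groups.begin_delete(f"sandbox-{name}").result()
```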