Early in my web development career, I always tried to avoid deployment work. It made me uneasy to watch other developers constantly bang their heads against their desks, frustrated with getting our app deployed to whatever cloud service we were using at the time. Deployment work became the “short straw” assignment because it was always a long, unpredictable, and thankless task. It wasn’t until I advanced in my tech career that I realized why I felt this way.

My experience with deployment activities, up to that point, had always involved a manual process. I thought that the time it took to set up an automated deployment mechanism was a lot of unnecessary overhead – I’d much rather spend my time developing the actual application and spend just a few hours on a manual deployment every so often, when a release was ready. However, as I worked with more and more experienced developers, I began to understand that a manual deployment process is slow, unreliable, unrepeatable, and rarely consistent across environments. It also requires detailed documentation that can be hard to follow and is in constant need of updating.

As a result, the deployment process becomes a mysterious beast that only a few experts on your development team can tame. That knowledge silo ultimately isolates those team members, who could be spending more time working on features or fixing bugs in your application. Although there is some initial overhead involved in creating a fully automated deployment pipeline, subsequent deployments of the same infrastructure can be done in a matter of seconds. And since validation is baked into the automated process, your developers only have to devote time to application deployment when something fails or goes wrong.

This three-part blog series provides a general set of instructions for building an automated deployment pipeline using Azure cloud services and Octopus Deploy, a user-friendly automation tool that integrates well with Azure. It won’t detail every step you need, but it will point you in the right direction and show you the value of automated deployment mechanisms. Let’s get started. Read More…
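For a flavor of the kind of automation involved, here is a minimal sketch of kicking off a deployment from the command line with the Octopus CLI (octo). The project and environment names are hypothetical, and the server URL and API key are placeholders:

```shell
# Create a release for a hypothetical project and deploy it straight to a Staging environment
octo create-release --project "MyWebApp" --deployto "Staging" \
  --server https://your-octopus-server --apiKey API-XXXXXXXX
```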

The central focus of DevOps has been the continuous delivery (CD) pipeline: a single, traceable path for any new or updated version of software to move from lower environments to higher ones via automated promotion. However, in my recent experience, DevOps is also serving as the bridge across the “expectations chasm” — the gap in expectations among three personas: the CIO, the Ops team, and the app teams.

Each persona (CIO, Ops team, and app teams) has different expectations for the move to the public cloud. For the CIO, the motivation is based on key selling points: escaping capacity constraints, curbing mounting on-premises data center costs, reducing Time to Value (TtV), and increasing innovation. The Ops team expects tooling maturity on par with on-premises operations, including capacity planning, high availability (HA), compliance, and monitoring. The apps team expects to keep using the languages, tools, and CI process it already has, but in the context of new PaaS services. It also expects the same level of compliance and resilience from the underlying infrastructure services.

Unfortunately, as we will see in a moment, these expectations are hard to meet, despite the rapid innovation and cadence of releases in the cloud.

Consider these examples: Read More…

In this video blog, I’ll walk you through building a continuous integration and continuous delivery (CI/CD) pipeline using the latest tools from Microsoft, including Visual Studio Team Services (VSTS) and Azure. The pipeline is built to support a .NET Core application, and the walkthrough includes the following steps (a rough local equivalent of the core build commands appears after the list):

  1. Configuring Continuous Integration (CI) with VSTS Build services
  2. Adding unit testing and validation to the CI process
  3. Adding Continuous Deployment (CD) with VSTS Release Management & Azure PaaS
  4. Adding automated performance testing to the pipeline
  5. Promoting the deployment to production once validated
  6. Sending feedback on completion of the process to Slack
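
For reference, the heart of the CI build boils down to the standard .NET Core CLI verbs. A minimal local equivalent of what the VSTS build steps execute might look roughly like this (configuration and output path are illustrative):

```shell
# Restore dependencies, compile, run unit tests, and package the app for release
dotnet restore
dotnet build --configuration Release
dotnet test --configuration Release
dotnet publish --configuration Release --output ./publish
```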

In a previous blog post, we gave a quick overview of Continuous Integration and Deployment of .NET applications using Visual Studio Team Services (VSTS). This involved building and deploying regular old .NET applications with VSTS—something that we would definitely expect a Microsoft service to handle. However, VSTS also has some lesser-known support for other frameworks, including Java. The Microsoft VSTS website even has a portal page proclaiming its Java support: “Love Java? So do we!”

VSTS support for Java build frameworks such as Maven and Ant came in handy for AIS recently, as we were tasked with developing new features for an older Java desktop application for a federal client. And I have to say that all of the VSTS tools for Java applications worked flawlessly. We were able to easily add the Java project source code to a Team Foundation Version Control (TFVC) repository hosted online in VSTS. Oracle even has an extension for integrating with a TFVC workspace, allowing us to check in changes right from the JDeveloper IDE. Read More…
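For context, the VSTS Maven build task is essentially a wrapper around a standard Maven invocation. Running the equivalent build locally looks something like this (the goals shown are typical defaults, not specific to our client’s project):

```shell
# Compile the Java sources, run unit tests, and package the application
mvn clean package
```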

If you need managed services to maintain peak IT network operations, consider us here at Applied Information Sciences. We’ll manage all your IT services for a predictable cost so you can focus on more strategic investments. AIS’ Managed Services Practice provides ongoing responsibility for monitoring, patching and problem resolution for specific IT systems on your company’s behalf.

Capabilities

  • Patching
  • Monitoring
  • Alerting
  • Backup and Restore
  • Incident Response

AIS’ Managed Services Practice offers up to 24×7 coverage for initial incident response through a combination of dedicated part-time and full-time staff, both onshore and offshore. AIS prides itself on being on the leading edge of managed services support. Our collaborative, disciplined approach reflects our commitment to quality, value, time, and budget. Read More…

The emerging technology of cognitive services teaches computers to understand the world as we humans do. Last night’s #AzureGovMeetup in DC focused on exactly how the exciting and powerful capabilities of cognitive services can help government agencies better meet their unique missions and improve citizen services and engagement.

Our speakers included Justin “Doc” Herman, program lead for Emerging Citizen Technology at the General Services Administration’s Technology Transformation Service, who shared how government is advancing the use of emerging technologies. The night also featured compelling presentations and demos on Microsoft Cognitive Services by Steve Michelotti, senior program manager, Azure Government; and Ken Hausman, data platform solution architect at Microsoft.

In case you missed it, AIS livestreamed a few portions of the event on Twitter. Watch the videos below and be sure to follow us @AISTeam. (And check out the #AzureGovMeetup hashtag for more photos and insights from the outstanding lineup of experts.) For future DC AzureGov Meetup dates and details, go here. We hope to see you next month! Read More…

When you read about the Internet of Things, you often hear about connected cars, connected kitchen appliances, small devices that let you order things quickly, or other consumer-grade applications. In this post, I will quickly describe a recent IoT project I worked on where the devices are not small consumer-grade sensors…they are large industrial manufacturing machines.

In this case, machines arranged on a shop floor are responsible for cutting metal into various shapes. These machines must be both very powerful and very precise, and they have robotic arms that are programmed to grip specialized tools for this activity. The machines use the MTConnect protocol as the common language for communicating their operational status and the results of any action taken. That data is streamed to a central collection point for analysis. For older machines, adapters are installed to translate and stream the machine’s data in this common language.
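To give a feel for the protocol, here is a minimal sketch of polling an MTConnect agent’s current endpoint and pulling out execution-state events. The agent URL is a placeholder, and the exact data items available depend on the machine:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

class MtConnectPoller
{
    // Placeholder address; real agents expose /probe, /current, and /sample endpoints.
    const string AgentUrl = "http://agent.example.com:5000/current";

    static async Task Main()
    {
        using var http = new HttpClient();
        var doc = XDocument.Parse(await http.GetStringAsync(AgentUrl));

        // MTConnect documents are XML; Execution events report states such as
        // READY, ACTIVE, and STOPPED, which help infer when a cut is in progress.
        foreach (var item in doc.Descendants())
        {
            if (item.Name.LocalName == "Execution")
                Console.WriteLine($"{item.Attribute("timestamp")?.Value}: {item.Value}");
        }
    }
}
```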

Our work on this project helped the engineers identify optimal cut times by analyzing the machine activity data. First, we needed to enhance the collection process so that all data was readily available, then apply the appropriate business rules to identify cut time, and finally provide quick, actionable feedback on the outcome. Read More…

The #AWS and #Azure outages of the past two weeks are a good reminder of how seemingly simple problems (a failed power source or an incorrect script parameter) can have a wide impact on application availability.

Look, the cloud debate is largely over, and customers (commercial enterprises, government agencies, and startups) are moving the majority of their systems to the cloud. These recent outages are not going to slow that momentum.

That said, all the talk of three, four, or five nines of availability and financially backed SLAs has lulled many customers into expecting utility-grade availability for their cloud-hosted applications out of the box. This expectation is unrealistic given the ever-growing number of moving parts in a connected global infrastructure, dependence on third-party applications, multi-tenancy, commodity hardware, transient faults due to shared infrastructure, and so on.
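To put those nines in concrete terms, the downtime an availability target permits is simple arithmetic; this tiny sketch shows the math:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Downtime per year implied by an availability target: (1 - A) * hours in a year.
        // 99.9% allows roughly 8.8 hours/year; 99.99% roughly 53 minutes; 99.999% roughly 5 minutes.
        foreach (var a in new[] { 0.999, 0.9999, 0.99999 })
            Console.WriteLine($"{a:P3} availability allows ~{(1 - a) * 365.25 * 24:F2} hours of downtime per year");
    }
}
```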

Unfortunately, we cannot eliminate such cloud failures. So what can we do to protect our apps from them? The answer is to conduct a systematic analysis of the different failure modes and to define a recovery action for each failure type. This is exactly the technique that other engineering disciplines (like civil engineering) have used for failure planning: failure mode and effects analysis (FMEA). FMEA is a systematic, proactive method for evaluating a process to identify where and how it might fail, and to assess the relative impact of different failures, in order to pinpoint the parts of the process most in need of change. Read More…
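To make the FMEA idea concrete, here is a minimal sketch of ranking failure modes by Risk Priority Number (RPN), the product of severity, occurrence, and detectability scores used in classic FMEA. The failure modes and scores below are purely illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Each failure mode is scored 1-10 on severity, occurrence, and
// detectability (higher = harder to detect before impact).
record FailureMode(string Name, int Severity, int Occurrence, int Detection)
{
    // RPN highlights which failure modes most need a recovery action.
    public int Rpn => Severity * Occurrence * Detection;
}

class Program
{
    static void Main()
    {
        // Illustrative entries only; real scores come from your own analysis.
        var modes = new List<FailureMode>
        {
            new("Region-wide outage",        9, 2, 8),
            new("Transient storage fault",   4, 7, 3),
            new("Dependency API throttling", 5, 6, 4),
        };

        foreach (var m in modes.OrderByDescending(m => m.Rpn))
            Console.WriteLine($"{m.Name}: RPN {m.Rpn}");
    }
}
```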

With the ever-increasing complexity and proliferation of data, protecting our nation’s most valuable information is not an easy task. In response to this growing challenge, this month’s Microsoft Azure Government DC Meetup covered practical strategies for governance to help attendees prepare for today’s cloud era.

A great group of government/industry professionals and governance subject matter experts covered a wide array of governance and security topics, including:

• Empowering users while protecting sensitive information
• Ensuring compliance across on-premises, hybrid, and cloud-hosted SharePoint/O365 deployments
• Enforcing governance by leveraging automation tools to ensure seamless provisioning of service requests across your organization’s varied lines of business

In case you missed it, AIS livestreamed a few portions of the event on Twitter. Watch the videos below and be sure to follow us @AISTeam. (And check out the #AzureGovMeetup hashtag for more photos and great quotes from the outstanding lineup of expert speakers.) For future DC AzureGov Meetup dates and details, go here. We hope to see you next month!

If you want to skip reading the text that follows and simply want to download Visual Studio Code Snippets for Azure API Management policies, click here.

Azure API Management gives you a framework for publishing your APIs in a consistent manner, with built-in benefits like developer engagement, business insights, analytics, security, and protection. However, its most powerful capability is the ability to customize the behavior of the API itself. Think of the customization as a short program that gets executed just before or after your API is invoked. The short program is simply a collection of statements, called policies in Azure API Management. Examples of out-of-the-box policies include converting XML to JSON, applying rate and quota limits, and enforcing IP filtering. In addition, you have control-flow policies such as choose (similar to an if-then-else or switch construct) and set-variable (which allows you to declare a context variable). Finally, you have the ability to write C# (6.0) expressions; each expression has access to the context variable and is allowed to leverage a subset of .NET Framework types. As you can see, Azure API Management policies offer constructs equivalent to a programming language.
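To make that concrete, here is a minimal sketch of a policy document combining several of the constructs mentioned above: set-variable, choose, rate limiting, and IP filtering. The header check and address range are purely illustrative:

```xml
<policies>
  <inbound>
    <base />
    <!-- Declare a context variable using a C# expression (illustrative logic) -->
    <set-variable name="isMobile" value='@(context.Request.Headers.GetValueOrDefault("User-Agent", "").Contains("Mobile"))' />
    <!-- Branch on the variable, much like an if-then-else -->
    <choose>
      <when condition='@((bool)context.Variables["isMobile"])'>
        <rate-limit calls="10" renewal-period="60" />
      </when>
      <otherwise>
        <rate-limit calls="100" renewal-period="60" />
      </otherwise>
    </choose>
    <!-- Only allow callers from a specific address range (placeholder range) -->
    <ip-filter action="allow">
      <address-range from="10.0.0.1" to="10.0.0.255" />
    </ip-filter>
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
</policies>
```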

This raises the question: how do you author Azure API Management policies?

Well, today you have two options. Read More…