At our second session for Microsoft Ignite, Jason McNutt and I discussed Azure Resource Manager (ARM) and Compliance. We showed attendees how to develop ARM templates that are compliant out of the box with security standards such as FISMA and FedRAMP. Additionally, we went over how to automatically generate security control documentation based on ARM tags and open-source libraries like OpenControl.
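
To give a flavor of the tag-to-documentation idea, here is a minimal Python sketch that walks the resources in an ARM template, collects a control tag, and emits a simplified OpenControl-style component mapping. The "nist-controls" tag name and the output shape are illustrative assumptions, not the exact convention we used in the session:

```python
import json
import yaml  # pip install pyyaml

# Load an ARM template and collect security-control tags from each resource.
# The "nist-controls" tag name is hypothetical -- use whatever convention
# your templates follow (e.g., a comma-separated list of control IDs).
with open("azuredeploy.json") as f:
    template = json.load(f)

satisfies = []
for resource in template.get("resources", []):
    controls = resource.get("tags", {}).get("nist-controls", "")
    for control_key in filter(None, (c.strip() for c in controls.split(","))):
        satisfies.append({
            "standard_key": "NIST-800-53",
            "control_key": control_key,
            "narrative": f"Implemented by {resource['type']}/{resource['name']}",
        })

# Emit a simplified OpenControl-style component file.
component = {"name": "ARM Template", "schema_version": "3.0.0", "satisfies": satisfies}
print(yaml.safe_dump(component, default_flow_style=False))
```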

Below is a short 15-minute video summarizing our Secure DevOps with ARM presentation:

Steve Michelotti and I presented a session on AzureGov last week at Microsoft Ignite 2017 in Orlando. It focused on demonstrating the innovative capabilities in AzureGov that are specifically designed to help government agencies with their mission. We dedicated about 80% of the session to live demos.

Steve started out with a brief description of AzureGov and how to get started…along with some recent news announcements, including API Management and Key Vault. Steve then quickly transitioned into demos related to Cognitive Services, Azure IoT, and Power BI. I conducted two demos related to the Cosmos DB Graph database and the CNTK deep learning toolkit running on an N-Series GPU machine.

Please watch the video below and let us know if you have any questions.

In an earlier blog post, we talked about Excel as a custom calculation engine. In a nutshell, a developer or power user can author the calculation logic inside an Excel workbook and then execute the workbook programmatically via either Excel Services or HPC Services for Excel. You can read about this approach in detail in our MSDN article. This approach has been used successfully by our customers, at large scale, for many years now.
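
For a feel of what "execute the workbook programmatically" looks like, here is a minimal sketch that reads a calculated cell through the Excel Services REST interface. The site URL, workbook path, range, and credentials below are all placeholders, and the authentication mechanism will depend on how your SharePoint farm is configured:

```python
import requests
from requests_ntlm import HttpNtlmAuth  # pip install requests_ntlm

# Excel Services exposes workbook content over REST via ExcelRest.aspx.
# Site URL, library, workbook name, and range below are placeholders.
url = ("https://sharepoint.contoso.com/sites/finance/_vti_bin/ExcelRest.aspx"
       "/Shared%20Documents/FasbCalculations.xlsx/Model/Ranges('Sheet1!B10')")

resp = requests.get(url,
                    params={"$format": "atom"},  # atom, html, or image
                    auth=HttpNtlmAuth("DOMAIN\\user", "password"))
resp.raise_for_status()
print(resp.text)  # the calculated value of Sheet1!B10, wrapped in Atom XML
```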

Lately, though, we’ve been thinking about Jupyter Notebooks as another potential option for building custom calculation engines.
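
To ground the comparison, here is one way a notebook's calculation logic could be executed programmatically, using nbformat and nbconvert's ExecutePreprocessor. The notebook names are placeholders, and this is a sketch of the general approach rather than a recommended production setup:

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Load the notebook that contains the calculation logic (placeholder name).
nb = nbformat.read("pricing_model.ipynb", as_version=4)

# Run every cell top to bottom, much as Excel Services recalculates a workbook.
ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
ep.preprocess(nb, {"metadata": {"path": "."}})

# Persist the executed notebook; output cells now hold the calculated results.
nbformat.write(nb, "pricing_model_executed.ipynb")
```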

But before we make the case, let’s review some background information on Jupyter Notebooks. Read More…

At the Microsoft BUILD 2017 Day One keynote, Harry Shum announced the ability to customize the vision API. In the past, the cognitive vision API came only with a pre-trained model: as a user, you upload a picture and have the pre-trained model analyze it. You can expect your image to be classified against the 2,000+ (and constantly growing) categories that the model is trained on. You can also get information such as tags based on the image, detect human faces, recognize handwritten text inside the image, and so on.
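
As a rough illustration, calling the pre-trained model is a single REST request. The sketch below uses the v1.0 analyze endpoint; the region, subscription key, and image path are placeholders, and the endpoint version may differ for your subscription:

```python
import requests

# Region, subscription key, and image file are placeholders.
url = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-subscription-key>",
    "Content-Type": "application/octet-stream",
}
params = {"visualFeatures": "Categories,Tags,Description,Faces"}

with open("sample.jpg", "rb") as f:
    resp = requests.post(url, headers=headers, params=params, data=f.read())
resp.raise_for_status()

analysis = resp.json()
print(analysis["description"]["captions"])     # auto-generated caption(s)
print([t["name"] for t in analysis["tags"]])   # tags detected in the image
```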

But what if you wanted to work with images pertinent to your specific business domain? And what if those images fall outside of the 2,000 pre-trained categories? This is where the custom vision API comes in. With the custom vision API, you can train the model on your own images in just four steps: Read More…

Azure Role-Based Access Control (RBAC) offers a powerful way to grant permissions based on the principle of “least privilege.” In this short video, we extend the idea of Azure RBAC to implement JIT (just-in-time) permission control. We think a JIT model can be useful for the following reasons:

1) It balances the desire for “least privilege” against the cost of managing an exploding number of fine-grained permission rules (hundreds of permission types, combined with hundreds of resources).

2) It allows coarse-grained access (DevOps teams typically need access to multiple services) that is “context aware” (permission is granted only in the context of a task).

Of course, JIT can only be successful if it’s accompanied by smart automation, so that users have instant access to the permissions they need, when they need them. A sketch of that automation follows below.
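
The sketch below grants a time-boxed role assignment through the Azure management REST API and then removes it. The scope, IDs, and token are placeholders, and a production implementation would use a durable scheduler rather than a sleep; this is an illustration of the mechanics, not the implementation shown in the video:

```python
import time
import uuid
import requests

# Placeholders: supply a real AAD bearer token, scope, role, and principal.
token = "<bearer-token-from-azure-ad>"
scope = "/subscriptions/<sub-id>/resourceGroups/<rg-name>"
role_definition_id = ("/subscriptions/<sub-id>/providers"
                      "/Microsoft.Authorization/roleDefinitions/<role-guid>")
principal_id = "<user-or-group-object-id>"

assignment_id = str(uuid.uuid4())  # role assignment names are GUIDs
url = (f"https://management.azure.com{scope}"
       f"/providers/Microsoft.Authorization/roleAssignments/{assignment_id}"
       "?api-version=2015-07-01")
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# Grant: create the role assignment for the duration of the task.
body = {"properties": {"roleDefinitionId": role_definition_id,
                       "principalId": principal_id}}
requests.put(url, headers=headers, json=body).raise_for_status()

time.sleep(60 * 60)  # stand-in for "until the task is done"

# Revoke: delete the assignment once the task window closes.
requests.delete(url, headers=headers).raise_for_status()
```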

Interested? Watch this 15-minute video, which goes over the concepts and includes a short demonstration of JIT with Azure RBAC.

Over the years, AIS has leveraged “Excel on Server” to enable power users to develop their own code.

Consider a common requirement to implement calculations/reports that adhere to the Financial Accounting Standards Board (FASB) standards. These types of reports are often large and complex. The calculations in the reports are specific to a geographical region, so a multi-national company needs to implement different versions of these calculations. Furthermore, over time these calculations have to be adjusted to comply with changing laws.

Traditionally, these calculations have been implemented using custom code, and as a result, suffer from the challenges outlined above, including the high cost of development and maintenance, requirements being lost in translation, the lack of traceability, and the lack of a robust mechanism for making a quick change to a calculation in response to a change in a standard. This is where the power of Excel on Server comes in.

As you may know, Excel on the server is available in two forms: Read More…

“How does one choose between all the various PaaS options available in Azure?”

This question comes up very often at conferences and forums. The answer, as you might have guessed, is “it depends…”

In this blog post, however, we will go beyond “it depends” and try to describe some factors that can help you choose between Azure PaaS offerings.

(I assume that you are familiar with the concept of PaaS. If not, please briefly review my introductory blog post on PaaS.)

Let’s get started. We will cover the following Azure PaaS offerings: Read More…

The central focus of DevOps has been the continuous delivery (CD) pipeline: a single, traceable path for any new or updated version of software to move through lower environments to a higher environment using automated promotion. However, in my recent experience, DevOps is also serving as a bridge across the “expectations chasm”: the gap between the three personas in the above diagram.

Each persona (CIO, Ops, and App Teams) has varying expectations for the move to the public cloud. For the CIO, the motivation to move to the public cloud is based on key selling points: dealing with capacity constraints, mounting on-premises data center costs, reducing the Time to Value (TtV), and increasing innovation. The Ops team expects tooling maturity on par with on-premises, including capacity planning, high availability (HA), compliance, and monitoring. The Apps team expects to use the languages, tools, and CI process that they are already using, but in the context of new PaaS services. They also expect the same level of compliance and resilience from the underlying infrastructure services.

Unfortunately, as we will see in a moment, these expectations are hard to meet, despite the rapid innovation and cadence of releases in the cloud.

Consider these examples: Read More…

The recent #AWS and #Azure outages over the past two weeks are a good reminder of how seemingly simple problems (a failed power source or an incorrect script parameter) can have a wide impact on application availability.

Look, the cloud debate is largely over and customers (commercial, government agencies, and startups) are moving the majority of their systems to the cloud. These recent outages are not going to slow that momentum down.

That said, all the talk of 3-4-5 9s of availability and financially backed SLAs has lulled many customers into expecting utility-grade availability for their cloud-hosted applications out of the box. This expectation is unrealistic given the complexity of the ever-growing moving parts in a connected global infrastructure, dependence on third-party applications, multi-tenancy, commodity hardware, transient faults due to a shared infrastructure, and so on.

Unfortunately, we cannot eliminate such cloud failures. So what can we do to protect our apps from failures? The answer is to conduct a systematic analysis of the different failure modes and have a recovery action for each failure type. This is exactly the technique, Failure Mode and Effects Analysis (FMEA), that other engineering disciplines (like civil engineering) have used for failure planning. FMEA is a systematic, proactive method for evaluating a process to identify where and how it might fail, and to assess the relative impact of different failures, in order to identify the parts of the process that are most in need of change. Read More…
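
As a tiny illustration of the FMEA mechanics, the sketch below scores a few hypothetical cloud failure modes on the classic severity/occurrence/detection scales and ranks them by Risk Priority Number (RPN = S × O × D). The failure modes and scores are made up for illustration:

```python
# Classic FMEA scoring: severity, occurrence, and detectability each rated
# 1-10; RPN = S * O * D. Higher RPN = address first. Scores are illustrative.
failure_modes = [
    # (failure mode,                       S, O, D)
    ("Region-wide power loss",             9, 2, 2),
    ("Bad script parameter in deployment", 7, 5, 4),
    ("Transient fault in shared storage",  5, 7, 6),
    ("Third-party API dependency down",    6, 4, 5),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {s * o * d:>3}  {name}")
```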

If you want to skip reading the text that follows and simply want to download Visual Studio Code Snippets for Azure API Management policies, click here.

Azure API Management gives you a framework for publishing your APIs in a consistent manner, with built-in benefits like developer engagement, business insights, analytics, security, and protection. However, the most powerful capability it offers is the ability to customize the behavior of the API itself. Think of the customization as a short program that gets executed just before or after your API is invoked. The short program is simply a collection of statements (called policies in Azure API Management). Examples of policies that come out of the box include format conversion from XML to JSON, applying rate and quota limits, and enforcing IP filtering.

In addition, you have control flow policies, such as choose (similar to an if-then-else or switch construct) and set-variable (which lets you declare a context variable). Finally, you have the ability to write C# (6.0) expressions. Each expression has access to the context variable and is allowed to use a subset of .NET Framework types. As you can see, Azure API Management policies offer constructs equivalent to a programming language.
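
For a flavor of the syntax, here is a small, illustrative policy document that combines a rate limit, a context variable, and a choose construct with a C# expression. The values and header names are arbitrary; this is a sketch, not a policy taken from a real deployment:

```xml
<policies>
  <inbound>
    <base />
    <!-- Throttle: at most 10 calls per 60 seconds. -->
    <rate-limit calls="10" renewal-period="60" />
    <!-- Declare a context variable using a C# expression. -->
    <set-variable name="isMobile"
        value="@(context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("Mobile"))" />
    <!-- choose behaves like an if-then-else. -->
    <choose>
      <when condition="@((bool)context.Variables["isMobile"])">
        <set-header name="x-client-class" exists-action="override">
          <value>mobile</value>
        </set-header>
      </when>
    </choose>
  </inbound>
  <outbound>
    <base />
  </outbound>
</policies>
```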

This raises the question: how do you author Azure API Management policies?

Well, today you have two options. Read More…