At the Microsoft BUILD 2017 Day One keynote, Harry Shum announced the ability to customize the vision API. In the past, the cognitive vision API came with a pre-trained model. That meant that as a user, you could upload a picture and have the pre-trained model analyze it. You could expect to have your image classified against the 2,000+ (and constantly growing) categories that the model is trained on. You could also get information such as tags based on the image, detect human faces, recognize handwritten text inside the image, and so on.
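Calling the pre-trained model is a single REST request. Here is a minimal Python sketch against the v1.0 analyze endpoint as it existed around BUILD 2017; the key, region, and file name are placeholders, and later API versions may differ:

```python
import requests

KEY = "<subscription-key>"  # hypothetical Cognitive Services key
URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"

with open("photo.jpg", "rb") as f:
    resp = requests.post(
        URL,
        params={"visualFeatures": "Categories,Tags,Faces"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=f.read(),
    )
print(resp.json())  # categories, tags, and detected faces for the image
```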

But what if you want to work with images pertinent to your specific business domain? And what if those images fall outside of the 2,000 pre-trained categories? This is where the custom vision API comes in. With the custom vision API, you can train the model on your own images in just four steps: Read More…
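The four steps themselves are behind the link, but to give a flavor of the typical flow (create a project, define tags, upload tagged images, train), here is a rough Python sketch using the Custom Vision training SDK. Treat it as illustrative only: the key, endpoint, and names are made up, and the SDK signatures shown follow the older quickstart pattern and have changed across versions.

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import ImageFileCreateEntry

# Hypothetical key and endpoint.
trainer = CustomVisionTrainingClient(
    "<training-key>", endpoint="https://southcentralus.api.cognitive.microsoft.com")

project = trainer.create_project("widget-inspection")     # create a project
defect_tag = trainer.create_tag(project.id, "defective")  # define a tag

with open("widget01.jpg", "rb") as f:                     # upload a tagged image
    trainer.create_images_from_files(
        project.id,
        images=[ImageFileCreateEntry(name="widget01.jpg",
                                     contents=f.read(),
                                     tag_ids=[defect_tag.id])])

iteration = trainer.train_project(project.id)             # train the model
```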

Azure Role-Based Access Control (RBAC) offers the powerful ability to grant permissions based on the principle of “least privilege.” In this short video, we extend the idea of Azure RBAC to implement JIT (just-in-time) permission control. We think a JIT model can be useful for the following reasons:

1) Ability to balance the desire for “least privilege” with the cost of managing an exploding number of fine-grained permission rules (hundreds of permission types, combined with hundreds of resources).

2) Ability to allow coarse-grained access (DevOps teams typically need access to multiple services) that is “context aware” (permission is granted only within the context of a task).

Of course, JIT can only be successful if it’s accompanied by smart automation (so users have instant access to the permissions they need, when they need them).
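As a concrete illustration of that automation, the sketch below grants a role assignment for a task window and then revokes it, using the documented ARM role-assignment REST calls from Python. The token, scope, role, and principal IDs are placeholders, and real automation would be event-driven rather than a sleep:

```python
import time
import uuid
import requests

TOKEN = "<arm-bearer-token>"  # an Azure AD token for https://management.azure.com
SCOPE = "/subscriptions/<sub-id>/resourceGroups/<rg>"  # coarse-grained scope
ROLE_ID = ("/subscriptions/<sub-id>/providers/"
           "Microsoft.Authorization/roleDefinitions/<role-guid>")
PRINCIPAL_ID = "<user-object-id>"

name = str(uuid.uuid4())
url = (f"https://management.azure.com{SCOPE}/providers/"
       f"Microsoft.Authorization/roleAssignments/{name}?api-version=2015-07-01")
headers = {"Authorization": f"Bearer {TOKEN}"}

# Grant the permission at the start of the task...
requests.put(url, headers=headers, json={
    "properties": {"roleDefinitionId": ROLE_ID, "principalId": PRINCIPAL_ID}})

time.sleep(60 * 60)  # stand-in for the task window; react to events in practice

# ...and revoke it as soon as the task window closes.
requests.delete(url, headers=headers)
```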

Interested? Watch this 15-minute video, which goes over the concepts and includes a short demonstration of JIT with Azure RBAC.

Over the years, AIS has leveraged “Excel on Server” to enable power users to develop their own code.

Consider a common requirement to implement calculations/reports that adhere to the Financial Accounting Standards Board (FASB) standards. These types of reports are often large and complex. The calculations in the reports are specific to a geographical region, so a multi-national company needs to implement different versions of these calculations. Furthermore, over time these calculations have to be adjusted to comply with changing laws.

Traditionally, these calculations have been implemented in custom code and, as a result, suffer from several challenges: the high cost of development and maintenance, requirements lost in translation, a lack of traceability, and the lack of a robust mechanism for quickly changing a calculation in response to a change in a standard. This is where the power of Excel on Server comes in.

As you may know, Excel on Server is available in two forms: Read More…
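To give a sense of what this enables: once a workbook lives on the server, its calculations can be invoked remotely, so the Excel file itself becomes the maintainable "code." Below is a hypothetical Python sketch that reads a computed cell through the Excel Services REST interface; the site URL, library path, workbook name, range, and authentication scheme are all assumptions for illustration:

```python
import requests

# Hypothetical SharePoint site and workbook; Excel Services must be enabled.
SITE = "https://sharepoint.contoso.com/sites/finance"
WORKBOOK = "Shared Documents/FASB-Calculations.xlsx"

# Excel Services REST API: read a computed cell straight out of the workbook model.
url = f"{SITE}/_vti_bin/ExcelRest.aspx/{WORKBOOK}/Model/Ranges('Summary!B4')?$format=atom"

resp = requests.get(url, auth=("user", "password"))  # substitute your org's auth
print(resp.status_code, resp.text[:200])
```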

“How does one choose between all the various PaaS options available in Azure?”

This question comes up very often at conferences and forums. The answer, as you might have guessed, is “it depends…”

In this blog post, however, we will go beyond “it depends” and try to describe some factors that can help you choose between Azure PaaS offerings.

(I assume that you are familiar with the concept of PaaS. If not, please briefly review my introductory blog post on PaaS.)

Let’s get started. We will cover the following Azure PaaS offerings: Read More…

The central focus of DevOps has been the continuous delivery (CD) pipeline: a single, traceable path for any new or updated version of software to move through lower environments to a higher environment via automated promotion. In my recent experience, however, DevOps is also serving as a bridge across the “expectations chasm,” the gap between the three personas in the above diagram.

Each persona (CIO, Ops team, and Apps team) has different expectations for the move to the public cloud. For the CIO, the motivation to move to the public cloud is based on key selling points: dealing with capacity constraints, mounting on-premises data center costs, reducing the Time to Value (TtV), and increasing innovation. The Ops team expects tooling maturity on par with on-premises, including capacity planning, high availability (HA), compliance, and monitoring. The Apps team expects to use the languages, tools, and CI process that it is already using, but in the context of new PaaS services. It also expects the same level of compliance and resilience from the underlying infrastructure services.

Unfortunately, as we will see in a moment, these expectations are hard to meet, despite the rapid innovation and cadence of releases in the cloud.

Consider these examples: Read More…

The recent #AWS and #Azure outages over the past two weeks are a good reminder of how seemingly simple problems (the failure of a power source or an incorrect script parameter) can have a wide impact on application availability.

Look, the cloud debate is largely over and customers (commercial, government agencies, and startups) are moving the majority of their systems to the cloud. These recent outages are not going to slow that momentum down.

That said, all the talk of 3-4-5 nines of availability and financially backed SLAs has lulled many customers into expecting utility-grade availability for their cloud-hosted applications out of the box. This expectation is unrealistic given the complexity of the ever-growing moving parts in a connected global infrastructure, dependence on third-party applications, multi-tenancy, commodity hardware, transient faults due to a shared infrastructure, and so on.

Unfortunately, we cannot eliminate such cloud failures. So what can we do to protect our apps from failures? The answer is to conduct a systematic analysis of the different failure modes and to have a recovery action for each failure type. This is exactly the technique, known as failure mode and effects analysis (FMEA), that other engineering disciplines (like civil engineering) have used for failure planning. FMEA is a systematic, proactive method for evaluating a process to identify where and how it might fail, and to assess the relative impact of different failures, in order to identify the parts of the process that are most in need of change. Read More…
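To make that concrete: FMEA practitioners commonly score each failure mode for severity, occurrence, and detectability, then multiply the three into a risk priority number (RPN) to decide where to invest first. A minimal sketch, with made-up cloud failure modes and scores:

```python
# Score each failure mode 1-10 for severity, occurrence, and (lack of) detection,
# then rank by risk priority number: RPN = severity * occurrence * detection.
failure_modes = [
    # (failure mode,                  severity, occurrence, detection)
    ("Region-wide outage",            10,       2,          2),
    ("Transient storage throttling",   4,       8,          6),
    ("Dependency API returns 5xx",     6,       6,          4),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, sev, occ, det in ranked:
    print(f"{name}: RPN={sev * occ * det}")  # highest RPN = plan recovery first
```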

If you want to skip reading the text that follows and simply want to download Visual Studio Code Snippets for Azure API Management policies, click here.

Azure API Management gives you a framework for publishing your APIs in a consistent manner, with built-in benefits like developer engagement, business insights, analytics, security, and protection. However, its most powerful capability is the ability to customize the behavior of the API itself. Think of the customization as a short program that gets executed just before or after your API is invoked. The short program is simply a collection of statements (called policies in Azure API Management). Examples of policies that come out of the box include format conversion from XML to JSON, rate and quota limits, and IP filtering. In addition, you have control flow policies such as choose (similar to an if-then-else or switch construct) and set-variable (which allows you to declare a context variable). Finally, you have the ability to write C# (6.0) expressions. Each expression has access to the context variable and is allowed to leverage a subset of .NET Framework types. As you can see, Azure API Management policies offer constructs equivalent to a programming language.
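As a hypothetical illustration of these constructs working together (a rate limit, set-variable, choose, and a C# expression reading the request context), consider this inbound policy sketch; the variable and header names are made up:

```xml
<policies>
  <inbound>
    <base />
    <!-- protect the backend with a rate limit -->
    <rate-limit calls="100" renewal-period="60" />
    <!-- C# expression: classify the caller from the User-Agent header -->
    <set-variable name="isMobile"
      value="@(context.Request.Headers.GetValueOrDefault("User-Agent", "").Contains("Mobile"))" />
    <choose>
      <when condition="@(context.Variables.GetValueOrDefault<bool>("isMobile"))">
        <set-header name="X-Client-Class" exists-action="override">
          <value>mobile</value>
        </set-header>
      </when>
    </choose>
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
</policies>
```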

This raises the question: how do you author Azure API Management policies?

Well, today you have two options. Read More…

What is MTConnect?

MTConnect is the communication standard of choice for manufacturing. It allows organized retrieval of data from shop floor equipment as structured XML.

The adjacent diagram depicts the key components of MTConnect. Let us start with the shop floor equipment shown at the bottom of the diagram – a CNC lathe. Above that, we have an optional adapter component that converts machine-specific data into an MTConnect-defined format. The adapter component is optional because most manufacturers are building this capability directly into their machines. On top is the agent component, responsible for converting MTConnect data into XML. The agent also exposes a RESTful service that can be used to retrieve data. Read More…
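Because the agent is just a RESTful endpoint returning XML, sampling it is straightforward. A minimal Python sketch, assuming a hypothetical agent address (agents expose standard /probe, /current, and /sample routes):

```python
import urllib.request
import xml.etree.ElementTree as ET

AGENT = "http://localhost:5000"  # hypothetical MTConnect agent address

# /current returns the most recent value of every data item the agent tracks.
with urllib.request.urlopen(f"{AGENT}/current") as resp:
    root = ET.fromstring(resp.read())

# MTConnect documents are namespaced; match on the local part of each tag.
for elem in root.iter():
    tag = elem.tag.split("}")[-1]
    if tag in ("Execution", "SpindleSpeed", "Position"):
        print(tag, elem.get("timestamp"), elem.text)
```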

Transient exception handling and retry logic are considered important defensive programming practices, especially in the public cloud. But how good is your exception handling? Unfortunately, it’s not always easy to simulate transient exceptions.

Consider the Azure Redis Cache service, for example. It does not offer a way to simulate failures, so we decided to create our own Chaos Redis library. Fortunately, Microsoft has developed a Windows port of Redis.

We decided to modify that code so we could inject chaos. Read More…
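The library itself targets the Windows port of Redis, but the retry logic it is meant to exercise looks roughly like the following sketch, shown here in Python with redis-py and made-up connection details. A chaos tool like the one described is what lets you verify that the except branch actually runs under realistic faults.

```python
import time
import redis

# Hypothetical Azure Redis Cache connection details (SSL on port 6380).
client = redis.StrictRedis(host="mycache.redis.cache.windows.net",
                           port=6380, password="<access-key>", ssl=True)

def get_with_retry(key, attempts=4, base_delay=0.2):
    """Retry transient Redis failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return client.get(key)
        except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError):
            if attempt == attempts - 1:
                raise  # out of retries; surface the fault to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off: 0.2s, 0.4s, 0.8s...
```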

At a recent holiday dinner, a conversation with a friend eventually progressed to the topics of self-driving cars and facial-recognition software – and the overall roles and capabilities of artificial intelligence (AI). My friend’s assertion was that “AI is ultimately about pattern matching.” In essence, you equip the AI with a library of “patterns” and their associated actions. Based on the input it receives from the real world, the AI software program will then attempt to match the input to a stored pattern and execute the associated action.

Of course, any program, regardless of whether it is designed to steer a car or detect a face in an image, relies on pattern matching at the lowest level. That said, as we will see shortly, a deep learning-based approach is a fundamentally different way to solve the problem. And it’s an approach that is poised to reinvent computing. Read More…