Accurately identifying and authenticating users is an essential requirement for any modern application. As applications continue to migrate beyond the physical boundaries of the data center and into the cloud, balancing the ability to leverage trusted identity stores with the need for enhanced flexibility to support this migration can be tricky. Additionally, evolving requirements like allowing multiple partners, authenticating across devices, or supporting new identity sources push application teams to embrace modern authentication protocols.

Microsoft states that federated identity is the ability to “Delegate authentication to an external identity provider. This can simplify development, minimize the requirement for user administration, and improve the user experience of the application.”

As organizations expand their user base to allow multiple users, partners, and collaborators to authenticate against their systems, federated identity becomes imperative.

The Benefits of Federated Authentication

Federated authentication allows organizations to reliably outsource their authentication mechanism, letting them focus on actually providing their service instead of spending time and effort on authentication infrastructure. An organization or service that provides authentication to its sub-systems is called an Identity Provider; it provides federated identity authentication to the service provider/relying party. By using a common identity provider, relying applications can easily provide access to other applications and web sites using single sign-on (SSO).

SSO gives users quick access to multiple web sites without having to manage individual passwords. The relying party application communicates with the service provider, which in turn communicates with the identity provider to get user claims (claims authentication).

For example, an application registered in Azure Active Directory (AAD) relies on AAD as its identity provider. Users accessing the application are prompted for their credentials; upon successful authentication by AAD, access tokens are sent to the application. The validated claims token authenticates the user, and the application performs any further authorization. The application therefore needs no additional authentication mechanism of its own, thanks to the federated authentication from AAD, and the process can be combined with multi-factor authentication as well.
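To make this concrete, here is a minimal sketch (not code from this post's POC) of delegating sign-in to AAD from an ASP.NET application using the OWIN OpenID Connect middleware; the client ID, tenant, and redirect URI values are placeholders, not a real registration:

// Minimal sketch: delegate sign-in to AAD via the OWIN OpenID Connect middleware.
// ClientId, tenant, and RedirectUri are placeholders for illustration only.
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.OpenIdConnect;
using Owin;

public partial class Startup
{
    public void ConfigureAadAuth(IAppBuilder app)
    {
        app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
        app.UseCookieAuthentication(new CookieAuthenticationOptions());

        app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
        {
            ClientId = "<application-client-id>",                     // from the AAD app registration
            Authority = "https://login.microsoftonline.com/<tenant>", // AAD tenant endpoint
            RedirectUri = "https://localhost:44300/"                  // must match the registered reply URL
        });
    }
}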

Glossary

Abbreviation | Description
STS | Security Token Service
IdP | Identity Provider
SP | Service Provider
POC | Proof of Concept
SAML | Security Assertion Markup Language
RP | Relying Party (same as Service Provider); calls the Identity Provider to get tokens
AAD | Azure Active Directory
ADDS | Active Directory Domain Services
ADFS | Active Directory Federation Services
OWIN | Open Web Interface for .NET
SSO | Single sign-on
MFA | Multi-factor authentication

OpenID Connect/OAuth 2.0 & SAML

SAML and OpenID Connect/OAuth are the two main standards that modern applications implement and consume as a service to authenticate their users, and both provide a framework for SSO/federated authentication. OpenID Connect is an open standard for authentication that builds on OAuth 2.0 for authorization. SAML is also an open standard and covers both authentication and authorization. OpenID Connect tokens are JSON (JWTs), and OAuth 2.0 tokens can be either JSON or SAML 2.0 assertions, whereas SAML is XML-based. OpenID Connect/OAuth are best suited for consumer applications like mobile apps, while SAML is preferred for enterprise-wide SSO implementations.
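To make the token-format difference concrete, here is a small sketch that decodes (but deliberately does not validate) the payload of an OpenID Connect JWT to show that it is plain JSON; a SAML assertion carrying the same identity data would instead be an XML <saml:Assertion> element. This is illustration only; production code must validate the token's signature, issuer, audience, and expiry:

// Sketch only: peek at a JWT payload to show that it is JSON.
// Never trust a token's contents without validating it first.
using System;
using System.Text;

static class JwtPeek
{
    public static string DecodePayload(string jwt)
    {
        // A JWT is three base64url-encoded segments: header.payload.signature
        string payload = jwt.Split('.')[1];
        string padded = payload.Replace('-', '+').Replace('_', '/');
        padded = padded.PadRight(padded.Length + (4 - padded.Length % 4) % 4, '=');
        return Encoding.UTF8.GetString(Convert.FromBase64String(padded));
    }
}

// Usage: JwtPeek.DecodePayload(idToken) returns JSON such as
// {"iss":"https://login.microsoftonline.com/...","aud":"...","sub":"..."}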

Microsoft Azure Cloud Identity Providers

The Microsoft Azure cloud provides numerous authentication methods for cloud-hosted and “hybrid” on-premises applications, with options for both OpenID Connect/OAuth and SAML authentication. Identity solutions include Azure Active Directory (AAD), Azure AD B2C, Azure AD B2B, Azure AD pass-through authentication, Active Directory Federation Services (ADFS), migrating on-premises ADFS applications to Azure, and Azure AD Connect with federation and SAML as the IdP.

Identity providers that implement the SAML 2.0 standard include Azure Active Directory (AAD), Okta, OneLogin, PingOne, and Shibboleth.

A Deep Dive Implementation

This blog post walks through an example I recently worked on using federated authentication with the SAML protocol. I was able to dive deep into identity and authentication with an assigned proof of concept (POC): create a claims-aware application within an ASP.NET Azure Web Application using federated authentication and the SAML protocol. I used OWIN middleware to connect to the Identity Provider.

The scope of the POC was not to develop an Identity Provider/STS (Security Token Service), but to develop a Service Provider/Relying Party (RP) that sends a SAML request and receives SAML tokens/assertions. The SAML tokens are then used by the calling application to authorize the user into the application.

Given the scope, I used a stub Identity Provider so that the authentication implementation could later be plugged into a production application and communicate with other enterprise SAML Identity Providers.

The Approach

For an application to be claims-aware, it needs to obtain a claims token from an Identity Provider. The claims contained in the token are then used for additional authorization in the application. Claims tokens are issued by an Identity Provider after authenticating the user. The login page for the application (where the user signs in) can itself be the Service Provider (Relying Party), or just an ASP.NET UI application that communicates with the Service Provider via a separate implementation.
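As a small illustration (the claim type and role value here are hypothetical, not taken from the POC's Identity Provider), a claims-aware MVC controller can authorize directly against the claims in the signed-in identity:

// Sketch: authorize against claims issued by the Identity Provider.
// The "admin" role claim is illustrative only.
using System.Security.Claims;
using System.Web.Mvc;

public class ReportsController : Controller
{
    public ActionResult Index()
    {
        var identity = (ClaimsIdentity)User.Identity;

        // Allow access only if the IdP asserted an "admin" role claim.
        if (!identity.HasClaim(ClaimTypes.Role, "admin"))
        {
            return new HttpUnauthorizedResult();
        }

        return View();
    }
}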

Figure 1: Overall architecture – Identity Provider Implementation


The Implementation

An ASP.NET MVC application was implemented as the SAML Service Provider, using OWIN middleware to initiate the connection with the SAML Identity Provider.

First, communication is initiated with a SAML request from the service provider. The identity provider validates the SAML request, verifies and authenticates the user, and sends back the SAML tokens/assertions. The claims returned to the service provider are then passed back to the client application. Finally, the client application can authorize the user based on the claims returned from the SAML identity provider, using roles or other more refined permissions.

SustainSys is an open-source solution whose Saml2 libraries add SAML2P support to ASP.NET web sites and serve as the SAML2 Service Provider (SP). For the proof of concept effort, I used the SustainSys Saml2 stub identity provider to test the SAML service provider. SustainSys also provides sample service provider implementations that work against the stub.

Implementation steps:

  • Start with an ASP.NET MVC application.
  • Add the NuGet packages for OWIN middleware and the SustainSys Saml2 libraries to the project (Figure 2).
  • Modify Startup.cs (partial classes) to build the SAML request and register all authentication types, such as cookies, default sign-in, and Saml2 (Listing 2).
  • In the CreateSaml2Options and CreateSPOptions methods, the SAML request is built with the private and public certificates, the federation SAML Identity Provider URL, and so on.
  • The service provider establishes the connection to the identity provider on startup and is then ready to listen for client requests.
  • Cookie authentication is enabled, the default authentication type is set to “Application,” and the SAML authentication request is prepared by forming the SAML request.
  • Once the SAML request options are set, instantiate the Identity Provider with its URL and options, and set Federation to true. The Service Provider is instantiated with the SAML request options for that identity provider. When the user signs in, the OWIN middleware issues a challenge to the Identity Provider and receives the SAML response with its claims/assertions back at the service provider.
  • The OWIN middleware issues the challenge to the SAML Identity Provider with a callback method (ExternalLoginCallback(…)); the identity provider returns to that callback after authenticating the user (Listing 3).
  • AuthenticateAsync returns the claims issued by the Identity Provider, and the user is authenticated at this point. The application can then use the claims to authorize the user.
  • No additional web configuration is needed for the SAML Identity Provider communication, but the application’s config values can be persisted in web.config.

Figure 2: OWIN Middleware NuGet Packages


Listing 1:  Startup.cs (Partial)

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Claims_MVC_SAML_OWIN_SustainSys.Startup))]

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }
    }
}

Listing 2: Startup.cs (Partial)

using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Owin;
using Sustainsys.Saml2;
using Sustainsys.Saml2.Configuration;
using Sustainsys.Saml2.Metadata;
using Sustainsys.Saml2.Owin;
using Sustainsys.Saml2.WebSso;
using System;
using System.Configuration;
using System.Globalization;
using System.IdentityModel.Metadata;
using System.Security.Cryptography.X509Certificates;
using System.Web.Hosting;

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {            
            // Enable Application Sign In Cookie
            var cookieOptions = new CookieAuthenticationOptions
            {
                LoginPath = new PathString("/Account/Login"),
                AuthenticationType = "Application",
                AuthenticationMode = AuthenticationMode.Passive
            };

            app.UseCookieAuthentication(cookieOptions);

            app.SetDefaultSignInAsAuthenticationType(cookieOptions.AuthenticationType);

            app.UseSaml2Authentication(CreateSaml2Options());
        }

        private static Saml2AuthenticationOptions CreateSaml2Options()
        {
            string samlIdpUrl = ConfigurationManager.AppSettings["SAML_IDP_URL"];
            string x509FileNamePath = ConfigurationManager.AppSettings["x509_File_Path"];

            var spOptions = CreateSPOptions();
            var Saml2Options = new Saml2AuthenticationOptions(false)
            {
                SPOptions = spOptions
            };

            var idp = new IdentityProvider(new EntityId(samlIdpUrl + "Metadata"), spOptions)
            {
                AllowUnsolicitedAuthnResponse = true,
                Binding = Saml2BindingType.HttpRedirect,
                SingleSignOnServiceUrl = new Uri(samlIdpUrl)
            };

            idp.SigningKeys.AddConfiguredKey(
                new X509Certificate2(HostingEnvironment.MapPath(x509FileNamePath)));

            Saml2Options.IdentityProviders.Add(idp);

            // The Federation object registers itself with Saml2Options and loads
            // identity providers from the federation metadata, so the instance
            // does not need to be kept.
            new Federation(samlIdpUrl + "Federation", true, Saml2Options);

            return Saml2Options;
        }

        private static SPOptions CreateSPOptions()
        {
            string entityID = ConfigurationManager.AppSettings["Entity_ID"];
            string serviceProviderReturnUrl = ConfigurationManager.AppSettings["ServiceProvider_Return_URL"];
            string pfxFilePath = ConfigurationManager.AppSettings["Private_Key_File_Path"];
            string samlIdpOrgName = ConfigurationManager.AppSettings["SAML_IDP_Org_Name"];
            string samlIdpOrgDisplayName = ConfigurationManager.AppSettings["SAML_IDP_Org_Display_Name"];

            var swedish = CultureInfo.GetCultureInfo("sv-se");
            var organization = new Organization();
            organization.Names.Add(new LocalizedName(samlIdpOrgName, swedish));
            organization.DisplayNames.Add(new LocalizedName(samlIdpOrgDisplayName, swedish));
            organization.Urls.Add(new LocalizedUri(new Uri("http://www.Sustainsys.se"), swedish));

            var spOptions = new SPOptions
            {
                EntityId = new EntityId(entityID),
                ReturnUrl = new Uri(serviceProviderReturnUrl),
                Organization = organization
            };
        
            var attributeConsumingService = new AttributeConsumingService("Saml2")
            {
                IsDefault = true,
            };

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("urn:someName")
                {
                    FriendlyName = "Some Name",
                    IsRequired = true,
                    NameFormat = RequestedAttribute.AttributeNameFormatUri
                });

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("Minimal"));

            spOptions.AttributeConsumingServices.Add(attributeConsumingService);

            spOptions.ServiceCertificates.Add(new X509Certificate2(
                AppDomain.CurrentDomain.SetupInformation.ApplicationBase + pfxFilePath));

            return spOptions;
        }
    }
}

Listing 3: AccountController.cs

using Claims_MVC_SAML_OWIN_SustainSys.Models;
using Microsoft.Owin.Security;
using System.Security.Claims;
using System.Text;
using System.Web;
using System.Web.Mvc;

namespace Claims_MVC_SAML_OWIN_SustainSys.Controllers
{
    [Authorize]
    public class AccountController : Controller
    {
        public AccountController()
        {
        }

        [AllowAnonymous]
        public ActionResult Login(string returnUrl)
        {
            ViewBag.ReturnUrl = returnUrl;
            return View();
        }

        //
        // POST: /Account/ExternalLogin
        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public ActionResult ExternalLogin(string provider, string returnUrl)
        {
            // Request a redirect to the external login provider
            return new ChallengeResult(provider, Url.Action("ExternalLoginCallback", "Account", new { ReturnUrl = returnUrl }));
        }

        // GET: /Account/ExternalLoginCallback
        [AllowAnonymous]
        public ActionResult ExternalLoginCallback(string returnUrl)
        {
            var loginInfo = AuthenticationManager.AuthenticateAsync("Application").Result;
            if (loginInfo == null)
            {
                return RedirectToAction("Login");
            }

            //Loop through to get claims for logged in user
            StringBuilder sb = new StringBuilder();
            foreach (Claim cl in loginInfo.Identity.Claims)
            {
                sb.AppendLine("Issuer: " + cl.Issuer);
                sb.AppendLine("Subject: " + cl.Subject.Name);
                sb.AppendLine("Type: " + cl.Type);
                sb.AppendLine("Value: " + cl.Value);
                sb.AppendLine();
            }
            ViewBag.CurrentUserClaims = sb.ToString();
            
            // Note: Thread.CurrentPrincipal / ClaimsPrincipal.Current are not yet
            // populated at this point; read the user's claims from the identity
            // returned by AuthenticateAsync (loginInfo.Identity) instead.
          
            return View("Login", new ExternalLoginConfirmationViewModel { Email = loginInfo.Identity.Name });
        }

        // Used for XSRF protection when adding external logins
        private const string XsrfKey = "XsrfId";

        private IAuthenticationManager AuthenticationManager
        {
            get
            {
                return HttpContext.GetOwinContext().Authentication;
            }
        }
        internal class ChallengeResult : HttpUnauthorizedResult
        {
            public ChallengeResult(string provider, string redirectUri)
                : this(provider, redirectUri, null)
            {
            }

            public ChallengeResult(string provider, string redirectUri, string userId)
            {
                LoginProvider = provider;
                RedirectUri = redirectUri;
                UserId = userId;
            }

            public string LoginProvider { get; set; }
            public string RedirectUri { get; set; }
            public string UserId { get; set; }

            public override void ExecuteResult(ControllerContext context)
            {
                var properties = new AuthenticationProperties { RedirectUri = RedirectUri };
                if (UserId != null)
                {
                    properties.Dictionary[XsrfKey] = UserId;
                }
                context.HttpContext.GetOwinContext().Authentication.Challenge(properties, LoginProvider);
            }
        }
    }
}

Listing 4: Web.Config

<?xml version="1.0" encoding="utf-8"?>
<!--
  For more information on how to configure your ASP.NET application, please visit
  https://go.microsoft.com/fwlink/?LinkId=301880
  -->
<configuration>
  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="SAML_IDP_URL" value="http://localhost:52071/" />
    <add key="x509_File_Path" value="~/App_Data/stubidp.sustainsys.com.cer"/>
    <add key="Private_Key_File_Path" value="/App_Data/Sustainsys.Saml2.Tests.pfx"/>
    <add key="Entity_ID" value="http://localhost:57234/Saml2"/>
    <add key="ServiceProvider_Return_URL" value="http://localhost:57234/Account/ExternalLoginCallback"/>
    <add key="SAML_IDP_Org_Name" value="Sustainsys"/>
    <add key="SAML_IDP_Org_Display_Name" value="Sustainsys AB"/>
  </appSettings>
</configuration>

Claims returned from the identity provider to the service provider:



Azure Web Apps Background

I’ve been working with Azure Web Apps for a long time. Before the launch of Azure Web Apps for Containers (or even Azure Web App on Linux), these web apps ran on Windows virtual machines managed by Microsoft. This meant that any workload running behind IIS (e.g., ASP.NET) would run without hiccups, but that was not the case for workloads that preferred Linux over Windows (e.g., Drupal).

Furthermore, the Azure Web Apps that ran on Windows were not customizable. This meant that if your website required a custom tool to work properly, chances are it was not going to work on an Azure Web App, and you’d need to deploy a full-blown IaaS Virtual Machine. There was also a strict lockdown regarding tools and language runtime versions that you couldn’t change. So, if you wanted the latest bleeding-edge language runtime, you weren’t gonna get it.

Azure Web Apps for Containers: Drum Roll

Last year, Microsoft released the Azure Web Apps for Containers (Linux App Service plan) offering to the public. This meant we could build a custom Docker image containing all the binaries and files, and then deploy it on the PaaS offering. After working with the product for some time, I was sold.

The product was excellent, and it was clear that it had potential. Some of the benefits:

  • Ability to use a custom Docker image to run the Web App
  • Zero headaches from managing Docker containers
  • The benefits of Azure Web App on Windows like Backups, Kudu, Deployment Slots, Autoscaling (Scale up & Scale out), etc.

Suddenly, running workloads that preferred Linux or required custom binaries became extremely easy.

The Architecture

Compared to Azure Web App on Windows, the architecture implemented in Azure Web App for Containers is different.

Diagram: Azure Web App on Windows architecture

Each of the above Web Apps is strictly locked down, with minimal possibility of modification. Furthermore, the backend storage is based on network file shares, which means that even if you don’t need any storage (for example, when your app simply reads data from a database and displays it), the app still performs slowly.

Diagram: Azure Web App for Containers architecture

The major difference is that the Kudu/SCM site runs in a separate container from the actual web app. Both containers are connected to each other with a private network. In this case, each App Service Plan is deployed on a separate Virtual Machine and all the plumbing is managed by Microsoft. The benefits of this approach are:

  • Better isolation. If your Kudu is experiencing issues, it reduces the chance of taking down your actual website.
  • Ability to customize the actual web app container running the website.
  • Better resource utilization

Stay tuned for the next part, in which I’ll discuss the various storage options available in Azure Web App for Containers and their trade-offs.

Happy holidays!

Did you know you can build an intelligent twitter bot and run it for just pennies a month using Azure’s Logic and Function apps, coupled with Microsoft’s Language Understanding Intelligence Service (LUIS)? LUIS can “read” a tweet and determine the tweet’s sentiment with a little help from you. Run selected tweets through your LUIS app, determine their meaning, and then use that meaning to create a personalized tweet back at the original.

Here’s how…

Step One: Select a Twitter Query

Use Twitter’s advanced search tools to craft a query to narrow down your selection of tweets to the specific messages you want your bot to respond to. Your Azure charges will be usage-based, so you want this query to be specific enough to only pick up the kinds of messages your LUIS app will know how to respond to.
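As a purely hypothetical example, a query like the following picks up tweets directed at your bot’s handle while filtering out retweets (the handle and hashtag are placeholders):

to:YourBotHandle #AzureBot -filter:retweets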

Step Two: Create an App with LUIS

If you don’t already have a LUIS app to use, follow the steps here to create your new LUIS app. For your utterances, I recommend using a sampling of tweets returned by the Twitter query you created. Copy as many tweets from your query as possible into the LUIS test tool and assign them to the correct intent as needed. Train and publish your app before continuing.
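For reference, a published LUIS app is queried with a simple REST GET. The hedged C# sketch below shows the v2.0 endpoint shape, which returns JSON matching the Prediction classes used later in this post; the region, app ID, and subscription key are placeholders:

// Sketch: query a published LUIS app over REST (v2.0 endpoint shape).
// The region, <app-id>, and <key> values are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class LuisClient
{
    private static readonly HttpClient http = new HttpClient();

    public static async Task<string> PredictAsync(string utterance)
    {
        string url = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>" +
                     "?subscription-key=<key>&q=" + Uri.EscapeDataString(utterance);

        // The response body is the prediction JSON (query, topScoringIntent, entities, ...).
        return await http.GetStringAsync(url);
    }
}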

Step Three: Create a Function App

Use the steps here to create a new Function App with an HTTP trigger.

Once you have the app and trigger created, download the function by clicking “Download app content.”

Screenshot with Download App content highlighted

Unzip your app and open it in Visual Studio. Add classes for the LUIS Prediction:

using System.Collections.Generic;
using Newtonsoft.Json;

public class Prediction
{
    [JsonProperty(PropertyName = "query")]
    public string Query { get; set; }

    [JsonProperty(PropertyName = "topScoringIntent")]
    public Intent TopScoringIntent { get; set; }

    [JsonProperty(PropertyName = "intents")]
    public List<Intent> Intents { get; set; }

    [JsonProperty(PropertyName = "entities")]
    public List<Entity> Entities { get; set; }

    [JsonProperty(PropertyName = "luisPrediction")]
    public string LuisPrediction { get; set; }

    [JsonProperty(PropertyName = "desiredIntent")]
    public string DesiredIntent { get; set; }

    [JsonProperty(PropertyName = "isDesiredIntent")]
    public bool IsDesiredIntent { get; set; }
}

public class Intent
{
    [JsonProperty(PropertyName = "intent")]
    public string IntentValue { get; set; }

    [JsonProperty(PropertyName = "score")]
    public decimal Score { get; set; }
}

public class Entity
{
    [JsonProperty(PropertyName = "entity")]
    public string EntityValue { get; set; }

    [JsonProperty(PropertyName = "type")]
    public string Type { get; set; }

    [JsonProperty(PropertyName = "startIndex")]
    public int StartIndex { get; set; }

    [JsonProperty(PropertyName = "endIndex")]
    public int EndIndex { get; set; }

    [JsonProperty(PropertyName = "score")]
    public decimal Score { get; set; }
}

Then Modify your HTTPTrigger to parse the prediction:

[FunctionName("HttpTrigger")]
        public static async Task Run([HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            dynamic data = await req.Content.ReadAsAsync          
            var prediction = ((JObject)data).ToObject();

            var message = GetTweetMessage(prediction);

            if (!string.IsNullOrEmpty(message))
            {
                return req.CreateResponse(HttpStatusCode.OK, message);
            }

            return req.CreateResponse(HttpStatusCode.NotFound);
        }

Replace “GetTweetMessage” with your own code to interpret the intent and entities (if defined/provided) and generate your tweet message, then send the message string back in the response; one hypothetical shape is sketched below. Deploy your changes back to Azure (right-click the project in Visual Studio, select “Publish,” and follow the instructions).
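Here is that hypothetical GetTweetMessage; the intent names and reply text are invented for illustration:

// Hypothetical GetTweetMessage: map the top-scoring LUIS intent to a reply.
// Intent names and reply strings are illustrative only.
private static string GetTweetMessage(Prediction prediction)
{
    switch (prediction.TopScoringIntent?.IntentValue)
    {
        case "Greeting":
            return "Hello there! Thanks for the shout-out.";
        case "Complaint":
            return "Sorry to hear that! We're looking into it.";
        default:
            return null; // unrecognized intent: the caller responds 404 and no tweet is sent
    }
}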

Note: In order to use a free dev service plan for your function, you must turn its AlwaysOn setting to Off. You can only do this if you are using an HTTP trigger; a timer trigger won’t fire if you turn off AlwaysOn.

Do this by going to Application settings:

Screenshot with Application Settings highlighted
Toggle AlwaysOn and save the changes. You may now go to Platform features:

Screenshot with Platform features highlighted.

Then All Settings:

All Settings is highlighted.

Then scroll down to App Service Plan and choose Change App Service Plan:

All settings is highlighted.

Change the app service plan to your devtest (free) service plan.

Step Four: Create a Logic App

General information on creating new logic apps can be found here.

Once you’ve created your logic app, go to the Logic app designer:

Logic app designer is highlighted.
Create your first workflow item: a Twitter search tweets trigger. Use your search query from above and change the interval as needed:

Screenshot of query

Create your next workflow item by clicking the plus button at the bottom of your Twitter search tweets trigger. Add a new LUIS get prediction action. (You will be prompted for your LUIS connection and key; you can find these in your LUIS app.) The connection value is your LUIS endpoint. Select your LUIS-connected app for the App Id, then click on the Utterance Text field. A flyout list of dynamic options will appear; choose Tweet text under the Twitter options. Leave Desired Intent blank.

Screenshot of Get Prediction input

Add a new flow item under Control -> Condition:

Screenshot of new flow item input.

This workflow checks for the Top Scoring Intent Name from LUIS. We don’t want to continue passing this message to our Azure function if LUIS did not recognize its intent, so we only continue if Top Scoring Intent is not equal to None.

The control flow adds two boxes below it: one for If True, the other for If False. Leave If False blank; the workflow will stop there if LUIS has not returned a usable intent. In the If True box, add a new action for Azure Functions and select the function you created above.

In the Request Body field of your function trigger, put the LUIS Body parameter. Then add another Twitter action to Post a tweet. Use the Function’s body to post the resulting message. Include a link back to the original tweet to make the tweet appear as a quoted retweet:

Screenshot of If True input

Your overall logic should look something like this: (You can see this bot in action at @LeksBot.)

Twitter bot logic is pictured.

Step Five: Train & Improve Your Bot

Open your logic app and scroll down to Runs history. You can see each time your bot has triggered. If you see tweets that weren’t responded to properly, you can open up each run and inspect the flow. You can see the run’s parameters and make adjustments. Paste the tweet into your LUIS app and train it on the correct intent. Each time you do this your app will become “smarter” and make fewer mistakes.

After you have re-trained LUIS (make sure you click Publish!) or made any adjustments to your flow, you can resubmit the same run (tweet) and make sure it’s processed correctly. Re-train and adjust as needed to improve your bot’s experience.

PaaS & Cloud-Native Technologies

If you have worked with Azure for a while, you’re aware of the benefits of PaaS, such as the ability to have the cloud provider manage the underlying storage and compute infrastructure so you don’t have to worry about things like patching, hardware failures, and capacity management. Another important benefit of PaaS is the rich ecosystem of value-add services like database, identity, and monitoring as a service that can help reduce time to market.

So if PaaS is so cool, why are cloud-native technologies like Kubernetes and Prometheus all the rage these days? In fact, it’s not just Kubernetes and Prometheus; there is a groundswell of related cloud-native projects. Just visit the cloud-native landscape to see for yourself.

Key Benefits of Cloud-Native Architecture

Here are ten reasons why cloud-native architecture is getting so much attention:

  1. Application as a first-class construct – Rather than speaking in terms of VMs, storage, firewall rules, etc., cloud-native deals in application-specific constructs, whether that is a Helm chart that defines the blueprint of your application or a service mesh configuration that defines the network in application-specific terms.
  2. Portability – Applications can run on any CNCF-certified cloud, as well as on-premises and on edge devices. The API surface is exactly the same.
  3. Cost efficiency – By densely packing application components (or containers) onto the underlying cluster, the cost of running an application is significantly lower.
  4. Extensibility model – A standards-based extensibility model lets you tap into innovations offered by the cloud provider of your choice. For instance, using the service catalog and Open Service Broker for Azure, you can package a Kubernetes application with a service like Cosmos DB.
  5. Language agnostic – Cloud-native architecture supports a wide variety of languages and frameworks, including .NET, Java, Node, etc.
  6. Scale your ops teams – Because the underlying infrastructure is decoupled from the applications, there is greater consistency at the lower levels of your infrastructure, which allows your ops team to scale much more efficiently.
  7. Consistent and “decoupled” – In addition to that consistency at the lower levels of infrastructure, application developers are exposed to a consistent set of constructs for deploying their applications, for example Pod, Service, Deployment, and Job. These constructs remain the same across cloud, on-premises, and edge environments, and they also help decouple developers from the underlying layers (the cluster, kernel, and hardware layers shown in the diagram below).
  8. Declarative model – Kubernetes, Istio, and other projects are based on a declarative, configuration-based model that supports self-healing: any deviation from the “desired state” is automatically “healed” by the underlying system. Declarative models reduce the need for imperative automation scripts that can be expensive to develop and maintain.
  9. Community momentum – As stated earlier, the community momentum behind CNCF is unprecedented. Kubernetes is the #1 open-source project in terms of contributions. In addition to Kubernetes and Prometheus, close to 500 projects have collectively attracted over $5B in venture funding! In the latest survey (August 2018), the use of cloud-native technologies in production has grown 200% since December 2017.
  10. Ticket to DevOps 2.0 – Cloud-native delivers the well-recognized benefits of what is being termed “DevOps 2.0”: hermetically sealed and immutable container images, microservices, and continuous deployment. Please refer to the excellent book by Viktor Farcic.

Now that we understand the key benefits of cloud-native technologies, let us compare it to a traditional PaaS offering:

Attribute | Traditional PaaS | Cloud-Native as a Service
Portability | Limited | Advanced
Application as a first-class construct | Limited (application construct limited to the specific PaaS service) | Advanced constructs, including Helm charts and network and security policies
Managed offering | Mature (fully managed) | Maturing (some aspects of cluster management currently require attention)
Stateful applications | Advanced capabilities offered by database-as-a-service offerings | Some cloud-native support for stateful applications (cloud-native applications can also be integrated with PaaS database offerings through the service catalog)
Extensibility | Limited | Advanced (including the Container Network Interface and Container Runtime Interface)

Azure & CNCF

Fortunately, Microsoft has been a strong supporter of CNCF, having joined back in 2017 as a platinum member. Since then, Microsoft has made significant investments in a CNCF-compliant offering in the form of Azure Kubernetes Service (AKS). AKS combines the aforementioned benefits of cloud-native computing with a fully managed offering; think of AKS as a PaaS solution that is also CNCF-compliant.

Additionally, AKS addresses enterprise requirements such as compliance standards and integration with capabilities like Azure AD, Key Vault, Azure Files, etc. Finally, offerings like Azure Dev Spaces and Azure DevOps greatly enhance the CI/CD experience of working with cloud-native applications. I would be remiss not to mention the VS Code extension for Kubernetes, which also brings useful tooling to the mix.

Cloud-Native Use Cases

Here are a few key use cases for cloud-native applications. Microservices are something you would expect, of course. Customers are also using AKS to run Apache Spark. There is also thinking around managing IoT Edge deployments right from within the Kubernetes environment. Finally, “lift and shift to containers” is getting a lot of attention from customers as the preferred route for moving on-premises applications to the cloud. Please refer to our recent blog post on this very topic, “A ‘Modernize-by-Shifting’ App Modernization Approach,” for more details!

Cloud-Native Scenarios


AIS’ work with the NFL Players Association (NFLPA) was showcased as a Microsoft Featured Case Study. This customer success story was our most recent project with NFLPA, as they’ve sought our help to modernize multiple IT systems and applications over the years. We were proud to tackle the latest challenge: Creating a single, shared player management system, using Dynamics 365, for the NFLPA and all its sister organizations.

The Challenge

As the nonprofit union for NFL players, the NFLPA constantly looks for ways to better serve its members (current and former NFL players) during and after their football careers. But multiple player management systems across the associated support organizations resulted in poor customer service and missed opportunities for NFLPA members. Valuable data captured by one department wasn’t accessible to another, causing headaches and delays when licensing opportunities arose, and limiting the organization’s ability to be proactive about the challenges members face after retirement.

The Solution: A Single Source

We used Microsoft Dynamics 365 to create a single, shared player management system, called PA.NET, for all the NFLPA organizations. We customized Dynamics 365 extensively to meet the unique needs of the NFLPA and integrated it with the organization’s Office 365 applications.

At the same time, we shifted all legacy IT systems (websites, financial applications, and others) to Microsoft Azure, giving NFLPA an entirely cloud-based business.

The Results: More Opportunities, More Time, Fewer Costs

With one master set of player data and powerful reporting tools that employees use to find answers to their own questions, the NFLPA can uncover marketing and licensing opportunities for more players and identify other ways to help its members.

Because PA.NET automates so many previously manual processes, it frees up hours of drudge work each week for NFLPA employees, time they now convert into creative problem solving for members. And the IT staff has freed up 30 percent more time by not having to babysit infrastructure, time it uses to come up with new technology innovations.

By moving its business systems to the cloud, the NFLPA can scale its infrastructure instantly when traffic spikes—such as when football season ends and licensing offers heat up. No more over-provisioning servers to meet worst-case needs. In fact, no more servers, period. With cloud-based systems, the NFLPA no longer has to refresh six-figure server and storage systems every few years.

Read the full Microsoft Featured Case Study here to learn more about our work and more about great work the NFLPA does on behalf of its members.


A big announcement from Microsoft this month: The introduction of Azure DevOps, the most complete offering of proven, modern DevOps tools and processes available in the public cloud. Used together, the Azure DevOps services span the entire breadth of the development lifecycle so enterprises can modernize apps in a faster and more streamlined way.

What Is DevOps, Anyway?

DevOps solutions bring together people, processes, and technology, automating and streamlining software delivery to provide continuous value to your users.

If you want your next development or app modernization project to be a success, DevOps is the way to go.

High-performance DevOps enterprises achieve increased revenue with a faster time to market and produce solutions that are more powerful, flexible, and open. (Yes, Microsoft has been partnering with the open-source community to ship products that work for everyone.) New features can be safely deployed to users as soon as they’re ready vs. bundling them together in one large update down the road.

New Services & Tools in Azure DevOps

  • Azure Pipelines – Continuously build, test, and deploy in any language, to any platform or cloud. Azure Pipelines offers unlimited build minutes and 10 free parallel jobs for all open-source projects!
  • Azure Boards – Plan, track and discuss your work and ideas across teams with proven agile tools.
  • Azure Artifacts – With the click of a button, add artifacts to your CI/CD pipeline.
  • Azure Repos – Unlimited private Git repos (cloud-hosted) allow team members to build and collaborate better.
  • Azure Test Plans – These manual and exploratory testing tools will allow you to test and ship with ease and confidence.

Azure DevOps is what’s next for Visual Studio Team Services. VSTS users will be automatically upgraded without jeopardizing functionality.

With the services provided with Azure DevOps, you can choose the tools and cloud services that you want to use and build end-to-end solutions for an enterprise-level toolchain. As long-time believers in both Azure and DevOps, we’re really excited about this offering and what it can offer our clients.


The President’s Management Agenda (PMA) called on all government agencies to accelerate their IT modernization efforts with a continued focus on security. So…now what?

At this month’s #AzureGov meetup, our panel of speakers discussed exactly how agencies can navigate the world of automated ATOs, revamped TIC compliance, and beyond, while fully realizing the benefits of the cloud and achieving greater agility and a stronger security posture.

Last night’s speakers included:

• Mark Cohn, CTO, Unisys Federal
• Greg Elin, CEO, GovReady & Former Chief Data Officer, Federal Communications Commission
• Nate Johnson, Cloud Security & Compliance Director, Microsoft
• Scott Thompson, Cloud Solution Strategist, Microsoft

For a replay of the full Meetup, click here. For past Meetups, visit the Azure Government Meetup YouTube channel here.

The next Meetup is set for Wednesday, September 26. RSVP today to claim your spot and join us for great networking and presentations. We hope to see you there!

In this blog post, I discuss an app modernization approach that we call “modernize-by-shifting.” In essence, we take an existing application and move it to “managed” container hosting environments like Azure Kubernetes Service or Azure Service Fabric Mesh. The primary goal of this app modernization strategy is to make as few changes as possible to the existing application codebase. This approach is markedly different from a “lift-and-shift” approach, where workloads are migrated to cloud IaaS unchanged, with little to no use of cloud-native capabilities.

Step One of App Modernization by Shifting

As the first step of this approach, an existing application is broken into a set of container images that include everything needed to run a portion of the application: code, runtime, system tools, system libraries, and settings. Approaches to breaking the application into smaller parts can vary based on its original architecture. For example, if we begin with a multi-tier application, each tier (e.g., presentation, application, business, data access) could map to a container image. While this approach will admittedly lead to coarser-grained images compared to a puritanical microservices-based approach of lightweight images, it should be seen as the first step in modernizing the application.


One of the biggest roadblocks to government digital transformation is the lack of effective IT governance. Unresolved concerns, including privacy, security, and organizational silos that limit data sharing and analysis, continue to pose hurdles for agencies.

Last night’s Azure Government Meetup in Washington, D.C. featured a stellar lineup of industry-leading experts who shared insights and strategies on achieving effective IT governance in areas including identity, portfolio and records management.

If you missed it, you can catch the replay here.