Accurately identifying and authenticating users is an essential requirement for any modern application. As modern applications continue to migrate beyond the physical boundaries of the data center and into the cloud, balancing the ability to leverage trusted identity stores with the need for enhanced flexibility to support this migration can be tricky. Additionally, evolving requirements like allowing multiple partners, authenticating across devices, or supporting new identity sources push application teams to embrace modern authentication protocols.

Microsoft states that federated identity is the ability to “Delegate authentication to an external identity provider. This can simplify development, minimize the requirement for user administration, and improve the user experience of the application.”

As organizations expand their user base to authenticate multiple users, partners, and collaborators in their systems, the need for federated identity becomes imperative.

The Benefits of Federated Authentication

Federated authentication allows organizations to reliably outsource their authentication mechanism, letting them focus on actually providing their service instead of spending time and effort on authentication infrastructure. An organization or service that provides authentication to other systems is called an Identity Provider (IdP); it provides federated identity authentication to the service provider/relying party. By using a common identity provider, relying applications can give users access to other applications and web sites through single sign-on (SSO).

SSO gives users quick access to multiple web sites without the need to manage individual passwords. Relying party applications communicate with a service provider, which in turn communicates with the identity provider to get user claims (claims authentication).

For example, an application registered in Azure Active Directory (AAD) relies on it as the identity provider. Users accessing an application registered in AAD are prompted for their credentials, and upon successful authentication by AAD, access tokens are sent to the application. The valid claims token authenticates the user, and the application performs any further authorization. The application doesn't need its own authentication mechanism thanks to the federated authentication from AAD, and the process can be combined with multi-factor authentication (MFA) as well.
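
To make that concrete, here's a minimal sketch of delegating sign-in to AAD from an ASP.NET application using the OWIN OpenID Connect middleware (the client ID, tenant, and redirect URI below are placeholders, not values from a real registration):

using Microsoft.Owin.Security.OpenIdConnect;
using Owin;

public partial class Startup
{
    public void ConfigureAadAuth(IAppBuilder app)
    {
        // Placeholder values; substitute your AAD app registration's settings.
        app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
        {
            ClientId = "<application-id-from-aad>",
            Authority = "https://login.microsoftonline.com/<tenant-id>",
            RedirectUri = "https://localhost:44300/"
        });
    }
}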

Glossary

Abbreviation | Description
STS | Security Token Service
IdP | Identity Provider
SP | Service Provider
POC | Proof of Concept
SAML | Security Assertion Markup Language
RP | Relying Party (same as the service provider); calls the Identity Provider to get tokens
AAD | Azure Active Directory
ADDS | Active Directory Domain Services
ADFS | Active Directory Federation Services
OWIN | Open Web Interface for .NET
SSO | Single sign-on
MFA | Multi-factor authentication

OpenID Connect/OAuth 2.0 & SAML

SAML and OpenID Connect/OAuth are the two main protocol families that modern applications implement and consume as a service to authenticate their users, and both provide a framework for SSO/federated authentication. OpenID Connect is an open standard for authentication and builds on OAuth 2.0 for authorization. SAML is also an open standard and provides both authentication and authorization. OpenID Connect tokens are JSON (JWT); OAuth 2.0 tokens can be either JSON or SAML2 assertions, whereas SAML is XML-based. OpenID Connect/OAuth are best suited for consumer applications like mobile apps, while SAML is preferred for enterprise-wide SSO implementations.
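
To make the token-format difference concrete, here is a minimal sketch that decodes the claims of an OpenID Connect JWT in C#, assuming the System.IdentityModel.Tokens.Jwt NuGet package; a SAML assertion carrying the same information would instead arrive as signed XML:

using System;
using System.IdentityModel.Tokens.Jwt;

class JwtInspector
{
    static void Main()
    {
        // Placeholder: a raw JWT issued by an OpenID Connect provider.
        string rawToken = "<id-token-from-your-provider>";

        // Decode (without validating the signature) to inspect the JSON claims.
        var jwt = new JwtSecurityTokenHandler().ReadJwtToken(rawToken);

        foreach (var claim in jwt.Claims)
        {
            Console.WriteLine($"{claim.Type}: {claim.Value}");
        }
    }
}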

Microsoft Azure Cloud Identity Providers

The Microsoft Azure cloud provides numerous authentication methods for cloud-hosted and "hybrid" on-premises applications, with options for either OpenID Connect/OAuth or SAML authentication. Identity solutions include Azure Active Directory (AAD), Azure AD B2C, Azure AD B2B, Azure Pass-through Authentication, Active Directory Federation Services (ADFS), migrating on-premises ADFS applications to Azure, and Azure AD Connect with federation and SAML as the IdP.

Identity providers that implement the SAML 2.0 standard include Azure Active Directory (AAD), Okta, OneLogin, PingOne, and Shibboleth.

A Deep Dive Implementation

This blog post walks through an example I recently worked on using federated authentication with the SAML protocol. I was able to dive deep into identity and authentication with an assigned proof of concept (POC): creating a claims-aware application within an ASP.NET Azure Web Application using federated authentication and the SAML protocol. I used OWIN middleware to connect to the Identity Provider.

The scope of the POC was not to develop an Identity Provider/STS (Security Token Service) but to develop a Service Provider/Relying Party (RP) that sends a SAML request and receives SAML tokens/assertions. The SAML tokens are used by the calling application to authorize the user into the application.

Given the scope, I used a stub Identity Provider so that the authentication implementation could later be plugged into a production application and communicate with other enterprise SAML Identity Providers.

The Approach

For an application to be claims-aware, it needs to obtain a claims token from an Identity Provider. The claims contained in the token are then used for additional authorization in the application. Claims tokens are issued by an Identity Provider after authenticating the user. The login page for the application (where the user signs in) can be a Service Provider (Relying Party) itself or just an ASP.NET UI application that communicates with the Service Provider via a separate implementation.
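
As a minimal sketch (the claim type and role value here are illustrative, not from the POC), a claims-aware controller action might use a claim for authorization like this:

using System.Security.Claims;
using System.Web.Mvc;

public class ReportsController : Controller
{
    public ActionResult Index()
    {
        // After authentication, the principal carries the claims from the token.
        var identity = User.Identity as ClaimsIdentity;

        // The claim type and expected value depend on the Identity Provider.
        string role = identity?.FindFirst(ClaimTypes.Role)?.Value;

        if (role != "BudgetApprover")
        {
            return new HttpUnauthorizedResult();
        }

        return View();
    }
}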

Figure 1: Overall architecture – Identity Provider Implementation


The Implementation

An ASP.NET MVC application was implemented as the SAML Service Provider, with OWIN middleware initiating the connection to the SAML Identity Provider.

First, communication is initiated with a SAML request from the service provider. The identity provider validates the SAML request, verifies and authenticates the user, and sends back the SAML tokens/assertions. The claims returned to the service provider are then sent back to the client application. Finally, the client application can authorize the user based on roles or other more refined permissions after reviewing the claims returned from the SAML identity provider.

Sustainsys is an open-source solution whose SAML2 libraries add SAML2P support to ASP.NET web sites and serve as the SAML2 Service Provider (SP). For the proof of concept effort, I used the Sustainsys stub SAML identity provider to test the SAML service provider. Sustainsys also provides sample service provider implementations that work against the stub.

Implementation steps:

  • Start with an ASP.NET MVC application.
  • Add NuGet packages for OWIN middleware and SustainSys SAML2 libraries to the project (Figure 2).
  • Modify Startup.cs (partial classes) to build the SAML request; set all the authentication types such as cookies, default sign-in, and SAML2 (Listing 2).
  • In the CreateSaml2Options and CreateSPOptions methods, the SAML request is built with the private and public certificates, the federation SAML Identity Provider URL, etc.
  • The service provider establishes the connection to the identity provider on startup and is ready to listen for client requests.
  • Cookie authentication is set, the default authentication type is "Application," and the SAML authentication request is prepared by forming the SAML request.
  • When the SAML request options are set, instantiate the Identity Provider with its URL and options, and set the federation flag to true. The Service Provider is instantiated with the SAML request options for the SAML identity provider. When the user signs in, the OWIN middleware issues a challenge to the Identity Provider and gets the SAML response and claims/assertions back to the service provider.
  • The OWIN middleware issues the challenge to the SAML Identity Provider with a callback method (ExternalLoginCallback(...)). The identity provider returns to that callback method after authenticating the user (Listing 3).
  • AuthenticateAsync will have the claims returned from the Identity Provider, and the user is authenticated at this point. The application can use the claims to authorize the user.
  • No additional web configuration is needed for the SAML Identity Provider communication, but the application's config values can be persisted in web.config (Listing 4).

Figure 2: OWIN Middleware NuGet Packages


Listing 1: Startup.cs (Partial)

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Claims_MVC_SAML_OWIN_SustainSys.Startup))]

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }
    }
}

Listing 2: Startup.cs (Partial)

using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Owin;
using Sustainsys.Saml2;
using Sustainsys.Saml2.Configuration;
using Sustainsys.Saml2.Metadata;
using Sustainsys.Saml2.Owin;
using Sustainsys.Saml2.WebSso;
using System;
using System.Configuration;
using System.Globalization;
using System.IdentityModel.Metadata;
using System.Security.Cryptography.X509Certificates;
using System.Web.Hosting;

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {            
            // Enable Application Sign In Cookie
            var cookieOptions = new CookieAuthenticationOptions
            {
                LoginPath = new PathString("/Account/Login"),
                AuthenticationType = "Application",
                AuthenticationMode = AuthenticationMode.Passive
            };

            app.UseCookieAuthentication(cookieOptions);

            app.SetDefaultSignInAsAuthenticationType(cookieOptions.AuthenticationType);

            app.UseSaml2Authentication(CreateSaml2Options());
        }

        private static Saml2AuthenticationOptions CreateSaml2Options()
        {
            string samlIdpUrl = ConfigurationManager.AppSettings["SAML_IDP_URL"];
            string x509FileNamePath = ConfigurationManager.AppSettings["x509_File_Path"];

            var spOptions = CreateSPOptions();
            var Saml2Options = new Saml2AuthenticationOptions(false)
            {
                SPOptions = spOptions
            };

            var idp = new IdentityProvider(new EntityId(samlIdpUrl + "Metadata"), spOptions)
            {
                AllowUnsolicitedAuthnResponse = true,
                Binding = Saml2BindingType.HttpRedirect,
                SingleSignOnServiceUrl = new Uri(samlIdpUrl)
            };

            idp.SigningKeys.AddConfiguredKey(
                new X509Certificate2(HostingEnvironment.MapPath(x509FileNamePath)));

            Saml2Options.IdentityProviders.Add(idp);
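            // Register the federation metadata; the Federation constructor is
            // assumed to wire itself into Saml2Options, which is why the new
            // instance is not assigned to a variable.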
            new Federation(samlIdpUrl + "Federation", true, Saml2Options);

            return Saml2Options;
        }

        private static SPOptions CreateSPOptions()
        {
            string entityID = ConfigurationManager.AppSettings["Entity_ID"];
            string serviceProviderReturnUrl = ConfigurationManager.AppSettings["ServiceProvider_Return_URL"];
            string pfxFilePath = ConfigurationManager.AppSettings["Private_Key_File_Path"];
            string samlIdpOrgName = ConfigurationManager.AppSettings["SAML_IDP_Org_Name"];
            string samlIdpOrgDisplayName = ConfigurationManager.AppSettings["SAML_IDP_Org_Display_Name"];

            var swedish = CultureInfo.GetCultureInfo("sv-se");
            var organization = new Organization();
            organization.Names.Add(new LocalizedName(samlIdpOrgName, swedish));
            organization.DisplayNames.Add(new LocalizedName(samlIdpOrgDisplayName, swedish));
            organization.Urls.Add(new LocalizedUri(new Uri("http://www.Sustainsys.se"), swedish));

            var spOptions = new SPOptions
            {
                EntityId = new EntityId(entityID),
                ReturnUrl = new Uri(serviceProviderReturnUrl),
                Organization = organization
            };
        
            var attributeConsumingService = new AttributeConsumingService("Saml2")
            {
                IsDefault = true,
            };

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("urn:someName")
                {
                    FriendlyName = "Some Name",
                    IsRequired = true,
                    NameFormat = RequestedAttribute.AttributeNameFormatUri
                });

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("Minimal"));

            spOptions.AttributeConsumingServices.Add(attributeConsumingService);

            spOptions.ServiceCertificates.Add(new X509Certificate2(
                AppDomain.CurrentDomain.SetupInformation.ApplicationBase + pfxFilePath));

            return spOptions;
        }
    }
}

Listing 3: AccountController.cs

using Claims_MVC_SAML_OWIN_SustainSys.Models;
using Microsoft.Owin.Security;
using System.Security.Claims;
using System.Text;
using System.Web;
using System.Web.Mvc;

namespace Claims_MVC_SAML_OWIN_SustainSys.Controllers
{
    [Authorize]
    public class AccountController : Controller
    {
        public AccountController()
        {
        }

        [AllowAnonymous]
        public ActionResult Login(string returnUrl)
        {
            ViewBag.ReturnUrl = returnUrl;
            return View();
        }

        //
        // POST: /Account/ExternalLogin
        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public ActionResult ExternalLogin(string provider, string returnUrl)
        {
            // Request a redirect to the external login provider
            return new ChallengeResult(provider, Url.Action("ExternalLoginCallback", "Account", new { ReturnUrl = returnUrl }));
        }

        // GET: /Account/ExternalLoginCallback
        [AllowAnonymous]
        public ActionResult ExternalLoginCallback(string returnUrl)
        {
            var loginInfo = AuthenticationManager.AuthenticateAsync("Application").Result;
            if (loginInfo == null)
            {
                return RedirectToAction("Login");
            }

            //Loop through to get claims for logged in user
            StringBuilder sb = new StringBuilder();
            foreach (Claim cl in loginInfo.Identity.Claims)
            {
                sb.AppendLine("Issuer: " + cl.Issuer);
                sb.AppendLine("Subject: " + cl.Subject.Name);
                sb.AppendLine("Type: " + cl.Type);
                sb.AppendLine("Value: " + cl.Value);
                sb.AppendLine();
            }
            ViewBag.CurrentUserClaims = sb.ToString();
            
            //Note: the ASP.NET ClaimsPrincipal (Thread.CurrentPrincipal / User) is still empty here;
            //the claims live on the Identity returned from AuthenticateAsync, as shown above.
            //var identity = (ClaimsPrincipal)Thread.CurrentPrincipal;
            //var claims = identity.Claims;
            //string nameClaimValue = User.Identity.Name;
            //IEnumerable<Claim> claimss = ClaimsPrincipal.Current.Claims;
          
            return View("Login", new ExternalLoginConfirmationViewModel { Email = loginInfo.Identity.Name });
        }

        // Used for XSRF protection when adding external logins
        private const string XsrfKey = "XsrfId";

        private IAuthenticationManager AuthenticationManager
        {
            get
            {
                return HttpContext.GetOwinContext().Authentication;
            }
        }
        internal class ChallengeResult : HttpUnauthorizedResult
        {
            public ChallengeResult(string provider, string redirectUri)
                : this(provider, redirectUri, null)
            {
            }

            public ChallengeResult(string provider, string redirectUri, string userId)
            {
                LoginProvider = provider;
                RedirectUri = redirectUri;
                UserId = userId;
            }

            public string LoginProvider { get; set; }
            public string RedirectUri { get; set; }
            public string UserId { get; set; }

            public override void ExecuteResult(ControllerContext context)
            {
                var properties = new AuthenticationProperties { RedirectUri = RedirectUri };
                if (UserId != null)
                {
                    properties.Dictionary[XsrfKey] = UserId;
                }
                context.HttpContext.GetOwinContext().Authentication.Challenge(properties, LoginProvider);
            }
        }
    }
}
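
Building on Listing 3, here is a short, illustrative sketch of how the returned claims might drive an authorization decision (the claim type and role value are hypothetical, not part of the POC):

// Inside ExternalLoginCallback (Listing 3), after AuthenticateAsync completes:
var result = AuthenticationManager.AuthenticateAsync("Application").Result;

// Illustrative role check; real claim types/values depend on the IdP.
bool canViewReports = result != null &&
    result.Identity.HasClaim(System.Security.Claims.ClaimTypes.Role, "ReportViewer");

if (!canViewReports)
{
    // Deny access or redirect to an error page.
}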

Listing 4: Web.Config

<?xml version="1.0" encoding="utf-8"?>
<!--
  For more information on how to configure your ASP.NET application, please visit
  https://go.microsoft.com/fwlink/?LinkId=301880
  -->
<configuration>
  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="SAML_IDP_URL" value="http://localhost:52071/" />
    <add key="x509_File_Path" value="~/App_Data/stubidp.sustainsys.com.cer"/>
    <add key="Private_Key_File_Path" value="/App_Data/Sustainsys.Saml2.Tests.pfx"/>
    <add key="Entity_ID" value="http://localhost:57234/Saml2"/>
    <add key="ServiceProvider_Return_URL" value="http://localhost:57234/Account/ExternalLoginCallback"/>
    <add key="SAML_IDP_Org_Name" value="Sustainsys"/>
    <add key="SAML_IDP_Org_Display_Name" value="Sustainsys AB"/>
  </appSettings>
</configuration>

Claims returned from the identity provider to the service provider:



Azure Web Apps Background

I’ve been working with Azure Web Apps for a long time. Before the launch of Azure Web Apps for Containers (or even Azure Web App on Linux), these web apps ran on Windows Virtual Machines managed by Microsoft. This meant that any workload running behind IIS (i.e., ASP.NET) would run without hiccups — but that was not the case with workloads that preferred Linux over Windows (e.g., Drupal).

Furthermore, the Azure Web Apps that ran on Windows were not customizable. This meant that if your website required a custom tool to work properly, chances are it was not going to work on an Azure Web App, and you’d need to deploy a full-blown IaaS Virtual Machine. There was also a strict lockdown regarding tools and language runtime versions that you couldn’t change. So, if you wanted the latest bleeding-edge language runtime, you weren’t gonna get it.

Azure Web Apps for Containers: Drum Roll

Last year, Microsoft released the Azure Web Apps for Containers (Linux App Service plan) offering to the public. This meant we could build a custom Docker image containing all the binaries and files, and then deploy it on the PaaS offering. After working with the product for some time, I was genuinely impressed.

The product was excellent, and it was clear that it had potential. Some of the benefits:

  • Ability to use a custom Docker image to run the Web App
  • Zero headaches from managing Docker containers
  • The benefits of Azure Web App on Windows like Backups, Kudu, Deployment Slots, Autoscaling (Scale up & Scale out), etc.

Suddenly, running workloads that preferred Linux or required custom binaries became extremely easy.

The Architecture

Compared to Azure Web App on Windows, the architecture implemented in Azure Web App for Containers is different.

Diagram: Azure Web App on Windows architecture

Each of these Web Apps is strictly locked down, with minimal possibility of modification. Furthermore, the backend storage is based on network file shares, which means that even if you don’t need any storage (for example, when your app simply reads data from the database and displays it back), the app would still perform slowly.

Diagram: Azure Web App for Containers architecture

The major difference is that the Kudu/SCM site runs in a separate container from the actual web app. Both containers are connected to each other with a private network. In this case, each App Service Plan is deployed on a separate Virtual Machine and all the plumbing is managed by Microsoft. The benefits of this approach are:

  • Better isolation. If your Kudu is experiencing issues, it reduces the chance of taking down your actual website.
  • Ability to customize the actual web app container running the website.
  • Better resource utilization

Stay tuned for the next part, in which I’ll discuss the various storage options available in Azure Web App for Containers and their trade-offs.

Happy holidays!

Did you know you can build an intelligent Twitter bot and run it for just pennies a month using Azure’s Logic Apps and Function Apps, coupled with Microsoft’s Language Understanding Intelligent Service (LUIS)? LUIS can “read” a tweet and determine the tweet’s sentiment with a little help from you. Run selected tweets through your LUIS app, determine their meaning, and then use that meaning to create a personalized tweet back at the original.

Here’s how…

Step One: Select a Twitter Query

Use Twitter’s advanced search tools to craft a query to narrow down your selection of tweets to the specific messages you want your bot to respond to. Your Azure charges will be usage-based, so you want this query to be specific enough to only pick up the kinds of messages your LUIS app will know how to respond to.

Step Two: Create an App with LUIS

If you don’t already have a LUIS app to use, follow the steps here to create your new LUIS app. For your utterances, I recommend using a sampling of tweets that were returned by the Twitter query you created. Copy as many tweets from your query as possible into the LUIS test tool and assign them to the correct intent as needed. Train and publish your app before continuing.

Step Three: Create a Function App

Use the steps here to create a new Function App with an HTTP trigger.

Once you have the app and trigger created, download the function by clicking “Download app content.”

Screenshot with Download App content highlighted

Unzip your app and open it in Visual Studio. Add classes for the LUIS Prediction:

using System.Collections.Generic;
using Newtonsoft.Json;

public class Prediction
{
    [JsonProperty(PropertyName = "query")]
    public string Query { get; set; }

    [JsonProperty(PropertyName = "topScoringIntent")]
    public Intent TopScoringIntent { get; set; }

    [JsonProperty(PropertyName = "intents")]
    public List<Intent> Intents { get; set; }

    [JsonProperty(PropertyName = "entities")]
    public List<Entity> Entities { get; set; }

    [JsonProperty(PropertyName = "luisPrediction")]
    public string LuisPrediction { get; set; }

    [JsonProperty(PropertyName = "desiredIntent")]
    public string DesiredIntent { get; set; }

    [JsonProperty(PropertyName = "isDesiredIntent")]
    public bool IsDesiredIntent { get; set; }
}

public class Intent
{
    [JsonProperty(PropertyName = "intent")]
    public string IntentValue { get; set; }

    [JsonProperty(PropertyName = "score")]
    public decimal Score { get; set; }
}

public class Entity
{
    [JsonProperty(PropertyName = "entity")]
    public string EntityValue { get; set; }

    [JsonProperty(PropertyName = "type")]
    public string Type { get; set; }

    [JsonProperty(PropertyName = "startIndex")]
    public int StartIndex { get; set; }

    [JsonProperty(PropertyName = "endIndex")]
    public int EndIndex { get; set; }

    [JsonProperty(PropertyName = "score")]
    public decimal Score { get; set; }
}

Then Modify your HTTPTrigger to parse the prediction:

[FunctionName("HttpTrigger")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Read the request body and deserialize it into the Prediction class.
    dynamic data = await req.Content.ReadAsAsync<object>();
    var prediction = ((JObject)data).ToObject<Prediction>();

    var message = GetTweetMessage(prediction);

    if (!string.IsNullOrEmpty(message))
    {
        return req.CreateResponse(HttpStatusCode.OK, message);
    }

    return req.CreateResponse(HttpStatusCode.NotFound);
}

Replace “GetTweetMessage” with your own code to interpret the intent and entities (if defined/provided) and generate your tweet message, then send the message string back in the response. A rough illustration follows below. After that, deploy your changes back to Azure (right-click the project in Visual Studio, select “Publish,” and follow the instructions).
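
As a rough illustration (the intent names and reply text are hypothetical, not from a real LUIS app), GetTweetMessage could map the top-scoring intent to a canned reply:

private static string GetTweetMessage(Prediction prediction)
{
    // Hypothetical intents; use the intents defined in your own LUIS app.
    switch (prediction?.TopScoringIntent?.IntentValue)
    {
        case "Greeting":
            return "Hello there! Thanks for the shout-out.";
        case "Complaint":
            return "Sorry to hear that! We'll look into it.";
        default:
            return null; // Unrecognized intent: no reply is sent.
    }
}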

Note: In order to use a free dev service plan for your function, you must turn its AlwaysOn setting to Off. You can only do this if you are using an HTTP trigger; a timer trigger won’t fire if you turn off AlwaysOn.

Do this by going to Application settings:

Screenshot with Application Settings highlighted
Toggle AlwaysOn and save the changes. You may now go to Platform features:

Screenshot with Platform features highlighted.

Then All Settings:

All Settings is highlighted.

Then scroll down to App Service Plan and choose Change App Service Plan:

All settings is highlighted.

Change the app service plan to your devtest (free) service plan.

Step Four: Create a Logic App

General information on creating new logic apps can be found here.

Once you’ve created your logic app, go to the Logic app designer:

Logic app designer is highlighted.
Create your first workflow item: a Twitter search tweets trigger. Use your search query from above and change the interval as needed:

Screenshot of query

Create your next workflow item by clicking the plus button at the bottom of your Twitter search tweets trigger. Add a new LUIS get prediction action. (You will be prompted for your LUIS connection and key; you can find these in your LUIS app.) The connection value is your LUIS endpoint. Select your LUIS-connected app for the App Id and then click on the Utterance Text field. A flyout list of dynamic options will appear; choose Tweet text under the Twitter options. Leave Desired Intent blank.

Screenshot of Get Prediction input

Add a new flow item under Control -> Condition:

Screenshot of new flow item input.

This workflow checks for the Top Scoring Intent Name from LUIS. We don’t want to continue passing this message to our Azure function if LUIS did not recognize its intent, so we only continue if Top Scoring Intent is not equal to None.

The condition adds two boxes below it: one for If True, the other for If False. Leave If False blank; the workflow will stop there if LUIS has not returned a usable intent. In the If True box, add a new action for Azure Functions and select the function you created above.

In the Request Body field of your function trigger, put the LUIS Body parameter. Then add another Twitter action to Post a tweet. Use the Function’s body to post the resulting message. Include a link back to the original tweet to make the tweet appear as a quoted retweet:

Screenshot of If True input

Your overall logic should look something like this: (You can see this bot in action at @LeksBot.)

Twitter bot logic is pictured.

Step Five: Train & Improve Your Bot

Open your logic app and scroll down to Runs history. You can see each time your bot has been triggered. If you see tweets that weren’t responded to properly, you can open each run and inspect the flow, view the run’s parameters, and make adjustments. Paste the tweet into your LUIS app and train it on the correct intent. Each time you do this, your app becomes “smarter” and makes fewer mistakes.

After you have re-trained LUIS (make sure you click Publish!) or made any adjustments to your flow, you can resubmit the same run (tweet) and make sure it’s processed correctly. Re-train and adjust as needed to improve your bot’s experience.

PaaS & Cloud-Native Technologies

If you have worked with Azure for a while, you’re aware of the benefits of PaaS, such as the ability to have the cloud provider manage the underlying storage and compute infrastructure so you don’t have to worry about things like patching, hardware failures, and capacity management. Another important benefit of PaaS is the rich ecosystem of value-add services like database, identity, and monitoring as a service that can help reduce time to market.

So if PaaS is so cool, why are cloud-native technologies like Kubernetes and Prometheus all the rage these days? In fact, it’s not just Kubernetes and Prometheus; there is a groundswell of related cloud-native projects. Just visit the cloud-native landscape to see for yourself.

Key Benefits of Cloud-Native Architecture

Here are ten reasons why cloud-native architecture is getting so much attention:

  1. Application as a first-class construct — Rather than speaking in terms of VMs, storage, firewall rules, etc., cloud-native is about application-specific constructs, whether it is a Helm chart that defines the blueprint of your application or a service mesh configuration that defines the network in application-specific terms.
  2. Portability — Applications can run on any CNCF-certified cloud, as well as on-premises and on edge devices. The API surface is exactly the same.
  3. Cost efficiency — By densely packing the application components (or containers) onto the underlying cluster, the cost of running an application is significantly lower.
  4. Extensibility model — A standards-based extensibility model allows you to tap into innovations offered by the cloud provider of your choice. For instance, using the service catalog and Open Service Broker for Azure, you can package a Kubernetes application with a service like Cosmos DB.
  5. Language agnostic — Cloud-native architecture can support a wide variety of languages and frameworks including .NET, Java, Node, etc.
  6. Scale your ops teams — Because the underlying infrastructure is decoupled from the applications, there is greater consistency at the lower levels of your infrastructure. This allows your ops team to scale much more efficiently.
  7. Consistent and “decoupled” — In addition to greater consistency at the lower levels of infrastructure, application developers are exposed to a consistent set of constructs for deploying their applications, for example Pod, Service, Deployment, and Job. These constructs remain the same across cloud, on-premises, and edge environments. Furthermore, they also help decouple developers from the underlying layers (cluster, kernel, and hardware) shown in the diagram below.
  8. Declarative model — Kubernetes, Istio, and other projects are based on a declarative, configuration-based model that supports self-healing. This means that any deviation from the “desired state” is automatically “healed” by the underlying system. Declarative models reduce the need for imperative automation scripts that can be expensive to develop and maintain.
  9. Community momentum — As stated earlier, the community momentum behind CNCF is unprecedented. Kubernetes is the #1 open source project in terms of contributions. In addition to Kubernetes and Prometheus, there are close to 500 projects that have collectively attracted over $5B of venture funding! In the latest survey (August 2018), the use of cloud-native technologies in production has gone up by 200% since December 2017.
  10. Ticket to DevOps 2.0 — Cloud-native brings the well-recognized benefits of what is being termed “DevOps 2.0”: hermetically sealed and immutable container images, microservices, and continuous deployment. Please refer to the excellent book by Viktor Farcic.

Now that we understand the key benefits of cloud-native technologies, let’s compare them to a traditional PaaS offering:

Attribute | Traditional PaaS | Cloud-Native as a Service
Portability | Limited | Advanced
Application as a first-class construct | Limited (application construct limited to the specific PaaS service) | Advanced constructs including Helm, network, and security policies
Managed offering | Mature (fully managed) | Maturing (some aspects of cluster management currently require attention)
Stateful applications | Advanced capabilities offered by database-as-a-service offerings | Some cloud-native support for stateful applications (however, cloud-native applications can be integrated with PaaS database offerings through the service catalog)
Extensibility | Limited | Advanced (extensibility includes the Container Network Interface and Container Runtime Interface)

Azure & CNCF

Fortunately, Microsoft has been a strong supporter of CNCF, having joined back in 2017 as a platinum member. Since then, they have made significant investments in a CNCF-compliant offering in the form of Azure Kubernetes Service (AKS). AKS combines the aforementioned benefits of cloud-native computing with a fully managed offering – think of AKS as a PaaS solution that is also CNCF compliant.

Additionally, AKS addresses enterprise requirements such as compliance standards and integration with capabilities like Azure AD, Key Vault, Azure Files, etc. Finally, offerings like Azure Dev Spaces and Azure DevOps greatly enhance the CI/CD experience when working with cloud-native applications. I would be remiss not to mention the VS Code extension for Kubernetes, which also brings useful tooling to the mix.

Cloud-Native Use Cases

Here are a few key use cases for cloud-native applications. Microservices are something you would expect, of course. Customers are also using AKS to run Apache Spark. There is also thinking around managing IoT Edge deployments right from within the Kubernetes environment. Finally, “lift and shift to containers” is getting a lot of attention from customers as the preferred route for moving on-premises applications to the cloud. Please refer to our recent blog post on this very topic, “A ‘Modernize-by-Shifting’ App Modernization Approach,” for more details!

Cloud-Native Scenarios


Last week we laid out some basics of what we call the “Full PaaS” approach to legacy app modernization. While it might not make sense in every situation, we recently completed a modernization effort using the Full PaaS approach. Here’s some background and the steps we took…

Stop Playing Legacy App “Whack-a-Mole” 

Our enterprise customer developed and owned a budgeting application. The application was over five years old and built on tech that — while modern at the time — had become “stale” over the years.  Usage patterns for the application included huge spikes in demand during specific times of the year, and the need to meet these demands prompted the team to “reactively” invest in servers with more memory, better networking equipment, and other fixes. Problems were only addressed as they cropped up, with no time for long-term planning.

Yet despite throwing money and equipment at those problems, the issues with the platform continued while customers demanded more functionality. Unfortunately, since most of the application team’s time was spent reacting to operational issues, those demands simply couldn’t be met. Additionally, the application team’s O&M budget shrank over time, leaving a smaller staff responsible for the application.

After analyzing the application, we determined that three tiers of the application (compute, cache, database) could be moved to the “as-a-service” model with a reasonable amount of refactoring, for two main reasons:

  • This model would address the seasonal demand challenges by leveraging “auto-scaling” capabilities built directly into the services the application would now be consuming.
  • As an added benefit, these services allowed scripted deployments, automated monitoring, and easier provisioning of “test” environments to get new features in the hands of users more quickly.

The 7 Steps to “Full-PaaS” Modernization

Once we chose the Full PaaS approach, we completed the modernization effort by following these seven (high level) steps:

  1. Analyze application dependencies: This includes compute and data tiers, software architecture, and reliance on existing resources.
  2. Identify services to replace legacy components: Not everything will port directly, so map out your replacements ahead of time.
  3. Establish candidate PaaS architecture: Choose your cloud platform, specific services to be used, and architect the flow of communication between the services.
  4. Validate with internal stakeholders: We talk to everyone from operations staff to security to business users of the application.
  5. Refactor your application code: Target the PaaS services included in the candidate architecture.
  6. Automate your application delivery and integrate CI/CD: This is one of the biggest benefits to this modernization approach, so take full advantage of it!
  7. Establish a living roadmap for ongoing improvement: We’re looking for both ongoing improvements to delivery automation and for any additional applications that can also adopt the model.


If you’re looking to modernize a legacy application, there are quite a few paths and approaches to choose from. Today, let’s look at “Full PaaS” modernization, which re-architects legacy applications to target cloud-native “serverless” technologies wherever possible.

This can solve many of the most common challenges organizations face when dealing with legacy applications:

  • The need to provide modern capabilities and innovate faster with limited resources
  • Existing infrastructure that is expensive and difficult to provision, maintain, scale, and secure
  • Existing staff who could improve efficiency by focusing on their strengths
  • Customers who expect evolving, innovative functionality that relies on expensive, complex underlying technology

Why “Full PaaS” Modernization Is Different

Going the “Full PaaS” route allows your company to take advantage of the “best of the best” that the public cloud has to offer, including:

  • The responsibility for delivering platform quality (uptime, security, etc.) shifts from IT staff to industry experts
  • Managed offerings for all application components provide peace of mind and zero day-to-day maintenance
  • You can immediately and automatically leverage the innovation and improvements being applied (almost constantly) to the underlying cloud platform

Once your application is migrated, you’ll enjoy vastly improved speed, flexibility and agility. Modern PaaS platforms offer opportunities to automate and extend delivery processes as a core feature of the service, which creates a lower barrier to entry to incorporate modern, innovative technology for improvement of your software products.

And of course, we can’t talk about moving applications to the cloud without mentioning the cost savings and lower total cost of ownership. You’ll eliminate the significant effort required to build, maintain, and evolve the “foundation” of your application’s infrastructure (servers, networking equipment, monitoring stack, data backup and disaster recovery, etc.). This, in turn, lets you focus your development resources on core competencies and avoid inefficiencies from effort not directly focused on improving the quality of your software products.

Sounds Great! Any Drawbacks? 

Well, yes. There are a few things to consider before taking this approach:

  • More significant application refactor or re-architecture could be required, which can include more significant up-front investment.
  • More potential for lock-in to a specific cloud provider than with other modernization approaches.
  • Existing staff may need to invest in modernizing existing skillsets and deeply ingrained ideas about “the way things are done” – instead emphasizing “software-defined” principles (networking, automation, monitoring).

This approach targets the highest level of maturity for cloud adoption, where you’re consuming cloud-native features as a service to provide all of the building blocks for your application. It won’t fit right away in all situations, as organizations balance an application’s internal dependencies with the need to modernize. However, this approach can provide significant benefits for legacy application teams given the right circumstances.


So Is “Full PaaS” the Right Approach For Me?

AIS looks for the following characteristics when evaluating this approach:

  • The team is willing to invest a bit more time and money up front to modernize in return for the benefits listed above.
  • The team has significant challenges managing infrastructure which aren’t fully addressed by more basic lift-and-shift or “containerization” app modernization approaches. In many cases this comes from a slow erosion of operations and maintenance (O&M) staff over time, leaving developers responsible for all portions of development and delivery.
  • The team is interested in providing evolving features, but is constrained by the lack of innovation on the current platform.
  • The team releases updates/features too infrequently and is under pressure to improve the “cycle time.”

Next week, we’ll take a look at the specific steps involved in the “Full PaaS” modernization approach, and share an example of a successful legacy app modernization project the AIS Team recently completed.

AIS’ work with the NFL Players Association (NFLPA) was showcased as a Microsoft Featured Case Study. This customer success story was our most recent project with NFLPA, as they’ve sought our help to modernize multiple IT systems and applications over the years. We were proud to tackle the latest challenge: Creating a single, shared player management system, using Dynamics 365, for the NFLPA and all its sister organizations.

The Challenge

As the nonprofit union for NFL players, the NFLPA constantly looks for ways to better serve its members—current and former NFL players—during and after their football careers. But multiple player management systems across the associated support organizations resulted in poor customer service and missed opportunities for NFLPA members. Valuable data captured by one department wasn’t accessible to another, causing headaches and delays when licensing opportunities arose, and limiting the organization’s ability to be proactive about the challenges members face after retirement.

The Solution: A Single Source

We used Microsoft Dynamics 365 to create a single, shared player management system, called PA.NET, for all the NFLPA organizations. We customized Dynamics 365 extensively to meet the unique needs of the NFLPA and integrated it with the organization’s Office 365 applications.

At the same time, we shifted all legacy IT systems (websites, financial applications, and others) to Microsoft Azure, giving NFLPA an entirely cloud-based business.

The Results: More Opportunities, More Time, Fewer Costs

With one master set of player data and powerful reporting tools that employees use to find answers to their own questions, the NFLPA can uncover marketing and licensing opportunities for more players and identify other ways to help its members.

Because PA.NET automates so many previously manual processes, it frees up hours of drudge work each week for NFLPA employees, time they convert to creative problem solving for members. The IT staff has also freed up 30 percent more time by not having to babysit infrastructure, time it uses to come up with new technology innovations.

By moving its business systems to the cloud, the NFLPA can scale its infrastructure instantly when traffic spikes—such as when football season ends and licensing offers heat up. No more over-provisioning servers to meet worst-case needs. In fact, no more servers, period. With cloud-based systems, the NFLPA no longer has to refresh six-figure server and storage systems every few years.

Read the full Microsoft Featured Case Study to learn more about our work and about the great work the NFLPA does on behalf of its members.


Microsoft Power Platform: Application Development Platform for General Purpose Business Apps

In recent years, with the transition to the cloud, the SharePoint team’s recommendation has been to move custom functionality “down” to the client computer or “over” to a host outside of SharePoint. This change has had a direct impact on enterprises that have long viewed SharePoint as an application development platform.

Today’s best practices show that a grouping of applications, such as case management apps, inventory or issue tracking, and fleet management (depicted in the dotted area of the diagram below) is no longer best suited to be built on top of SharePoint. This is where Microsoft Power Platform comes in for cross-platform app development.

Microsoft Power Platform as an app development platform

What is the Microsoft Power Platform (MPP)?

Microsoft Power Platform (or MPP) is being seen as the aPaaS layer that nicely complements the mature Azure IaaS and PaaS offerings. Here are a few reasons we’re excited about MPP:

  • It offers a low-code/no-code solution for rapid application development
  • The PowerApps component supports cross-platform app development for mobile and responsive web application solutions
  • Flow within MPP offers a workflow and rules capability for implementing business processes
  • The Common Data Service (CDS) offers a service for storing business objects

SharePoint as an App Development Platform

On January 30, 2007, we released a paper called “SharePoint as an Application Development Platform” to coincide with the release of Microsoft Office SharePoint Server 2007 or MOSS. We didn’t have the slightest expectation that this paper would be downloaded over half a million times from our website.

Based on the broad themes outlined in this paper, we went on to develop several enterprise-grade applications on the SharePoint platform for commercial as well as public sector enterprises. Of course, anyone who participated in SharePoint development in 2007 and onwards can vouch for the excitement around the amazing adoption of SharePoint.

Features, Add-Ins or “Apps,” and More

A few years later, SharePoint 2010 was announced with even more features that aligned with the application development platform theme. In 2013, the SharePoint team added the notion of apps (now called add-ins) – we even wrote a book on SharePoint apps. The most recent version, SharePoint 2019, continues the tradition of adding new development capabilities.

All the Ingredients for Unprecedented Success

If you look back, it is easy to see why SharePoint was so successful. SharePoint was probably the first platform that balanced self-service capabilities with IT governance. Even though the underlying SharePoint infrastructure was governed by IT, business users could provision websites, document libraries, lists, and more in a self-service manner.

Compare this to the alternative at the time: developing an ASP.NET application from scratch and then having to worry about operationalizing it, including backup, recovery, high availability, etc. Not to mention the content and data silo that may result from yet another application being added to the portfolio.

Furthermore, the “Out of the Box” (OOTB) SharePoint application constructs, including lists, libraries, sites, web parts, structured and unstructured content, granular permissions, and workflows, allowed developers to build applications productively.

With all these ingredients for success, SharePoint went on to become an enterprise platform of choice with close to 200 million licenses sold and, in the process, creating a ten-billion-dollar economy.

By 2013, Signs of Strain Emerged for SharePoint as an App Development Platform

With the transition to the cloud and lessons learned from early design choices, weaknesses in SharePoint as an app development platform started to show. Here are some examples:

  • Limitations around structured data – Storing large lists has always been challenging, despite improvements over the years to increase the scalability targets for lists. Combine the scalability challenges with the query limitations, and it makes for a less-than-ideal construct for general-purpose business application development.
  • Isolation limitations – SharePoint was not designed with isolation models for custom code. As a result, SharePoint farm solutions that run as full-trust solutions on the server side have fallen out of favor. The introduction of sandbox mode didn’t help, since isolation/multi-tenancy was not baked into the original SharePoint design. The current recommendation is to build customizations as add-ins that run on the client side or on an external system.
  • External data integration challenges – The BDC (Business Data Catalog) was designed to bring data from external systems into SharePoint, and the Business Connectivity Services (BCS) extensibility model was designed to allow an ecosystem of third-party and custom connectors to be built. But BCS never gained significant adoption and has limited third-party support. Furthermore, BCS is restricted in the online model.
  • Workflow limitations – Workflows inside SharePoint are based on Windows Workflow Foundation (WF). WF was designed before REST and HTTP became the lingua franca of integration over the web. As a result, even though SharePoint-based workflows work well for document-centric workflows like approvals, they are limited when it comes to integrating with external systems. Additionally, the combination of WF and SharePoint has not been the easiest thing to debug.
  • Lack of native mobile support – SharePoint was not designed for mobile experiences from the ground up. Over the years, SharePoint has improved the mobile experience through support for device channels, which allow different master pages and style sheets to be selected based on the target device. But device support is limited to publishing pages, and even those require non-trivial work to get right. In a mobile-first world, a platform built with mobile in mind is going to be the ideal cross-platform app development tool for enterprises.
  • Access Services (and other similar services) never took off – Access databases are the quintessential example of DIY line-of-business (LOB) apps. So when SharePoint 2010 introduced the capability to publish Access databases to SharePoint, it sought to offer the best of both worlds: self-service combined with governance (the latter being the bane of Access databases). However, that goal proved too good to be true; Access Services never caught on and are now being deprecated.
  • Development “cliffs” – SharePoint was supposed to enable business users to build their own customizations through tools like Visio and SharePoint Designer. The idea was that business users would build customizations themselves using SharePoint Designer, and if they ran into a limitation (the “cliff”), they could export their artifacts into a professional development tool like Visual Studio. In reality, this dichotomy of tools never worked, and you almost always had to start over.
  • State of the art in low-code/no-code development – If you look at leading high-productivity application development platforms, the state of the art is a declarative, model-driven application approach. In other words, using a drag-and-drop UI, a user generates a metadata-based configuration that describes the application, the flow of application pages, etc. At runtime, the underlying platform interprets this configuration and binds the actions to the built-in database. SharePoint has a rich history of offering no-code solutions, but it is not based on a consistent and common data model and scripting language.
  • Monolith versus micro-services – In many ways, SharePoint has become a “monolith” with tons of features packed into one product: content management, records management, business process, media streaming, app pages — you name it. Like all monoliths, it may make sense to break the functionality into “micro” services.

Note: Despite all the challenges listed above, SharePoint as collaboration software continues to grow. In fact, it’s stronger than ever, especially when it comes to building collaboration-centric apps and solutions using the SharePoint Framework and add-ins.

Just visit the thriving open-source development community centered around SharePoint to see for yourself.

Enter the Microsoft Power Platform (MPP)

The diagram below depicts a high-level view of the Microsoft Power Platform (MPP).

Microsoft Power Platform Infrastructure Overview

The “UI” Layer: Power Apps, Power BI, and Flow

At the highest level, you have the Power Apps, Power BI, and Flow tools.

Power Apps is a low-code platform-as-a-service (PaaS) solution that allows for business app development with very little code. These apps can be built with drag-and-drop UI elements that work across mobile, tablet, and web form factors. In addition to the visual elements, there’s an Excel-like language called the PowerApps expression language that’s designed for implementing lightweight business logic and binding visual controls to data. Since PowerApps comes with a player for Android and iPhone devices, apps don’t need to be published to or downloaded from app stores.

Additionally, admin functions like publication, versioning, and deployment environments are baked into the PowerApps service. The PaaS solution can be used to build two types of apps:

  1. Canvas apps – as the name suggests, these allow you to start with a canvas and build a highly-tailored app.
  2. Model-driven apps – also as the name suggests, these allow you to auto-generate an app based on a “model” of your business entities and core processes.

You’re likely already familiar with Power BI. It’s a business analytics as a service offering that allows you to create rich and interactive visualizations of sources of data.

Flow is a Platform as a Service (PaaS) capability that allows you to quickly implement business workflows that connect various apps and services.

Working together, PowerApps, Flow, and Power BI allow business users to easily and seamlessly build the UI, business process, and BI visualizations of a cross-platform, responsive business application. These services are integrated to make the experience even better. As an example, you can embed a Canvas PowerApp inside Power BI or vice versa.

The “Datastore and Business Rules” Layer

The Common Data Service (CDS) allows you to securely store and manage data used by business applications. In contrast to a database-as-a-service offering like Azure SQL Database, think of CDS as “business objects as a service.” Azure SQL Database removes the need to manage the physical aspects of the database, but as a consumer of Azure SQL, you still own the logical aspects of the data, such as schema, indexing, query optimization, and permissions.

In CDS, all you do is define your business entities and their relationships. All logical and physical aspects of the database are managed for you. In addition, you get auditing, field-level security, an OData API, and rich metadata for free.
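
As a hedged sketch of what consuming that OData API might look like (the environment URL, entity set, and token acquisition are placeholders, since they depend on your CDS environment):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CdsODataSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Assume an OAuth bearer token was already acquired for the environment.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<access-token>");

            // Placeholder environment URL and entity set name.
            var response = await client.GetAsync(
                "https://yourorg.crm.dynamics.com/api/data/v9.0/accounts?$select=name&$top=3");
            response.EnsureSuccessStatusCode();

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}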

Finally, CDS offers a place to host “server-side” business rules in the form of actions and workflows. It’s important to note that CDS is powered by Dynamics CRM under the covers (see [1] for more information). This means that any skills and assets your team has around Dynamics will seamlessly transition to CDS.

The “External Systems Connector” Layer

PowerApps comes with a large collection of connectors that allow you to connect to a wide array of third-party applications and services (like Salesforce and Workday) and bind their data to PowerApps visual controls. In addition, you can connect to a custom app or service by exposing it through Azure API Management and Azure Functions.
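As an illustration of that last point, below is a minimal sketch of an HTTP-triggered Azure Function, written in TypeScript, of the kind that could be fronted by API Management and surfaced to PowerApps as a custom connector. The query parameter and response shape are assumptions for demonstration only:

```typescript
// Minimal sketch of an HTTP-triggered Azure Function that a custom
// PowerApps connector could call (typically fronted by API Management).
// The parameter and response shape are illustrative assumptions.
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const httpTrigger: AzureFunction = async function (
  context: Context,
  req: HttpRequest
): Promise<void> {
  // Read an input from the query string or request body.
  const name = req.query.name || (req.body && req.body.name) || "world";

  // Return a small JSON payload that a PowerApps control could bind to.
  context.res = {
    status: 200,
    headers: { "Content-Type": "application/json" },
    body: { message: `Hello, ${name}` },
  };
};

export default httpTrigger;
```

A custom connector then wraps this endpoint in an OpenAPI definition so the function shows up in PowerApps and Flow alongside the out-of-the-box connectors.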

How Can MPP Alleviate SharePoint Development Challenges?

MPP is a platform designed from the ground up for building business apps. Here’s how it can help alleviate the challenges users may experience with SharePoint as an app development platform.

  • Structured data – SharePoint: list item limits, throttling, and query constraints. Power Platform: backed by a fully relational data model.
  • Isolation – SharePoint: no server-side isolation model, and server-based farm solutions are discouraged. Power Platform: each tenant is isolated and can run custom code.
  • External data integration – SharePoint: BCS, with limited third-party support. Power Platform: 230+ connectors.
  • Workflow – SharePoint: document-centric workflow. Power Platform: scalable, HTTP/REST-based workflow constructs that can leverage out-of-the-box connectors.
  • Mobile support – SharePoint: limited support in the form of device channels. Power Platform: designed from the ground up for mobile.
  • Development “cliffs” – SharePoint: hard to transition from citizen-developer tools to pro-developer tools. Power Platform: a single tool serves citizen developers and pro developers, and pro developers can call Azure Functions, Logic Apps, and more for richer, more complex functionality.
  • Low-code development – SharePoint: no common domain model or scripting language. Power Platform: designed from the ground up as a low-code development environment.
  • Monolith versus micro-services – SharePoint: a monolith. Power Platform: a collection of “micro” services (PowerApps, Flow, Power BI, CDS).

Comparing Reference Architectures

The following diagram compares the SharePoint and MPP reference architectures.

SharePoint and MPP Reference Architectures Compared

Power Platform Components Inside SharePoint?

It’s noteworthy that advancements in MPP are also available inside SharePoint. For example, “standalone” canvas PowerApps and Flow integrate directly into SharePoint, as shown in the examples below.

In many ways, this development represents a realignment of SharePoint’s place in the application development space. We believe that MPP is the new “hub”, while SharePoint (and other Office 365 components) represent “spokes” in this model.

PowerApps as a “custom form” for a SharePoint list item.

Flow powering a document-centric workflow.

Consider MPP for Your Next Business App Development Project

The Microsoft Power Platform is an application platform as a service (aPaaS) offering designed from the ground up as a general-purpose business application development platform. It includes native mobile support, a first-class low-code environment, business data as a service, a lightweight workflow engine, and rich business analytics, all in one. We believe that a category of applications previously built on SharePoint will benefit from moving to the Microsoft Power Platform.

[1] xRM with Dynamics CRM

xRM-style applications have been built on Dynamics CRM for years. Unfortunately, the licensing story to support this model was not ideal; for example, customers could not license a “base” version without also paying for the sales and marketing functionality built in out of the box.

That’s all changing with CDS. PowerApps licenses, such as P1 and P2, give customers access to what’s referred to as the “Application Common” (or base) instance of CDS.

CDS Apps Instance vs Dynamics Instance

Learn More About MPP & Your App Development Platform Options

The road to general-purpose business application development can be challenging to navigate with all of the tools and platforms available, as well as the unique pros and cons of each option. Not sure where to start? Start with a conversation!

START THE CONVERSATION
See if MPP is a good fit for your org’s app development needs. Contact your partner at AIS today!

A big announcement from Microsoft this month: The introduction of Azure DevOps, the most complete offering of proven, modern DevOps tools and processes available in the public cloud. Used together, the Azure DevOps services span the entire breadth of the development lifecycle so enterprises can modernize apps in a faster and more streamlined way.

What Is DevOps, Anyway?

DevOps solutions bring together people, processes, and technology, automating and streamlining software delivery to provide continuous value to your users.

If you want your next development or app modernization project to be a success, DevOps is the way to go.

High-performance DevOps enterprises achieve increased revenue with a faster time to market and produce solutions that are more powerful, flexible, and open. (Yes, Microsoft has been partnering with the open-source community to ship products that work for everyone.) New features can be safely deployed to users as soon as they’re ready vs. bundling them together in one large update down the road.

New Services & Tools in Azure DevOps

  • Azure Pipelines – Continuously build, test, and deploy in any language, to any platform or cloud. Azure Pipelines also offers unlimited build minutes and ten free parallel jobs for every open-source project! (A minimal pipeline definition is sketched after this list.)
  • Azure Boards – Plan, track and discuss your work and ideas across teams with proven agile tools.
  • Azure Artifacts – With the click of a button, add artifacts to your CI/CD pipeline.
  • Azure Repos – Unlimited private Git repos (cloud-hosted) allow team members to build and collaborate better.
  • Azure Test Plans – These manual and exploratory testing tools will allow you to test and ship with ease and confidence.
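To make the Azure Pipelines item above concrete, here is a minimal sketch of an azure-pipelines.yml definition checked into the root of a repo. The trigger branch, agent image, and Node.js build steps are illustrative assumptions; substitute the tasks your own project needs:

```yaml
# Minimal Azure Pipelines CI definition (azure-pipelines.yml).
# The branch, agent image, and Node.js steps below are assumptions
# for illustration; swap in the build and test tasks for your stack.
trigger:
  - master

pool:
  vmImage: 'ubuntu-16.04'   # Microsoft-hosted build agent

steps:
  - task: NodeTool@0        # install a specific Node.js version
    inputs:
      versionSpec: '8.x'
  - script: |
      npm install
      npm test
    displayName: 'Install dependencies and run tests'
```

Every run provisions a fresh hosted agent, so builds stay reproducible without your team maintaining its own build servers.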

Azure DevOps is what’s next for Visual Studio Team Services: VSTS users will be upgraded automatically without losing any functionality.

With the services provided in Azure DevOps, you can choose the tools and cloud services you want to use and build end-to-end solutions for an enterprise-level toolchain. As long-time believers in both Azure and DevOps, we’re really excited about what this offering can do for our clients.

FREE HALF DAY SESSION: APP MODERNIZATION APPROACHES & BEST PRACTICES
Transform your business into a modern enterprise that engages customers, supports innovation, and has a competitive advantage, all while cutting costs with cloud-based app modernization.

AIS Gets Connection of DoD DISA Cloud Access Point at Impact Level 5

Getting the DoD to the Cloud

Our team accomplished the near-impossible: we connected to the DoD DISA Cloud Access Point at Impact Level 5, meaning our customer can now connect to their Azure subscription and store any unclassified data they want on it.

About the Project

The project started in July 2017 to connect an Azure SharePoint deployment to the DoD NIPRnet at Impact Level 5. Throughout the process, the governance and rules of engagement were a moving target, presenting challenges at every turn.

Thanks to the tenacity and diligence of the team, we successfully connected to the Cloud Access Point (CAP) on September 6, 2018. The deployment was a multi-region, always-on SharePoint IaaS environment with two connections, and it involved completing all of the documentation required by the DISA connection approval (SNAP) process.

We are now moving towards the first Azure SharePoint Impact Level 5 production workload in the DoD, so be sure to stay tuned for more updates.

A Repeatable Process for Government Cloud Adoption

Azure Government was the first hyperscale commercial cloud service to be awarded an Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency (DISA), and this was the first public cloud connection on Azure in the DoD 4th Estate.

With a fully scripted, repeatable cloud deployment that includes the Cloud Access Point connection requirements, we can now get government agencies to the cloud faster and more securely than ever before.

We work with fully integrated SecDevOps processes and can leverage Microsoft’s Azure security team for help in identifying applicable security controls, including inherited, shared, and customer-required controls.

See how you can make the cloud work for you. Contact AIS today to start the conversation, or learn more about our enterprise cloud solutions.

HARNESS THE POWER OF CLOUD SERVICES FOR YOUR ORG
Discover how AIS can help your org leverage the cloud to modernize, innovate, and improve IT costs, reliability, and security.