I recently encountered an issue when trying to create an Exact Age column for a contact in Microsoft Dynamics CRM. There were several solutions available on the internet, but none of them was a good match for my specific situation. Some ideas I explored included:

  1. Creating a calculated field using the formula DiffInDays(DOB, Now()) / 365 or DiffInYears(DOB, Now()) – I used this at first, but if the calculated field is a decimal type, you end up with a fractional value (for example, 23.4 years old), which is not desirable. If the calculated field is a whole number type, the value is always rounded. So, if the DOB is 2/1/1972 and the current date is 1/1/2019, the Age will show as 47 when the contact is actually still 46 until 2/1/2019.
  2. Using JavaScript to calculate the Age – The problem with this approach is that if the record is not saved, the data becomes stale. It also does not work in a view (i.e., if you want to see a list of client ages). The JavaScript solution seems geared toward the form UI experience only.
  3. Using Workflows with Timeouts – This approach seemed complicated, and updating values daily across so many records felt cumbersome.

Determining Exact Age

Instead, I decided to plug some of the age scenarios into Microsoft Excel and simulate Dynamics CRM's calculations to see if I could come up with any ideas.

Note: 365.25 is used to account for leap years. I originally used 365, but the data was incorrect. After reading about leap years, I plugged in 365.25, and everything lined up.

Excel Formulas

Setting up the formulas above, I was able to calculate the values below. I found that subtracting the DATEDIF Rounded value from the DATEDIF Actual value produced a negative value when the DOB's month/day fell after the current date (2/16/2019 at the time). This allowed me to introduce a factor of -1 whenever the Difference was less than or equal to 0. Using this finding, I set up the solution in CRM.
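
To make the trick concrete, take a DOB of 2/1/1972. Evaluated on 2/16/2019 (just after the birthday), the actual value is roughly 17,182 days / 365.25 ≈ 47.04, the rounded value is 47, and the difference is positive, so the rounded value already matches the true age. Evaluated on 1/1/2019 (just before the birthday), the actual value is roughly 46.92, the whole-number rounding bumps it up to 47, and the difference is about -0.08; that negative difference is the signal that the rounded value overshot and needs the -1 correction to land on the true age of 46.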

Excel Calculations

The Solution

  1. Create the necessary fields.
    Field  Data Type  Field Type  Other  Formula 
    DOB  Date and Time  Simple  Behavior: User Local   
    Age Actual  Decimal Number  Calculated  Precision: 10  DiffInDays(new_dob, Now()) / 365.25 
    Age Rounded  Whole Number  Calculated    DiffInDays(new_dob, Now()) / 365.25 
    Age Difference  Decimal Number  Calculated  Precision: 10  new_ageactual - new_agerounded 
    Age  Whole Number  Calculated    See below 
  2. Create a business rule for DOB that sets it equal to Birthdate when Birthdate contains data. This way, when Birthdate is set, DOB is set automatically. This arrangement is necessary for the other calculated fields.
    Business Rules
  3. Set up the Age calculated field as follows:
    Age Calculated Field
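
Since the screenshot isn't reproduced here, the following is a sketch of the logic described above, written in JavaScript rather than the CRM condition/action editor (the field names are assumed from the table in step 1):

function exactAge(ageActual, ageRounded) {
    // new_agedifference: negative or zero when the whole-number field rounded up
    const difference = ageActual - ageRounded;
    // Rounding up means the birthday hasn't occurred yet this year, so back off a year
    return difference <= 0 ? ageRounded - 1 : ageRounded;
}

In the CRM calculated field editor, this becomes a condition on Age Difference with two actions: set Age to Age Rounded - 1 when the difference is less than or equal to zero, and to Age Rounded otherwise.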

Once these three steps have been completed, your new Age field should be ready to use. I created a view to verify the calculations. I happened to be writing this post very late on the night of 2/16/2019: I wrote the first part before 12:00 a.m., then refreshed the view before taking the screenshot below. I was happy to see the Age Test 3 record flip from 46 to 47 when I refreshed after 12:00 a.m.

Age Solution Results

Determining Exact Age at Some Date in the Future

The requirement that drove my research for this solution was the need to determine the exact age in the future. Our client needed to know the age of a traveler on the date of travel. Depending on the country being visited and the age of the traveler on the date of departure, different forms would need to be sent in order to prevent problems when the traveler arrived at his or her destination. The solution was very similar to the Age example above:

The Solution

  1. Here is an overview of the entity hierarchy:
    Age at Travel Entities
  2. Create the necessary fields.
    Entity  Field  Data Type  Field Type  Other  Formula 
    Trip  Start Date  Date and Time  Simple  Behavior: User Local   
    Contact  DOB  Date and Time  Simple  Behavior: User Local   
    Trip Contact  Age at Travel Actual  Decimal Number  Calculated  Precision: 10  DiffInDays(contact.dob, new_trip.start) / 365.25 
    Trip Contact  Age at Travel Rounded  Whole Number  Calculated  n/a  DiffInDays(contact.dob, new_trip.start) / 365.25 
    Trip Contact  Age at Travel Difference  Decimal Number  Calculated  Precision: 10  new_ageattravelactual - new_ageattravelrounded 
    Trip Contact  Age at Travel  Whole Number  Calculated  n/a  See below 
  3. Create a business rule for the Contact's DOB that sets it equal to Birthdate when Birthdate contains data. This way, when Birthdate is set, DOB is set automatically. This arrangement is necessary for the other calculated fields.
    Business Rules
  4. Set up the Trip Contact's Age at Travel calculated field as follows:
    Age at Travel Calculated Field
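
The formula behind that last "See below" entry isn't shown here, but based on the description above it presumably mirrors the Age field's conditional: when Age at Travel Difference is less than or equal to zero, set Age at Travel to Age at Travel Rounded - 1; otherwise use Age at Travel Rounded as-is.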

Once these steps have been completed, your new Age at Travel field should be ready to use. I created a view to verify the calculations.

You'll notice that in the red example, the trip starts on 8/14/2020. The contact was born on 9/29/2003 and is 16 on the date of travel, but turns 17 a month or so later. In the green example, the trip also starts on 8/14/2020. The contact was born on 4/12/2008 and will turn 12 before the date of travel.

Age at Travel Solution Results

Conclusion

While there are several approaches to the Age issue in Dynamics CRM, this is a great alternative that requires no code and works in real time. I hope you find it useful!


If you search the Internet for Infrastructure-as-Code (IaC), it's pretty easy to come up with a list of the most popular tools: Chef, Ansible, Puppet, Terraform… and the freshman of the IaC class: Pulumi.

It's 4 a.m. and the production server has gone down. Can you keep calm?

Sure, how tough can it be? Except that you'll probably need to recall what you did a year ago to set up your environment, then desperately try to figure out what you've installed, implemented, or configured since. Only after gathering all of those findings can you begin to closely replicate the environment.

Wouldn’t it be nice to have something that manages all this configuration for you? No, there aren’t robots coming to take over the DevOps team yet. I’m talking about using Infrastructure-as-Code to automatically and consistently manage infrastructure configuration.

What is Infrastructure as Code (IaC)?

As the name suggests, Infrastructure-as-Code is the concept of managing your operations environment in the same way you manage applications or other code.

Infrastructure-as-Code simply means converting your infrastructure into code, managing it with some kind of version control system (e.g., Git), and storing it in a repository where you can manage it much like your application code.

Pulumi: the new IaC tool

While learning Azure, I tried implementing IaC with Azure Resource Manager templates (aka ARM templates). For this, I learned PowerShell and used it to write several templates. As a developer, PowerShell isn't a language I use on a daily basis, but I do use JavaScript abundantly across many of my projects.

Then the internet community whispered about Pulumi.

I’ve tried my hand at Pulumi and the experience has been very enlightening, so I’m sharing some of the more important and interesting findings with you all.

Pulumi is a multi-language and multi-cloud development platform.

Pulumi supports all major clouds – including Amazon Web Services (AWS), Azure, and Google Cloud – as well as Kubernetes clusters. It lets you create all aspects of cloud programs using real languages (Pulumi currently supports JavaScript, TypeScript, and Python, with more languages to come) and real code, from the infrastructure on up to the application itself. Just write programs and run them; Pulumi figures out the rest.
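
To make that concrete, here is a minimal sketch of a Pulumi program in JavaScript; it assumes the @pulumi/azure package (the same one used in the examples later in this post), and the resource name is illustrative:

"use strict";
const azure = require("@pulumi/azure");

// Declare the desired state: a single Azure resource group.
// Running `pulumi up` makes the cloud match this program.
const resourceGroup = new azure.core.ResourceGroup("demo-rg", {
    location: "EastUS",
});

// Export the auto-generated resource group name as a stack output.
exports.resourceGroupName = resourceGroup.name;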

Using real languages unlocks tremendous benefits:

  • Familiarity: no need to learn new bespoke DSLs or YAML-based templating languages.
  • Abstraction: build bigger things out of smaller things.
  • Sharing and reuse: we leverage existing language package managers to share and reuse these abstractions, either with the community, within your team, or both.
  • Full control: use the full power of your language, including async, loops, and conditionals.
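
As a quick illustration of that last point, a plain JavaScript loop and conditional (a hypothetical sketch using the @pulumi/aws package) can stamp out one bucket per environment, something that gets verbose fast in a templating language:

const aws = require("@pulumi/aws");

// One S3 bucket per environment; enable versioning only in production.
const stages = ["dev", "test", "prod"];
const buckets = stages.map((stage) =>
    new aws.s3.Bucket(`app-logs-${stage}`, {
        versioning: { enabled: stage === "prod" },
    }));

// Export the generated bucket IDs as stack outputs.
exports.bucketIds = buckets.map((b) => b.id);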

My favorite things about Pulumi

  1. Multi-Language and real language: Using general-purpose programming languages reduces the learning curve and makes it easier to express your configuration requirements.
  2. Developer friendly and easily configurable: Pulumi bridges the gap between Development and Operations teams by not treating application code and infrastructure as separate things. Developers can easily list out dependencies in the package.json file, as the snippet below shows:
{
   "name": "azure-javascript",   // Name of the Pulumi project
   "main": "index.js",           // Entry point of the Pulumi program
   "dependencies": {             // Dependencies (with version numbers) to be installed with NPM
       "@pulumi/pulumi": "latest",
       "@pulumi/azure": "latest",
       "azure-storage": "latest",
       "mime": "^2.4.0"
   }
}

A YAML file is created when we initialize the Pulumi stack; it holds the configuration parameters required by the program, such as credentials and location.
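
For reference, the stack and its configuration values are created from the Pulumi CLI; a sequence like the following (the stack name and location are illustrative) writes them into a Pulumi.dev.yaml file in the project folder:

pulumi stack init dev
pulumi config set azure:location EastUS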

  3. Reusable Components: Thanks to having a real language, we can build higher-level abstractions.

Below is one of my example code snippets using a Pulumi component that creates an instance of an Azure Resource Group to be used in other programs. You can find the full source code, which provisions an Azure Load Balancer, on GitHub.

const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

class ResourceGroup extends pulumi.ComponentResource {
    constructor(resourceGroupName, location, path, opts) {
        // ComponentResource's constructor takes (type, name, args, opts)
        super("az-pulumi-createstorageaccount:ResourceGroup", resourceGroupName, {}, opts);

        console.log(`Resource Group ${resourceGroupName} : location ${location}`);

        // Create an Azure Resource Group, parented to this component
        const resourceGroup = new azure.core.ResourceGroup(
            resourceGroupName,
            { location: location },
            { parent: this }
        );

        // Create properties for the resource group name and location
        this.resourceGroupName = resourceGroup.name;
        this.location = location;

        // For dependency tracking, register output properties for this component
        this.registerOutputs({
            resourceGroupName: this.resourceGroupName,
        });
    }
}

module.exports.ResourceGroup = ResourceGroup;


This class can be instantiated as below:

// Import the class
const resourceGroup = require("./create-resource-group.js");

// Create an Azure Resource Group
// Arguments: resource group name and location
let azureResourceGroup = new resourceGroup.ResourceGroup("rgtest", "EastUS");

  4. Multi-Cloud: Pulumi supports all major clouds – including AWS, Azure, and Google Cloud – as well as Kubernetes clusters. This delivers a consolidated programming model and tools for managing cloud software anywhere. There's no need to learn three different YAML dialects and five different CLIs just to get a simple container-based application stood up in production.

The code below uses a single Pulumi program to provision resources in both AWS and GCP (Google Cloud Platform). The example is in TypeScript and requires the @pulumi/aws and @pulumi/gcp packages from NPM.

import * as aws from "@pulumi/aws";
import * as gcp from "@pulumi/gcp";

// Create an AWS resource (S3 Bucket)
const awsBucket = new aws.s3.Bucket("my-bucket");

// Create a GCP resource (Storage Bucket)
const gcpBucket = new gcp.storage.Bucket("my-bucket");

// Export the names of the buckets
export const bucketNames = [
    awsBucket.bucket,
    gcpBucket.name,
];

Pulumi ensures that resources will be created in both clouds. Let's take a look at how Pulumi creates the plan for both clouds and deploys the resources to them:

Previewing update (multicloud-ts-buckets-dev):

Type Name Plan
+ pulumi:pulumi:Stack multicloud-ts-buckets-multicloud-ts-buckets-dev create
+ ├─ gcp:storage:Bucket my-bucket create
+ └─ aws:s3:Bucket my-bucket create

Resources:
3 changes
+ 3 to create

Do you want to perform this update? yes
Updating (multicloud-ts-buckets-dev):

Type Name Status
+ pulumi:pulumi:Stack multicloud-ts-buckets-multicloud-ts-buckets-dev created
+ ├─ gcp:storage:Bucket my-bucket created
+ └─ aws:s3:Bucket my-bucket created

Outputs:
bucketNames: [
[0]: "my-bucket-c819937"
[1]: "my-bucket-f722eb9"
]

Resources:
3 changes
+ 3 created

Duration: 21.713128552s

The outputs show the names of the AWS and GCP buckets, respectively.

Another scenario would be to create a storage account in Azure and an S3 bucket in AWS using Pulumi.

// Creating a storage account in Azure

const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

// rgName, rgLocation, and storageAccountName are assumed to be defined earlier in the program
const storageAccount = new azure.storage.Account(storageAccountName, {
    resourceGroupName: rgName,
    location: rgLocation,
    accountTier: "Standard",
    accountReplicationType: "LRS",
});

// Creating an S3 bucket in AWS

const aws = require("@pulumi/aws");

const siteBucket = new aws.s3.Bucket("my-bucket", {
    website: {
        indexDocument: "index.html",
    },
});

Pulumi enables you to mix and match these cloud resources inside the same program or file, or across different ones.

  5. Stacks: A core concept in Pulumi is the idea of a "stack." A stack is an isolated instance of your cloud program whose resources and configuration are distinct from all other stacks. You might have a stack each for production, staging, and testing, or perhaps for each single-tenanted environment. Pulumi's CLI makes it trivial to spin up and tear down lots of stacks, as the sketch below illustrates.
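
As a rough sketch of that workflow (the stack names here are hypothetical):

pulumi stack init staging      # create a new, isolated stack
pulumi up                      # provision the resources for this stack
pulumi stack select dev        # switch to another stack
pulumi destroy                 # tear down the current stack's resources
pulumi stack rm staging        # remove a stack and its configuration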

Closing Thoughts

I would like to close this post with a phrase the internet community has adopted: a "cloud renaissance" for DevOps and developers. Building powerful cloud software will become more enjoyable, more productive, and more collaborative for developers. Of course, everything comes at a cost: after exploring, I found that Pulumi's documentation is still somewhat lacking. Beyond that, for developers to write IaC, a deep understanding of infrastructure is a must.

I hope that this post has given you a better idea of the overall platform, approach, and unique strengths.

Happy Puluming 🙂

As organizations increase their footprint in the cloud, there's increased scrutiny on mounting cloud consumption costs, reigniting a discussion about longer-term costs.

This is not an entirely unexpected development. Here’s why:

  1. Cost savings were not meant to be the primary motivation for moving to the cloud – At least not in the manner most organizations are moving, which is to migrate their existing applications to the cloud with little to no change. For most organizations, the primary motivation is "speed to value," aka the ability to deliver business value at greater speed by becoming more efficient in provisioning, automation, monitoring, the resilience of IT assets, etc.
  2. Often the cost comparisons between cloud and on-premises are not a true apples-to-apples comparison – For example, were all on-premises support staff salaries, depreciation, data center cost per square foot, rack space, power, and networking costs considered? What about troubleshooting and the cost of securing these assets?
  3. As these organizations achieve higher cloud operations maturity, they can realize increased cloud cost efficiency – For instance, by implementing effective auto-scaling, optimizing execution contexts by moving to dynamic consumption plans like serverless, taking advantage of discounts through longer-term contracts, etc.

Claim Your Free Whitepaper

In this whitepaper, we talk about the aforementioned considerations, as well as cost optimization techniques (including resource-based, usage-based and pricing-based cost optimization).

FREE WHITEPAPER ON AZURE COST MANAGEMENT: BACKGROUND, TOOLS, AND APPROACHES

About the Podcast

During KubeCon 2018, I had the pleasure of once again being a guest on the .NET Rocks! podcast. I talked to Carl and Richard about what it means to be cloud-native, the ongoing evolution, and what that all means for 2019. We talked in depth about how the cloud-native approach impacts how we build applications on the cloud. We also talked about how the Cloud Native Computing Foundation (CNCF) is fostering an ecosystem of projects like Kubernetes, Envoy, and Prometheus. Finally, we talked about cloud-native computing in the context of Microsoft Azure.

Listen to the full podcast here!

Related Content

If you’re curious about what it means to be cloud-native, you may also enjoy our previous blog post, What Are Cloud-Native Technologies & How Are They Different From Traditional PaaS Offerings. In this post, we discussed the key benefits of cloud-native architecture, compared it to a traditional PaaS offering, and laid out a few use cases.

Accurately identifying and authenticating users is an essential requirement for any modern application. As modern applications continue to migrate beyond the physical boundaries of the data center and into the cloud, balancing the ability to leverage trusted identity stores with the need for enhanced flexibility to support this migration can be tricky. Additionally, evolving requirements like allowing multiple partners, authenticating across devices, or supporting new identity sources push application teams to embrace modern authentication protocols.

Microsoft states that federated identity is the ability to “Delegate authentication to an external identity provider. This can simplify development, minimize the requirement for user administration, and improve the user experience of the application.”

As organizations expand their user base to allow authentication of multiple users/partners/collaborators in their systems, the need for federated identity is imperative.

The Benefits of Federated Authentication

Federated authentication allows organizations to reliably outsource their authentication mechanism. It helps them focus on actually providing their service instead of spending time and effort on authentication infrastructure. An organization or service that provides authentication to its sub-systems is called an Identity Provider. Identity Providers provide federated identity authentication to the service provider/relying party. By using a common identity provider, relying applications can easily access other applications and web sites using single sign-on (SSO).

SSO provides quick accessibility for users to multiple web sites without needing to manage individual passwords. Relying party applications communicate with a service provider, which then communicates with the identity provider to get user claims (claims authentication).

For example, an application registered in Azure Active Directory (AAD) relies on it as the identity provider. Users accessing an application registered in AAD will be prompted for their credentials and, upon authentication by AAD, the access tokens are sent to the application. The valid claims token authenticates the user, and the application performs any further authorization. So the application doesn't need additional mechanisms for authentication, thanks to the federated authentication from AAD. The authentication process can be combined with multi-factor authentication as well.

Glossary

Abbreviation Description
STS Security Token Service
IdP Identity Provider
SP Service Provider
POC Proof of Concept
SAML Security Assertion Markup Language
RP Relying party (same as service provider) that calls the Identity Provider to get tokens
AAD Azure Active Directory
ADDS Active Directory Domain Services
ADFS Active Directory Federation Services
OWIN Open Web Interface for .NET
SSO Single sign on
MFA Multi factor authentication

OpenId Connect/OAuth 2.0 & SAML

SAML and OpenID/OAuth are the two main types of Identity Provider protocols that modern applications implement and consume as a service to authenticate their users. Both provide a framework for implementing SSO/federated authentication. OpenID is an open standard for authentication and combines with OAuth for authorization. SAML is also an open standard and provides both authentication and authorization. OpenID is JSON-based; OAuth 2.0 can use either JSON or SAML2, whereas SAML is XML-based. OpenID/OAuth are best suited for consumer applications like mobile apps, while SAML is preferred for enterprise-wide SSO implementations.

Microsoft Azure Cloud Identity Providers

The Microsoft Azure cloud provides numerous authentication methods for cloud-hosted and "hybrid" on-premises applications, with options for either OpenID/OAuth or SAML authentication. Some of the identity solutions are Azure Active Directory (AAD), Azure B2C, Azure B2B, Azure pass-through authentication, Active Directory Federation Services (ADFS), migration of on-premises ADFS applications to Azure, and Azure AD Connect with federation and SAML as IdP.

Identity providers that implement the SAML 2.0 standard include Azure Active Directory (AAD), Okta, OneLogin, PingOne, and Shibboleth.

A Deep Dive Implementation

This blog post will walk through an example I recently worked on using federated authentication with the SAML protocol. I was able to dive deep into identity and authentication with an assigned proof of concept (POC): creating a claims-aware application within an ASP.NET Azure Web Application using federated authentication and the SAML protocol. I used OWIN middleware to connect to the Identity Provider.

The scope of the POC was not to develop an Identity Provider/STS (Security Token Service) but to develop a Service Provider/Relying Party (RP) that sends a SAML request and receives SAML tokens/assertions. The SAML tokens are used by the calling application to authorize the user into the application.

Given that scope, I used a stub Identity Provider so that the authentication implementation could later be plugged into a production application and communicate with other enterprise SAML Identity Providers.

The Approach

For an application to be claims-aware, it needs to obtain a claims token from an Identity Provider. The claims contained in the token are then used for additional authorization in the application. Claims tokens are issued by an Identity Provider after authenticating the user. The login page for the application (where the user signs in) can be a Service Provider (Relying Party) itself, or just an ASP.NET UI application that communicates with the Service Provider via a separate implementation.

Figure 1: Overall architecture – Identity Provider Implementation

The Implementation

An ASP.NET MVC application was implemented as the SAML Service Provider, using OWIN middleware to initiate the connection with the SAML Identity Provider.

First, communication is initiated with a SAML request from the service provider. The identity provider validates the SAML request, verifies and authenticates the user, and sends back the SAML tokens/assertions. The claims returned to the service provider are then sent back to the client application. Finally, the client application can authorize the user after reviewing the claims returned from the SAML identity provider, based on roles or other more refined permissions.

SustainSys Saml2 is an open-source solution whose libraries add SAML2P support to ASP.NET web sites, letting them serve as the SAML2 Service Provider (SP). For the proof-of-concept effort, I used SustainSys's stub SAML identity provider to test the SAML service provider. SustainSys also has sample service provider implementations that work against the stub.

Implementation steps:

  • Start with an ASP.NET MVC application.
  • Add NuGet packages for OWIN middleware and SustainSys SAML2 libraries to the project (Figure 2).
  • Modify Startup.cs (partial classes) to build the SAML request; set all authentication types such as cookies, default sign-in, and SAML2 (Listing 2).
  • In the CreateSaml2Options and CreateSPOptions methods, SAML requests are built with the private and public certificates, the federation SAML Identity Provider URL, etc.
  • The service provider establishes the connection to the identity provider on startup and is then ready to listen for client requests.
  • Cookie authentication is set, the default authentication type is "Application," and the SAML authentication request is set up by forming the SAML request.
  • Once the SAML request options are set, the Identity Provider is instantiated with its URL and options, and Federation is set to true. The Service Provider is instantiated with the SAML request options for the SAML identity provider. When the user signs in, the OWIN middleware issues a challenge to the Identity Provider and gets the SAML response and claims/assertions back to the service provider.
  • The OWIN middleware issues the challenge to the SAML Identity Provider with a callback method (ExternalLoginCallback(…)). The Identity Provider returns to that callback method after authenticating the user (Listing 3).
  • AuthenticateAsync will have the claims returned from the Identity Provider, and the user is authenticated at this point. The application can use the claims to authorize the user within the application.
  • No additional web configuration is needed for the SAML Identity Provider communication, but the application's config values can be persisted in web.config.

Figure 2: OWIN Middleware NuGet Packages

Listing 1:  Startup.cs (Partial)

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Claims_MVC_SAML_OWIN_SustainSys.Startup))]

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }
    }
}

Listing 2: Startup.cs (Partial)

using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Owin;
using Sustainsys.Saml2;
using Sustainsys.Saml2.Configuration;
using Sustainsys.Saml2.Metadata;
using Sustainsys.Saml2.Owin;
using Sustainsys.Saml2.WebSso;
using System;
using System.Configuration;
using System.Globalization;
using System.IdentityModel.Metadata;
using System.Security.Cryptography.X509Certificates;
using System.Web.Hosting;

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {            
            // Enable Application Sign In Cookie
            var cookieOptions = new CookieAuthenticationOptions
            {
                LoginPath = new PathString("/Account/Login"),
                AuthenticationType = "Application",
                AuthenticationMode = AuthenticationMode.Passive
            };

            app.UseCookieAuthentication(cookieOptions);

            app.SetDefaultSignInAsAuthenticationType(cookieOptions.AuthenticationType);

            app.UseSaml2Authentication(CreateSaml2Options());
        }

        private static Saml2AuthenticationOptions CreateSaml2Options()
        {
            string samlIdpUrl = ConfigurationManager.AppSettings["SAML_IDP_URL"];
            string x509FileNamePath = ConfigurationManager.AppSettings["x509_File_Path"];

            var spOptions = CreateSPOptions();
            var Saml2Options = new Saml2AuthenticationOptions(false)
            {
                SPOptions = spOptions
            };

            var idp = new IdentityProvider(new EntityId(samlIdpUrl + "Metadata"), spOptions)
            {
                AllowUnsolicitedAuthnResponse = true,
                Binding = Saml2BindingType.HttpRedirect,
                SingleSignOnServiceUrl = new Uri(samlIdpUrl)
            };

            idp.SigningKeys.AddConfiguredKey(
                new X509Certificate2(HostingEnvironment.MapPath(x509FileNamePath)));

            Saml2Options.IdentityProviders.Add(idp);
            new Federation(samlIdpUrl + "Federation", true, Saml2Options);

            return Saml2Options;
        }

        private static SPOptions CreateSPOptions()
        {
            string entityID = ConfigurationManager.AppSettings["Entity_ID"];
            string serviceProviderReturnUrl = ConfigurationManager.AppSettings["ServiceProvider_Return_URL"];
            string pfxFilePath = ConfigurationManager.AppSettings["Private_Key_File_Path"];
            string samlIdpOrgName = ConfigurationManager.AppSettings["SAML_IDP_Org_Name"];
            string samlIdpOrgDisplayName = ConfigurationManager.AppSettings["SAML_IDP_Org_Display_Name"];

            var swedish = CultureInfo.GetCultureInfo("sv-se");
            var organization = new Organization();
            organization.Names.Add(new LocalizedName(samlIdpOrgName, swedish));
            organization.DisplayNames.Add(new LocalizedName(samlIdpOrgDisplayName, swedish));
            organization.Urls.Add(new LocalizedUri(new Uri("http://www.Sustainsys.se"), swedish));

            var spOptions = new SPOptions
            {
                EntityId = new EntityId(entityID),
                ReturnUrl = new Uri(serviceProviderReturnUrl),
                Organization = organization
            };
        
            var attributeConsumingService = new AttributeConsumingService("Saml2")
            {
                IsDefault = true,
            };

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("urn:someName")
                {
                    FriendlyName = "Some Name",
                    IsRequired = true,
                    NameFormat = RequestedAttribute.AttributeNameFormatUri
                });

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("Minimal"));

            spOptions.AttributeConsumingServices.Add(attributeConsumingService);

            spOptions.ServiceCertificates.Add(new X509Certificate2(
                AppDomain.CurrentDomain.SetupInformation.ApplicationBase + pfxFilePath));

            return spOptions;
        }
    }
}

Listing 3: AccountController.cs

using Claims_MVC_SAML_OWIN_SustainSys.Models;
using Microsoft.Owin.Security;
using System.Security.Claims;
using System.Text;
using System.Web;
using System.Web.Mvc;

namespace Claims_MVC_SAML_OWIN_SustainSys.Controllers
{
    [Authorize]
    public class AccountController : Controller
    {
        public AccountController()
        {
        }

        [AllowAnonymous]
        public ActionResult Login(string returnUrl)
        {
            ViewBag.ReturnUrl = returnUrl;
            return View();
        }

        //
        // POST: /Account/ExternalLogin
        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public ActionResult ExternalLogin(string provider, string returnUrl)
        {
            // Request a redirect to the external login provider
            return new ChallengeResult(provider, Url.Action("ExternalLoginCallback", "Account", new { ReturnUrl = returnUrl }));
        }

        // GET: /Account/ExternalLoginCallback
        [AllowAnonymous]
        public ActionResult ExternalLoginCallback(string returnUrl)
        {
            var loginInfo = AuthenticationManager.AuthenticateAsync("Application").Result;
            if (loginInfo == null)
            {
                return RedirectToAction("/Login");
            }

            //Loop through to get claims for logged in user
            StringBuilder sb = new StringBuilder();
            foreach (Claim cl in loginInfo.Identity.Claims)
            {
                sb.AppendLine("Issuer: " + cl.Issuer);
                sb.AppendLine("Subject: " + cl.Subject.Name);
                sb.AppendLine("Type: " + cl.Type);
                sb.AppendLine("Value: " + cl.Value);
                sb.AppendLine();
            }
            ViewBag.CurrentUserClaims = sb.ToString();
            
            //ASP.NET ClaimsPrincipal is empty as Identity returned from AuthenticateAsync should be cast to IPrincipal
            //var identity = (ClaimsPrincipal)Thread.CurrentPrincipal;
            //var claims = identity.Claims;
            //string nameClaimValue = User.Identity.Name;
            //IEnumerable<Claim> claimss = ClaimsPrincipal.Current.Claims;
          
            return View("Login", new ExternalLoginConfirmationViewModel { Email = loginInfo.Identity.Name });
        }

        // Used for XSRF protection when adding external logins
        private const string XsrfKey = "XsrfId";

        private IAuthenticationManager AuthenticationManager
        {
            get
            {
                return HttpContext.GetOwinContext().Authentication;
            }
        }
        internal class ChallengeResult : HttpUnauthorizedResult
        {
            public ChallengeResult(string provider, string redirectUri)
                : this(provider, redirectUri, null)
            {
            }

            public ChallengeResult(string provider, string redirectUri, string userId)
            {
                LoginProvider = provider;
                RedirectUri = redirectUri;
                UserId = userId;
            }

            public string LoginProvider { get; set; }
            public string RedirectUri { get; set; }
            public string UserId { get; set; }

            public override void ExecuteResult(ControllerContext context)
            {
                var properties = new AuthenticationProperties { RedirectUri = RedirectUri };
                if (UserId != null)
                {
                    properties.Dictionary[XsrfKey] = UserId;
                }
                context.HttpContext.GetOwinContext().Authentication.Challenge(properties, LoginProvider);
            }
        }
    }
}

Listing 4: Web.Config

<?xml version="1.0" encoding="utf-8"?>
<!--
  For more information on how to configure your ASP.NET application, please visit
  https://go.microsoft.com/fwlink/?LinkId=301880
  -->
<configuration>
  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="SAML_IDP_URL" value="http://localhost:52071/" />
    <add key="x509_File_Path" value="~/App_Data/stubidp.sustainsys.com.cer"/>
    <add key="Private_Key_File_Path" value="/App_Data/Sustainsys.Saml2.Tests.pfx"/>
    <add key="Entity_ID" value="http://localhost:57234/Saml2"/>
    <add key="ServiceProvider_Return_URL" value="http://localhost:57234/Account/ExternalLoginCallback"/>
    <add key="SAML_IDP_Org_Name" value="Sustainsys"/>
    <add key="SAML_IDP_Org_Display_Name" value="Sustainsys AB"/>
  </appSettings>
</configuration>

Claims returned from the identity provider to the service provider:



PaaS & Cloud-Native Technologies

If you have worked with Azure for a while, you’re aware of the benefits of PaaS, such as the ability to have the cloud provider manage the underlying storage and compute infrastructure so you don’t have to worry about things like patching, hardware failures, and capacity management. Another important benefit of PaaS is the rich ecosystem of value-add services like database, identity, and monitoring as a service that can help reduce time to market.

So if PaaS is so cool, why are cloud-native technologies like Kubernetes and Prometheus all the rage these days? In fact, it's not just Kubernetes and Prometheus; there is a groundswell of related cloud-native projects. Just visit the cloud-native landscape to see for yourself.

Key Benefits of Cloud-Native Architecture

Here are ten reasons why cloud-native architecture is getting so much attention:

  1. Application as a first-class construct – Rather than speaking in terms of VMs, storage, firewall rules, etc., cloud-native is about application-specific constructs, whether it is a Helm chart that defines the blueprint of your application or a service mesh configuration that defines the network in application-specific terms.
  2. Portability – Applications can run on any CNCF-certified cloud, as well as on on-premises and edge devices. The API surface is exactly the same.
  3. Cost efficiency – By densely packing the application components (or containers) on the underlying cluster, running an application becomes significantly more cost-efficient.
  4. Extensibility model – A standards-based extensibility model allows you to tap into innovations offered by the cloud provider of your choice. For instance, using the service catalog and Open Service Broker for Azure, you can package a Kubernetes application with a service like Cosmos DB.
  5. Language agnostic – Cloud-native architecture can support a wide variety of languages and frameworks, including .NET, Java, Node, etc.
  6. Scale your ops teams – Because the underlying infrastructure is decoupled from the applications, there is greater consistency at the lower levels of your infrastructure. This allows your ops team to scale much more efficiently.
  7. Consistent and "decoupled" – In addition to greater consistency at the lower levels of infrastructure, application developers are exposed to a consistent set of constructs for deploying their applications, for example Pod, Service, Deployment, and Job. These constructs remain the same across cloud, on-premises, and edge environments. Furthermore, these constructs also help decouple developers from the underlying layers (the cluster, kernel, and hardware layers) shown in the diagram below.
  8. Declarative model – Kubernetes, Istio, and other projects are based on a declarative, configuration-based model that supports self-healing. This means that any deviation from the "desired state" is automatically "healed" by the underlying system. Declarative models reduce the need for imperative automation scripts that can be expensive to develop and maintain.
  9. Community momentum – As stated earlier, the community momentum behind the CNCF is unprecedented. Kubernetes is the #1 open-source project in terms of contributions. In addition to Kubernetes and Prometheus, there are close to 500 projects that have collectively attracted over $5B of venture funding! In the latest survey (August 2018), the use of cloud-native technologies in production had gone up by 200% since December 2017.
  10. Ticket to DevOps 2.0 – Cloud-native delivers the well-recognized benefits of what is being termed "DevOps 2.0": hermetically sealed and immutable container images, microservices, and continuous deployment. Please refer to the excellent book by Viktor Farcic.

Now that we understand the key benefits of cloud-native technologies, let us compare them to a traditional PaaS offering:

Attribute | Traditional PaaS | Cloud-Native as a Service
Portability | Limited | Advanced
Application as a first-class construct | Limited (application construct limited to the specific PaaS service) | Advanced constructs including Helm, network and security policies
Managed offering | Mature (fully managed) | Maturing (some aspects of cluster management currently require attention)
Stateful applications | Advanced capabilities offered by database-as-a-service offerings | Some cloud-native support for stateful applications (however, cloud-native applications can be integrated with PaaS database offerings through the service catalog)
Extensibility | Limited | Advanced (extensibility includes the Container Network Interface and Container Runtime Interface)

Azure & CNCF

Fortunately, Microsoft has been a strong supporter of the CNCF, having joined back in 2017 as a platinum member. Since then, they have made significant investments in a CNCF-compliant offering in the form of Azure Kubernetes Service (AKS). AKS combines the aforementioned benefits of cloud-native computing with a fully managed offering – think of AKS as a PaaS solution that is also CNCF compliant.

Additionally, AKS addresses enterprise requirements such as compliance standards and integration with capabilities like Azure AD, Key Vault, Azure Files, etc. Finally, offerings like Azure Dev Spaces and Azure DevOps greatly enhance the CI/CD experience when working with cloud-native applications. I would be remiss not to mention the VS Code extension for Kubernetes, which also brings useful tooling to the mix.

Cloud-Native Use Cases

Here are a few key use cases for cloud-native applications. Microservices are something you would expect, of course. Customers are also using AKS to run Apache Spark. There is also thinking around managing IoT Edge deployments right from within the Kubernetes environment. Finally, there's "lift and shift to containers" – this use case is getting a lot of attention from customers as the preferred route for moving on-premises applications to the cloud. Please refer to our recent blog post on this very topic, "A 'Modernize-by-Shifting' App Modernization Approach," for more details!

Cloud-Native Scenarios

FREE HALF DAY SESSION: APP MODERNIZATION APPROACHES & BEST PRACTICES
Transform your business into a modern enterprise that engages customers, supports innovation, and has a competitive advantage, all while cutting costs with cloud-based app modernization.

Microsoft Power Platform: Application Development Platform for General Purpose Business Apps

In recent years, with the transition to the cloud, the SharePoint team's recommendation has been to move custom functionality "down" to the client computer or "over" to a host outside of SharePoint. This change has had a direct impact on enterprises that have long viewed SharePoint as an application development platform.

Today’s best practices show that a grouping of applications, such as case management apps, inventory or issue tracking, and fleet management (depicted in the dotted area of the diagram below) is no longer best suited to be built on top of SharePoint. This is where Microsoft Power Platform comes in for cross-platform app development.

Microsoft Power Platform as an app development platform

What is the Microsoft Power Platform (MPP)?

Microsoft Power Platform (or MPP) is being seen as the aPaaS layer that nicely complements mature Azure IaaS and PaaS offerings. Here are a few reasons we’re excited about MPP:

  • It offers a low-code/no-code solution for rapid application development
  • The PowerApps component supports cross-platform app development for mobile and responsive web application solutions
  • Flow within MPP also offers a workflow and rules capability for implementing business processes
  • CDS offers a service for storing business objects

SharePoint as an App Development Platform

On January 30, 2007, we released a paper called “SharePoint as an Application Development Platform” to coincide with the release of Microsoft Office SharePoint Server 2007 or MOSS. We didn’t have the slightest expectation that this paper would be downloaded over half a million times from our website.

Based on the broad themes outlined in this paper, we went on to develop several enterprise-grade applications on the SharePoint platform for commercial as well as public sector enterprises. Of course, anyone who participated in SharePoint development in 2007 and onwards can vouch for the excitement around the amazing adoption of SharePoint.

Features, Add-Ins or “Apps,” and More

A few years later, SharePoint 2010 was announced with even more features that aligned with the application development platform theme. In 2013, the SharePoint team went on to add the notion of apps (now called add-ins) – we even wrote a book on SharePoint Apps. The most recent version of SharePoint is 2019 and it continues the tradition of adding new development capabilities.

All the Ingredients for Unprecedented Success

If you look back, it is easy to see why SharePoint was so successful. SharePoint was probably the first platform that balanced self-service capabilities with IT governance. Even though the underlying SharePoint infrastructure was governed by IT, business users could provision websites, document libraries, lists, and more in a self-service manner.

Compare this to the alternative at that time, which was to develop an ASP.NET application from scratch and then have to worry about operationalizing it, including backup, recovery, high availability, etc. Not to mention the content and data silo that may result from yet another application being added to the portfolio.

Furthermore, the “Out of the Box” (OOTB) SharePoint applications constructs including lists, libraries, sites, web parts, structured and unstructured content, granular permissions, and workflows allowed developers to build applications in a productive manner.

With all these ingredients for success, SharePoint went on to become an enterprise platform of choice with close to 200 million licenses sold and, in the process, creating a ten-billion-dollar economy.

By 2013, Signs of Strain Emerged for SharePoint as an App Development Platform

With the transition to the cloud and lessons learned from early design choices, weaknesses in SharePoint as an app development platform started to show. Here are some examples:

  • Limitations around structured data – Storing large lists has always been challenging, despite improvements over the years to increase scalability targets for lists. Combine the scalability challenges with the query limitations and it makes for a less-than-ideal construct for general-purpose business application development.
  • Isolation limitations – SharePoint was not designed with isolation models for custom code. As a result, SharePoint farm solutions that run as full-trust solutions on the server side have fallen out of favor. The introduction of sandbox mode didn’t help since isolation/multi-tenancy was not baked in the original SharePoint design. The current recommendation is to build customizations as add-ins that run on the client side or on an external system.
  • External data integration challenges – The BDC (Business Data Catalog) was designed to bring data from external systems into SharePoint. The Business Connectivity Services (BCS) extensibility model was designed to allow an ecosystem of third-party and custom connectors to be built, but BCS never gained significant adoption, and third-party support remained limited. Furthermore, BCS is restricted in the online model.
  • Workflow limitations – Workflows inside SharePoint are based on Windows Workflow Foundation (WF). WF was designed before REST and HTTP became the lingua franca of integration over the web. As a result, even though SharePoint-based workflows work well for document-centric workflows like approvals, they are limited when it comes to integrating with external systems. Additionally, the combination of WF and SharePoint has not been the easiest thing to debug.
  • Lack of native mobile support – SharePoint was not designed for mobile experiences from the ground up. Over the years, SharePoint has improved the mobile experience through the support for device channels. Device channels allow for selection of different master pages and style sheets based on the target device. But device support is limited to publishing pages and even those require non-trivial work to get right. In a mobile-first world, a platform built with mobile in mind is going to be the ideal cross-platform app development tool for enterprises.
  • Access Services (and other similar services) never took off – Access databases are quintessential examples of DIY Line-of-Business (LOB) apps. So when SharePoint 2010 introduced a capability to publish Access databases to SharePoint, it sought to offer the best of both worlds: self-service combined with governance (the latter being the bane of Access databases). However, that goal proved too good to be true, and Access Services never caught on and is now being deprecated.
  • Development "cliffs" – SharePoint was supposed to enable business users to build their own customizations through tools like Visio and SharePoint Designer. The idea was that business users would build customizations themselves using SharePoint Designer and, if they ran into a limitation (the "cliff"), they could export their artifacts into a professional development tool like Visual Studio. In reality, this dichotomy of tools never worked, and you almost always had to start over.
  • State of the art in low-code/no-code development – If you look at leading high-productivity application development platforms, the state of the art seems to be a declarative, model-driven application approach. In other words, using a drag-and-drop UI, a user generates a metadata-based configuration that describes the application, the flow of application pages, etc. At runtime, the underlying platform interprets this configuration and binds the actions to the built-in database. SharePoint obviously has a rich history of offering no-code solutions, but it is not based on a consistent and common data model and scripting language.
  • Monolith versus micro-services – In many ways, SharePoint has become a “monolith” with tons of features packed into one product — content management, records management, business process, media streaming, app pages — you name it. Like all monoliths, it may make sense to break the functionality into “micro” services.

Note: Despite all the challenges listed above, SharePoint as collaboration software continues to grow. In fact, it's stronger than ever, especially when it comes to building collaboration-centric apps and solutions using the SharePoint Framework and add-ins.

Just visit the thriving open source-based development centered around SharePoint Development to see for yourself.

Enter the Microsoft Power Platform (MPP)

The below diagram depicts a high-level view of the Microsoft Power Platform (MPP).

Microsoft Power Platform Infrastructure Overview

The “UI” Layer: Power Apps, Power BI, and Flow

At the highest level, you have the Power Apps, Power BI, and Flow tools.

PowerApps is a low-code platform-as-a-service (PaaS) solution that allows for business app development with very little code. These apps can be built with drag-and-drop UI elements that work across mobile, tablet, and web form factors. In addition to the visual elements, there's an Excel-like language called the PowerApps Expression Language that's designed for implementing lightweight business logic and binding visual controls to data. Since PowerApps comes with a player for Android and iPhone devices, apps don't need to be published to or downloaded from app stores.

Additionally, admin functions like publication, versioning, and deployment environments are baked into the PowerApps service. The PaaS solution can be used to build two types of apps:

  1. Canvas apps – as the name suggests, these allow you to start with a canvas and build a highly-tailored app.
  2. Model-driven apps – also as the name suggests, these allow you to auto-generate an app based on a “model” – business and core processes.

You're likely already familiar with Power BI. It's a business analytics as a service offering that allows you to create rich and interactive visualizations from your data sources.

Flow is a Platform as a Service (PaaS) capability that allows you to quickly implement business workflows that connect various apps and services.

Working together, PowerApps, Flow, and Power BI allow business users to easily and seamlessly build the UI, business process, and BI visualizations of a cross-platform, responsive business application. These services are integrated to make the experience even better. As an example, you can embed a Canvas PowerApp inside Power BI or vice versa.

The “Datastore and Business Rules” Layer

Common Data Service (CDS) allows you to securely store and manage data used by business applications. In contrast to a database-as-a-service offering like Azure SQL Database, think of CDS as "business objects as a service". Azure SQL Database removes the need to manage the physical aspects of the database, but as a consumer of Azure SQL you still own the logical aspects of the data, such as schema, indexing and query optimization, and permissions.

In CDS, all you do is define your business entities and their relationships. All logical and physical aspects of the database are managed for you. In addition, you get auditing, field-level security, an OData API, and rich metadata for free.
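
As a hedged sketch of what that OData API affords (the org URL and entity set below are placeholders, and the call assumes a fetch implementation such as node-fetch plus a valid Azure AD bearer token):

const fetch = require("node-fetch");

// Query the first five accounts from a CDS-style OData endpoint.
async function topAccounts(accessToken) {
    const res = await fetch(
        "https://yourorg.crm.dynamics.com/api/data/v9.0/accounts?$select=name&$top=5",
        { headers: { Authorization: `Bearer ${accessToken}`, Accept: "application/json" } });
    const body = await res.json();
    return body.value;  // OData responses wrap results in a "value" array
}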

Finally, CDS offers a place to host “server-side” business rules in the form of actions and workflows. It’s important to note that CDS is powered by Dynamics CRM under the covers (see [1] for more information on this). This means that any skills and assets that your team has around Dynamics will seamlessly transition to CDS.

“External Systems Connector” Layer

PowerApps comes with a large collection of connectors that allow you to connect with a wide array of third-party applications and services (like Salesforce and Workday) and bind their data to PowerApps visual controls. In addition, you can connect to a custom app or service via Azure API Management and Functions.

How Can MPP Alleviate SharePoint Development Challenges?

MPP is a platform designed from the ground up for building business apps. Here’s how it can help alleviate the challenges users may experience with SharePoint as an app development platform.

Challenge | SharePoint as a dev platform | Power Platform
Structured data | Limits on items, throttling, and queries | Backed by a fully relational model
Isolation | No server-side isolation model; server-based farm solutions are discouraged | Each tenant is isolated and allows for running custom code
External data integration | BCS – limited third-party support | 230+ connectors
Workflow limitations | Document-centric workflow | HTTP/REST-based scalable workflow constructs that can leverage OOTB connectors
Mobile support | Limited support in the form of device channels | Designed from the ground up for mobile support
Development "cliffs" | Hard to transition from citizen-dev to pro-dev tools | A single tool for citizen developers and pro-dev users; pro-dev users can use the extensibility to call Azure Functions, Logic Apps, etc. for richer and more complex functionality
Low-code development | No common domain model or scripting language | Designed from the ground up as a low-code development environment
Monolith vs. micro-services | Monolith | Comprised of a collection of "micro" services: PowerApps, Flow, Power BI, CDS

Comparing Reference Architectures

The following diagram compares the SharePoint and MPP reference architectures.

SharePoint and MPP Reference Architectures Compared

Power Platform Components Inside SharePoint?

It's noteworthy that advancements in MPP are also made available in SharePoint. For example, "standalone" Canvas PowerApps and Flow integrate directly into SharePoint, as shown below.

In many ways, this development represents a realignment of SharePoint’s place in the application development space. We believe that MPP is the new “hub”, while SharePoint (and other Office 365 components) represent “spokes” in this model.

PowerApps as a “custom form” for a SharePoint list item.


Flow powering a document-centric workflow


Consider MPP for Your Next Business App Development Project

The Microsoft Power Platform is an aPaaS offering that's designed from the ground up as a general-purpose business application development platform, which includes native mobile support, a first-class low-code environment, business data as a service, a lightweight workflow engine, and rich business analytics, all in one. We believe that a category of applications that were previously built on SharePoint will benefit from moving to the Microsoft Power Platform.

[1] xRM with Dynamics CRM

xRM-style applications have been built on Dynamics CRM for years. Unfortunately, the licensing story to support this model was not ideal. For example, customers could not license a "base" version without paying for the sales and marketing functionality built in out of the box.

That's all changing with CDS. PowerApps licenses, such as P1 and P2, give customers access to what's referred to as the "Application Common" or base instance of CDS.

CDS Apps Instance vs Dynamics Instance

Learn More About MPP & Your App Development Platform Options

The road to general-purpose business application development can be challenging to navigate with all of the tools and platforms available, as well as the unique pros and cons of each option. Not sure where to start? Start with a conversation!

START THE CONVERSATION
See if MPP is a good fit for your org's app development needs. Contact your partner at AIS today!

AIS Gets Connection of DoD DISA Cloud Access Point at Impact Level 5

Getting the DoD to the Cloud

Our team was able to accomplish the near-impossible: we connected to the DoD DISA Cloud Access Point at Impact Level 5, meaning our customer can now connect to their Azure subscription and store any unclassified data they want on it.

About the Project

The project started in July 2017 to connect an Azure SharePoint deployment to the DoD NIPRnet at Impact Level 5. Throughout the process, the governance and rules of engagement were a moving target, presenting challenges at every turn.

Thanks to the tenacity and diligence of the team, we successfully achieved connection to the Cloud Access Point (CAP) on September 6th, 2018. This was a multi-region, always-on SharePoint IaaS deployment with two connections, and it involved completing all required documentation for the DISA Connection (SNAP) process.

We are now moving towards the first Azure SharePoint Impact Level 5 production workload in the DoD, so be sure to stay tuned for more updates.

A Repeatable Process for Government Cloud Adoption

Azure Government was the first hyperscale commercial cloud service to be awarded an Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency, and this was the first public cloud connection on Azure in the DoD 4th Estate.

With a fully scripted, repeatable cloud deployment, including Cloud Access Point connection requirements, we can now get government agencies to the cloud faster and more securely than ever before.

We work with fully integrated SecDevOps processes and can leverage Microsoft's Azure security team for assistance in identifying applicable security controls, including inherited, shared, and customer-required controls.

See how you can make the cloud work for you. Contact AIS today to start the conversation, or learn more about our enterprise cloud solutions.

HARNESS THE POWER OF CLOUD SERVICES FOR YOUR ORG
Discover how AIS can help your org leverage the cloud to modernize, innovate, and improve IT costs, reliability, and security.

In this blog post, I discuss an app modernization approach that we call "modernize-by-shifting." In essence, we take an existing application and move it to "managed" container hosting environments like Azure Kubernetes Service or Azure Service Fabric Mesh. The primary goal of this app modernization strategy is to make the smallest possible change to the existing application codebase. This approach is markedly different from a "lift-and-shift" approach, where workloads are migrated to cloud IaaS unchanged, with little to no use of cloud-native capabilities.

Step One of App Modernization by Shifting

As the first step of this approach, an existing application is broken into a set of container images that include everything needed to run a portion of the application: code, runtime, system tools, system libraries, and settings. Approaches to breaking up the application into smaller parts can vary based on the original architecture. For example, if we begin with a multi-tier application, each tier (e.g., presentation, application, business, data access) could map to a container image. While this approach will admittedly lead to coarser-grained images compared to a puritanical microservices-based approach of lightweight images, it should be seen as the first step in modernizing the application.


I put together a two-part video presentation on how (and why!) to take on-premises applications and move them to the cloud (specifically the Azure PaaS platform), and how to do it quickly.

The second video continues the process and covers app modernization with Service Fabric.
