This is the second of two blog posts sharing solutions to common Azure DevOps Services concerns:

Storing Service Connection Credentials

To deploy Azure infrastructure from an Azure DevOps pipeline, your pipeline agent needs permission to create resources in your subscription. These permissions are granted in Azure DevOps with a Service Connection. You have two options when creating this service connection:

  1. Service Principal Authentication
  2. Managed Identity Authentication

If you choose to use a Service Principal, you must store a secret or certificate for an Azure Active Directory (AAD) application registration in Azure DevOps. Since this secret is equivalent to a service account password, some customers may be uncomfortable storing it in Azure DevOps. With the alternative option, Managed Identity Authentication, you run your pipeline agent on an Azure virtual machine with a system-assigned managed identity and give that identity the necessary permissions to your Azure resources. In this configuration, the credentials needed to authenticate stay with the pipeline agent inside your private network and are never stored in Azure DevOps.

Storing Sensitive Information in the Backlog

It is important for the User Stories in the backlog to be well defined so that everyone involved in the development process understands how to define “done”. In some environments, the requirements themselves could contain information that is not appropriate to store in Azure DevOps.

You will need to ensure your team has a strong understanding of what information should not be captured in the backlog. For example, your organization may not allow the team to describe system security vulnerabilities, but the team needs to know what the vulnerability is to fix it. In this case, the team may decide to define a process in which the backlog contains placeholder work items that point to a location that can safely store the details of the work that are considered too sensitive to store within the backlog itself. To reinforce this process, the team could create a custom work item type with a limited set of fields.

New Security Vulnerability

This is an example of how a tool alone, such as Azure DevOps, is not enough to fully adopt DevOps. You must also address the process and the people.

Storing Sensitive Information in Code

Depending on your organization and the nature of your project, the content of the source code could also become an area of concern. Most developers are aware of the techniques used to securely reference sensitive values at build or runtime so that these secrets are not visible in the source code.

A common example is placing a reference to an Azure Key Vault secret in your code and pulling the secret down at runtime. Key Vault makes it easy to keep secrets out of your code in both application development and infrastructure-as-code development, whether by using the secret reference function in an Azure Resource Manager (ARM) template or by retrieving secrets at compile time in your Desired State Configurations (DSC).
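For the application-runtime case, a minimal sketch using the Azure SDK for .NET (the Azure.Identity and Azure.Security.KeyVault.Secrets packages) might look like the following; the vault URL and secret name are placeholders:

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class SecretLoader
{
    public static async Task<string> GetConnectionStringAsync()
    {
        // DefaultAzureCredential works with managed identities in Azure and developer credentials locally,
        // so no secret ever needs to appear in source control.
        var client = new SecretClient(
            new Uri("https://my-vault.vault.azure.net/"),   // placeholder vault URL
            new DefaultAzureCredential());

        KeyVaultSecret secret = await client.GetSecretAsync("Sql-ConnectionString"); // placeholder secret name
        return secret.Value;
    }
}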

Features in some code scanning tools, such as Security Hotspot detection in SonarQube, can be used to detect sensitive values before they are accidentally checked in, but, as with backlog management, do not rely entirely on tools to get this done. Train your team to think about security throughout the entire development lifecycle and develop a process that can detect and address mistakes early.

Storing Deployment Logs

Even if you are using a private pipeline agent running on your internal network, deployment logs are sent back to Azure DevOps. You must be aware of any logs that contain information that your organization's policies do not allow to be stored in Azure DevOps. It is difficult to be certain of the contents of the logs because they are not typically included in any type of quality assurance or security testing, and because they will constantly evolve as your application changes.

The solution is to filter the logs before they leave your internal network. While the pipeline agent has no configuration option to do this, you can design your pipeline to execute deployment jobs within a process you control. David Laughlin describes two ways to achieve this in his post Controlling Sensitive Output from Azure Pipelines Deployment Scripts.

Conclusion

The Azure DevOps product is becoming increasingly valuable to government development teams as they adopt DevSecOps practices. You have complete control over Azure DevOps Server at the cost of additional administrative overhead. With Azure DevOps Services, Microsoft takes much of the administrative burden off your team, but you must consider your organization’s compliance requirements when deciding how the tool is being used to power your development teams. I hope the solutions I have presented in this series help your team decide which to choose.

Relational database source control, versioning, and deployments have notoriously been challenging. Each instance of the database (Dev, Test, Production) can contain different data, may be upgraded at different times, and is generally not in a consistent state. This is known as database drift.

Traditional Approach and Challenges

Traditionally, to move changes between each instance, a one-off “state-based” comparison is done either directly between databases or against a common state like a SQL Server Data Tools project. This yields a script that has no direct context for the changes being deployed and requires tremendous effort to review to ensure that only the intent of the changes being promoted or rolled back is included. This challenge sometimes leads to practices like backing up a “known good copy”, i.e. production, and restoring it to lower tiers. For all but the smallest applications and teams, this raises even more challenges, like data governance and the logistics around test data. These patterns can be automated, but they generally do not embrace the spirit of continuous integration and DevOps.

State Based

For example, the three changes shown above could be adding a column, then adding data to a temporary table, and finally populating the new column with the data from the temporary table. In this scenario, it isn't only important that a new column was added; how the data was added also matters. The context of the change is lost, and trying to derive it from the final state of the database is too late in the process.

DevOps Approach

Architecturally, application persistence (a database) is an aspect or detail of an application, so we should treat it as part of our application. We use continuous integration builds to compile source code into artifacts and promote them through environments. Object-Relational Mapping (ORM) frameworks like Entity Framework and Ruby on Rails have, out of necessity, paved the way with a “migrations” change-based approach. The same concept can be used for just the schema with projects like FluentMigrator. At development time, the schema changes to upgrade and roll back are expressed in the framework or scripted DDL and captured in source control. They are compiled and included in the deployment artifact. When the application invokes a target database, it identifies the current version and applies any changes up or down sequentially to provide deterministic version compatibility. The application is in control of the persistence layer, not the other way around. It also forces developers to work through the logistics (operations) of applying the change. This is the true essence of DevOps.

Migration Scripts

In the same example above, the three changes would be applied to each database in the same sequence, and the intent of each change would be captured along with it.
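To make this concrete, here is a rough sketch of how such a change could be captured with FluentMigrator; the table and column names are hypothetical:

using FluentMigrator;

[Migration(20200601001)]
public class AddPreferredNameToCustomer : Migration
{
    public override void Up()
    {
        // The schema change and the data movement travel together, so the intent is preserved.
        Alter.Table("Customer").AddColumn("PreferredName").AsString(100).Nullable();
        Execute.Sql("UPDATE Customer SET PreferredName = FirstName WHERE PreferredName IS NULL");
    }

    public override void Down()
    {
        Delete.Column("PreferredName").FromTable("Customer");
    }
}

At deployment time, the migration runner checks which versions have already been applied to the target database and runs only the outstanding migrations, in order.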

Summary

In summary, a migrations-based approach lends itself to a DevOps culture. It may take some additional effort up front to work through and capture how changes should be applied, but it allows all aspects of the database deployment process to be tested throughout a project lifecycle. This promotes repeatability and ultimately the confidence needed to perform frequent releases.

In my previous post, I discussed lessons learned about migrating SQL Server databases from on-premises to Azure SQL Databases. This post will share several of my issues and solutions around automating Azure resource provisioning and application code deployment. Before this project, I had never used Azure DevOps or Team Foundation Server for anything except a code repository or task tracking. Most of these applications would use Build and Release pipelines to configure the Azure subscription and deploy the applications. This was also my first foray into Azure Resource Manager (ARM) templates.

Azure Resource Manager (ARM) Templates

ARM templates provide a declarative method for provisioning and configuring Azure resources. These JSON files use a complex schema that provides deep configuration options for most of the resources available in Azure. Some of the key benefits of ARM templates are:

  • Idempotent: templates can be redeployed many times returning the same result. If the resource exists, changes will be applied, but the resource will not be removed and recreated.
  • Intelligent processing: Resource Manager can manage the order of resource deployment, provisioning resources based on dependencies. It will also process resources in parallel whenever possible, improving deployment times.
  • Modular templates: JSON can get incredibly hard to maintain when files are large. ARM provides Linked Templates to split resources into separate files to simplify maintenance and readability. This also provides the opportunity to reuse some templates in multiple deployment scenarios.
  • Exportable: Resources have an option to export the template in Azure Portal. This is available if you’re creating a new resource at the validation stage or in the resource’s management pane. A Resource Group also provides a method for exporting the template for multiple resources. This was very useful for understanding more advanced configurations.

For the first few projects, I built large templates that deployed several resources. This presented several hurdles to overcome. First, large templates are hard to troubleshoot. Unlike scripting solutions, no debugger allows you to step through the deployment. The portal does provide logging, but some of the error descriptions can be vague or misleading especially in more advanced configuration settings. In situations where there are multiple instances of one type of resource, there may be no identifying information for which instance caused the error. In theory, linked templates would be a way to handle this, but linked templates require a publicly accessible URI. The client’s security rules did not allow this level of exposure. The best solution was to add resources one at a time, testing until successful before adding the next resource.

I also had some minor issues with schema support in Visual Studio Code. The worst of it was false positive warnings on the value of the “apiVersion” property for a resource not being valid despite the schema documentation showing that it is. This didn’t cause any deployment errors, just the yellow “squiggly” line under that property. Another inconvenience to note is when exporting a template from the portal, not all resources will be in the template. This was most noticeable when I was trying to find the right way of adding an SSL certificate to an App Service. The App Service had a certificate applied to it, but the template did not include a resource with the type, Microsoft.Web/certificates.

While templates are supposed to be idempotent, there are some situations where this isn't the case. I found this out with Virtual Machines and Managed Disks. I had a template that created the disks and attached them to the VM, but I found out later that the disk space was too small. Changing the “diskSizeGB” property and re-deploying fails because disks cannot be resized while they are attached to a running VM. Since this isn't likely to happen once we are out of the lab environment, I changed the sizes in the portal by deallocating the VM and resizing the disks there.

Azure & TFS Pipelines

Azure Pipelines, part of the Azure DevOps Services/Server offering, provides the capability to automate building, testing, and deploying applications or resources. Team Foundation Server (TFS), the predecessor to Azure DevOps, also offers pipeline functionality. These projects use both solutions. Since there is no US Government instance of Azure DevOps Services, we use TFS 2017 deployed to a virtual machine in the client's environments, but we use Azure DevOps Services for our lab environment since it's less expensive than standing up a VM. While the products are very similar, there are some differences between the two systems.

First, TFS does not support YAML pipelines. YAML lets you manage the pipeline configuration as source code; it becomes part of the repository, making it more integrated into your Git flow (versioning, branching, etc.). Also, YAML is just text, and editing text is much quicker than clicking through tasks to change textbox values.

Another difference is the ability to change release variables at queue time. Azure DevOps release variables can be flagged “Settable at release time.” This is very useful as we typically have at least two, if not three, instances of an application running in a lab environment. A variable for the environment can be added and set at release time, making a single pipeline usable for all the environments instead of having to either edit the pipeline and change the value or create multiple pipelines that do essentially the same thing.

Create a New Release

There were some positive lessons learned while working with both services. Pipelines could be exported and imported between the two with only minor modifications to settings. Since we were using the classic pipeline designer, creating pipelines takes far more mouse clicks, so this saved time and cut down on human error when configuring the tasks. Exporting a pipeline generates a JSON file. Once imported into another environment, there were usually one or two changes that had to be made because the JSON schema uses IDs to reference other resources, such as Task Groups, instead of a name.

Task Groups provide a way to group a set of tasks to be reused across multiple pipelines, and they can be exported, too. In some projects, we had multiple pipelines that were essentially the same process, but with different settings on the tasks. For example, one client had eight web applications that deploy to a single Azure App Service. One was at the root while the others were in their own virtual directories. Also, each had its own development cycle, so it wouldn't make sense to deploy all eight every time one needed updates. We created a build and release pipeline for each application. Creating two Task Groups, one for build and one for release, allowed us to add a reference to the appropriate group in each pipeline and just change the parameters passed to it.

New Pipeline

Conclusion

Automating resource provisioning and application deployment saves time and creates a reliable, reusable process compared to a manual alternative. ARM templates provide deep, complex customization, in some cases even more than Azure PowerShell or the Azure CLI. Pipelines then take those templates and consistently provision those resources across environments. It would have made life much easier in several of my past roles. While there were some “gotchas” with the various technologies, Azure has been developed with automation as a top priority.

One of our clients has been pushing hard to migrate all of their application infrastructure to Azure. Some of the applications have been using on-prem file servers, and we have been looking at the different options available to migrate these file shares to Azure. We looked at Azure Blob storage, Azure Files, and Azure Disks to find the most fitting solution that would offer high performance, permissions at the folder level, and long-term backup retention.

Although Azure Blob storage is great for massive-scale storage, it was easy to dismiss because our applications were using the native file system and didn't have to support any streaming or random-access scenarios. Azure Files offered fully managed file shares in the cloud. Unlike Azure Blob storage, Azure Files offers SMB access to Azure file shares. By using SMB, we could mount an Azure file share directly on Windows, Linux, or macOS, either on-premises or in cloud VMs, without writing any code or attaching any special drivers to the file system. It provided Windows-like file locking, but there were some limitations with Azure Files: we didn't have folder- or file-level control over permissions, and the only way to accomplish that was to create shared access signatures on folders where we could specify read-only or write-only permissions, which didn't work for us. Azure Files Backup provided an easy way to schedule backup and recovery; however, an important limitation was that backups could retain files only for a maximum of 180 days.

Azure Disks fulfilled all our requirements. Although running a file server with Azure Disks as back-end storage was much more expensive than an Azure file share, it was the highest-performance file storage option in Azure, which was very important in our scenario since files were used in real time under heavy load. For compliance and regulatory reasons, all files needed to be backed up, which could easily be done by Azure Virtual Machine Backup without any additional maintenance. The only limitation of Azure Virtual Machine Backup was that it only supported disks smaller than 4 TB, so if a need for additional storage arose in the future, that would mean having multiple disks in a striped volume. Also, after implementing the file server in an Azure VM, we could still get the best of both Azure Files and data disks by using Azure File Sync: having the file server and an Azure file share in a sync group would ensure minimal duplication and let us set the amount of free space to keep on the volume. So finally, we decided to deploy a file server on an Azure Windows VM with premium SSDs.

Troubleshooting

After we deployed the file server in an Azure Virtual Machine, everything worked like a charm, and it seemed we had found an ideal solution in Azure for our file servers. However, after some time we began to encounter intermittent issues in the VM: CPU usage would near 100 percent, shares would become inaccessible, and the VM would hang at the OS level, bringing everything to a halt.

CPU Average

Generally, to troubleshoot issues in Azure VM, we can connect to a VM using the below tools:

  • Remote CMD
  • Remote PowerShell
  • Remote Registry
  • Remote services console

However, the issue we were encountering hung the OS itself, so we could not use the above tools to troubleshoot the VM. There was not even an event log entry that would indicate a possible guest OS hang. So, the only option left was to generate a memory dump to find the root cause of the issue. Next, I will explain how to configure the Azure VM for a crash dump and how to trigger NMI (Non-Maskable Interrupt) crash dumps from the serial console.

Serial Console is a console for Azure VMs that can be accessed from the Azure portal for VMs deployed using the resource management deployment model. It connects directly to the COM1 serial port of the VM. From Serial Console we can start a CMD/PowerShell session or send an NMI to the VM. An NMI creates a signal that the VM cannot ignore, so it is used as a mechanism to debug or troubleshoot systems that are not responding.

To enable Serial Console, we need to RDP into the VM and run the below commands.

bcdedit /ems {default} on
bcdedit /emssettings EMSPORT:1 EMSBAUDRATE:115200

We also need to execute the below commands to enable boot loader prompts.

bcdedit /set {bootmgr} displaybootmenu yes
bcdedit /set {bootmgr} timeout 7
bcdedit /set {bootmgr} bootems yes

When the VM receives an NMI, its response is controlled by the VM configuration. Run the below commands in Windows CMD to configure it to crash and create a memory dump file when receiving an NMI.

REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v DumpFile /t REG_EXPAND_SZ /d "%SystemRoot%\MEMORY.DMP" /f
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v NMICrashDump /t REG_DWORD /d 1 /f
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled /t REG_DWORD /d 1 /f

Before we can use Serial Console to send an NMI, we also need to enable Boot diagnostics for the VM in the Azure portal. Now we are all set to generate the crash dump. Whenever the VM hangs, all we need to do is open the Serial Console in the Azure portal, wait until it displays the SAC prompt, and select Send Non-Maskable Interrupt (NMI) from the menu, as shown below.

Serial Console in Azure Portal

In our case, we had been using the FireEye service for security purposes. After we analyzed the memory dump, we found that FeKern.sys (FireEye) was stuck waiting for a spinlock and the driver was exhausting CPU time. FeKern.sys is a file system filter driver that intercepts requests and, in this case, was not able to handle the load. There can be varied reasons for unresponsive VMs, and NMI crash dumps can be of great help in troubleshooting and resolving such issues.

Happy Computing!
Sajad Deyargaroo

This blog post is for developers of all levels who are looking for ways to improve efficiency and save time. It begins by providing some background on me and how my experience with Microsoft Excel has evolved and aided me as a developer. Next, we cover a scenario where Excel can be leveraged to save time. Finally, we go over a step-by-step example using Excel to solve the problem.

Background

As a teenager growing up in the 80s, I was fortunate enough to have access to a computer. One of my favorite applications to use as a kid was Microsoft Excel. With Excel, I was able to create a budget and a paycheck calculator to determine my meager earnings from my fast food job. As my career grew into software development, leveraging all of the tools at my disposal as a solution against repetitive and mundane tasks made me more efficient. Over the years, colleagues have seen solutions I have used and have asked me to share how I came up with and implemented them. In this two-part blog post, I will share the techniques that I have used to generate C#, XML, JSON, and more. I will use data-loading in Microsoft Power Apps and Dynamics as a real-world example; however, we will need to start with the basics.

The Basics

Before going into the data-loading example, I wanted to provide a very simple example. Keep in mind that there may be more effective solutions to this specific example that do not use Excel; however, I am using it to illustrate this simple example. Let’s say you had a data model and a contact model that, for the most part, were the same with the exception of some property names, and you needed to write methods to map them. You know the drill:

var contact = new Contact();
contact.FirstName = datamodel.firstName;
contact.LastName = datamodel.lastName;
contact.PhoneNumber = datamodel.phoneNumber;
contact.CellPhone = datamodel.mobileNumber;

Not a big deal, right? Now let's say you have a hundred of these to do and each model may have 50+ properties! This would very quickly turn into a time-consuming and mundane task, not to mention you would likely make a typo along the way that another developer would be sure to let you know about in the next code review. Let us see how Excel could help in this situation.

In this scenario, the first thing you will need is the row data for the contact and data models. One way to get it is from the property declarations. Consider the classes below:

Use Properties to Identify Classes

  1. Create 3 Excel worksheets called Primary, Secondary, and Generator
  2. Copy/paste the property statements from Contact into Primary worksheet and ContactDataModel into a Secondary worksheet.
  3. Select Column A in the Primary worksheet
    Create three Excel Worksheets
  4. In Excel, select the Data tab and then Text to Columns
  5. Choose Delimited, then Next
    Choose Delimited
  6. Uncheck all boxes and then check the Space checkbox, then Finish
    Uncheck All Boxes
  7. Your worksheet should look like the following:
    Sample of Worksheet
  8. Repeat 3-7 with the Secondary worksheet
  9. Select cell A1 and then press the = key
  10. Select the Primary worksheet and then cell D1
  11. Press the Enter key, you should return to the Generator worksheet and the text “FirstName” should be in cell A1
  12. Select cell B1 and then press the = key
  13. Select the Secondary worksheet and then cell D1
  14. Press the Enter key; you should return to the Generator worksheet and the text “firstName” should be in cell B1
  15. Drag and select A1:B1. Click the little square in the lower-right corner of your selection and drag it down to row 25 or so. (Note: you would need to keep dragging these cells down if you added more classes.)
    You will notice that by dragging the cells down, it incremented the rows in the formula.
    Incremented Rows in the Formula
    Press CTRL+~ to switch back to values.
  16. Select cell C1 and enter the following formula:
    =IF(A1=0,"",A1 & "=" & B1 & ";")
    As a developer, you probably already understand this, but the IF statement checks whether A1 has a value of 0 and simply returns an empty string if so. Otherwise, it builds the concatenated string.
  17. Similar to an earlier step, select cell C1 and drag the formula down to row 25. Your worksheet should look like:
    Select and Drag Formula
  18. You can now copy/paste the values in column C into the code:
    Copy and Paste Values into Column C

As you continue on, Excel keeps track of the most recent Text to Columns settings used; so, if you paste another set of properties into the Primary and Secondary worksheets, you should be able to skip the Text to Columns steps for the remaining classes. In the sample class file and workbook, I have included Address models as an illustration.
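If you want the generated lines to match the mapping code shown at the beginning (with the object names included), a small variation of the step 16 formula will do it; contact and datamodel here are simply the variable names from that earlier example:

=IF(A1=0,"","contact." & A1 & " = datamodel." & B1 & ";")

Dragging this version down column C produces ready-to-paste statements like contact.FirstName = datamodel.firstName;.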

Next Steps

This example has covered the basic concepts of code generation with Microsoft Excel: extracting your data and writing the formulas that generate the necessary code. Depending on what you are trying to accomplish, these requirements may grow in complexity. Be sure to consider the time investment and payoff of using code generation, and use it where it makes sense. One such investment that has paid off for me is data loading in Microsoft Power Apps, which we will cover in the next post: Code Generation with Microsoft Excel: A data-loading exercise in Microsoft Power Apps.

Download Example Workbook

Download Address Models

Application Performance Management (APM) is the monitoring and management of the performance and availability of a software application. A proactive approach to monitoring an application can reduce business escalations and increase the application's availability to greater than 99.9%. It helps detect and diagnose complex application issues with minimal effort. The customer experience quotient is a powerful metric for understanding whether the end-user of an application is pleased or disappointed with how the application is performing. APM tools can be used by IT and business teams to better understand customers' pain points and identify opportunities to continually improve an application's features. The tool monitors multiple aspects of an application, like the application layer, network servers, integration points to other systems, databases, workflow processes, etc.

Dynatrace is a software-intelligence monitoring platform that simplifies enterprise cloud complexity and accelerates digital transformation. It can be used for real user monitoring, mobile app monitoring, server-side service monitoring, network monitoring, process and host monitoring, cloud and virtual machine monitoring, container monitoring, and root cause analysis.

In this article, I will explain the process of setting up a Dynatrace Synthetic browser-type monitor, which helps monitor the availability and performance of web applications as experienced by an end-user. It provides 24/7 visibility into how an application is performing, which helps application teams make critical decisions. Dynatrace offers two types of Synthetic monitoring:

  1. Browser Type Monitors
  2. HTTP Type Monitors

Google Chrome is the supported browser for building synthetic monitors.

Creating a Browser Monitor

After logging into Dynatrace with the appropriate credentials click on the Synthetic Option.

Log into Dynatrace and Click on Synthetic icon

Step 1: Click on Create a browser monitor.

Install Dynatrace Synthetic Recorder Chrome
Step 2: Install the Dynatrace Synthetic Recorder Chrome extension. This extension will help us in recording the click paths easily.

Install the Dynatrace Synthetic Recorder Chrome extension
Step 3: In the next step we provide a name for the monitor and the URL for the application or an endpoint that we plan to monitor.

Name for the monitor and the URL

Step 4: Click on the Record Click path option so that the Dynatrace Synthetic Recorder starts recording the actions you perform. In the recording browser instance that pops up, perform the necessary actions to simulate the use case you want to monitor.

Click on the Record Click path option so that Dynatrace Synthetic recorder starts recording

After performing the necessary actions click on the Cancel option.
Events in the recorded click path will be displayed.

Events in the recorded click path

Step 5: The next step is selecting the frequency and the locations.

Select the frequency
Step 6: The Summary Screen will provide the details of the monitor and you can perform a review before creating it.

The Summary Screen will provide the details of the monitor

After a few minutes, you can see the monitoring data for the monitor.

Monitoring Data
For more details on how the application is performing click on the monitor.

Application is performing click on the monitor

Click on Analyze Availability.

Click on Analyze Availability
To analyze a specific run, click on a location and you can drill down on the details.

Click on a location and you can drill down on the details
There is an option to enable/disable/delete/edit a monitor by clicking on the 3 dots available at the top.

Enable/disable/delete/edit a monitor

Synthetic Alerting Process

Dynatrace can create Problems and send alerts whenever a monitor violates the availability or performance thresholds that we set. The thresholds for the monitors are specified in the monitor settings.

Availability

Local problem is created when the monitor is unavailable

A local problem is created when the monitor is unavailable for one or more consecutive runs at any location.
A global problem is created when all the monitored locations are unavailable.

Performance

Performance Thresholds

For example, if we expect the first event, which is loading the page http://www.google.com, to complete in less than 5 seconds, then we can set the threshold accordingly. A problem is created when any location exceeds the configured threshold.

A problem is created when any location exceeds the threshold configured.

Problems

A problem is created when a monitor violates the thresholds. To identify the list of Active Problems, click on the Problems section.

Identify the list of Active Problems, click on the Problems section
For example, there is an ongoing Problem for a monitor. For more details on the failure click on the Problem.

Ongoing Problem for a monitor
Availability Check
Click on Analyze availability to get more details on the reason for the failure. We see that http://www.google.com1 is not a valid URL, so Dynatrace has reported a DNS Lookup Failure error. Please note that the URL was deliberately modified to produce a failure.

Dynatrace has reported a DNS Lookup Failure Error

Dynatrace Email Notification

Dynatrace allows you to integrate with multiple third-party systems such as email for sending notifications when a Problem is created. Let’s understand the process of doing this setup.
Navigate to Settings > Integrations > Problem notifications > Set up notifications

Dynatrace Email Notification

Click on Email.

Click on Email

Provide the recipients for the alerts.
An Alerting Profile allows you to control exactly which Problems result in notifications.
Navigate to Settings > Alerting > Alerting Profiles

Alerting Profiles

Let’s create a Profile with the name TestProfile. By default, the profile will be created with the below settings.

Create a Test Name

The settings can be modified based on your requirements.
By default, the system alerting rules trigger notifications immediately for availability problems and after 30 minutes for slowdown (performance) problems.
Availability and Slowdown are the metrics that pertain to Synthetic Monitoring.
We can adjust the recipient configuration using the settings available in the Alerting Profile. For example, you can notify or escalate an issue if a Problem remains open for a longer duration.

To ensure this alerting profile is associated with a single monitor, create a Tag and associate the alerting profile accordingly.
For example, I created a Tag named TEST-Google.com for the monitor.

Tag for the monitor. TEST-Google.com

Now in the Alerting Profile Section I can filter the monitor based on the Tags. So Dynatrace will apply this rule only for this specific Monitor.

Filter Monitor based on Tags

After we associate the Tag to the alerting profile the next step is to associate the Email Integration setup with the Alerting Profile.

Email Integration setup with the Alerting Profile

After the TestProfile is selected click on Send test notification.
Dynatrace will trigger a test notification email to the recipients.

Dynatrace will trigger a test notification email to the recipients

So, in this blog we have seen how Dynatrace browser monitors can help identify problems and determine whether a website is experiencing slowness or downtime before the problem impacts end-users or customers. They help monitor critical business flows and help application teams take proactive action.

I hope you found this interesting. Thank you for reading!

The pandemic has changed the way Microsoft has had to deliver new product enhancements, but it hasn't slowed the respective product teams from unveiling significant changes to Microsoft 365. Last week, the Microsoft Build conference became the showcase for several Microsoft 365 announcements, and now that it is complete, we can summarize and reflect on how these announcements will change the way we use the platform.

In this post we will look at the highlight announcements and discuss how these changes can impact your usage of Microsoft 365, whether you’re an administrator, user, or implementer.

Microsoft Lists

There is no doubt that one of the biggest announcements last week was Microsoft Lists. What this effectively continues is the trend of Microsoft taking the pieces of SharePoint and building them out across Microsoft 365.

The biggest change is that Microsoft Lists is now its own application inside of Microsoft 365 with its own landing page. It takes what we already had in modern SharePoint lists and makes them available outside of a SharePoint context. Now these lists, which are really small applications, can live outside of SharePoint or can be created inside of a Group-connected SharePoint team site (unfortunately, it doesn't seem to be possible to create them in communication sites, although you can still get much of the functionality as a SharePoint list in that site design).

Microsoft Lists

These lists have the functionality we are used to, like custom formatting, integration with Power Apps/Power Automate, rich filtering and editing experiences, and more. There are some good enhancements, such as a gallery (or “card”) view, a modern monthly calendar view, conditional metadata show/hide based on criteria, a conversational notification creation interface, and a lot more. Also, there are now prebuilt templates for various list types, and all of this is seamlessly available to be surfaced inside of Microsoft Teams.

The richness of Microsoft Lists will allow users to build rather complex applications with a very straightforward yet powerful interface, and when you want to do something more complex, the Power Platform will allow you to enhance them even further.

Here are Microsoft resources explaining the announcement in greater detail:

Enhancements to Microsoft Teams

While Microsoft Lists may have been the biggest single addition to Microsoft 365 last week, there remains no mystery that Microsoft Teams continues to be the darling of Microsoft 365. To that end, there are several changes that make Teams an ever more compelling product, and that is especially true as the pandemic pushes more organizations to embrace distributed work.

ACCELERATED TEAMS ENABLEMENT
AIS' Accelerated Teams solution quickly deploys Microsoft Teams within days to support your remote workforce using Teams and staying productive.

There have been recent changes such as a new 3×3 video grid when in a call, a “raise a hand” option to ask a question, and changes to the pre-join experience that make it easier to adjust settings. These weren't announced directly at Build, but they are important changes worth mentioning. To get an overview, see this video on Microsoft Mechanics: Microsoft Teams Updates | May 2020 and Beyond. One seemingly small but important change is that the search box in Teams can now default to your current context, such as a chat, which is a very big discoverability improvement.

Regarding developer announcements at Build, several new changes were announced:

  • New interface inside of tenant administration to build Teams templates where you can set pre-defined channels and tabs/apps.
  • New Visual Studio and Visual Studio Code extensions to build apps for Teams.
  • Single-button deployment of Power Apps applications into Teams.
  • New Power Automate triggers for Teams.
  • Customizable application notifications using the Microsoft Graph.

The biggest takeaway from all these announcements is that Microsoft wants to provide as many avenues to quickly extend Teams whether that’s a more traditional programmatic solution using the Visual Studio family of products or using the Power Platform to enable a new class of power users that are familiar with those products.

Read more about these announcements at the Microsoft Teams blog: What’s New in Microsoft Teams | Build Edition 2020.

Project Cortex Release Date and Taxonomy APIs

While Project Cortex was announced at the Ignite conference last year, we now know that Project Cortex will enter general availability in early summer this year, which may be no more than a month or two away. While the impact Project Cortex will have on our Microsoft 365 implementations remains to be seen, it certainly has the promise to change the dynamic of how we do information management in Microsoft 365.

The interesting announcement for developers was a set of new APIs to perform CRUD operations on the Term Store through the Microsoft Graph. This has never been possible before, and it will be interesting to see how customers integrate this functionality. What is clear is that if you have been ignoring either the Microsoft Graph or Managed Metadata, now is the time to investigate how these capabilities can maximize your Microsoft 365 investment.

Microsoft Graph Connectors Entering Targeted Release

Like Project Cortex, this is not a new announcement, but the fact that these are now going to be more broadly available in the targeted release channel in the near future is an exciting development. Essentially, these connectors allow your organization to surface external data sources into search using the Microsoft Graph. If you’re interested in seeing the range of connectors available, check out the Microsoft Graph Connectors gallery.

Implement Today

If you are interested in more Microsoft 365 Announcements, Microsoft has released its Build conference book of news that summarizes all the announcements across all their product lines.

There were great announcements last week, but digesting them can be daunting. Let AIS help you understand their impact on your organization and ensure your investment in Microsoft 365 is being maximized. Contact us today to start the conversation.

In this post let’s see how Microsoft Teams’ capability to embed tools and applications on Teams channels can be leveraged to enhance collaboration and visibility within a team.

We will create a Power App to record the Objectives and Key Results (OKRs) of employees and teams. We will then create a Power BI report to better visualize the data and metrics. The Power App and the report will be added as tabs to the Teams channel so that users can collaborate with their team members, keep their OKRs up to date, and view the progress charts.

Let’s Get Started!

Data Model

One of the critical aspects of the app is identifying the attributes needed to capture relevant data. As the name suggests, Objectives and Key Results are the primary attributes. Along with those, we will also need the Owner (users), the Team (to which the owners belong), and the Progress Bar to track the completion of the Key Results on the Objective. For this app, we will use a SharePoint list as the data source and implement the above data model.

Power Apps

  1. The canvas app is created in tablet mode and the SharePoint list is added as a data source. Relevant styling and functionality are built into the app.
  2. This is a screen on which the data from SharePoint is displayed in the context of the currently logged-in user. Users can perform CRUD operations on the data from this screen.
    Canvas App in SharePoint
  3. This tab displays basic information such as the current date and the currently logged-in user.
  4. This is a refresh button that refreshes the data source and brings the latest data to the app.
  5. This icon creates a new item in SharePoint with a parent title to store the Objectives.
  6. This is a gallery control that displays data from the SharePoint list. Each objective can have multiple key results that can be created by hitting the ‘+’ icon in the gallery.

Power BI

The data stored on the SharePoint list is analyzed via charts on a Power BI report. Charts are created to visualize metrics as below:

  1. Percentage objective completion
  2. Objective completion by team
  3. Objective completion by individual users

Data stored on the SharePoint list is analyzed via charts on a Power BI report

Relevant metrics can be similarly visualized using charts in Power BI and the data set can be set to refresh on a schedule.

MS Teams (Setup in Action)

Team members can collaborate on a single channel and do not have to switch between devices, platforms, screens, applications, etc. The canvas app is added to the Teams channel as a tab as shown in the image below.

MS Teams Set Up in Action

The Power BI report is added to the channel as a tab as shown in the image below.

Power BI Report

In this post, we saw how day-to-day processes within groups in an organization can be digitally transformed into apps using Power Apps, and how metrics analyzed with Power BI visuals can be surfaced in MS Teams channels to enhance collaboration and teamwork.
Teammates can view their respective objectives and responsibilities and track their productivity in a single place without having to launch or switch between different applications or reports.

Processes like these bring transparency to evaluating productivity from task, team, and individual standpoints. Microsoft Teams is a great way to enable your teams to better collaborate, and plugging in Power Apps, Power BI, and other tooling can snowball the impact it has on your organization's digital transformation efforts.

I hope you found this interesting and it helped you. If you’re looking to empower your team with a similar solution, check out our Accelerated Teams Enablement or Teams Governance Quick Start offerings, as well as our Power Platform Quick Start! Thank you for reading!

I was first introduced to Azure Cognitive Search at the Microsoft AI Dev Intersection conference in 2019 and thought I would write a quick blog post to help others understand its features and benefits. This blog is not only for developers; if you are a Business Analyst, SharePoint Analyst, Project Manager, or Ops Engineer, you will still find the information useful.

Search-as-a-Service

Azure Cognitive Search (ACS) is a cloud search service that can use artificial intelligence (AI) to extract additional metadata from images, blobs, and other unstructured data. It works well for both structured and unstructured data. In the past, we needed to set up a separate search farm to fulfill the search requirements of a web application. Since ACS is a Microsoft cloud service, we do not need to set up any servers or be search experts, and you can prove these concepts in front of your customer in minutes.

When can we use it?

Most businesses have many handwritten documents, forms, emails, PowerPoint presentations, Word documents, and other unstructured data. Even if you scan and digitize handwritten documents, how do you make the content searchable? If you have images, drawings, and picture data, how do you extract the text and make it searchable? If you have many handwritten documents, you can scan them, upload them to Azure Blob Storage containers in an organized fashion, and Azure Cognitive Search can import the documents from the blob containers and create the search indexes. The below diagram shows the paper document flow.

Paper Documents Flow:

Paper Documents Flow

Below are a few cases where ACS can really come handy:

  • A local file share has many documents and is running out of space. For example, if your organization is storing documents on a file server, you can index those documents using ACS and provide a good search experience so users do not have to rely on Windows search in File Explorer. You can design a nice web application UI that searches using the ACS indexes.
  • The customer already has data in the cloud, such as data stored in Azure Blob Storage, Azure SQL Database, or Azure Cosmos DB. ACS can easily connect to and create indexes on all of these sources.
  • International businesses have documents in many languages. With its built-in translation skill, ACS can index translated results in many different languages, so you can show your search results in a different language as well.
  • The client needs to apply AI to business documents.
  • Documents are lacking metadata. For example, documents that have only a Title as metadata, so all you can search by is the Title! ACS can extract many key phrases from documents, and we can search on those key phrases as well.

We will next learn how to quickly prove this concept.

Creating Service and Indexes from Azure Portal

The below diagram shows the simple flow from the Azure portal. You can prove the ACS concepts in front of clients in minutes.

Creating Service and Indexes from Azure Portal

Log in to the Azure portal and create the Azure cognitive search service. You can find steps on how to create ACS here.

Once your service has been created, follow the below steps to quickly prove the concept.

  • Step 1: Start with documents (unstructured text) such as PDF, HTML, DOCX, emails, and PPTX in Azure Blob Storage. Upload your content to Azure Blob Storage and, in ACS, import your data from Azure Blob Storage.
  • Step 2: Select the option to add cognitive skills if you would like to apply them (see the next section for an overview of cognitive skills).
  • Step 3: Define an index (structure) to store the output (the raw content plus the name-value pairs generated in Step 2).
  • Step 4: Create an indexer; the indexer fills the data into your index fields.

(See the next section for understanding the Index and Indexer)

  • Step 5: You can quickly search on indexes by using Azure Search Explorer.

Understanding Index and Indexer

A search index is like an empty table with fields. If you want to search your data, first you need to figure out which fields you want to make searchable. Once you decide on the fields, how do you populate them with data? The search indexer pulls the data from your source and fills your search indexes so you can search on them. It is very quick to define your search indexes and create an indexer from the Azure portal in ACS. In ACS, a search index is just a JSON object.

Understanding Index and Indexer
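Once an index is populated, you can also query it from application code instead of Search Explorer. Here is a rough sketch using the Azure.Search.Documents SDK; the service endpoint, query key, index name, and field name are placeholders:

using System;
using System.Threading.Tasks;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

public static class SearchDemo
{
    public static async Task RunAsync()
    {
        // Placeholder endpoint, query key, and index name.
        var client = new SearchClient(
            new Uri("https://my-search-service.search.windows.net"),
            "documents-index",
            new AzureKeyCredential("<query-key>"));

        // Full-text search across the searchable fields of the index.
        SearchResults<SearchDocument> results = await client.SearchAsync<SearchDocument>("invoice");

        await foreach (SearchResult<SearchDocument> result in results.GetResultsAsync())
        {
            // "metadata_storage_name" is one of the fields a blob indexer can populate.
            Console.WriteLine(result.Document["metadata_storage_name"]);
        }
    }
}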

Understanding Text Cognitive Skills and Image Skills 

Out of the box, text cognitive skills in ACS can extract people's names, organization names, location names, and key phrases from your data or documents. Text cognitive skills can also translate results into different languages and detect the source language.

See below an example of results translated into the Hindi language.

Understanding Text Cognitive Skills and Image Skills 

Image skills can generate tags and captions from images and can also identify celebrities.

See below JSON search index as an example of Image cognitive skill.

Image Cognitive Skill

Conclusion:

Since Azure Cognitive Search is a cloud service, it is very quick to use if you already have data in the cloud or on-premises. If you have data in your own data center, you can push the data into Azure Cognitive Search indexes. Below are two of my favorite demo sites; they use ACS to extract content from paper documents and images.

Background

On a previous project I was a part of, the client wanted to explore a chatbot application for its employees. The goal was for the chatbot to help increase the office's productivity. Certain skills would be developed to enable swift access to popular actions such as opening a timesheet, password reset help, etc. The client also expressed a need to seamlessly add new features to the chatbot when necessary. It was also decided that the chatbot would communicate with external services to fetch data. Taking what was discussed, we went to the drawing board to devise a plan for developing a scalable solution.

On top of the application having to be scalable, we also decided to try to make it as maintainable as possible. Since this application will grow over time, it was key for us to lay down a foundation for how the chatbot would interact with classes and other services. As the architecture was finalized, it was apparent that there were critical dependencies on several Azure cognitive services. Thus, it became important to ensure that the chatbot application would remain maintainable while accommodating those services. To accomplish this, we used a cascading approach to calling our dependencies.

Before I delve into the cascading approach, I want to spend some time talking about bots and the services used alongside them. Ultimately, the main goal of a bot is to accept a request from a user and process it based on what they ask for. For example, a user can ask a bot a question about company policies, the weather, recent documents they worked on or to open web pages.

LUIS

Now, in order to process those types of requests, Azure provides a couple of cognitive services to assist. One of these services is called LUIS (Language Understanding Intelligent Service). At a high level, LUIS determines an “intent” from statements (often called utterances) defined in custom models that you build and train. For example, LUIS can receive an utterance of “What's the weather”. When an intent is found, there will be a confidence score (a value ranging from 0 to 1 inclusive) associated with the intent. This score shows how confident the service was in determining the intent: the closer the value is to 1, the more confident the service was, and the closer it is to 0, the less confident it was. In this example, the intent could be something like “GetWeather” with a 0.96 confidence score.

QnA Maker

Another cognitive service used with bot apps is QnA Maker. This service excels at housing data that is best suited to question-and-answer pairs. The question and answer pairs are stored in what's called a knowledge base. A knowledge base typically encapsulates data that pertains to a specific business domain (i.e. Payroll, HR, etc.). Like LUIS, QnA Maker utilizes machine learning, cognitive models, and confidence scores. When a QnA Maker knowledge base receives a question, it will use machine learning to determine if there is an answer associated with the question. A confidence score (ranging from 0 to 1 inclusive) will be associated with the results. If you would like to learn more about bot development and the different cognitive services offered in Azure, check out the links at the bottom of this post.
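As a rough sketch of what querying a knowledge base can look like with the Bot Framework SDK (Microsoft.Bot.Builder.AI.QnA), assuming a configured QnAMaker instance is injected; the class name, threshold, and knowledge base are hypothetical:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.QnA;

public class PayrollQnAHandler
{
    private readonly QnAMaker _payrollKnowledgeBase;

    public PayrollQnAHandler(QnAMaker payrollKnowledgeBase) => _payrollKnowledgeBase = payrollKnowledgeBase;

    public async Task<bool> TryAnswerAsync(ITurnContext turnContext, CancellationToken cancellationToken)
    {
        // Only accept answers the service is reasonably confident about.
        var options = new QnAMakerOptions { Top = 1, ScoreThreshold = 0.75f };
        QueryResult[] results = await _payrollKnowledgeBase.GetAnswersAsync(turnContext, options);

        if (results == null || results.Length == 0)
        {
            return false; // no confident answer in this knowledge base
        }

        await turnContext.SendActivityAsync(results[0].Answer, cancellationToken: cancellationToken);
        return true;
    }
}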

The Initial Approach

The chatbot solution included 1 LUIS service along with 3 separate QnA Maker knowledgebases. In our initial approach, we created intent definitions in LUIS that corresponded with each of our QnA Maker knowledgebases. We then trained LUIS to recognize if the user’s message was a question that could be answered by one of the knowledgebases. When messages came to the bot from the user, we would always send them to LUIS first. If it returned an intent that corresponded with one of our QnA Maker knowledgebases, we would then redirect the request to the identified knowledgebase. Then the knowledgebase would hopefully recognize the question and return an answer. That said, each call to a knowledgebase was dependent on the LUIS service correctly recognizing intents. This was not an ideal approach.

Having the QnA Maker knowledge bases dependent on the LUIS service was an issue. It meant that, for a knowledge base to get a hit, the LUIS model would need to be properly trained and up to date with data that closely matched each QnA Maker knowledge base. If the LUIS model was updated in a way that impacted a given QnA Maker knowledge base, that knowledge base would also have to be updated and retrained to contain the new data from the LUIS model, just to keep the two sets of models in sync. As you can probably see, this poses a maintenance concern.

Cascading Solution

So, in order to alleviate this concern, a different approach was taken. The LUIS model would have no knowledge of any data from the QnA Maker knowledgebases and vice versa. That meant updating the LUIS model to remove data that corresponded to any of the QnA Maker knowledgebases. The same approach was done within each QnA knowledge base. This made it so both LUIS and QnA Maker were completely independent of each other. This led to having a cascading approach to calling each of our dependencies. As a result, this would resolve the imposing maintenance issue. (See image below)

Cascading Approach

It is worth noting that we used Microsoft’s Bot Framework SDK for this solution, but the strategies you will see in this post can be used for any chatbot technology.

If the LUIS request handler was unable to handle the request, no problem! The next request handler would attempt to handle it. This flow would proceed until one of the request handlers successfully handled the request. If none were successful, the chatbot would tell our telemetry client, in our case Azure Application Insights, to log the unrecognized user message. This would provide insight into model training opportunities. Finally, the chatbot would return a default message back to the client. (See image below)

Chatbot return a default message back to the client
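A minimal sketch of that dispatch loop, assuming each handler exposes a TryHandleAsync method, might look like the following; the IRequestHandler interface and the event name are hypothetical, and the telemetry call uses the Application Insights TelemetryClient:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.Bot.Builder;

public class CascadingDispatcher
{
    private readonly IReadOnlyList<IRequestHandler> _handlers; // LUIS handler first, then each QnA Maker handler
    private readonly TelemetryClient _telemetryClient;

    public CascadingDispatcher(IReadOnlyList<IRequestHandler> handlers, TelemetryClient telemetryClient)
    {
        _handlers = handlers;
        _telemetryClient = telemetryClient;
    }

    public async Task DispatchAsync(ITurnContext turnContext, CancellationToken cancellationToken)
    {
        foreach (IRequestHandler handler in _handlers)
        {
            if (await handler.TryHandleAsync(turnContext, cancellationToken))
            {
                return; // a handler answered, so the turn is complete
            }
        }

        // Nothing recognized the message: log it for model training and fall back to a default reply.
        _telemetryClient.TrackEvent("UnrecognizedUtterance",
            new Dictionary<string, string> { ["text"] = turnContext.Activity.Text });
        await turnContext.SendActivityAsync(
            "Sorry, I didn't understand that. Could you try rephrasing?",
            cancellationToken: cancellationToken);
    }
}

public interface IRequestHandler
{
    Task<bool> TryHandleAsync(ITurnContext turnContext, CancellationToken cancellationToken);
}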

Cascading Solution: Confidence Score Thresholds

Each result returned by a cognitive service holds a confidence score. This data proved to be very useful for the chatbot. In the LUIS and QnA Maker request handler classes, there was logic to determine if the returned confidence score met a given threshold. If the score was high enough, meaning the service was confident that it found the right data, then the given request handler can proceed with handling the request. If the score was found to be lower than the threshold, then the request handler does not continue with handling the request. (See image below of a sample method to handle an intent request)

Cascading Solution: Confidence Score Thresholds

Instead, the next request handler will be told to execute. Having this implementation in place helps us be explicit about what we consider an acceptable confidence score. That said, determining a confidence score threshold depends on how extensive your cognitive models are. If your cognitive models account for various phrasings and spellings of keywords, then your cognitive services will have an easier time identifying intents and answers. In practice, I found 0.70 to 0.75 to be satisfactory threshold values.
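As an illustration of that threshold check inside one of the handlers, here is a rough sketch of a LUIS-backed handler built against the hypothetical IRequestHandler interface from the earlier sketch; the intent name, threshold, and reply are placeholders:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.Luis;

public class LuisRequestHandler : IRequestHandler
{
    private const double ConfidenceThreshold = 0.70; // tune this to your models
    private readonly LuisRecognizer _luisRecognizer;

    public LuisRequestHandler(LuisRecognizer luisRecognizer) => _luisRecognizer = luisRecognizer;

    public async Task<bool> TryHandleAsync(ITurnContext turnContext, CancellationToken cancellationToken)
    {
        RecognizerResult result = await _luisRecognizer.RecognizeAsync(turnContext, cancellationToken);
        var (intent, score) = result.GetTopScoringIntent();

        // Below the threshold, decline the request so the next handler in the cascade can try.
        if (score < ConfidenceThreshold)
        {
            return false;
        }

        if (intent == "GetWeather") // placeholder intent
        {
            await turnContext.SendActivityAsync("Let me look up the weather for you...", cancellationToken: cancellationToken);
            return true;
        }

        return false;
    }
}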

Cascading Solution: Dialog Turn Status

The final piece to the cascading solution was handling the progression or conclusion of a dialog turn. Think of a dialog turn as a face-to-face conversation. You might initiate the conversation with a question, which is a turn. Then, the other person would answer your question. That is also a turn. This can continue until the conversation ends. Conversations with a bot follow a similar flow. When it’s the bot’s “turn” to reply to the client, it performs its logic then responds. Below is a diagram, provided by Microsoft, illustrating the high-level nature of a conversation with a bot.

Cascading Solution: Dialog Turn Status

Image from: https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-basics?view=azure-bot-service-4.0&tabs=csharp
In the cascading solution, we were explicit about when the bot's turn was over and when it should continue processing the request. Ultimately, when the chatbot found an answer to the user's question or request, we would tell the chatbot that its turn was complete. On the contrary, we had several scenarios where we told the chatbot to keep its turn going. One scenario was if LUIS did not return an intent or if the confidence score was below our threshold. Another was if a QnA Maker knowledge base did not find an answer to the question passed to it. After each request handler executes, there is a check to verify whether the turn is complete or not.

Closing Thoughts

The cascading approach to handling calls to the different services and knowledge bases was a huge win for this bot application. It offers clear, concise, and manageable code. Every LUIS and QnA Maker cognitive model is independent of the others, and each request handler is independent as well. In addition, the implementation of confidence score thresholds ensured that we were explicit about how we further processed client requests. Finally, adding progression and termination logic for a dialog turn ensured that each turn would be appropriately processed. This whole approach helped paint a clear picture of what our chatbot was doing.

Links: