Over the last couple of years, I’ve moved from serious SharePoint on-premises development to migrating web applications to Azure.  My exposure to Azure prior to the first application was standing up SharePoint development virtual machines that used little more than the base settings.  Trial by fire tends to be the way I learn best, which is fortunate, because not only did I have to learn about Azure resources, but I also had to refamiliarize myself with web development practices that I hadn’t worked with since .NET 2.0 or earlier.

One of my first tasks, and the focus of this post, was to migrate the SQL Server database to an Azure SQL Database (Platform as a Service) instance.  The databases generally migrated well; schemas and the majority of the stored procedures were compatible with Azure SQL.  Microsoft provides a tool called the Data Migration Assistant, which not only analyzes the database but can also migrate it.  The analysis returns any suggested or required changes.  Several types of issues can arise, including deprecated features, incompatible features, and syntax blockers.

Deprecated Field Types

There are three field types that will be removed from future versions of SQL Server: ntext, text, and image. This won’t hold up a migration, but I did change the fields to future-proof the database. The suggested replacements are nvarchar(max), varchar(max), and varbinary(max), respectively.
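The conversions are simple ALTER COLUMN statements. A minimal sketch, assuming a hypothetical dbo.Articles table that happens to use all three deprecated types:

```sql
-- Hypothetical table and column names; each column keeps its data
-- while moving to the supported (max) type.
ALTER TABLE dbo.Articles ALTER COLUMN Body  nvarchar(max)  NULL;  -- was ntext
ALTER TABLE dbo.Articles ALTER COLUMN Notes varchar(max)   NULL;  -- was text
ALTER TABLE dbo.Articles ALTER COLUMN Photo varbinary(max) NULL;  -- was image
```

Note that existing indexes, defaults, or full-text catalogs on these columns may need to be dropped and recreated around the conversion.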


T-SQL Concerns

I ran into several issues in scripts. Some had simple solutions while others required some redesign.

  • ISSUE: Scripts can’t connect to multiple databases using the USE statement.
  • SOLUTION: The scripts that used the USE statement didn’t actually need it; each one only connected to a single database.
  • ISSUE: Functions such as sp_send_dbmail are not supported. The scripts used it to send notification emails from the database rather than from the application.
  • SOLUTION: This required an involved redesign. I created a table that logged the email and a scheduled Azure Web Job to send the email and flag the record as sent. The reference to sp_send_dbmail was replaced with an INSERT INTO Email command. If the email needed to be sent near-instantly, I could have used an Azure Queue and had the Web Job listen to the Queue then send the email when a new one arrives.
  • ISSUE: Cross-database queries are not supported. A couple of the apps had multiple databases and data from one was needed to add or update records in the other.
  • SOLUTION: This was another major design change. I had to move that logic into the application’s business layer.
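The email-logging redesign described above can be sketched as follows; the table and column names are hypothetical:

```sql
-- Hypothetical email queue table; the scheduled Web Job polls for
-- unsent rows, sends them, and sets Sent = 1.
CREATE TABLE dbo.Email
(
    Id        int IDENTITY PRIMARY KEY,
    Recipient nvarchar(256) NOT NULL,
    Subject   nvarchar(256) NOT NULL,
    Body      nvarchar(max) NOT NULL,
    Sent      bit NOT NULL DEFAULT 0,
    CreatedOn datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- In the stored procedures, the sp_send_dbmail call becomes an INSERT:
INSERT INTO dbo.Email (Recipient, Subject, Body)
VALUES (@Recipient, @Subject, @Body);
```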

Linked Servers

Some of the applications accessed other databases, including other platforms like Oracle. They used Linked Server connections to execute queries against those data sources; so far, the connections have been read-only.

Linked Servers are not available in Azure SQL. To keep the functionality, we had to modify the data layer of the application to pull the records from the external database and either:
A. Pass the records into the SQL Stored Procedure, or
B. Move the logic of the Stored Procedure into the application’s data layer and make the changes in the application
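One way to implement option A is with a table-valued parameter, so the application can fetch the external records and hand them to the stored procedure in a single call. A sketch with hypothetical type, table, and column names:

```sql
-- Hypothetical TVP type; the application fills it with rows pulled
-- from the external (formerly linked) data source.
CREATE TYPE dbo.ExternalRecordList AS TABLE
(
    RecordId    int           NOT NULL,
    RecordValue nvarchar(100) NOT NULL
);
GO
CREATE PROCEDURE dbo.UpdateFromExternalRecords
    @Records dbo.ExternalRecordList READONLY
AS
BEGIN
    -- The join replaces what was previously a cross-server query.
    UPDATE t
    SET    t.Value = r.RecordValue
    FROM   dbo.TargetTable t
    JOIN   @Records r ON r.RecordId = t.Id;
END
```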

One of the applications had a requirement that the external database stay on-premises. This added a layer of complexity because solutions for creating a connection back to the database, like Azure ExpressRoute, were not available or approved for the client. Another team was tasked with implementing a gateway: a web service that the Azure application would call to access the on-premises data.

SQL Agent Jobs

SQL Agent Jobs allow for out-of-process data manipulation. A couple of the applications used these to send notification emails at night or to synchronize data from another source. While there are several options in Azure for recreating this functionality, such as Azure Functions and Logic Apps, we chose WebJobs. WebJobs can be triggered in several ways, including on a timer. The jobs didn’t require intensive compute resources, so they could share resources with the application in the same Azure App Service. This also simplified deployment, because the jobs could be packaged and deployed together with the application.

Database modifications tend to be one of the major parts of a migration project. Some projects have needed only simple T-SQL changes, while others have required heavy architectural changes to reproduce functionality in a PaaS environment. Despite the difficulties, there will be major cost savings for some clients because they no longer need to maintain an expensive, possibly underutilized server. Future posts in this series will cover Automation & Deployment, Session State, Caching, Transient Fault Handling, and general Azure lessons learned.

With the abundance of JavaScript libraries and frameworks available today, it’s hard to decide what will work best for a given requirement. Add in the fact that many server-side tools can also accomplish the task, and you could spend hours just narrowing down options to test before deciding on the path you’ll take in the end. This was a recent conundrum for me when I was approached to incorporate child data management in the parent forms on a SharePoint 2010 project. My experience with JavaScript has been limited over my career because I’ve been so focused on the backend of SharePoint during the majority of that time. My current client needs a better user experience, so I’ve been trying to fill that hole in my skills.  This project offered an opportunity to do just that.

While it’s possible to put an ASP.NET GridView control in an UpdatePanel, a client-side approach seemed cleaner and a way to expand my JavaScript skills. I looked at many options like jQuery DataTables, koGrid, and a few others, but none gave me the combination of look, price (free), and TypeScript definitions I needed to hit the ground running.

I decided to build my own solution, since it would be a relatively simple design and it would let me dig into KnockoutJS. In addition, building it in TypeScript would make the solution easier to maintain, since TypeScript incorporates many ECMAScript 6 features such as classes and modules. Read More…

SharePoint Server 2013 offers a completely new architecture for Workflow utilizing Workflow Foundation 4.5.  I’ve already covered the high-level changes in a previous post called “What Changed in SharePoint 2013 Workflow? Pretty Much Everything” and discussed how a SharePoint 2010 Workflow project would be designed differently in my post titled “Redesigning a SharePoint 2010 Workflow Project for SharePoint 2013.”  Both of these posts discuss the new reliance on web services for data in SharePoint Workflow.  While it’s obvious that Visual Studio workflows would interact with web services, SharePoint Designer 2013 offers web service communication, as well.  This post will detail the new actions available in SharePoint Designer 2013 for interacting with web services and how to use them. Read More…
Workflow in SharePoint 2013 has undergone quite the architectural change from its SharePoint 2010 ancestor.  I documented many of the major changes in a previous blog post, “What Changed in SharePoint 2013 Workflow? Pretty Much Everything.”  While SharePoint 2013 is backwards-compatible with SharePoint 2010 workflows, you may decide you need the benefits of the new design.  The purpose of this post is to illustrate the new considerations you’ll need to keep in mind when targeting SharePoint 2013 workflows. The SharePoint 2010 project we’ll use for this example is the one from my very first AIS blog post, “Developing Multi-Tiered Solutions for SharePoint.”

Web API Methods Mockup screen shot
Figure 1 - Sample WebAPI methods for Section Document Merge and Post-Merge Actions.

In our example project there are actually two workflows: SectionDocumentApprovalState (SDAS) and MasterDocumentApproval (MDA). An instance of SDAS is created for each section produced from the Master Document, and it monitors the editing and approval of that section. The MDA checks whether the various sections have been merged and finalized, then notifies specific users for approval of the final document. We’ll focus on just the SDAS workflow. In the previous post, I referred to the workflows as being part of the Presentation Layer and the custom code they called as part of the Business Layer.  Both of these layers will change in a SharePoint 2013 workflow solution.

Read More…

We’ve reached the end of this series.  In part one, we discussed the basics of PowerShell. Part two showed some of the ways to interact with SharePoint via PowerShell.  Today we’ll look at parts of a script I compiled to build out a SharePoint 2013 development virtual machine.

Environment and Build Notes

I want to start off with some notes about the assumptions I made and the configuration I used. First, this VM runs in Hyper-V on Windows 8 and uses Windows Server 2012, which was installed through the GUI. (I’ll try to figure out PowerShell remoting and Hyper-V at a later date, but that wasn’t in the cards for this post.) Second, I’ve configured two virtual networks: one internal with a static IP and one external with a dynamic IP. I configured those through the GUI as well. However, almost everything else has been built using PowerShell. While we’ll only highlight some of the script in this post, you can find the full script at my CodePlex project: Useful PowerShell Cmdlets.
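As a flavor of the kind of steps the script automates (the feature list is abbreviated and the computer name is illustrative; see the CodePlex project for the complete script):

```powershell
# Illustrative excerpt only: install prerequisite server roles,
# then rename the VM and restart so the new name takes effect.
Install-WindowsFeature Web-Server, Application-Server -IncludeManagementTools
Rename-Computer -NewName "SP2013DEV" -Restart
```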

Read More…

The first post in this series covered the basics of PowerShell including variables, loops, and decisions. It also introduced a few scripts I’ve used over the last several weeks. In this post, we’ll discuss how to use PowerShell in a SharePoint farm, some of the more useful capabilities (especially to developers), and a few more scripts that I’ve written to bring the topics covered together.

Integrating PowerShell With SharePoint

PowerShell has been natively supported in SharePoint since SharePoint Foundation 2010 and SharePoint Server 2010. When SharePoint is installed, a shortcut for the SharePoint Management Shell is created alongside the Product Configuration Wizard and Central Administration shortcuts. This application is a PowerShell console with a blue background and the SharePoint snap-in loaded. A PowerShell snap-in is a PowerShell version 1 mechanism that, when loaded, makes additional commands (called cmdlets) available in the current PowerShell session. To load the SharePoint snap-in yourself, simply execute the following:

Add-PSSnapin "Microsoft.SharePoint.PowerShell"

This will allow you to access the SharePoint cmdlets in any PowerShell session. It’s also important to run the PowerShell or SharePoint Management Shell console as administrator, as the logged-in user may only have limited access to the SharePoint farm.
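Once the snap-in is loaded (or when using the SharePoint Management Shell), the SharePoint cmdlets work like any others. For example, listing the site collections in a web application (the URL below is a placeholder):

```powershell
# List every site collection in a web application along with its owner.
Get-SPSite -WebApplication "http://sharepoint.contoso.local" -Limit All |
    Select-Object Url, Owner
```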

Read More…

When I apply for speaking engagements, I usually submit at least one session that will require me to learn a new language or technology.  I did this with the upcoming SharePoint Saturday Austin with a session titled “PowerShell for Developers: IT Pros Need to Share.”  This session is the catalyst for this three-part blog series, which will be published over the next few Fridays. This first post is intended to get everyone up to speed with the basics of PowerShell.

What is PowerShell?

PowerShell is a command-line language built on the Microsoft .NET Framework. This is a ridiculously simplified explanation, though. PowerShell is a task automation tool that takes common command-line languages and magnifies their power exponentially through the use of Cmdlets.  Cmdlets are verb-noun commands that perform computer and application management tasks. These tasks can be as simple as restarting the computer or changing the time zone.
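Both of those example tasks map to a single command each. (The time zone value is just an example; tzutil is the classic utility, while newer PowerShell versions also offer a Set-TimeZone cmdlet.)

```powershell
Restart-Computer                     # restart the local machine
tzutil /s "Eastern Standard Time"    # change the time zone
```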

PowerShell is a sophisticated development language that contains many of the constructs found in other modern languages. Variables, functions, looping, and decision statements are all present among other important features. We will discuss these features and how they function in this post. Read More…

Workflow, as far as I can tell at this point, is one of the most heavily overhauled features in the move from SharePoint 2010 to SharePoint 2013. The first major difference is that it’s no longer contained within SharePoint; workflow is now handled by Windows Azure Workflow (WAW).

“Whoa, whoa, whoa! Does that mean I’m going to have to pay Microsoft some hefty usage fees to have Workflow in my 2013 environment? I really don’t see how that’s going to fly with the bosses,” you say.

Fortunately, no, that’s not the case. While I’m sure a hosted model will be offered, it’s not the only one: you can host WAW on-premises just like SharePoint. We’ll delve into that momentarily. Windows Azure Workflow is built on Windows Workflow Foundation 4.5 (WF4.5), which introduces several new features as detailed in the MSDN article “What’s New in Windows Workflow Foundation.” WAW requires a separate install and can run on a SharePoint server or in its own environment. We’ll also look at the architectural implications in a bit.

The “meat” of this post is going to focus on three areas: architecture, development, and how these changes would affect the design of an existing SharePoint 2010 Workflow project. The architecture changes only start with WAW and WF4.5; we’ll discuss the installation requirements, how it’s hosted, security, and the pros and cons of WAW. The development story has changed, at least to me, far more. I’ll explain the changes to coding (hint: you can’t…directly), how web services can remedy that statement, and custom actions. Finally, we’ll take a look at the project I discussed in my last post, “Developing Multi-Tiered Solutions for SharePoint,” and how that design would change for a SharePoint 2013 environment.

Read More…

Windows 8 Desktop

Microsoft has been a busy company this year with refreshes on most of its biggest solutions. Not only has SharePoint gone through a massive update, but so has Windows. If you’re still unfamiliar with the changes in Windows 8, then be prepared for a shocker. In the new UI, applications have been stripped of chrome and are full-screen solutions. Windows 8 was designed with touch as a first-class input method.

SharePoint 2013 brings several new features, but the two that will empower client application development the most are the greatly expanded Client-Side Object Model (CSOM) and the REST APIs. While the maturity of these features is important for Microsoft’s push to SharePoint Online and client-side development, it also opens up complex functionality for Windows, mobile, and external web applications. Read More…

N-tier development is not a new methodology. I remember learning about it in 200-level courses back in 2000, and I used it in ASP.NET development before I jumped on the SharePoint bandwagon. However, one of the things I’ve noticed over the years as a SharePoint developer is that most project development is done in the SharePoint object’s code-behind or a few helper classes. This isn’t always the case; sometimes the solution isn’t complex enough to warrant a tiered approach (e.g. a single Event Receiver). But a recent project highlighted the power behind an N-tiered architecture.

The client has a custom solution that they provide as a service: A master document (Microsoft Word) is split into section documents (also Word) by a project manager. Each section is assigned to a person to be modified in Word (the client also provides a Word plug-in for this modification). Once the sections are properly marked up, the master document is recreated from the sections. We were brought in to implement this solution in SharePoint 2010. Read More…