At AIS, our Account Teams work with our clients every day to produce IT solutions that solve business problems. We work closely with our CTO organization to ensure that we are researching the latest technology and services in a manner that is applicable to our clients and prospective clients.

We recently applied this to a business problem that required an organization to quickly — and with no notice — stand up a website to collect hundreds, or potentially millions, of submissions from the general public.  Our use case focused on law enforcement and the sorts of emergency response situations we’ve seen all too often in the news, such as the Boston Marathon bombing.  When local, state or federal authorities respond to criminal acts, they seek to quickly collect vast amounts of input from the public.  This input can be in the form of tips, photos, videos or countless other observations.  Agencies need the capability to surge their IT tools and applications to collect the data, store it, and run analysis tools against the collected content to harvest information. Read More…

I recently needed to make several web.config changes to our production SharePoint 2010 web farm. Making all of these modifications manually would have been tedious and would have left a lot of room for error. After doing some research to find a better way, I discovered the SPWebConfigModification class in the Microsoft.SharePoint.Administration namespace.

This class essentially stores a collection of changes to be made to web.config files, which can be saved and then applied to all web front-end servers in a farm. The class is available in SharePoint 2010 and 2013. Unfortunately, it is poorly documented, so I had some trouble figuring out how to use it. You could use the SPWebConfigModification class from a console application or a feature receiver, but I found that the easiest approach was to use it with PowerShell. After some trial and error, I came up with the following four PowerShell scripts that can be reused to read, add, remove, and completely clear the SPWebConfigModifications on the server. Read More…
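To give a flavor of what those scripts do, here is a minimal sketch of the "add" case (the web application URL, node name and owner are placeholders for illustration, not the exact values from my scripts):

```powershell
# Load the SharePoint snap-in if this isn't running in the SharePoint Management Shell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$webApp = Get-SPWebApplication "http://intranet.contoso.com"   # placeholder URL

# Describe a single change: ensure a child node exists under system.web
$mod = New-Object Microsoft.SharePoint.Administration.SPWebConfigModification
$mod.Path     = "configuration/system.web"
$mod.Name     = "customErrors[@mode='Off']"    # XPath that uniquely identifies the node
$mod.Value    = "<customErrors mode='Off' />"
$mod.Owner    = "MyWebConfigChanges"           # lets a later script find and remove these changes
$mod.Sequence = 0
$mod.Type     = [Microsoft.SharePoint.Administration.SPWebConfigModification+SPWebConfigModificationType]::EnsureChildNode

# Queue the change, save it, and push it out to every web front-end in the farm
$webApp.WebConfigModifications.Add($mod)
$webApp.Update()
$webApp.Parent.ApplyWebConfigModifications()
```

The Owner property is the key to the remove and clear scripts: because every modification is stamped with it, you can later enumerate the WebConfigModifications collection and delete only your own entries.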

An Enterprise Service Bus (ESB) is a shared messaging layer that gives you a consistent, scalable and flexible means of coordinating across disparate, loosely-connected services to execute business processes. Over the years, Microsoft has developed several service bus technologies:

BizTalk: A messaging and workflow orchestration platform to build ESB behaviors and capabilities. The BizTalk ESB toolkit provides a set of guidelines, patterns and tools.

Windows Azure Service Bus (ASB): This provides the messaging infrastructure for applications that live in the cloud, in your data center, and across all devices and PCs.

Service Bus for Windows Server (SBWS):  SBWS is based on ASB and shares many of the same capabilities, such as queue, topic and subscription support.  A distinct design goal is to ensure symmetry between SBWS and ASB and allow for a single set of code to be leveraged across both deployment environments.
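That symmetry extends to administration as well. As a rough sketch, SBWS farms are managed with PowerShell cmdlets along these lines (cmdlet names from the Service Bus 1.0 for Windows Server release; the namespace and account values are placeholders):

```powershell
# Inspect the on-premises Service Bus farm
Get-SBFarm

# Create a namespace and grant a user rights to manage it
New-SBNamespace -Name 'DemoNamespace' -ManageUsers 'CONTOSO\svc-bus'

# Retrieve the connection string clients use; the same client code
# can then target this namespace or a Windows Azure Service Bus namespace
Get-SBClientConfiguration -Namespaces 'DemoNamespace'
```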

Read More…

Have you ever wanted a fresh SharePoint development environment? Have you ever needed to quickly create a test box, or wanted to prototype something specifically for a customer? In the past, in all of these scenarios, you’d face a very time-consuming process and quite honestly, one that has likely been a deterrent. In this blog post, I’m going to walk you through creating a SharePoint 2013 development environment, on Azure, utilizing the Visual Studio 2013 RC.

Thanks to the good people at Microsoft, there is now a developer image on Azure that comes with SharePoint 2013 and Visual Studio 2013 Ultimate RC already installed. Before we get too far along, I do have to warn you that you’ll need either an Azure or MSDN subscription. If you don’t have an Azure subscription, you can activate your MSDN Azure benefit and receive up to $150 USD in free monthly Azure credits. If you are careful to shut down your VM at the end of each work day, you should be able to use this VM as your day-to-day development machine without eating up all of your credits. Read More…
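As a rough sketch, provisioning that image from PowerShell looks something like the following (using the Windows Azure PowerShell module of that era; the image label filter, names and credentials are placeholders, so check Get-AzureVMImage for the exact image name):

```powershell
# Find the SharePoint 2013 / Visual Studio 2013 developer image in the gallery
$image = Get-AzureVMImage |
    Where-Object { $_.Label -like '*Visual Studio Ultimate 2013*' } |
    Select-Object -First 1

# Provision the VM (service name, VM name and credentials are placeholders)
New-AzureQuickVM -Windows `
    -ServiceName 'sp2013dev' -Name 'sp2013dev' `
    -ImageName $image.ImageName `
    -AdminUsername 'spadmin' -Password 'P@ssw0rd!' `
    -InstanceSize 'Large' -Location 'East US'

# Shut the VM down at the end of the day to conserve your monthly credits
Stop-AzureVM -ServiceName 'sp2013dev' -Name 'sp2013dev' -Force
```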

Introduction by Vishwas Lele:

Amazon Web Services (AWS) CTO Werner Vogels offers this great piece of cloud advice: “Treat everything as a programmable resource, including data centers, networks, compute, storage and load balancers.” In other words, automate every aspect of your (cloud-based) infrastructure. There are significant benefits in following Werner Vogels’ advice:

  1. You can build systems that are cost-aware by keeping only the parts of the system that are needed and turning off everything else.
  2. Capacity planning is hard. It is much better to build capacity dynamically based on need.
  3. Failures are not an exception but the rule. Rather than building complex logic to handle exceptions, make your systems fault resilient by provisioning failover resources as needed.
  4. Make your systems more agile – systems that can scale with the needs of the business, rather than against a design-time scaling criterion.

Given AIS’ years of experience with SharePoint, we are always looking for ways to make the underlying infrastructure more cost effective, scalable and robust. Fortunately, the aforementioned benefits of automation apply equally to a SharePoint 2013 farm hosted in the cloud — whether it is the ability to dynamically provision a SharePoint 2013 farm on the fly, or the ability to scale up and down based on load, or the ability to make the SharePoint 2013 farm more fault resilient.

But it all begins with developing robust automation scripts to provision and manage a SharePoint 2013 farm. This brings us back to the purpose of this blog post by Abhijit Kumar. Abhijit discusses an automated approach for provisioning a SharePoint 2013 farm using Amazon Web Services. It is noteworthy that the automation approach we describe below is based solely on PowerShell. This might come as a surprise, given that AWS offers services like CloudFormation, which enables the creation of AWS resources, combined with open source tools such as Opscode Chef and Puppet, which enable the installation and configuration of applications. We chose to rely solely on PowerShell for the following reasons:

  1. PowerShell is Microsoft’s canonical task automation framework, consisting of a command-line shell and a scripting language that has full access to COM and WMI, giving Windows administrators control over every aspect of Windows OS-based machines.
  2. The PowerShell scripting language is built on the .NET Framework. This means a PowerShell script can take advantage of .NET Framework enhancements such as Workflow Foundation (WF). We use WF extensively to manage long-running automation scripts (see the sketch following this list).
  3. AWS CloudFormation is not available in AWS GovCloud. AWS GovCloud is an isolated AWS region designed to allow U.S. government agencies and customers with sensitive workloads to address their specific regulatory and compliance requirements. Given that AIS services a large number of customers with stringent regulatory and compliance requirements, we needed an automation approach that worked in AWS GovCloud.
  4. If you read our earlier blog post about SharePoint 2013 automation on Windows Azure, you will notice that we have been able to achieve a high level of reuse between the Windows Azure and AWS provisioning scripts for SharePoint 2013. While the WF-based provisioning logic is largely the same, Azure Service Management SDK calls are replaced with the AWS Tools for Windows PowerShell (a brief sketch appears at the end of this introduction). This reuse gives us the flexibility to offer our customers a choice between the industry-leading IaaS platforms – AWS and Windows Azure.
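To illustrate the second point, here is a minimal sketch of a long-running step expressed as a PowerShell workflow, which compiles down to WF and gains checkpoint and resume semantics (the workflow and server names are invented for illustration):

```powershell
workflow Provision-SPFarm
{
    param([string[]]$ServerNames)

    # Prepare each server in parallel; WF schedules the branches
    foreach -parallel ($server in $ServerNames)
    {
        # InlineScript runs ordinary PowerShell inside the workflow
        InlineScript { Write-Output "Preparing $using:server..." }
    }

    # Persist state so the workflow can resume here after a reboot
    Checkpoint-Workflow

    Write-Output 'Remaining farm provisioning steps would continue here.'
}

Provision-SPFarm -ServerNames 'SPAPP01','SPWFE01','SPWFE02'   # placeholder names
```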

Abhijit’s post below walks you through the script to deploy a SharePoint 2013 farm on AWS in an automated manner. I am confident that you will find it useful. Please give the scripts a try and let us know what you think.
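To give a sense of the AWS Tools for Windows PowerShell calls involved, here is a heavily simplified sketch of launching a single farm server (the credentials, region, AMI and key names are placeholders; the real scripts do much more):

```powershell
Import-Module AWSPowerShell

Set-AWSCredentials -AccessKey 'AKIA...' -SecretKey '...'   # or use a stored credential profile
Set-DefaultAWSRegion -Region 'us-gov-west-1'               # the AWS GovCloud region

# Launch an instance that will become one of the SharePoint servers
$reservation = New-EC2Instance -ImageId 'ami-12345678' `
                               -InstanceType 'm1.large' `
                               -KeyName 'sp-farm-key' `
                               -MinCount 1 -MaxCount 1

# The reservation object carries the instance(s) that were started
$reservation.RunningInstance | Select-Object -ExpandProperty InstanceId
```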

The recent announcement about the general availability of Windows Azure IaaS comes with the following key enhancements:

  1. Remote PowerShell is enabled by default when deploying a virtual machine using PowerShell.
  2. Trial images, such as SharePoint, are now available in the image gallery.

These enhancements make it easy to deploy a SharePoint Farm in an automated manner using PowerShell scripts.
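For example, once a VM is up, the first enhancement means you can immediately script against it remotely, along these lines (a minimal sketch; the service and VM names are placeholders):

```powershell
# Get the WinRM endpoint that Azure configured for the VM by default
$uri  = Get-AzureWinRMUri -ServiceName 'spfarm' -Name 'spapp01'
$cred = Get-Credential

# Run configuration steps on the new VM over remote PowerShell
Invoke-Command -ConnectionUri $uri -Credential $cred -ScriptBlock {
    hostname   # any SharePoint setup steps would run here
}
```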

The goal of this blog post is to walk you through such a script. Read More…

Today I want to talk about a process we created for building out machines using Virtual Machine Manager (VMM) as part of our daily build process within Team Foundation Server (TFS).

As part of our nightly build process, we actually recreate the entire environment from scratch. We don’t use snapshots; we actually delete and provision a series of VMs. This may sound like overkill and I’ve seen other approaches that use snapshots and revert each night…and I think that’s great. Use what works for you. However, we wanted something that could not only exercise our code base, but also our scripts that we use for building our environment. In a way, this allows us to test both pieces at the same time.
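As a rough sketch of that delete-and-provision step, using the VMM 2012 cmdlets (the server, VM and template names here are invented for illustration, and exact parameter sets vary by VMM version):

```powershell
Import-Module VirtualMachineManager            # ships with the VMM console
Get-SCVMMServer -ComputerName 'vmm01' | Out-Null

# Tear down last night's VM if it is still around
$vm = Get-SCVirtualMachine -Name 'BUILD-SP-01'
if ($vm) {
    if ($vm.Status -eq 'Running') { Stop-SCVirtualMachine -VM $vm }
    Remove-SCVirtualMachine -VM $vm
}

# Recreate it from a template on a designated host
$template = Get-SCVMTemplate -Name 'SP2013-Base'
$config   = New-SCVMConfiguration -VMTemplate $template -Name 'BUILD-SP-01-config'
Set-SCVMConfiguration -VMConfiguration $config -VMHost (Get-SCVMHost -ComputerName 'hyperv01')
Update-SCVMConfiguration -VMConfiguration $config
New-SCVirtualMachine -Name 'BUILD-SP-01' -VMConfiguration $config
```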

At this point I should throw in the disclaimer that this blog post builds on one written by my colleague David Baber: Driving PowerShell With XML. We use the same XML-driven framework to build out our machines. In reality the process of removing and creating VMs is treated as just one “step” in our build-out process. Executions of other steps obviously follow, but this post is primarily concerned with standing up that environment. What happens next is up to you. Read More…

We recently deployed a five-node CRM 2011 topology using Windows Azure IaaS with the following objectives:

  • Understand how a multiple-node CRM setup can be provisioned using Windows Azure IaaS. Specifically, how the networking capabilities offered by the Windows Azure platform (i.e. stateless load balancing) map to the CRM requirements (a sketch follows this list).
  • Develop an automated way to provision and de-provision a CRM setup. This is not only useful for dev and test scenarios, but also for production scenarios where it is notoriously difficult to conduct capacity planning before acquiring the necessary hardware. For example, it is hard to know upfront what CRM functional building blocks (aka CRM roles) the business stakeholders will want to focus on, such as async processes, sandbox, reports, etc. By dynamically scaling out the “needed” features on demand, we can enhance the business agility of the CRM.
  • Offer our customers an educated choice between CRM Online (no setup costs but less control) and CRM On-Premises (extensive setup costs but complete control).
  • Take advantage of hybrid apps that combine CRM capabilities with Windows Azure services, such as Windows Azure Active Directory, mobile services, etc.
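As a sketch of the load-balancing point above, fronting two CRM web servers with Azure's stateless load balancer looks roughly like this (the service, VM and load-balanced set names are placeholders):

```powershell
# Put both CRM web front-ends behind the same load-balanced endpoint on port 80
'CRMWEB01','CRMWEB02' | ForEach-Object {
    Get-AzureVM -ServiceName 'crmfarm' -Name $_ |
        Add-AzureEndpoint -Name 'web' -Protocol tcp `
            -LocalPort 80 -PublicPort 80 `
            -LBSetName 'crm-web-lb' `
            -ProbePort 80 -ProbeProtocol http -ProbePath '/' |
        Update-AzureVM
}
```

Because the balancer is stateless, part of the exercise is working out which CRM roles tolerate it, which is exactly the mapping question this post explores.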

Read More…

We’ve reached the end of this series.  In part one, we discussed the basics of PowerShell.  Part two showed some of the ways to interact with SharePoint via PowerShell.  Today we’ll look at parts of a script I compiled to build out a SharePoint 2013 development virtual machine.

Environment and Build Notes

I want to start off with some notes about the assumptions I made and the configuration I used. First, this VM is running in Hyper-V on Windows 8 and uses Windows Server 2012, which was installed through the GUI. (I’ll try to figure out PowerShell remoting and Hyper-V at a later date, but that wasn’t in the cards for this post.) Second, I’ve configured two virtual networks, one internal with a static IP and one external with a dynamic IP. I configured those through the GUI as well. However, almost everything else has been built using PowerShell. While we’ll only highlight some of the script in this post, you can find the full script at my CodePlex Project: Useful PowerShell Cmdlets.
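For the curious, the two virtual networks could likely have been created from PowerShell as well. A sketch using the Windows 8 Hyper-V module, with placeholder names:

```powershell
# Internal switch for the static-IP network
New-VMSwitch -Name 'SP-Internal' -SwitchType Internal

# External switch bound to a physical NIC (the adapter name varies by machine)
New-VMSwitch -Name 'SP-External' -NetAdapterName 'Ethernet'
```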

Read More…

PowerShell is great. It’s a powerful tool/programming language that can help you automate and solve any number of challenges. However, PowerShell files can also get quite ugly. With functions, parameters, inclusions (of other files and libraries) and all sorts of comments, making necessary changes can feel like looking for a needle in a haystack.

I worked on a project recently and saw this ugly side firsthand, through thousands of lines of PowerShell script spread out over dozens of files. I fought to learn the pattern of execution, and discovered that making even simple changes had side effects in processes that I didn’t even know were related. To overcome this, my team began using an XML file to maintain the list and order of commands to execute, along with a simpler, generic PowerShell file to execute everything.

Today, our configuration has grown complex, supporting the installation and configuration of several enterprise software components that need to exist in concert with each other.  Even with the complexities in the XML, it is easy to trace an error to the problem step, make corrections, and continue.

In this post we will lay down a foundation for anyone to build upon and organize their own PowerShell execution process cleanly inside of XML.
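As a preview of the idea, a stripped-down version looks like this (the step names, script names and XML shape are invented for illustration; the real framework is richer):

```powershell
# steps.xml, shown inline as a here-string for brevity: an ordered list of commands
$xml = [xml]@"
<Steps>
  <Step Name="Install prerequisites" Script="Install-Prereqs.ps1" />
  <Step Name="Create web application" Script="New-WebApp.ps1" />
</Steps>
"@

# Generic driver: walk the steps in document order and execute each one
foreach ($step in $xml.Steps.Step) {
    Write-Host "Running step: $($step.Name)"
    & (Join-Path $PSScriptRoot $step.Script)
}
```

Because every step is named, a failure points directly at the offending entry in the XML, which is what makes the trace-correct-continue cycle described above so painless.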

Read More…