You’re an enterprise. You’ve done your research. You’ve read the whitepapers. You’ve heard all the success stories (along with a few cautionary tales). Perhaps you’ve already taken your first steps into the cloud, but want to embark on a larger-scale public cloud adoption strategy.

But what does that look like for your enterprise? The journey is different for you – for everyone, really. And you certainly don’t want to make it up as you go along.

Here are five important things you need to map out before you start your public cloud journey. We’re confident in this roadmap because we’ve been along for the ride before. We’ve helped many large enterprises and agencies successfully adopt and implement their own unique cloud strategies. Read More…

2013 was a great year for AIS — we worked on exciting projects for our terrific clients, built some cool apps and won some cool awards. We were honored with the 2013 Microsoft Mid-Atlantic Cloud Practice Award and are among the first Amazon Web Services partners to earn a “SharePoint on AWS” competency. And throughout the year, we wrote and blogged about our passion for cloud computing, SharePoint, going mobile, and doing “more with less” for our government and commercial clients.

Here’s a round-up of 2013’s most popular posts and series, in case you missed them:

We have big plans for the blog for 2014 — more posts, more events and more compelling content from the entire AIS team. Stay connected with us on Facebook, Twitter and LinkedIn, and check out our Events page for details on our free presentations and webinars.

Happy holidays, and thanks for reading!

Amazon Web Services (AWS) CTO Werner Vogels offers this great piece of cloud advice: “Treat everything as a programmable resource, including data centers, networks, compute, storage and load balancers.”

In other words, automate every aspect of your (cloud-based) infrastructure.

Given AIS’ years of experience with SharePoint, we are always looking for ways to make the underlying infrastructure more cost-effective, scalable and robust. Fortunately, the benefits of automation apply equally to a SharePoint 2013 farm hosted in the cloud — whether it’s the ability to dynamically provision a SharePoint 2013 farm on the fly, to scale it up and down based on load, or to make it more fault-resilient.
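
To make that elasticity concrete, here is a minimal sketch of load-based scaling using the AWS Tools for Windows PowerShell. It is illustrative only, not our production script: the instance IDs, region and CPU thresholds are hypothetical placeholders, and a real farm would scale a pool of web front-ends rather than a single standby VM.

```powershell
# Minimal sketch: read the primary web front-end's average CPU from
# CloudWatch and start or stop a pre-built standby WFE accordingly.
# All IDs and thresholds below are hypothetical placeholders.
Import-Module AWSPowerShell
Set-DefaultAWSRegion -Region us-east-1

$dimension = New-Object Amazon.CloudWatch.Model.Dimension
$dimension.Name  = "InstanceId"
$dimension.Value = "i-11111111"              # primary WFE (placeholder)

$stats = Get-CWMetricStatistics -Namespace "AWS/EC2" `
    -MetricName "CPUUtilization" -Dimension $dimension `
    -StartTime (Get-Date).AddMinutes(-15) -EndTime (Get-Date) `
    -Period 300 -Statistic "Average"

if ($stats.Datapoints.Count -gt 0) {
    $avgCpu = ($stats.Datapoints | Measure-Object -Property Average -Average).Average
    if ($avgCpu -gt 75) {
        Start-EC2Instance -InstanceId "i-22222222"   # standby WFE (placeholder)
    }
    elseif ($avgCpu -lt 20) {
        Stop-EC2Instance -InstanceId "i-22222222"    # scale back down when quiet
    }
}
```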

We’ve written about two automated deployment approaches for SharePoint 2013: one for Amazon Web Services and one for Windows Azure. In case you missed them…

Our AWS-based SharePoint 2013 script and source code can be found here.

Our Windows Azure-based SharePoint 2013 script and source code can be found here.

In this blog post I’ll discuss some post-release reporting issues that we faced on one of our projects and the solutions we implemented. On the technology side, our production environment consisted of SQL Server 2008 R2 and an MVC 4.0 application, both hosted in Amazon Web Services.

The Problem

After the production release, the reporting system was not performing to user expectations. For most of the high-volume reports (50K to 200K rows of report output), we were getting request timeout errors. The client SLA for response time was two minutes; hence any report, big or small, had to return data within two minutes. All the reports were built with SQL Server Reporting Services 2008 R2, and in all there were close to 40 reports with such timeout issues. Read More…

I’ve been reading a lot about the sweeping organizational changes at Microsoft. It’s always interesting to analyze and attempt to interpret their strategy and internal politics. (For example, why is the Dynamics business still separate? Is it being positioned to be sold? Probably not, but fun to consider.)

However, I am more drawn to the larger changes the re-org is enabling. The external press has seemed negative about the actions of Microsoft’s executive leadership ever since Bill Gates left. While I may not agree with every choice Steve Ballmer has made, when you really stop and think about how Microsoft has transformed itself over the past six years, it’s pretty amazing — especially when set in juxtaposition to the lack of change at other lumbering IT giants. Microsoft is well on its way to transforming from a worldwide monopoly of “Windows and Office” into a “devices and services” business. Read More…

At AIS, we’re committed to delivering transformative solutions and outcomes, and that’s exactly what we’re doing for Healthcare Receivable Specialists, Inc. (HRSI). Everyone knows that data redundancy can be a productivity killer, so to help eliminate that problem we’re building HRSI a new custom application hosted in Amazon Web Services (AWS). The AWS hosting option will also keep costs down without sacrificing performance, while positioning HRSI for HIPAA compliance. Additionally, our solution provides a self-service means of checking the status of applications throughout HRSI’s entire business process.

Keep reading to learn more about this exciting project!

Introduction by Vishwas Lele:

Amazon Web Services (AWS) CTO Werner Vogels offers this great piece of cloud advice: “Treat everything as a programmable resource, including data centers, networks, compute, storage and load balancers.” In other words, automate every aspect of your (cloud-based) infrastructure. There are significant benefits in following Werner Vogels’ advice:

  1. You can build cost-aware systems, keeping only the parts of the system that are needed and turning off everything else.
  2. Capacity planning is hard. It is much better to build capacity dynamically, based on need.
  3. Failures are not an exception but the rule. Rather than building complex logic to handle exceptions, make your systems fault-resilient by provisioning failover resources as needed.
  4. Make your systems more agile – systems that can scale in the direction the business needs, rather than to a fixed design-time scaling criterion.

Given AIS’ years of experience with SharePoint, we are always looking for ways to make the underlying infrastructure more cost-effective, scalable and robust. Fortunately, the aforementioned benefits of automation apply equally to a SharePoint 2013 farm hosted in the cloud — whether it is the ability to dynamically provision a SharePoint 2013 farm on the fly, to scale it up and down based on load, or to make it more fault-resilient.

But it all begins with developing robust automation scripts to provision and manage a SharePoint 2013 farm. This brings us back to the purpose of this blog post by Abhijit Kumar. Abhijit discusses an automated approach for provisioning a SharePoint 2013 farm using Amazon Web Services. It is noteworthy that the automation approach described below is based solely on PowerShell. This might come as a surprise, given that AWS offers services like CloudFormation, which enables the creation of AWS resources, along with open-source tools such as Opscode Chef and Puppet, which enable the installation and configuration of applications. We chose to rely solely on PowerShell for the following reasons:

  1. PowerShell is Microsoft’s canonical task automation framework, consisting of a command-line shell and a scripting language that has full access to COM and WMI, giving Windows administrators control over every aspect of Windows OS-based machines.
  2. The PowerShell scripting language is built on the .NET Framework. This means a PowerShell script can take advantage of .NET Framework capabilities such as Windows Workflow Foundation (WF). We use WF extensively to manage long-running automation scripts.
  3. AWS CloudFormation is not available in AWS GovCloud. AWS GovCloud is an isolated AWS region designed to allow U.S. government agencies and customers with sensitive workloads to address their specific regulatory and compliance requirements. Given that AIS serves a large number of customers with stringent regulatory and compliance requirements, we needed an automation approach that worked in AWS GovCloud.
  4. If you read our earlier blog post about SharePoint 2013 automation on Windows Azure, you will notice that we achieved a high level of reuse between the Windows Azure and AWS scripts for SharePoint 2013. While the WF-based provisioning logic is largely the same, the Azure Service Management SDK calls are replaced with calls to the AWS Tools for Windows PowerShell. This reuse gives us the flexibility to offer our customers a choice between the industry-leading IaaS platforms – AWS and Windows Azure.

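Before handing off to Abhijit, here is a heavily simplified sketch of the provisioning pattern, using only the AWS Tools for Windows PowerShell as described above. It launches a single farm VM, tags it by role, and waits for it to start; the AMI, key pair and subnet IDs are hypothetical placeholders, and the real script wraps steps like this in a WF-based workflow.

```powershell
# Simplified sketch: launch one SharePoint 2013 farm VM from a prepared
# Windows Server AMI, tag it by farm role, and wait until it is running.
# All IDs below are hypothetical placeholders.
Import-Module AWSPowerShell
Set-DefaultAWSRegion -Region us-gov-west-1     # the AWS GovCloud region

$reservation = New-EC2Instance -ImageId "ami-12345678" `
    -InstanceType "m1.xlarge" -KeyName "sp-farm-key" `
    -SubnetId "subnet-1a2b3c4d" -MinCount 1 -MaxCount 1

$instanceId = $reservation.Instances[0].InstanceId

# Tag the instance so later workflow steps can locate it by farm role.
New-EC2Tag -Resource $instanceId -Tag @(
    @{ Key = "Name";     Value = "SP2013-AppServer" },
    @{ Key = "FarmRole"; Value = "Application" }
)

# Poll until the VM is running; only then can configuration (joining the
# domain, installing SharePoint, creating the farm) begin.
do {
    Start-Sleep -Seconds 15
    $state = (Get-EC2Instance -InstanceId $instanceId).Instances[0].State.Name
} while ($state -ne "running")
```
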
Abhijit’s post below walks you through the script to deploy a SharePoint 2013 farm on AWS in an automated manner. I am confident that you will find it useful. Please give the scripts a try and let us know what you think.

Because of our broad experience in building web applications, AIS decided to develop a prototype that highlights the features and capabilities of open standards for geospatial processing and data sharing through web applications.

We chose the Visible Infrared Imaging Radiometer Suite (VIIRS) as our data source for the demonstration. VIIRS collects visible and infrared imagery and radiometric data for civil and military Earth monitoring. (The Day/Night Band (DNB) datasets available from NOAA’s Comprehensive Large Array-Data Stewardship System are not quite in the format we need for our application, since they are sensor data records stored within an HDF5 container.)

Read More…

Amazon Web Services (AWS) provides an extremely flexible set of services for hosting web applications in the cloud, with a web-based console for selecting options to quickly provision a set of IT resources. This post will explore the various aspects of hosting a custom .NET web application in AWS, focusing on high-availability options and disaster recovery scenarios, and on how to achieve both under cost constraints.

When building a solution in AWS you have to understand the difference between affinity and availability, and the terminology that Amazon uses. We will define affinity as the physical location of resources within a data center and availability as the isolation of resources for disaster recovery scenarios.

The AIS solution I’ll reference throughout this post uses a Virtual Private Cloud (VPC) to enable our engineers to develop a multi-tiered application within a private, subnetted network split across two availability zones. As Amazon defines it, a VPC allows “you to create a virtual network topology—including subnets and route tables—for your Amazon Elastic Compute Cloud (Amazon EC2) resources.” The use of a VPC ensures you have high affinity for your EC2 resources. Availability zones refer to the geographically separated data centers Amazon uses for hosting EC2 resources. Your disaster recovery architecture must therefore account for the placement of EC2 resources in different availability zones.
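
As a rough illustration of that layout, a few lines of the AWS Tools for Windows PowerShell are enough to carve a VPC into one subnet per availability zone. The CIDR ranges and zone names below are hypothetical placeholders, and a real deployment would add route tables, gateways and security groups.

```powershell
# Minimal sketch: one VPC with a subnet in each of two availability
# zones, so EC2 resources can be split for disaster recovery.
# CIDR ranges and zone names are hypothetical placeholders.
Import-Module AWSPowerShell
Set-DefaultAWSRegion -Region us-east-1

$vpc = New-EC2Vpc -CidrBlock "10.0.0.0/16"

$subnetA = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.1.0/24" `
    -AvailabilityZone "us-east-1a"
$subnetB = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.2.0/24" `
    -AvailabilityZone "us-east-1b"
```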

High Availability vs. Disaster Recovery in AWS

To muddy the terminology waters further, we have the concept of high availability, where the effects of hardware or software failure are essentially masked and downtime for end users is reduced, as with Microsoft SQL Server. Our solution had to address high availability and allow for swift recovery in the event of a disaster or a failure of EC2 resources in one availability zone. A high-level picture of what we needed to achieve is shown below.
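
To make the fail-over idea concrete before diving in, here is one common pattern, sketched with hypothetical placeholder IDs and not necessarily the exact mechanism our solution uses: if the primary instance’s availability zone is lost, re-point the application’s Elastic IP at a standby instance in the other zone.

```powershell
# Sketch of a manual fail-over: if the primary instance is down, start
# the standby in the other availability zone and move the Elastic IP.
# Instance and allocation IDs are hypothetical placeholders.
Import-Module AWSPowerShell
Set-DefaultAWSRegion -Region us-east-1

$primary = "i-11111111"      # instance in availability zone A (placeholder)
$standby = "i-22222222"      # instance in availability zone B (placeholder)

$state = (Get-EC2Instance -InstanceId $primary).Instances[0].State.Name
if ($state -ne "running") {
    Start-EC2Instance -InstanceId $standby

    # Register-EC2Address wraps the EC2 AssociateAddress call; with
    # -AllowReassociation the Elastic IP moves without being released.
    Register-EC2Address -InstanceId $standby `
        -AllocationId "eipalloc-12345678" -AllowReassociation $true
}
```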

Read More…


Here at AIS, we’ve found Windows Azure Blob Storage to be an inexpensive, fast hosting solution for non-text or server-side-loaded resources. But what if we want to use client-side JavaScript to load HTML fragments or JSON data directly from blobs? Under normal circumstances this is prevented by the browser’s Same Origin Policy; that is, you can’t load HTML fragments or JSON from another domain, subdomain, port or protocol.

One commonly used solution to this restriction is JSONP, but this is not available with Azure Blob Storage. Another modern option is Cross-Origin Resource Sharing (CORS), but it is also unavailable on Azure Blob Storage and not supported in some legacy browsers.

We could consider a server-side solution, such as employing an Azure Web Role to read text-based content from blob storage and serve it up from the origin server. But this approach can be both wasteful and performance-inhibiting.

Read More…