Continuous Integration (CI) and Continuous Delivery (CD) are rapidly becoming an integral part of software development. These disciplines can play a significant role in building stable release processes that help ensure project milestones are met. In addition to performing compilation tasks, CI systems can be extended to execute unit testing, functional testing, UI testing, and many other tasks. This walkthrough demonstrates the creation of a simple CI/CD deployment pipeline with an integrated unit test.
There are many ways of implementing CI/CD, but for this blog I will use Jenkins and GitHub to deploy the simple CI/CD pipeline. A Docker container will be used to host the application. The GitHub repository hosts the application, including a Dockerfile for creating an application node. Jenkins is configured with the GitHub and Docker plugins. Read More…
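A pipeline like the one described above can be sketched as a declarative Jenkinsfile. This is only an illustration: the post may equally configure its stages through the Jenkins UI with the GitHub and Docker plugins, and the repository URL, image name, and test command below are hypothetical.

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Pull the application source (hypothetical repository)
                git url: 'https://github.com/example/app.git'
            }
        }
        stage('Unit Test') {
            steps {
                // Fail the build early if the integrated unit tests fail
                sh 'make test'
            }
        }
        stage('Build Image') {
            steps {
                // Build the application image from the Dockerfile in the repo
                sh "docker build -t example/app:${env.BUILD_NUMBER} ."
            }
        }
        stage('Deploy') {
            steps {
                // Replace the running container with the freshly built image
                sh 'docker rm -f app || true'
                sh "docker run -d --name app -p 8080:8080 example/app:${env.BUILD_NUMBER}"
            }
        }
    }
}
```

Tagging the image with the Jenkins build number makes each nightly deployment traceable back to the build that produced it.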
Inflexible customer solutions and business unit silos are the bane of any organization’s existence. So how does a large, multi-billion dollar insurance organization, with numerous lines of business, create a customer-centric business model while implementing configurable, agile systems for faster business transactions?
The solution is not simple, but we've managed to point our large insurance client in the right direction. What began as a plan to develop a 360-degree customer profile and connect the disparate information silos between business units ultimately became the first step toward a more customer-centric organization.
A major multi-year initiative to modernize the organization’s mainframe systems onto the Microsoft technology platform will now provide significant cost savings over current systems and enable years of future business innovation. Read More…
In the world of SharePoint upgrades and migrations, a number of terms are thrown around and often used interchangeably. This post outlines several key terms that will be surfaced throughout a three-part series on upgrade/migration strategies for SharePoint 2013. If you would like to jump to another post, use the links below:
- Part 1 – Definitions (this post)
- Part 2 – Considerations Outside of SharePoint (Coming soon)
- Part 3 – Diving into Database Attach (Coming soon)
In past revisions of SharePoint, we had multiple ways to upgrade our farms (and the content within them) to the latest version using the tooling Microsoft provides. Over the years, Microsoft used a number of terms related to the types of upgrade available:
- In-place upgrade – Often considered the easiest approach, but the most risky. The setup of the new system is performed on existing hardware and servers.
- Gradual upgrade – Allows for a side-by-side installation of the old and new versions of SharePoint.
- Database attach/migration – Allows for the installation and configuration of an entirely new environment where content is first migrated, and then upgraded to the desired state.
As SharePoint matured, the number of available upgrade options dwindled. For instance, in an upgrade from SharePoint Portal Server 2003 to Office SharePoint Server 2007, we could follow any one of the three upgrade paths noted above to reach our desired end state. In an upgrade of Office SharePoint Server 2007 to SharePoint Server 2010 we still had two paths available: the in-place upgrade and the database attach approach. For SharePoint 2013, we’re left with just the database attach approach.
Before we dive further into the database attach upgrade scenario, it’s helpful to take a step back and establish a common language as we discuss the upgrade process. Read More…
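Before the deeper dive in Part 3, the database-attach approach can be sketched in the SharePoint 2013 Management Shell. The database, server, and web application names below are hypothetical, and the exact parameters should be confirmed against Microsoft's documentation for your farm:

```powershell
# Sketch of a database-attach upgrade (hypothetical names).
# 1. Test the content database against the new farm first to surface
#    missing features, orphans, and other upgrade blockers:
Test-SPContentDatabase -Name "WSS_Content_Intranet" `
    -WebApplication "https://intranet.contoso.com"

# 2. Attach (mount) the database to the new farm; the schema upgrade
#    of the content runs as part of the mount:
Mount-SPContentDatabase -Name "WSS_Content_Intranet" `
    -DatabaseServer "SQL01" `
    -WebApplication "https://intranet.contoso.com"
```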
Despite the terms Dependency Inversion Principle (DIP), Inversion of Control (IoC), Dependency Injection (DI) and Service Locator having existed for many years now, developers are often still confused about their meaning and use. When discussing software decoupling, many developers view the terms as interchangeable or synonymous, which is hardly the case. Aside from misunderstandings when discussing software decoupling with colleagues, this misinterpretation of terms can lead to confusion when tasked with designing and implementing a decoupled software solution. Hence, it's important to differentiate between which is a principle, which a pattern, and which a technique.
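To make the distinction concrete, here is a minimal sketch (in Python, purely for illustration; all class names are hypothetical) contrasting Dependency Injection, where the caller hands the dependency in, with Service Locator, where the class reaches out to a registry to find it:

```python
class Database:
    """Stand-in dependency; a real one would talk to an actual database."""
    def query(self, sql):
        return f"rows for {sql}"

# Dependency Injection: the dependency is supplied from outside,
# so the class never decides which implementation it gets.
class ReportServiceDI:
    def __init__(self, db):
        self.db = db  # caller controls the concrete type

    def run(self):
        return self.db.query("SELECT * FROM sales")

# Service Locator: the class asks a central registry for its dependency,
# which hides the dependency from the class's public contract.
class Locator:
    _services = {}

    @classmethod
    def register(cls, name, service):
        cls._services[name] = service

    @classmethod
    def resolve(cls, name):
        return cls._services[name]

class ReportServiceSL:
    def run(self):
        db = Locator.resolve("db")  # hidden coupling to the locator itself
        return db.query("SELECT * FROM sales")

Locator.register("db", Database())
print(ReportServiceDI(Database()).run())  # dependency visible in the constructor
print(ReportServiceSL().run())            # dependency discovered at call time
```

Both services produce the same result, but only the injected one advertises its dependency in its signature, which is why DI is usually preferred for testable, decoupled designs.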
System Center 2012 is all about cloud computing: it delivers IT as a Service and supports heterogeneous environments extending from the private cloud to the public cloud.
Trying to describe what you can accomplish with Microsoft System Center 2012 is akin to defining what a carpenter can build when he opens his toolbox. The possibilities are virtually limitless. When all of the System Center 2012 management components are deployed, administrators and decision makers have access to a truly integrated lifecycle management platform. The seven core applications can each be deployed independently to provide specific capabilities to an organization, but are also tightly integrated with each other to form a comprehensive set of tools.
System Center 2012 is offered in two editions, Standard and Datacenter, with virtualization rights being the only difference. The simplified licensing structure is identical to that of Windows Server 2012. Further simplifying the licensing, SQL Server Standard is included and no longer needs to be licensed separately. The applications that make up the System Center 2012 suite, however, cannot be licensed individually, so it makes sense to have an idea of what each application can do, and how it fits into your environment. Read More…
Today I want to talk about a process we created for building out machines using Virtual Machine Manager (VMM) as part of our daily build process within Team Foundation Server (TFS).
As part of our nightly build process, we actually recreate the entire environment from scratch. We don’t use snapshots; we actually delete and provision a series of VMs. This may sound like overkill and I’ve seen other approaches that use snapshots and revert each night…and I think that’s great. Use what works for you. However, we wanted something that could not only exercise our code base, but also our scripts that we use for building our environment. In a way, this allows us to test both pieces at the same time.
At this point I should throw in the disclaimer that this blog post builds on one written by my colleague David Baber: Driving PowerShell With XML. We use the same XML-driven framework to build out our machines. In reality the process of removing and creating VMs is treated as just one “step” in our build-out process. Executions of other steps obviously follow, but this post is primarily concerned with standing up that environment. What happens next is up to you. Read More…
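As a rough illustration of that delete-and-provision "step," here is a sketch using the VMM 2012 PowerShell cmdlets. The VM, template, and host names are hypothetical, parameters are simplified, and the real process described in the post is driven by the XML framework rather than a standalone script:

```powershell
# Hypothetical names; assumes the VMM PowerShell module is loaded.
$vmName   = "BUILD-WEB01"
$template = Get-SCVMTemplate -Name "Win2008R2-Web"

# Tear down last night's VM if it still exists...
$old = Get-SCVirtualMachine -Name $vmName
if ($old) {
    Stop-SCVirtualMachine -VM $old -Force
    Remove-SCVirtualMachine -VM $old
}

# ...then provision a fresh one from the template, exercising the
# same build-out scripts the environment depends on.
New-SCVirtualMachine -Name $vmName -VMTemplate $template `
    -VMHost (Get-SCVMHost -ComputerName "HYPERV01")
```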
I recently attended SPC12 with many of my colleagues from AIS. One of the sessions I really enjoyed was High Availability Solutions with SharePoint Server 2013, delivered by Bill Baer. This session was geared toward the IT Pro (admin) audience and detailed the options for making SharePoint highly available.
During this session I found it interesting how much time was spent talking about mirroring. Mirroring is now considered a deprecated technology but is still supported by SharePoint 2013. Today I’d like to break down the session and talk about my thoughts on each point.
This is the third in a multipart series on the exciting new features of SQL Server 2012. Currently AIS is assisting a performing arts center with an upgrade to SQL Server 2012. During the research for this project I’ve had a chance to deploy many of these new features. These posts highlight the best of what SQL Server 2012 has to offer.
We’ve already discussed AlwaysOn High Availability, and integrated SSRS. Today I want to talk about the new contained database feature of SQL Server 2012. Contained databases make the life of the DBA and developer much easier. They also streamline the deployment of high availability scenarios.
So what are some of the new offerings of this feature?
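The core of the feature can be sketched in T-SQL (the database and user names below are hypothetical): a partially contained database carries its own users, so authentication no longer depends on server-level logins and the database can move between instances without orphaned-user cleanup.

```sql
-- Enable contained database authentication at the instance level:
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO

-- Create a partially contained database (PARTIAL is the only containment
-- level SQL Server 2012 offers besides NONE):
CREATE DATABASE SalesApp CONTAINMENT = PARTIAL;
GO

USE SalesApp;
-- A contained user authenticates at the database, not via a server login:
CREATE USER AppUser WITH PASSWORD = 'Str0ng!Passw0rd';
```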
The Extract, Transform & Load (ETL) process typically accounts for 70% of an entire data warehousing/data migration effort (according to The Data Warehouse ETL Toolkit). That proportion also makes the ETL a major source of project risk: its implementation can make or break the project. This fact should be taken into consideration while designing and implementing an ETL solution.
Every staging project has two elements: the ETL itself and the staging data structure. So today I'd like to talk about best practices for standing up a staging area using SQL Server Integration Services [ETL] and hosting a staging database in SQL Server 2012 [DB]. The best practices span three areas: Architecture, Development, and Implementation & Maintenance of the solution. Each area represents key patterns and practices (not a comprehensive list) for the ETL component and the data structure of the staging database.
And since I presume readers are looking for best-practice silver bullets, that is what I will try to deliver. What you will not find in this post are turnkey ETL performance optimizations.
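As one example of the staging-database [DB] practices in question, a staging table is often kept as a constraint-free heap with lineage columns, so SSIS bulk loads stay fast and every row remains traceable to the load that produced it. The schema below is a hypothetical sketch, not taken from the post:

```sql
-- Hypothetical staging table: no keys or constraints, loose source types,
-- plus audit columns for lineage and reload support.
CREATE TABLE stg.Customer
(
    SourceCustomerID varchar(50)   NULL,      -- keep source types loose; validate downstream
    CustomerName     nvarchar(200) NULL,
    ExtractFileName  varchar(260)  NULL,      -- lineage: which extract produced the row
    LoadBatchID      int           NOT NULL,  -- groups rows by ETL run for easy reload/purge
    LoadDateTime     datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
);
```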