
Featured Post

A Simple Content Enrichment Service

By: Chris Hettinger

A Primer on Pattern Libraries

For many, developing and delivering the front end of a website or application is often centered on building out the individual pages of the project. With that approach, it doesn’t take long for your CSS file(s) to become overgrown and difficult to maintain. Before you know it, rules are duplicated, !important tags clutter declarations, and regression issues flood the bug list. It can go from bad to frustrating pretty quickly.

So what’s a Front-End Dev to do? Instead of focusing on the development of elements for each page, shift your focus and consider the entire project, building components that can easily be reused throughout the site or application. And that’s where Pattern Libraries step in: They provide a structured visual reference that presents and organizes the components of the project, streamlines development and helps deliver a cohesive, maintainable product.

Read More…


Join Microsoft and AIS to learn how the Azure Government Cloud can help Justice & Public Safety officials respond quickly to events, scaling as needed.

The cloud platform designed to meet U.S. government demands: Azure Government delivers cloud speed, scale, and economics while addressing the security and compliance needs of U.S. federal agencies, including the Department of Defense, as well as state and local governments and their solution providers. Azure Government offers physical and network isolation from non-U.S. government deployments and requires specialized personnel screening. Azure Government will address government regulatory and compliance requirements such as FedRAMP, CJIS, and HIPAA.

When local, state or federal authorities respond to criminal acts, they seek to quickly collect vast amounts of input from the public. This input can come in the form of tips, photos, videos or countless other observations. Agencies need the ability to surge their IT tools and applications to collect the data, store it, and run big data analysis tools against the collected content to harvest information.

Register now for an exclusive Microsoft & AIS webinar to learn how you can:

  • Publish an internet- or extranet-facing web or SharePoint site in just a few clicks to coordinate with first responders during an incident or catastrophe
  • Take a cost-effective approach to development in larger-scale environments
  • Extend your on-premises footprint without exposing critical data
  • Reduce the burden of implementing backup and disaster recovery for your applications

RSVP TODAY!

Are you writing a lot of JavaScript as part of your web application? Spending a lot of time debugging that JavaScript? Today I want to discuss a debugging technique using that old standby, console.log, and a way to overcome one of its deficiencies.

Using console.log can be useful for printing out application state without interrupting your workflow. Contrast that with using a JavaScript breakpoint where you are forced to break your workflow to check the application state and then resume the execution of your application.
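
For example, a quick way to watch state evolve is to drop a temporary log statement into the code path you care about. This is only a minimal sketch; the module, function, and field names are invented for illustration:

```typescript
// Hypothetical cart module, used only to illustrate the technique.
interface CartItem {
  sku: string;
  quantity: number;
}

function addToCart(cart: CartItem[], item: CartItem): CartItem[] {
  const updated = [...cart, item];
  // Temporary debug output: prints the current state on every call
  // without pausing execution the way a breakpoint would.
  console.log('addToCart called with', item, 'cart is now', updated);
  return updated;
}
```

Every call now leaves a trail in the console while the application keeps running, but that log line only exists because you edited the source, which is exactly the downside discussed below.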

So, now that I’ve refreshed your memory on the upside of console.log over a breakpoint, let’s talk about the downside compared to breakpoints. To add a console.log statement, you have to edit the source code of your application. And editing the source code of your application is an even bigger break in your debugging workflow. You have to go back to the editor, add the log statement, refresh the page, and then go back to where you were. And don’t forget to remove that log statement before you commit your source code.

Well what if you could avoid those limitations? Read More…

Recently, I encountered an issue with SharePoint 2013 search crawls where .pdf files smaller than 1 MB reported a warning: “The item has been truncated in the index because it exceeds the maximum size”. The default MaxDownloadSize for documents in SharePoint is 64 MB, which was more than enough to handle these relatively small .pdf files.

After I reached out to some co-workers, one suggested that the warning might be a false positive and that the entire document had actually been crawled. I tested this by first searching for words at the end of the document; no matches were found, which would be expected if the document were truncated. Next, I tried searching for text in the middle of the document; no matches were found there either. Figuring it must have truncated a lot of text, I then searched for text at the very beginning of the document. No results were found! So when the warning said it had truncated the item, it had truncated the whole document. Read More…

Part 4: Load testing the messaging integration style

In this four-part series we have been looking at how different application integration styles handle spikes in load. In Part 1 we created and deployed a distributed system that used an RPC-based integration style: our inventory application communicated with our purchasing application via a web service. In Part 2 we simulated a spike in load and caused the system to fail. In Part 3 we updated the architecture from an RPC-based integration style to a messaging-based integration style. In this post, we are going to simulate the same spike in load and see how the messaging-based architecture copes.

Where are we now? We have updated our distributed system to use messaging as the communication mechanism between the applications. We have created an integration test that causes the inventory application to request stock replenishment from the purchasing application and we have created a load test that executes the integration test a thousand times and records the results. We have already tested our previous, RPC-based architecture and seen that it doesn’t hold up when there is more load than the hardware can handle. Read More…
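
The series drives this with a Visual Studio Load Test; purely as a rough stand-in for the same idea, the sketch below runs a stubbed scenario a thousand times and records how long each run takes. The runScenario function is a hypothetical placeholder for the real integration test, and a real load test would also add concurrent virtual users rather than running sequentially:

```typescript
// Hypothetical stand-in for the integration test described in the series:
// in the real system this would make the inventory application request
// stock replenishment and wait for the outcome.
async function runScenario(): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, Math.random() * 50));
}

async function loadTest(iterations: number): Promise<void> {
  const durations: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = Date.now();
    await runScenario();
    durations.push(Date.now() - start);
  }
  const average = durations.reduce((sum, d) => sum + d, 0) / durations.length;
  console.log(
    `${iterations} runs, average ${average.toFixed(1)} ms, max ${Math.max(...durations)} ms`
  );
}

loadTest(1000).catch((error) => console.error(error));
```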

Part 3: Re-architecting the system to use a messaging integration style

In this series of posts I am taking a practical look at how a messaging architecture can mitigate the risks associated with a spike in load if a server doesn’t have enough resources to handle the spike. In Part 1 I created a distributed system for a fictitious company. The system consisted of two nodes: an inventory node and a purchasing node. These nodes were integrated using an RPC-style architecture. In Part 2 I put the system under stress using a Visual Studio Load Test and saw how it failed when the virtual machine on which the purchasing system was deployed didn’t have enough resources to handle the load. In this third post I am going to use a messaging integration style over RabbitMQ to allow this distributed system to effectively handle spikes in load. Finally, in Part 4 I am going to simulate the same spike in load and see how the messaging architecture comfortably handles the spike. Read More…
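
The posts build the messaging layer in .NET on top of RabbitMQ; as a rough, language-agnostic sketch of the same integration style, here is what the two sides might look like using the amqplib client in TypeScript. The queue name, message shape, and broker URL are invented for the example:

```typescript
import * as amqp from 'amqplib';

// Inventory side: fire-and-forget a replenishment request onto a durable
// queue instead of calling the purchasing application synchronously.
async function requestReplenishment(sku: string, quantity: number): Promise<void> {
  const connection = await amqp.connect('amqp://localhost'); // assumed local broker
  const channel = await connection.createChannel();
  await channel.assertQueue('stock-replenishment', { durable: true });
  channel.sendToQueue(
    'stock-replenishment',
    Buffer.from(JSON.stringify({ sku, quantity })),
    { persistent: true }
  );
  await channel.close();
  await connection.close();
}

// Purchasing side: consume requests at whatever rate the hardware allows;
// unacknowledged messages simply wait in the queue during a spike.
async function startPurchasingConsumer(): Promise<void> {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();
  await channel.assertQueue('stock-replenishment', { durable: true });
  channel.prefetch(1); // process one message at a time
  await channel.consume('stock-replenishment', (msg) => {
    if (msg === null) return;
    const request = JSON.parse(msg.content.toString());
    console.log('Raising purchase order for', request);
    channel.ack(msg);
  });
}
```

The benefit of the pattern shows up in the consumer: when the purchasing node is saturated, replenishment requests simply queue up in RabbitMQ instead of timing out against a synchronous web service call.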

Welcome to part ten of our blog series based on our latest Pluralsight course: Applied Azure. Previously, we’ve discussed Mobile Services, Big Compute, Azure Web Sites, Azure Worker Roles, Identity and Access with Azure Active Directory, Azure Service Bus and MongoDB, HIPAA Compliant Apps in Azure, Offloading SharePoint Customizations to Azure, and “Big Data” with Windows Azure HDInsight.

Motivation

A majority of organizations, especially e-commerce and m-commerce shops, face the following challenges in supporting business-to-business messaging across various platforms. Read More…

Welcome to part nine of the blog series based on Vishwas Lele’s Pluralsight course: Applied Azure. Previously, we’ve discussed Big Compute, Azure Web Sites, Azure Worker Roles, Identity and Access with Azure Active Directory, Azure Service Bus and MongoDB, HIPAA Compliant Apps in Azure, Offloading SharePoint Customizations to Azure, and “Big Data” with Windows Azure HDInsight.

Motivation

Windows Azure Mobile Services is a powerful building block in the Windows Azure platform. It brings together a set of services that enables you to create a versatile backend API very quickly. Moreover, it is supported by all major platforms such as Windows 8, Windows Phone, Android, iOS and HTML5. Read More…
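
To give a flavor of that “versatile backend API very quickly” claim, here is a rough sketch of the classic HTML5/JavaScript quickstart pattern, written as TypeScript. The service URL, application key, and table name are placeholders, and the client API shown is recalled from the period documentation rather than taken from the course, so treat the exact names as approximate:

```typescript
// The Mobile Services browser SDK was loaded from a script tag and exposed
// a global WindowsAzure object; we declare it loosely here.
declare const WindowsAzure: any;

// Placeholder endpoint and key for illustration only.
const client = new WindowsAzure.MobileServiceClient(
  'https://contoso-sample.azure-mobile.net/',
  'YOUR-APPLICATION-KEY'
);

const todoTable = client.getTable('TodoItem');

// Insert a record; Mobile Services provides the backing table and REST API.
todoTable.insert({ text: 'Review the Applied Azure course', complete: false })
  .done(
    (item: any) => console.log('Inserted item with id', item.id),
    (error: any) => console.error('Insert failed', error)
  );
```

The same table-backed API was exposed to the other platforms through their respective client SDKs.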

Satya Nadella introduces Azure Government Cloud General Availability

Today, AIS participated in the Microsoft One Government Cloud event and Expo introducing the General Availability (GA) of the Azure Government Cloud at the Mayflower Hotel in Washington, DC. Microsoft CEO Satya Nadella participated in a fireside chat, sharing his vision for the industry and discussing how government organizations can thrive in the mobile-first, cloud-first world.

Read More…

Welcome to part eight of the blog series based on Vishwas Lele’s Pluralsight course: Applied Azure. Previously, we’ve discussed Azure Web Sites, Azure Worker Roles, Identity and Access with Azure Active Directory, Azure Service Bus and MongoDB, HIPAA Compliant Apps in Azure, Offloading SharePoint Customizations to Azure, and “Big Data” with Windows Azure HDInsight.

Motivation

Big Compute refers to running large-scale applications that utilize large amounts of CPU and/or memory resources. These resources are provided by a cluster of computers, and the applications are distributed across the cluster. The key concept is to distribute the application across multiple machines so that computations execute simultaneously, in parallel. Problems in the financial, scientific and engineering fields often require computations that would take several days or longer if executed on a single computer; Big Compute solutions can reduce that time from days to hours or less, depending on how many machines are added to the compute cluster.

Big Compute differs subtly from “Big Data”: the latter is more about using the disk capacity and I/O performance of a cluster of computers to analyze large volumes of data, whereas Big Compute is primarily about utilizing CPU power in a cluster to perform computations. In order to harness the resources of multiple machines, a Big Compute solution also requires components to handle the configuration and scheduling of the individual computations; this is usually the role of a ‘head node’ in the compute cluster.

Microsoft’s HPC (High Performance Computing) platform is a key part of its Big Compute offerings. HPC provides all the components necessary to configure, schedule and execute computations in a distributed cluster, and it is supported in on-premises environments as well as in the Azure cloud, both in an IaaS configuration and via an Azure HPC scheduler. Since the publishing of the Pluralsight course, there have been continued developments from Microsoft on the Big Compute offerings in Azure, in particular the new Azure Batch offering, which is currently in preview. Read More…
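
To make the head-node idea concrete, here is a purely illustrative, single-process analogy in TypeScript: a scheduler hands independent work items to a fixed pool of workers so they run concurrently rather than one after another. Real Big Compute solutions such as HPC Pack or Azure Batch do this across many machines, and nothing here reflects their actual APIs:

```typescript
// Purely illustrative, single-process analogy of a head node's job:
// hand independent work items to a pool of workers so they run
// concurrently instead of one after another.
type WorkItem = { id: number };

async function compute(item: WorkItem): Promise<number> {
  // Stand-in for an expensive financial, scientific or engineering computation.
  await new Promise((resolve) => setTimeout(resolve, 10));
  return item.id * item.id;
}

async function schedule(items: WorkItem[], workerCount: number): Promise<number[]> {
  const results: number[] = new Array(items.length);
  let next = 0;
  // Each "worker" repeatedly claims the next unprocessed item, much like
  // compute nodes pulling tasks dispatched by a head node.
  const workers = Array.from({ length: workerCount }, async () => {
    while (next < items.length) {
      const index = next++;
      results[index] = await compute(items[index]);
    }
  });
  await Promise.all(workers);
  return results;
}

schedule(Array.from({ length: 100 }, (_, id) => ({ id })), 8)
  .then((results) => console.log('completed', results.length, 'computations'))
  .catch((error) => console.error(error));
```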