Data is all around us, but it’s becoming increasingly difficult to extract relevant insights from that data and turn them into intelligent action. Last night’s #AzureGovMeetup dove deep into advanced analytics capabilities, and attendees got a firsthand look at how raw data is transformed into actionable insight.

The Meetup featured highly interactive, demonstration-focused sessions by Microsoft data and cloud experts, as well as AIS’ own CTO and Microsoft MVP, Vishwas Lele. Esri also presented a demo of advanced spatial analytics in the cloud.


In case you missed it, AIS livestreamed the demos on Twitter. Watch the videos below and be sure to follow us @AISTeam. (And check out the #AzureGovMeetup hashtag for more photos and insights from the outstanding lineup of experts.) Next month’s Meetup will be Wednesday, May 31 at 6 p.m. RSVP today!

Early in my web development career, I always tried to avoid deployment work. It made me uneasy to watch other developers constantly bang their heads against their desks, frustrated with getting our app deployed to whatever cloud service we were using at the time. Deployment work became the “short straw” assignment because it was always a long, unpredictable and thankless task. It wasn’t until I advanced in my tech career that I realized why I felt this way.

My experience with deployment activities up to that point had always involved a manual process. I thought that the time it took to set up an automated deployment mechanism was unnecessary overhead; I’d much rather spend my time developing the actual application and put in a few hours on a manual deployment every so often when I was ready. However, as I got to work with more and more experienced developers, I began to understand that a manual deployment process is slow, unreliable, unrepeatable, and rarely consistent across environments. It also requires detailed documentation that can be hard to follow and is in constant need of updating.

As a result, the deployment process becomes a mysterious beast that only a few experts on your development team can tame, which ultimately isolates those team members when they could be spending more time building features or fixing bugs in your application. Although there is some initial overhead involved in creating a fully automated deployment pipeline, subsequent deployments of the same infrastructure can be done in a matter of seconds. And since validation is baked into the automated process, your developers only have to devote time to application deployment if something fails or goes wrong.

This three-part blog series provides a general set of instructions for building an automated deployment pipeline using Azure cloud services and Octopus Deploy, a user-friendly automation tool that integrates well with Azure. It may not detail every step you need, but it will point you in the right direction and show you the value of automated deployment mechanisms. Let’s get started. Read More…

The central focus of DevOps has been the continuous delivery (CD) pipeline: a single, traceable path for any new or updated version of software to move from lower environments to higher environments through automated promotion. However, in my recent experience, DevOps also serves as the bridge across the “expectations chasm”: the gap between the three personas (CIO, Ops, and App teams) described below.

Each persona (CIO, Ops, and App teams) has different expectations for the move to the public cloud. For the CIO, the motivation is based on key selling points: dealing with capacity constraints and mounting on-premises data center costs, reducing Time to Value (TtV), and increasing innovation. The Ops team expects tooling maturity on par with on-premises, including capacity planning, high availability (HA), compliance, and monitoring. The Apps team expects to keep using the languages, tools, and CI process it already relies on, but in the context of new PaaS services. It also expects the same level of compliance and resilience from the underlying infrastructure services.

Unfortunately, as we will see in a moment, these expectations are hard to meet, despite the rapid innovation and cadence of releases in the cloud.

Consider these examples: Read More…

In this video blog, I’ll walk you through building a continuous integration and continuous delivery (CI/CD) pipeline using the latest tools from Microsoft, including Visual Studio Team Services (VSTS) and Azure. The pipeline is built to support a .NET Core application, and the walkthrough includes the following steps:

  1. Configuring Continuous Integration (CI) with VSTS Build services
  2. Adding unit testing and validation to the CI process
  3. Adding Continuous Deployment (CD) with VSTS Release Management & Azure PaaS
  4. Adding automated performance testing to the pipeline
  5. Promoting the deployment to production once validated
  6. Sending feedback on completion of the process to Slack (see the sketch below)
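
As a taste of that last step, here’s a minimal sketch of posting a completion message to Slack through an incoming webhook; the webhook URL and message text are placeholders, not values from the actual pipeline.

```python
import json
import urllib.request

# Placeholder incoming-webhook URL; create one in Slack and substitute your own.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def notify_slack(message: str) -> None:
    """Post a simple text message to a Slack incoming webhook."""
    payload = json.dumps({"text": message}).encode("utf-8")
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # Slack answers with the plain text "ok" when the message is accepted.
        print(response.read().decode("utf-8"))

if __name__ == "__main__":
    notify_slack("Release deployed to production and validated.")
```

VSTS also offers a built-in Slack service hook, so a script like this is only needed when you want to customize the message from a pipeline task.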

In a previous blog post, we gave a quick overview of Continuous Integration and Deployment of .NET applications using Visual Studio Team Services (VSTS). This involved building and deploying regular old .NET applications with VSTS, something we would certainly expect a Microsoft service to handle. However, VSTS also has some lesser-known support for other frameworks, including Java. The Microsoft VSTS website even has a portal page proclaiming its Java support: “Love Java? So do we!”

VSTS support for Java build frameworks such as Maven and Ant came in handy for AIS recently, as we were tasked with developing some new features for an older Java desktop application for a federal client. And I have to say that all of the VSTS tools for Java applications worked flawlessly. We were able to easily add the Java project source code to a Team Foundation Version Control (TFVC) repository hosted online in VSTS. Oracle even has an extension for integrating with a TFVC workspace, allowing us to check in changes right from the JDeveloper IDE. Read More…

If you need managed services to maintain peak IT network operations, consider us here at Applied Information Sciences. We’ll manage all your IT services for a predictable cost so you can focus on more strategic investments. AIS’ Managed Services Practice provides ongoing responsibility for monitoring, patching and problem resolution for specific IT systems on your company’s behalf.

Capabilities

  • Patching
  • Monitoring
  • Alerting
  • Backup and Restore
  • Incident Response

AIS’ Managed Services Practice offers up to 24×7 coverage for initial incident response through a combination of dedicated part- and full-time staff, both onshore and offshore. AIS prides itself on being on the leading edge of managed services support. Our collaborative, disciplined approach reflects our commitment to quality, value, time and budget. Read More…

The emerging technology of cognitive services teaches computers to understand the world as we humans do. Last night’s #AzureGovMeetup in DC focused on exactly how the exciting and powerful capabilities of cognitive services can help government agencies better meet their unique missions and improve citizen services and engagement.

Our speakers included Justin “Doc” Herman, program lead of Emerging Citizen Technology, Technology Transformation Service, General Services Administration, who shared how government is advancing the use of emerging technologies. The night also featured compelling presentations and demos on Microsoft Cognitive Services by Steve Michelotti, senior program manager, Azure Government; and Ken Hausman, data platform solution architect at Microsoft.

In case you missed it, AIS livestreamed a few portions of the event on Twitter. Watch the videos below and be sure to follow us @AISTeam. (And check out the #AzureGovMeetup hashtag for more photos and insights from the outstanding lineup of experts.) For future DC AzureGov Meetup dates and details, go here. We hope to see you next month! Read More…

When you read about the Internet of Things, you often hear about connected cars, connected kitchen appliances, small devices that let you order things quickly, or other consumer-grade applications. In this post, I will quickly describe a recent IoT project I worked on where the devices are not small consumer-grade sensors…they are large industrial manufacturing machines.

In this case, machines arranged on a shop floor are responsible for cutting metal into various shapes. These machines must be both very powerful and very precise, and they have robotic arms that are programmed to grip specialized tools for this activity. The machines use the MTConnect protocol as the common language for communicating their operational status and the results of any action taken, and that data is streamed to a central collection point for analysis. In older machines, adapters are installed to stream the machine’s data in that common language.
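
To make the collection step concrete, here’s a minimal sketch of polling an MTConnect agent’s /current endpoint over HTTP and pulling out the individual observations it reports; the agent address is a placeholder and error handling is omitted.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical MTConnect agent address; on a real shop floor, one or more
# agents front the machines and their adapters.
AGENT_URL = "http://agent.example.com:5000/current"

def read_current_observations(url: str = AGENT_URL) -> dict:
    """Poll the agent's /current endpoint and return data-item id -> value."""
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())

    observations = {}
    # Walk the streams document and keep every element that carries a
    # dataItemId attribute, which is how MTConnect reports observations.
    for element in root.iter():
        data_item_id = element.get("dataItemId")
        if data_item_id is not None:
            observations[data_item_id] = element.text
    return observations

if __name__ == "__main__":
    for item_id, value in read_current_observations().items():
        print(item_id, "=", value)
```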

Our work on this project helped the engineers identify optimal cut times by analyzing the machine activity data. First, we needed to enhance the collection process so that all data was readily available, then apply the appropriate business rules to identify cut time, and finally provide quick, actionable feedback on the outcome. Read More…

The #AWS and #Azure outages over the past two weeks are a good reminder of how seemingly simple problems (a failed power source, an incorrect script parameter) can have a wide impact on application availability.

Look, the cloud debate is largely over and customers (commercial, government agencies, and startups) are moving the majority of their systems to the cloud. These recent outages are not going to slow that momentum down.

That said, all the talk of three, four, or five nines of availability and financially backed SLAs has lulled many customers into expecting utility-grade availability for their cloud-hosted applications out of the box. This expectation is unrealistic given the complexity of the ever-growing number of moving parts in a connected global infrastructure, dependence on third-party applications, multi-tenancy, commodity hardware, transient faults due to shared infrastructure, and so on.
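
A quick back-of-the-envelope calculation shows why. An application that depends on several services, each with its own three-nines SLA, inherits a composite availability equal to the product of the individual figures; the three-service example below is purely illustrative, not a specific AWS or Azure SLA.

```python
# Illustrative composite-availability math for an app that depends on
# three services, each offering "three nines" (99.9%) on its own.
service_availabilities = [0.999, 0.999, 0.999]

composite = 1.0
for availability in service_availabilities:
    composite *= availability

minutes_per_month = 30 * 24 * 60  # 43,200 minutes in a 30-day month
expected_downtime = (1 - composite) * minutes_per_month

print(f"Composite availability: {composite:.4%}")                   # ~99.70%
print(f"Expected downtime: {expected_downtime:.0f} minutes/month")  # ~129 minutes
```

In other words, three individually respectable SLAs compound into roughly two hours of expected downtime a month before any application-level failures are counted.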

Unfortunately, we cannot eliminate such cloud failures. So what can we do to protect our apps from failures? The answer is to conduct a systematic analysis of the different failure modes and have a recovery action for each failure type. This is exactly the technique, failure mode and effects analysis (FMEA), that other engineering disciplines (like civil engineering) have long used for failure planning. FMEA is a systematic, proactive method for evaluating a process to identify where and how it might fail, and to assess the relative impact of different failures, in order to identify the parts of the process most in need of change.
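
As a rough sketch of what FMEA-style planning looks like for a cloud application, you can enumerate failure modes, score each for severity, occurrence, and detectability, and record a recovery action; the modes, scores, and actions below are illustrative examples, not a prescribed catalog.

```python
# Each entry scores severity (S), occurrence (O), and detectability (D)
# on a 1-10 scale; the risk priority number (RPN) = S * O * D is the
# classic FMEA way to rank which failure modes to address first.
failure_modes = [
    {"mode": "Region-wide outage",          "S": 9, "O": 2, "D": 3,
     "recovery": "Fail over to a secondary region"},
    {"mode": "Transient storage fault",     "S": 4, "O": 7, "D": 5,
     "recovery": "Retry with exponential back-off"},
    {"mode": "Third-party API unavailable", "S": 6, "O": 4, "D": 4,
     "recovery": "Serve cached responses and degrade gracefully"},
]

for entry in failure_modes:
    entry["RPN"] = entry["S"] * entry["O"] * entry["D"]

# Highest-risk items first: these are the failure modes most in need of a
# designed, tested recovery action.
for entry in sorted(failure_modes, key=lambda e: e["RPN"], reverse=True):
    print(f'{entry["RPN"]:>4}  {entry["mode"]}: {entry["recovery"]}')
```

Read More…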


AIS recently completed a full revamp of the Texas Workforce Commission’s “Texas Reality Check” website. Texas Reality Check is a public-facing, fully accessible, responsive, mobile-first and browser-agnostic site, and it was tested for accessibility, performance, vulnerabilities, and usability.

Background

Texas Reality Check (TRC) is targeted at students on a statewide basis, ranging from middle school to high school (with some colleges and universities making use of the tool for “life skills” classes). The goal is to inspire students to think about occupations, and prepare for educational requirements so they can achieve the income level that meets their lifestyle expectations.

This tool walks students through different areas of life on a step-by-step basis, identifying budgets associated with living essentials such as housing, transportation, food, clothing, etc. Students make selections and then calculate the corresponding monthly income that would afford those selections. From here, the students are directed to another page and connected to a database of careers and associated salaries.
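
The underlying arithmetic is simple. As a hypothetical illustration (the categories and dollar amounts below are invented, and the real tool is more detailed), the site effectively sums the monthly selections and converts them into a required income.

```python
# Hypothetical monthly selections a student might make.
monthly_budget = {
    "housing": 950,
    "transportation": 350,
    "food": 400,
    "clothing": 100,
    "entertainment": 150,
}

required_monthly_income = sum(monthly_budget.values())
required_annual_salary = required_monthly_income * 12

print(f"Required monthly income: ${required_monthly_income:,}")  # $1,950
print(f"Required annual salary:  ${required_annual_salary:,}")   # $23,400
```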

However, the existing site was dated and in need of improvement in three core areas: UX, accessibility, and overall performance. Here’s how AIS delivered:

Read More…