With cloud adoption continuing to grow and many enterprises making use of multi-cloud infrastructures, today’s IT organizations need to quickly adapt their IT infrastructure to manage and monitor both public cloud infrastructure and existing on-premises resources. Traditionally, this leads to the implementation of multiple tools in separate environments, managed by separate teams for a wide range of functionalities, such as:
- Server monitoring
- Network monitoring
- Configuration management
- Log analytics
- Network threat detection
It can quickly become a challenge to gain a holistic view of your enterprise's health, and to troubleshoot and remediate issues quickly in a unified manner. But now some good news: These capabilities (and more!) have all been integrated as part of the Azure Operations Management Suite (OMS).
What’s the advantage of Azure Operations Management Suite? Aside from the above-mentioned capabilities, it was designed in the cloud and all of its components are hosted entirely in Azure. So not only does it excel at native cloud capabilities such as PaaS, it also provides full functionality for on-premises resources, offering a single pane of glass for managing and monitoring diverse infrastructures and resources. There is no setup and configuration is minimal, which lets you manage and monitor your entire enterprise in a matter of minutes rather than hours. Read on for even more benefits of using Azure OMS. Read More…
As part of AIS Managed Services, we provide proactive management and reactive support of infrastructure and applications at a predictable monthly cost. Recently, during a routine infrastructure health check, we noticed that Azure was failing to take backups for a particular virtual machine. Why?
The client is a medium-sized outdoor equipment vendor. For this enterprise customer, we have configured Azure Recovery Services to take a daily backup of all the virtual machines in the production environment. The environment is set up with four domain controllers: two are hosted in Azure, while the other two are hosted on-premises. All domain controllers are running Windows Server 2008 R2. Both domain controllers hosted in Azure have 120 GB system drives attached, with only the Active Directory Domain Services and DNS Server roles present on the server. Read More…
Citizens today are more connected than ever before, and they increasingly expect anytime, anywhere access to government services and information via online, mobile, and social media channels. To continue to best serve and engage ever-connected citizens, government and industry must deliver innovative apps and mobile services that are highly secure and provide a user-friendly experience.
Last week’s #AzureGov Meetup was jam-packed with industry experts ready to tell attendees exactly HOW to achieve those goals and go above and beyond with their connected citizen services. Read More…
AIS’ principal solutions architect Brent Wodicka stopped by Federal News Radio for a discussion on DevOps with Federal Tech Talk’s John Gilroy.
They were joined by Nathen Harvey, VP of Community Development at Chef Software, and David Bock, DevOps Services Lead at Excella Consulting. Each offered a unique and practical perspective on the concept of DevOps and how it’s working for federal government IT.
You can listen to the full show over at Federal News Radio!
I had the opportunity to attend the first Azure Government HackFest & Training on June 7 and June 8, 2017 with several of my AIS colleagues (Jonathan Eckman, Nicolas Mark, and Brian Rudolph) and it did not disappoint. This event was a great opportunity for me personally to learn more about Azure and spend some time applying that new information to work on an interesting problem. I know that many of you might be considering attending another HackFest, so I wanted to take some time to tell you about the event and what I learned. I also wanted to give you a few tips if you attend one of these in the future.
Day One started off with a number of training/knowledge-sharing sessions with the Microsoft Azure Government Engineering Team, providing an overview of Azure Gov, Security, Lift and Shift, Azure HDInsight, and Cognitive Services. The information provided was detailed enough that it wasn’t marketing material, but not so deep as to be too difficult for general IT pros to grasp. Kudos to those that presented from the Microsoft Azure Engineering Team! Read More…
In an earlier blog post, we talked about Excel as a custom calculation engine. In a nutshell, a developer or power user can author the calculation logic inside an Excel workbook and then execute the workbook programmatically via either Excel Services or HPC Services for Excel. You can read about this approach in detail in our MSDN article. This approach has been used successfully by our customer on a large scale for many years now.
Lately though, we’ve been thinking about Jupyter Notebooks as another potential option for building custom calculation engines.
But before we make the case, let’s review some background information on Jupyter Notebooks. Read More…
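To make the idea concrete: a notebook is plain JSON, so a host application can inject input parameters, execute the code cells, and read results back by name, much as our Excel approach reads values out of workbook cells. The sketch below is a deliberately simplified illustration of that round trip; real tooling (e.g., nbconvert or papermill) handles execution far more robustly, and the notebook contents and variable names here are hypothetical.

```python
# A "notebook" reduced to the shape that matters: an ordered list of
# code cells. In practice this dict would come from json.load() on
# an .ipynb file.
notebook = {
    "cells": [
        {"cell_type": "code", "source": "rate = principal * 0.05"},
        {"cell_type": "code", "source": "total = principal + rate"},
    ]
}

def run_calculation(nb, **params):
    """Run the notebook's code cells in one shared namespace."""
    namespace = dict(params)           # inject inputs, like workbook input cells
    for cell in nb["cells"]:
        if cell["cell_type"] == "code":
            exec(cell["source"], namespace)
    return namespace                   # outputs are read back by variable name

result = run_calculation(notebook, principal=1000.0)
print(result["total"])  # 1050.0
```

The appeal is the same as with Excel: the calculation author works in a familiar interactive surface, while the hosting application drives execution programmatically.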
An open-source initiative needed a solution to add Azure IaaS support for their existing cross-cloud library to support bioinformatic research.
Genomics Virtual Laboratory provides cloud-based analysis tools that aid genomics research. As part of this tool suite, they created an open-source Python library called CloudBridge that provides a uniform and extensible API layer for supporting multiple clouds. The library supported only AWS and OpenStack. AIS was approached to add Microsoft Azure support to the library with limited changes to its existing interfaces.
Challenges: Every cloud provider has its own proprietary APIs and approach, and the lack of common standards remains an issue in this modern era of cloud usage. As it becomes more common to use multiple cloud providers to support application deployments, developers are left to author conditional code, separate infrastructure deployments, and tests for each provider.
To mitigate these issues, CloudBridge provides the simple, consistent interface depicted below.
Solution: The Azure Python SDK was used to interface with Azure, and the necessary bidirectional mapping between the CloudBridge and Azure models was done in the resource layer. The high-level architecture is depicted in the image below.
The API revolves around three concepts: (1) providers; (2) services; and (3) resources. A provider encapsulates connection properties for a given cloud vendor and manages the required connection. Services expose the IaaS provider functionality, offering the ability to create, query, and manipulate resources (e.g., images, instance types, key pairs, etc.). Resources represent a remote cloud resource, such as an individual machine instance (Instance) or a security group (Security Group). (Read more here.)
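The provider/service/resource layering can be sketched as follows. This is an illustrative pattern only, not CloudBridge's actual API: the class names, methods, and configuration keys below are hypothetical, and the in-memory "compute service" stands in for real calls to a vendor SDK.

```python
from abc import ABC, abstractmethod

class Instance:
    """A resource: a remote machine instance with vendor details hidden."""
    def __init__(self, instance_id, name):
        self.id = instance_id
        self.name = name

class CloudProvider(ABC):
    """A provider: encapsulates connection properties for one cloud vendor."""
    def __init__(self, config):
        self.config = config

    @property
    @abstractmethod
    def compute(self):
        """Expose this provider's compute service."""

class AzureComputeService:
    """A service: creates and queries instances for its provider."""
    def __init__(self, provider):
        self._provider = provider
        self._instances = {}

    def create(self, name):
        # A real implementation would call the Azure Python SDK here and
        # map the returned Azure model onto the uniform Instance resource.
        inst = Instance(f"azure-{len(self._instances)}", name)
        self._instances[inst.id] = inst
        return inst

    def list(self):
        return list(self._instances.values())

class AzureProvider(CloudProvider):
    def __init__(self, config):
        super().__init__(config)
        self._compute = AzureComputeService(self)

    @property
    def compute(self):
        return self._compute

# Client code depends only on the uniform interface, so swapping in an
# AWS or OpenStack provider would not change this call site:
provider = AzureProvider({"subscription_id": "<your-subscription>"})
vm = provider.compute.create("research-node")
print(vm.id, [i.name for i in provider.compute.list()])
```

Keeping vendor-specific mapping inside the service and resource classes is what lets CloudBridge add a new cloud without touching client code.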
- JetBrains PyCharm Community Edition
- Azure Python SDK
- Python 3.6 and 2.7
AIS is proud to announce we’ve officially joined the Microsoft FastTrack for Azure program! Microsoft FastTrack for Azure provides direct assistance from Microsoft and a Microsoft partner to help customers build their desired cloud-based solutions with maximum speed and confidence. AIS will work side-by-side with Microsoft engineers to guide our mutual customers from setup, configuration, and development to production, focusing on the following Azure solutions:
- Development and test
- Backup and archive
- Disaster recovery
- Line of business applications (database migration, app modernization, app “lift and shift”)
The FastTrack program will guide you through the three key phases of a successful cloud journey—envisioning, onboarding, and deployment—so you can quickly realize the business benefits of moving to Azure. It’s a process we here at AIS know very well, so we’re looking forward to helping even more customers take their first steps into the cloud.
Ready to get started?
FastTrack for Azure is available to select Azure customers in the United States, Canada and Australia. You can find out more here or contact us for more information.
In a previous blog post, I laid out a quick summary of Continuous Integration (CI) and Continuous Delivery (CD) in Visual Studio Team Services (VSTS). Today we’re going to expand a bit on those DevOps processes to better suit your (or your clients’!) needs.
With CI and CD, a build agent is required—that is, a place where your code is sent to be compiled and then subsequently deployed. By default, VSTS gives you the option to use a hosted agent. This is an entirely cloud-based solution; you can just choose one of the hosted agents to build and deploy your code and you’re all set. But there are a couple of drawbacks with this…
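For context, here is roughly where that choice surfaces in a build definition. This is an illustrative YAML fragment only: the exact schema and image names vary by VSTS/Azure Pipelines version, and the private queue name below is hypothetical.

```yaml
# Selecting the build agent is a one-line decision in the pipeline:
pool:
  vmImage: 'vs2017-win2016'   # a Microsoft-hosted agent
# pool:
#   name: 'OnPremAgents'      # ...or a private agent queue you maintain

steps:
- script: dotnet build MySolution.sln
  displayName: 'Compile on the selected agent'
```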
Recently we collaborated with Microsoft and Prospect Silicon Valley (ProspectSV) on a project to assess the viability and value of several Azure services. Specifically, we were asked to demonstrate how the cloud-based platform could be used to retrieve, store, visualize and predict trends based on data from multiple sources. In order to demonstrate these capabilities, we built an ASP.NET MVC application leveraging the following Azure components:
- Azure App Services
- Azure Machine Learning
- Azure Power BI Embedded
- Azure Storage
Figure 1 (ProspectSV Application Architecture) depicts how the system uses these four Azure components. The diagram also shows which external data sources are used and where that data is stored.