Wednesday, December 20, 2017

Merry Christmas & Happy New Year

In five days it will be Christmas. Therefore I wish you all a Merry Christmas & a Happy New Year. See you all back in 2018!


Friday, December 15, 2017

Azure Archive Storage General Available

For a few days now Archive Blob Storage, AKA Azure Archive Storage, has been generally available. As Microsoft states: ‘…(it) is designed to provide organizations with a low cost means of delivering durable, highly available, secure cloud storage for rarely accessed data with flexible latency requirements (on the order of hours)…’

Please know that for now the tier has to be set at the blob level, because you can’t select the required tier at the account or container level. Hopefully this will change in the future.
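As a quick sketch, setting the tier per blob can be done with the Azure CLI (the account, container and blob names below are placeholders of mine, not from this post):

```shell
# Hedged sketch: move a single blob to the Archive tier with the Azure CLI.
# Account, container and blob names are made-up placeholders.
az storage blob set-tier \
    --account-name mystorageaccount \
    --container-name backups \
    --name 2017-12-backup.vhd \
    --tier Archive
```

Keep in mind that reading an archived blob later requires rehydrating it back to the Hot or Cool tier first, which can take hours (the ‘flexible latency’ from the quote above).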

Want to know more? Go here, and read the comments as well.

Wednesday, December 13, 2017

Retirement Date Classic Azure Portal–Set & Confirmed

Finally the classic Azure portal will be retired. The retirement date (also referred to as the ‘sunsetting date’) is set for January 8, 2018.

So from that date on, only the current Azure portal will be available.

This is good news, since it will streamline the overall Azure management experience. Want to know more?

  • Go here, to read about the ‘sunsetting’ of the classic Azure portal;
  • Go here, to read an earlier posting about the same retirement.

Free Ebook: Azure Strategy & Implementation Guide - For Azure users

Microsoft has recently published a new free ebook, all about implementing Azure.

As Microsoft describes this ebook: ‘…(it) offers both a high-level overview of topics and specific tactics. Regardless of where you are personally focused in infrastructure, data or application arena, there are important concepts and learnings here for you…’.

Go here to download your free copy today.

Free Ebook: Enterprise Cloud Strategy, 2nd Edition

For some time now Microsoft has offered a free ebook all about how to establish a strategy and execute the migration to Azure.

This book covers topics like:

  1. What to consider when choosing between hybrid, public, or private cloud environments;
  2. How cloud reduces costs, and even benefits on-premises computing;
  3. New analytics and IoT capabilities;
  4. Security and compliance considerations.

For anyone involved with Azure, this ebook is a must read. Go here to download your free copy today.

‘Mobile First–Cloud First’ Strategy – How About System Center – 07 – SCOM

Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
02 – SCCM
03 – SCOrch
04 – SCDPM
05 – SCSM
06 – SCVMM

In the last posting of this series I’ll write about how System Center Operations Manager (SCOM) relates to Microsoft’s Mobile First – Cloud First strategy. Like SCOrch, SCSM & SCVMM, SCOM isn’t going to the cloud…

This product has covered many miles, having started out as Microsoft Operations Manager (MOM). Mind you, Microsoft didn’t develop it themselves; instead they bought the rights to it from NetIQ in early 2000.

Even though MOM had a refresh in 2005 (branded MOM 2005), the product had some serious issues. As such Microsoft rewrote MOM from the ground up, resulting in the release of System Center Operations Manager (SCOM) 2007 in 2007, as the name implies.

From that year on SCOM got an ever-growing install base, driving sales and resulting in huge investments in SCOM. SCOM 2007 had some serious bugs, and soon SCOM 2007 R2 was released, followed by SCOM 2012, SCOM 2012 SP1 and SCOM 2012 R2. This release cadence ran from 2007 (the release of SCOM 2007 RTM) up to the end of 2013 (the release of SCOM 2012 R2 RTM).

During this release cadence, every updated version of SCOM contained many improvements, like more speed, extended monitoring depth and breadth, and better visualizations. One noteworthy feat is the integrated monitoring of non-Microsoft based workloads (Linux/Unix). The story goes that this decision was finally made by Microsoft’s former CEO, Steve Ballmer…

Azure and SCOM 2016
Up to SCOM 2012 R2 there was a big budget and good resource allocation, driving SCOM to new heights. In 2010 I went to my first MVP Summit ever and visited Microsoft Building 44. Back then it was THE place to be, because it housed all Microsoft employees working on SCOM.

However, in later years Microsoft’s new future started to take shape, moving away from the role of software developer to that of service provider, with Azure at the center of its new focus.

Up to the release of the System Center 2012 R2 stack this new focus didn’t cause too many side effects on the on-premise product line of Microsoft.

Things went into overdrive, however, when Satya Nadella succeeded Steve Ballmer in 2014. With an impressive track record as senior vice president of Research and Development for the Online Services Division and vice president of the Microsoft Business Division, he knew the power of the cloud.

Satya Nadella unfolded the Mobile First – Cloud First strategy, making it clear that anything else comes second (at best…).

The result of this new strategy clearly shows in the release of System Center 2016, containing SCOM 2016. Whereas other new releases (SCOM 2007 > SCOM 2007 R2 > SCOM 2012 > SCOM 2012 SP1 > SCOM 2012 R2) were really upGRADES, the SCOM 2016 release is actually nothing more but an upDATE.

Window dressing?
Like removing the SCOM 2012 R2 boilerplates and replacing them with SCOM 2016 boilerplates. Sure, some SCOM 2012 R2 components got better, but it’s more like ‘work in progress’, like the SCOM 2016 Web Console, which partially dropped the Silverlight dependency…

On top of it all, the development of SCOM has moved to India. Please don’t get me wrong: the people in India working on the development of SCOM are very smart and bright. But they are working with fewer people compared to the ‘old days’ and have a far smaller budget available.

And it shows. Update Rollup #4 is already available, but the SCOM Web Console still has the so much *loved* (cough) Silverlight dependency. And SCOM 2016 has been out for more than a year…

Still going strong?
For sure, SCOM 2016 still has a lot to offer. Nonetheless, it’s based mostly on previous investments. Perhaps the new release cadence for System Center 2016 (and as such for SCOM 2016 as well), to be expected in 2018, will bring relief and a clearer vision.

This new release cadence will align more to the Windows Server semi-annual channel. Hopefully Microsoft will deliver on its promise that the first release wave will focus on SCDPM, SCVMM and SCOM.

Until then, the roadmap of SCOM is unclear, as it is for the rest of the System Center stack, SCCM excluded.

For now: OMS isn’t SCOM. OMS is all about (enhanced) log analytics, enriched with certain solutions enabling web service application monitoring. Yet, OMS is still a far cry from the rich monitoring offered by SCOM.

For instance, alerting in OMS is quite a challenge. Also monitoring in OMS is stateless, simply because it doesn’t detect objects and doesn’t contain anything like a health model.

Sure, OMS could/should deliver monitoring in a different manner, thus making objects obsolete, but until now there are no signs of this new approach.
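To illustrate that difference with a minimal sketch of my own (not how either product is actually implemented): stateful monitoring tracks a health state per discovered object and alerts only on state *changes*, whereas a stateless log query re-evaluates every run with no memory of the previous one.

```python
# Minimal sketch of state-based monitoring (the SCOM way, greatly simplified):
# each discovered object carries a health state, and an alert fires only when
# that state changes - not on every unhealthy sample.
class MonitoredObject:
    def __init__(self, name):
        self.name = name
        self.state = "Healthy"

    def evaluate(self, sample_ok):
        """Return an alert string on a state change, else None."""
        new_state = "Healthy" if sample_ok else "Critical"
        if new_state != self.state:
            self.state = new_state
            return f"{self.name}: {self.state}"
        return None

db = MonitoredObject("SQL-01")
alerts = [db.evaluate(ok) for ok in (True, False, False, True)]
# Only the two state changes produce alerts; the repeated failure does not.
print([a for a in alerts if a])   # ['SQL-01: Critical', 'SQL-01: Healthy']
```

A stateless log search, by contrast, would simply return every failed sample each time the query runs, with no notion of an object being ‘already Critical’.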

Therefore, based on today’s world, OMS isn’t SCOM. Sure, you can combine both, but they can’t replace one another.

Back burner = possibilities for non-Microsoft solutions
Sure, I would love to see otherwise. But the world is moving on and Microsoft has decided to put SCOM on the back burner without offering other real monitoring alternatives by themselves. This creates a gap which other companies are more than happy to fill.

Of course, Microsoft’s marketing department tries to sell OMS as the ‘one-size-fits-all’ solution, covering everything. But reality tells us a different story altogether. Combined with the ever-changing pricing and licensing schemes for OMS, that makes it an even harder sell.

Perhaps I am missing the bigger picture here, but this is what I see and experience from my perspective. Don’t be afraid to share your thoughts/experiences here. Feel free to comment on this posting.

Verdict for SCOM
SCOM isn’t going to the cloud at all. Sure you could install SCOM on Azure based VMs. But that isn’t the point. SCOM won’t be ported into a cloud based version. Nor is OMS at this moment capable of replacing the monitoring functionality of SCOM.

And this makes me wonder. It feels like Microsoft is turning away from a good product, without offering a real cloud-based alternative. OMS doesn’t cut it yet as a monitoring solution. Perhaps later on this functionality will be added, but even then it’s important to see how it works out and what one has to pay for it.

All this doesn’t mean SCOM is dead in the water either. SCOM is still supported by Microsoft and new releases are in the pipeline. 2018 will show what the earlier mentioned release cadence is really like. Hopefully Microsoft is truly going to deliver here with TRUE upgrades instead of shameless boilerplate replacements…

Despite all of this it’s clear that SCOM isn’t going to be around for tens of years. Sure, like the rest of the System Center 2016 stack it has Mainstream Support until January 11, 2022. Until then updates, patches and the lot will come out. But after that? I have no idea.

Running SCOM 2012x? Upgrade to SCOM 2016.
When you’ve got a SCOM 2012x environment in place, chances are your company has already paid for the System Center 2016 licenses. In situations like this it always pays off to upgrade to SCOM 2016.

SCOM is still a strong monitoring solution, capable of covering heterogeneous and hybrid environments, with a strong capability of customized monitoring.

Not running SCOM but looking for a monitoring solution?
However, when not running SCOM and looking for a monitoring solution, I recommend comparing SCOM 2016 with alternatives that have clearer roadmaps.

While you’re at it, make sure the monitoring solution offers coverage of hybrid workloads, meaning cloud and on-premise. With the shift to the hybrid world, network connections become even more important. Therefore comprehensive network monitoring (not limited to the device, but covering the flow as well) is crucial.

Many times companies end up with heterogeneous monitoring solutions in order to cover all their monitoring requirements. And most of the time, those solutions aren’t Microsoft based.

Recap of previous System Center stack verdicts
For more details, read the related postings.

- SCCM: Alive & kicking
- SCOrch: Dead in the water
- SCDPM: Moving into Azure
- SCSM: Abandon ship!
- SCVMM: For now okay, but in time moving to Azure.

Thursday, December 7, 2017

Free Azure Webinars: Cloud Architecture Whiteboard Webinar Series

Microsoft offers many FREE webinars all about Azure, to be found here. These webinars are a good source of information about the power of Azure.

Recently Microsoft added a new series of Azure webinars, titled: Cloud Architecture Whiteboard Webinar Series:

As Microsoft describes this webinar series:

This webinar series will address common challenges that engineers and developers face when designing cloud-based solutions. Each webinar in the series will focus on a set of design patterns that address a fundamental design challenge.

Join our senior engineers as they discuss:

  • Common cloud development challenges
  • Cloud architectures and considerations for applying the pattern in a variety of application domains
  • Q&A with engineering

Webinar On Demand – Cloud Architecture for Availability
Our speakers will address a set of cloud design patterns that can help improve the uptime of your applications. We will discuss health endpoint monitoring, queue-based load leveling, and throttling.
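As a rough illustration of one of these patterns (my own sketch, not material from the webinar), throttling can be as simple as a token bucket that rejects requests once the budget for the current window is spent:

```python
# Minimal token-bucket throttle: allow bursts up to `capacity` requests,
# refilling tokens over time. Timestamps are passed in explicitly so the
# sketch stays deterministic and easy to test.
class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        """Refill based on elapsed time, then spend one token if available."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=1)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])
# -> [True, True, False, True]: a burst of two is allowed, the third request
#    is throttled, and a later request passes once tokens have refilled.
```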

Webinar On Demand – Cloud Architecture for Resiliency
Resiliency is the ability of a system to gracefully handle and recover from failures. This webinar will feature a conversation talking through key patterns including: Retry, Circuit Breaker, Compensating Transaction, and Bulkhead.
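For instance, the Retry pattern (a generic sketch of mine, not code from the webinar) wraps a flaky call in a bounded loop with exponentially growing delays between attempts:

```python
import time

def retry(operation, max_attempts=4, base_delay=0.01):
    """Call `operation` until it succeeds, sleeping base_delay * 2**attempt
    between attempts; re-raise the last error once the budget is spent."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo: an operation that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky))   # -> ok (after two retried transient failures)
```

The Circuit Breaker pattern builds on this by tracking failures across calls and ‘opening’ (failing fast) once too many in a row occur, so a struggling dependency isn’t hammered with retries.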

Webinar On Demand - Cloud Architecture for Scalability
Scalability is the ability of a system to process increased throughput proportionally to the capacity added. Cloud applications typically encounter variable workloads and peaks in activity, and this webinar will focus on the smart use of patterns to mitigate issues and deliver an excellent experience.

You can register for FREE and watch these three webinars on demand. Go here to register.

Wednesday, December 6, 2017

Office 365 Monitoring & Dashboarding Webinar

NiCE & Savision are going to present a joint webinar on Wednesday 13th December at 16:00 CET (10:00 EST), all about the Active O365 MP for SCOM, combined with Savision Live Maps.

This webinar is free and allows for a good impression of the capabilities of this MP, combined with some smart dashboarding. Presenters are Christian Heitkamp (NiCE) and Justin Boerrigter (Savision).

So when running SCOM and using Office 365, this webinar is worthwhile to attend. Want to know more and register? Go here.

Friday, December 1, 2017

SCOM & SquaredUp & Community Power

For some time already SquaredUp has had a strong focus on supporting the SCOM community. And they go about it the way they conduct their business: straightforward, with no small print in their contracts/agreements, or any other BS (excuse my French) for that matter.

As such, the community has grown even stronger. As SquaredUp puts it: ‘…SCOM is an amazingly powerful platform, but it’s the management packs that do all the heavy lifting. Thanks to the extensibility, maturity, and huge install base of SCOM, there’s a plethora of freely-available community management packs out there, covering everything…’

I totally agree with them. However, the same SCOM community also poses an unforeseen ‘risk’ of some kind. Not a bad one that is, but still one which needs to be addressed.

The ‘risk’ of the powerful SCOM community
Because of the SCOM community there are MANY good Management Packs (MPs) out there, enriching SCOM and the monitoring breadth and depth.

But WHERE to find those MPs? Well, just about EVERYWHERE on the internet. And herein lies the ‘risk’: unintentionally you’re missing out on the best SCOM community MPs, thus making your life harder than it needs to be.

Or worse, you’ve got a certain awesome SCOM community MP imported in your environment. But how can you be sure it’s ‘the latest & greatest’?

Wouldn’t it be totally awesome to have an overview of all the BEST SCOM community MPs out there, right in your SCOM Console?

This is exactly what SquaredUp delivers!
Wow! SquaredUp has started a whole new, completely open & transparent community project which EXTENDS your SCOM Console (2012, 2012 R2 & 2016) to simplify the discovery and life-cycle management of community MPs, including:

  • Rapid discovery of the best SCOM community MPs, including searchability by type, technology, author and more.
  • A view of all the SCOM community MPs you have installed, including details of your current version, the latest version available and the download location.
  • Configurable notifications, allowing you to be alerted on the availability of new versions of your community MPs.

And true to the nature of the SCOM community, this MP is available for FREE. Sure you can register yourself, but it isn’t required!

Some screenshots of this SCOM Console extension in my own SCOM environment:

After import of the MP, the Console extension is to be found under Administration > Management Packs (Community).

The menu Discover Community Packs shows many community MPs, free and paid ones.

You can even search for an MP by name, tags, author and the lot.

The menu Installed Community Packs shows the community MPs already present in your environment AND whether they’re up to date.

And last but not least, you can be alerted when an update becomes available, or check manually for updates for already imported community MPs. In order for this to work, a new MP must be created (done by the extension itself; you only have to ‘okay’ it).

An awesome SCOM Console extension and therefore an absolute MUST HAVE for any SCOM environment.

For the MP and all the details go here. A BIG thanks and thumbs up for SquaredUp!

Thursday, November 30, 2017

Office 365 Monitoring Done Right – NiCE Active O365 MP

Office 365 is a great example of a full-blown SaaS (Software as a Service) offering. The cloud provider (in this case Microsoft) provides and manages all the related services (Exchange, Skype for Business, SharePoint, etc.); the end user consumes them.

So one could state: ‘Why should I monitor it? After all, Microsoft takes care of it already, so no need to do the same job twice.’

However, in real life things are a bit more complex. For instance, many companies use Office 365 in a hybrid scenario. In cases like that, there are on-premise Exchange servers still up & running, deeply integrated with Office 365. Wouldn’t it be nice to have this ‘single pane of glass’, completely covering your hybrid scenario?

But even when you don’t have a hybrid scenario, additional monitoring is still required, because:

  • Office 365 has SLAs to meet, like any other cloud service offering. But how do you know that without proper monitoring?
  • Even though the IT department consumes Office 365 like the end users they service, it makes them look bad when an end user has to tell them something has broken, while the Office 365 admin portal tells them all is okay. What or whom to believe?
  • When an end user can’t obtain an Office 365 license, they can’t use Office 365. So the Office 365 license pool requires additional attention.
  • Reports are a hard requirement, in order to know whether the SLAs are met, how many users are consuming Office 365, how many mailboxes are migrated to Office 365 and the lot.

Okay, I am convinced. But what tools do I use?
For starters there is the Office 365 Service Health Dashboard, part of Microsoft’s Office 365 offering. Every company using Office 365 has access to it (admin access required). However, this dashboard is limited in its functionality. For instance, it doesn’t provide reports, nor does it cover hybrid scenarios. And many times it states all is okay while end users can’t access Office 365, because there is an issue somewhere down the chain.

When running SCOM, there is also Microsoft’s Office 365 MP. However, this MP has been flawed from the beginning and doesn’t deliver any added value. Instead it creates a lot of noise, since it just relays all the information present in the O365 Service Health Dashboard. Nothing about your on-premise Exchange environment to be found here…

In order to enrich this MP, the community has provided the Office 365 Supplemental Management Pack V1. This MP adds additional monitoring for the mail flow and verifies whether a user can obtain an O365 license. But when you’ve got a hybrid Exchange environment, this MP won’t help you here either…

On top of that, for serious SLA monitoring reports are a hard requirement, and both MPs don’t deliver here. So there is still a need for an MP which covers it ALL: hybrid environments and usable reports.

Meet the NiCE Active O365 MP
Thankfully, a new MP is about to arrive. NiCE IT Management Solutions is about to launch the NiCE Active O365 Management Pack for SCOM! First they will launch the BETA program for it, for which you can subscribe for free. This allows your organization to test-drive this MP, in order to see whether it delivers.

Some features of this MP:

  • Hybrid approach: It collects & processes data from both Exchange online and on-prem (2010/2013/2016);
  • Comprehensive discovery of hybrid Office 365 deployments;
  • Active probing for user verification;
  • Detailed reports on license usage, SLAs, Cloud adoption & mailbox migrations.

For hybrid scenarios it monitors:

  • Calendar synchronization between Exchange on-prem and Exchange online mailboxes;
  • Mail flow between servers in different datacenters;
  • Mailbox migrations.

So now there is finally a solution out there, enabling complete coverage of Office 365 monitoring, SLAs and hybrid scenarios included!

Wednesday, November 29, 2017

Microsoft/VMware On-prem and Cloud? When Running SCOM Go For Veeam

Monitoring can be done with many different toolsets. And besides the tooling, the process of monitoring can be done in quite different ways as well. One can simply monitor and respond when something breaks. Or one can try to predict failures before they take place and act in advance, for instance by failing over to another location.

The latter is also known as pro-active monitoring. For a long time it was a marketing slogan only, but with the right tools it can be done. And in today’s world I dare say it has become a hard requirement. Why?

Nowadays, the IT environment has become a mix of on-premise solutions combined with other IT assets residing in the cloud, like (but not limited to) Azure or VMware vCloud Air. Workloads run on top of it all, and many times the end user doesn’t even know where. They just expect it to work and perform accordingly.

This creates opportunities and challenges for IT departments. Opportunities, because the old barriers (buying & installing hardware, for instance) are gone, since in the cloud it can be coded. Challenges, because the workloads are many times hybrid, resulting in a multi-tier effort to keep things running smoothly.

Say hello to pro-active monitoring
As such, monitoring has become even more crucial but has to be done in a different manner. Instead of simply focusing on the ‘now’ of the IT environment, it has become paramount to gain a peek into the future. And not only at the level of the workloads themselves, but also down to the hardware level, like CPU, networking & storage.

Sure, when EVERYTHING is in the cloud, those things are covered by the cloud provider. But many times workloads are hybrid, with one or more ‘legs’ in your on-premise environment.

Wouldn’t it be a shame if a failover went wrong because the hosts are over-committed? Not enough storage? Not enough CPU? Ouch!

Gone are the days when monitoring, capacity planning and modelling were different entities. In order to enable pro-active monitoring, capacity planning and modelling are hard requirements. Without them, it’s back to the old days of monitoring, where one waited until an alert popped up and responded. Only putting out fires as they happen…

That’s why I recommend Veeam
That’s why I recommend the Veeam MP. It enables true pro-active monitoring. On top of the ‘plain vanilla’ monitoring (which goes pretty far already), it also delivers on capacity planning & modelling, thus enabling organizations to pro-actively monitor their hybrid workloads, whether running on-prem on Hyper-V or VMware and in the cloud (Azure and/or VMware vCloud Air).

It also enables organizations in their ongoing move/migration to the cloud. Many times organizations are in the process of ‘lift & shift’, meaning that on-premise hosted workloads are migrated fully to the cloud.

The Veeam MP aids organizations here as well, by analysing on-premises virtual workloads and mapping them out against their equivalents in Azure or VMware vCloud Air. This enables a smoother transition to the cloud.

Compared to other MP solutions for monitoring hypervisor-based workloads, the Veeam MP adds much more to the mix. Other solutions only deliver on the ‘putting out fires’ scenario, which is outdated and can easily be enriched when the right tools are being used.

But the costs…
Yeah, I know. The Veeam MP doesn’t come cheap. But just do some math. How many euros/dollars would your company lose when a core application breaks down for a few hours during a normal working day?

I know for sure those costs are a multiple of the costs of the Veeam MP. And know that the Veeam MP delivers an enriched toolset, enabling pro-active monitoring in order to prevent the breaking of your core applications.

In a setting like that, the investment in the Veeam MP makes sense, and has a solid business case.
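To make that math concrete, here is a back-of-the-envelope sketch. Every number below is a made-up assumption of mine for the sake of the argument, not Veeam pricing or real outage data:

```python
# Illustrative back-of-the-envelope: cost of one outage vs. a yearly tool
# budget. All figures are invented assumptions, not real prices.
revenue_per_hour = 20_000       # what a core application earns/saves per hour
outage_hours = 4                # one bad afternoon
yearly_tool_budget = 15_000     # hypothetical monitoring tooling spend

outage_cost = revenue_per_hour * outage_hours
print(f"one outage: {outage_cost} vs tooling: {yearly_tool_budget}")
# With these assumptions, a single prevented outage (80,000) already
# outweighs the yearly tooling budget (15,000) several times over.
assert outage_cost > yearly_tool_budget
```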

That’s why I always recommend the Veeam MP to my customers, whether they run Hyper-V or VMware and use Azure and/or VMware vCloud Air.

Tuesday, November 21, 2017

UR#4 SCOM 2016 Is Out!

For some weeks now, Update Rollup #4 for SCOM 2016 has been available. KB4024941 tells you what’s fixed and lists the known issues with this UR#4.

Even though the same KB contains installation instructions, I highly recommend reading Kevin Holman’s UR#4 installation instructions instead.

This UR#4 doesn’t add many new features. The SCOM Web Console still requires Silverlight for some parts of it in order to function properly. And the SCOM Console itself still has some serious (performance) issues.

Nonetheless, this UR#4 should be installed in any SCOM 2016 environment, if only to keep it at a well-maintained level.

I can’t wait until Microsoft finally starts delivering on the much-promised frequent continuous releases for the rest of the System Center stack, SCOM included. Hopefully by then the SCOM Web Console will outgrow the so much *loved* Silverlight dependency, and SCOM will show the much asked-for (Console) performance enhancements…

Until then, any new Update Rollup won’t be that special at all…

I Am Back!

Partially, that is, but I am getting there. So soon enough new postings will follow.

Monday, August 28, 2017

Out of order...

While mountain biking I had an accident in which I broke my clavicle. As such this blog will be silent for a while.

Rest assured, 'I'll be back', to quote a famous line from an equally famous movie.

Thursday, August 24, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 06 – SCVMM

Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
02 – SCCM
03 – SCOrch
04 – SCDPM
05 – SCSM
07 – SCOM

In the sixth posting of this series I’ll write about how System Center Virtual Machine Manager (SCVMM) relates to Microsoft’s Mobile First – Cloud First strategy. Like SCOrch & SCSM, I don’t think that SCVMM is going to the cloud…

First released in 2007 to enterprise customers only, it was targeted at managing large numbers of virtual servers based on Microsoft Virtual Server (yikes!) and later (Q2 2008), hypervisors based on Hyper-V.

Since then VMM has grown into a product of its own, with every release adding new functionality while removing other features, for instance:

  1. P2V migration removed from SCVMM 2012 R2 onwards;
  2. Support for Citrix XenServer removed from SCVMM 2016;
  3. Creation and management of private clouds, added in SCVMM 2012 R2.

Private Cloud
Let’s dwell a bit longer on the last item of the previous list: the creation and management of private clouds.

On October 18th 2013, Microsoft announced the General Availability of Windows Server 2012 R2 (Cloud OS) and System Center 2012 R2. Brand new here was Microsoft’s Private Cloud vision.

Back then Azure was still branded Windows Azure and offered IaaS mostly (VMs, storage) and PaaS (websites, SQL, Python SDK and Traffic Manager).

As such, the public cloud was limited in its reach and capabilities. Nonetheless, Microsoft top brass envisioned everyone going to the cloud. First everyone would build their own cloud (private cloud) and later on move it into the public cloud, like Azure.

In order to enable the private cloud based on Microsoft technologies, System Center 2012 R2 had to make it happen. And SCVMM 2012 R2 would be the enabler of everything, bringing compute, storage and networking together, abstracting it all, and offering it as ready-to-consume building blocks for the (internal) customers of the IT department.

Instead of having to worry about compute, storage, networking, connectivity and middle tiers (like SQL and web, for instance), a business unit could provision itself with just as many web/SharePoint servers (for instance) as required, as long as it fit into the amount of resources assigned to them; hence the private cloud.

All through a portal. In the background SCVMM would initiate the required workloads, using a library of images, additional software and configuration items. And with deep integration with SCOM (to monitor the provisioned VMs and the underlying hypervisors) and SCOrch, SCVMM could roll out almost anything, no matter how many tiers were required.

Simply because the moment a certain type of installation was out of the reach of SCVMM, one or more SCOrch runbooks could take care of it, complete with the registration and handling of the required tickets in a service management system like SCSM.
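The flow described above can be summarized in a purely conceptual sketch (my own illustration in Python; none of these names correspond to anything SCVMM actually exposes):

```python
# Conceptual sketch of the private-cloud provisioning flow described above:
# a portal request is checked against the business unit's quota, a ticket is
# registered, VMM (or an Orchestrator runbook for non-standard parts) deploys
# the workload, and SCOM picks up monitoring. All names are hypothetical.
def provision(request, vmm_can_handle, quota_left):
    steps = []
    if request["vm_count"] > quota_left:
        return ["rejected: private-cloud quota exceeded"]
    steps.append(f"ticket opened for {request['workload']}")
    if vmm_can_handle:
        steps.append(f"VMM deploys {request['vm_count']} x {request['workload']} from library template")
    else:
        steps.append("Orchestrator runbook handles the non-standard part")
    steps.append("SCOM starts monitoring the new VMs")
    return steps

print(provision({"workload": "web server", "vm_count": 3},
                vmm_can_handle=True, quota_left=10))
```

The point of the sketch is the division of labour: quota enforcement is what made it a *private cloud*, and the runbook branch is why SCOrch was so tightly coupled to this vision.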

The promise and the reality
In itself it sounded awesome. Finally the fully automated datacenter was there. Just roll out a bunch of servers, network components and loads of storage, and SCVMM would take care of the rest. Even bare-metal deployment of new Hyper-V hosts could be handled by SCVMM!

So say goodbye to rogue IT and – as the IT department – be back in control in a way that really helps the organization forward instead of frustrating it. Now a business unit could allocate workloads as required, all with ‘nothing’ more than a self-service portal and the click of a mouse button…

However, reality turned out to be a ‘bit’ harsher. For instance, the maintenance of SCVMM can be quite a challenge, especially when the SCVMM ‘fabric’ (all the components and servers used for the SCVMM environment) consists of a lot of servers, combined with a huge SCVMM library.

Especially the latter can be a real pain to maintain and keep healthy. Orphaned resources in the SCVMM library are a ‘special’ treat. And yet, without the library SCVMM is dead in the water. Other challenges with the library are (but not limited to…): migrating the library to a new set of servers, making it highly available (and keeping it that way!), or upgrading it to the latest version.

All in all this made SCVMM quite a challenge to run and maintain. However, the end user experience wasn’t that good either. Like with any other software, a good GUI is crucial. And somehow, Microsoft seems to have issues with providing good user interfaces, whether full-blown or web-based. Unfortunately, SCVMM isn’t an exception here…

Trial & error. Self-service portal SCVMM
From the beginning the self-service portal of SCVMM turned out to be a challenge, to say the least. Slow, buggy, not covering all required aspects of a full-blown self-service portal, you name it. The SCVMM self-service portal had it all.

SCVMM 2012 R2 ditched it completely and had it replaced by System Center 2012 R2 App Controller. This replacement also delivered (basic) integration with Azure, enabling moving VMs to Azure. Actually, you had to store the VM in App Controller and then copy it to Azure using the Azure Transfer Wizard incorporated in App Controller. Afterwards, additional steps in Azure were also required. As a result the ‘integration’ between your private cloud and Azure became a joke.

Due to the lack of success, App Controller was dropped from the System Center family. When still requiring a self-service portal wrapped around SCVMM 2016, Microsoft recommends Windows Azure Pack instead, which is another challenge in itself.

However, between System Center 2012 R2 and System Center 2016 the world evolved quickly. So did Microsoft. The private cloud mantra was scuttled because the world didn’t embrace it to the extent Microsoft’s top brass back then anticipated. Time for another approach!

Say hello to hybrid cloud (and goodbye to private cloud)
Instead, Microsoft noticed that companies weren’t ready to roll out many different, hard-to-manage-and-maintain System Center components in order to build their own private cloud.

Sure, the SDDC (software-defined datacenter) is still something many companies want, but it’s easier to build and maintain using other integrated hardware/software solutions. On top of that, many companies chose not to board the private cloud train at all, since it didn’t bring them where they wanted to go.

Instead, companies started to look for more hybrid solutions where their data, applications and workloads run in different environments, whether on-premises (100% datacenter or 20% private cloud, who cares?) or in the public cloud. As a result their workforce can work anytime, anywhere, with any device (when the apps are right, that is).

Hence the hybrid cloud was born, with a far bigger life expectancy and future ahead of it than the (Microsoft) private cloud ever had. The main reason is that the hybrid cloud is based on the needs and requirements of the customers themselves, whereas Microsoft’s private cloud was forced upon those very same customers by Microsoft HQ. And even Microsoft HQ can’t dictate to the world what to do or what to use. I guess a different wind blew at Microsoft HQ back then compared to more recent years.

Hybrid cloud: Nail to the coffin of SCVMM
As a direct result, the role of SCVMM has dwindled big time. It’s back to its original function as intended when it was introduced back in 2007: managing large numbers of hypervisors, primarily based on Hyper-V. Sure, you can also manage VMware hosts through vCenter with it (a bit, that is), but you really don’t want to go there, believe me.

Which is good, because SCVMM 2012 R2 never really delivered in enabling the private cloud. Too much of a pain to maintain, configure and the lot. Also limited use cases, since per type of configured server you have to go through the process of building it and saving the related configuration as profiles. That is only doable and justifiable when you roll out tens to hundreds of servers that way. In a smaller IT shop, however, it’s undoable and unrealistic.

Sure, in a hybrid scenario there are still valid use cases for SCVMM, as long as you’ve got a considerable number of Hyper-V hosts running locally. However, when migrating/moving to the cloud, the use case for SCVMM lessens.

Yes, I know that Microsoft has released more information about delivering features and enhancements at a faster cadence for some System Center components, SCVMM included.

Nonetheless, workloads will move more and more into the cloud, whether VM-based (IaaS) or service-based (PaaS). As such, the role of SCVMM will diminish over time. It will never regain the role it had at the GA of System Center 2012 R2 (enabling the private cloud).

Therefore, with the move to Azure, SCVMM won’t stick around and will shrink (at best!) into a tool to manage Hyper-V based workloads running locally.

But when looking at how the Azure portal is growing in reach – in conjunction with Azure Automation Webhooks and Hybrid Workers – chances are that you’ll end up managing your local Hyper-V and perhaps even VMware hosts from the Azure portal.

With that, the role of SCVMM will be downplayed even more, reduced to an emergency tool for when there is no internet connection, or the starting point for rolling out new Hyper-V hosts; the moment they’re online, management will be taken over by the Azure portal.
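Such a portal-driven flow boils down to an HTTPS POST against an Azure Automation webhook, which then hands the runbook off to a Hybrid Runbook Worker on-premises. Here is a minimal sketch of building that request; the webhook URL and the `VMNames` parameter are hypothetical, and a real webhook URL embeds a secret token shown only once at creation time:

```python
import json
import urllib.request

# Hypothetical webhook URL; substitute the token URL Azure Automation
# gives you when you create the webhook.
WEBHOOK_URL = "https://example.invalid/webhooks?token=SECRET"

def build_start_vm_request(vm_names, url=WEBHOOK_URL):
    """Build the POST request asking a runbook (executed on a Hybrid
    Runbook Worker) to start the given local Hyper-V VMs. The payload
    shape is an assumption for this sketch."""
    body = json.dumps({"VMNames": vm_names}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_start_vm_request(["hv-vm-01", "hv-vm-02"])
print(req.get_method(), json.loads(req.data)["VMNames"])
# Sending would be: urllib.request.urlopen(req)
```

The webhook body is free-form JSON; the runbook reads it from its `WebhookData` parameter, so the payload shape is whatever you design.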

Coming up next
In the seventh and last posting of this series I’ll write about SCOM (System Center Operations Manager). See you all next time.

Wednesday, August 23, 2017

Azure Under The Hood – 01 – A New Series

How stuff works
All my life I have wanted to know ‘how stuff works’. Just hitting a button to use a vacuum cleaner, dishwasher, laptop, RC car, mobile or whatever won’t ‘fly’ with me for long. Soon I’ll be prying, investigating and ‘researching’ WHY something works, and based on what principles.

Sure, it has cost me some childhood birthday presents (a radio I once got for my birthday was dismantled within a day and beyond repair…), but I always LEARNED from it. That attitude didn’t change when PCs came into my life, or rather into my father’s professional life.

Of course, for the first months I kept a respectful distance and only used the PC (an IBM PS/2) as allowed by my father, all the while keeping a keen eye on the big white case, wondering what magic was happening right there under my nose. So you can imagine how thrilled and happy I was when the PC broke down and the technician had to be called! Somehow my father wasn’t that happy about it…

I made sure to be there when the technician came around to fix it. So when he opened the PC case I was just a few inches away, firing off questions and pointing out all the different parts in order to learn as much as possible. The PC got repaired and I learned a lot that day. Some years later I started assembling my own PCs…

Azure & me
This attitude/curiosity hasn’t really changed over the years. No, I won’t break anything apart anymore in order to learn from it. Today there are Google, YouTube, Wikipedia and the lot. That saves me a lot of hassle, and money as well. Sure, it takes away some of the fun of investigating, but it keeps me out of trouble too.

But still, it gnaws at me when I use something without having a deeper understanding of it. The same goes for Azure. Yes, I know what a computer is, what a network does, what a datacenter is for. But Azure is WAY MORE THAN THAT!

As such I’ve done a lot of investigation: read a lot of books and online articles, watched many videos and so on. Simply to gain a deeper understanding of what’s happening under the hood of Azure, or rather what happens when you’re clicking around in the Azure portal.

The funny thing is that Microsoft is quite secretive about it. Even towards MVPs they don’t share a lot. And when I did find some information, I had to double-check it in order to know for a full 100% that I am not violating any NDA. When in doubt, I don’t share it.

The new series
As a result I’ve collected a lot of interesting non-NDA information about Azure under the hood, to be shared with you out there. No, it won’t make you (nor me, for that matter) an ‘Azure-Under-The-Hood-Expert’, but at least it will give you a better understanding of how Azure works.

In the time to come I’ll share that information. Please feel free to comment and send in your own findings. I’ll use them as well and, of course, credit you as the source.

See you all next time!

Microsoft By The Numbers

Bumped into this website by accident.

It shows how many people are using Microsoft products and services. The numbers are VERY impressive… And NO, the presentation isn’t dull like an Excel sheet (boring!) or a long list (yawn!).

Instead it’s more like an animated infographic. Go here to see what I mean and be amazed. You can even download the related PowerPoint slide deck and use it.

Tuesday, August 22, 2017

Azure Managed Disks: How Azure VMs Are Moving To PaaS

When implementing Azure VMs, one is using Azure as an IaaS offering. At least, that is how Microsoft introduced Azure VMs back in 2012. However, things move at a fast pace in IT, and in today’s cloud they move at lightning speed.

As such it’s time to take a new look at Azure VMs, in order to know whether they still adhere to the IaaS cloud delivery model only, or whether things have changed a ‘bit’.

Azure VMs as IaaS
Sure, when you opt for the ‘classic’ approach to rolling out an Azure-based VM, it’s IaaS at its best. You need to provision a Storage Account, perhaps even a Diagnostics storage account for monitoring, a Virtual Network and so on. Let’s focus on the Storage Accounts here.

When rolling out Azure VMs in the classic manner you have to think about your Azure subscription limits, since per subscription one is only allowed a certain number of services and resources. For instance, per Azure subscription one is ‘only’ allowed 200 Storage Accounts (default), with a maximum of 250 (which requires contacting Microsoft Support).

Of course, you could use only ONE Storage Account for all your Azure-based VMs. But that approach isn’t going to ‘fly’, since per Azure Storage Account there are limits as well, like 20,000 IOPS per Storage Account. So when you ‘hook up’ too many Azure VMs to the same Storage Account, the available IOPS per Azure VM will drop dramatically, resulting in underperforming VMs.

In an ideal world one would prefer one Storage Account per Azure VM. However, when requiring 250+ VMs, this approach isn’t viable. Even when the total number of Azure VMs stays well below the 250 mark, there are still quite a few reasons not to use a 1:1 (VM:Storage Account) approach.

As a result, deploying an Azure VM requires planning, preparation, guidance and administration afterwards. Without it, sooner or later your company will have serious problems with Azure VM resource allocation and the lot…
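That planning math can be sketched in a few lines. The 20,000 IOPS-per-account limit comes from this post; the 500 IOPS per standard disk figure is an assumption for illustration only:

```python
# Rough capacity planning for classic (unmanaged) Azure VM storage.
ACCOUNT_IOPS_LIMIT = 20_000      # per storage account (from the post)
IOPS_PER_STANDARD_DISK = 500     # assumed per-disk target, for illustration

def max_disks_per_account(account_iops_limit=ACCOUNT_IOPS_LIMIT,
                          iops_per_disk=IOPS_PER_STANDARD_DISK):
    """Maximum fully utilized disks one storage account can sustain."""
    return account_iops_limit // iops_per_disk

def accounts_needed(total_disks, disks_per_account=None):
    """Storage accounts needed to keep every disk at full IOPS."""
    per_account = disks_per_account or max_disks_per_account()
    return -(-total_disks // per_account)  # ceiling division

print(max_disks_per_account())  # 40 busy disks per account
print(accounts_needed(1000))    # 25 accounts for 1,000 busy disks
```

With these (assumed) numbers, a 1,000-disk estate already needs 25 storage accounts just to avoid IOPS throttling, well before hitting the 200-account subscription default.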

Azure VMs as IaaS++
How nice would it be to roll out Azure VMs without the headache of managing storage accounts? Instead, Azure manages the storage for you! In that case you only have to think about the type & size of the disks.

All of the above (and much more) is delivered by Azure Managed Disks.

So now we’re talking about a new kind of Azure VM. Sure, the Azure VMs themselves still adhere to the IaaS cloud delivery model, BUT a very important component of that same Azure VM (the disks and underlying storage) has become a different ball game altogether.

Instead of doing it all yourself, Azure manages it for you. So the disks – when using Azure Managed Disks, that is – have become IaaS++ at least, perhaps even more like a PaaS solution? Of course, this ‘statement’ could result in a never-ending discussion on semantics. Let’s not go there, please.

But no matter how you look at it, Azure VMs with Azure Managed Disks have evolved the IaaS cloud delivery model to a whole new level. They lighten the regular burden of VM management and administration as a whole. One might even say IaaS++, or – for the storage management part, when Azure Managed Disks are used – a PaaS cloud delivery model.

Should my company use Azure Managed Disks?
Good question! Before you make any decision it’s vital to know what Azure Managed Disks deliver and how their costs are structured.

For instance, Azure Managed Disks deliver better high availability out of the box, simply because these disks are automatically placed in different storage units. So when one storage unit goes down, it won’t affect many VMs, but just one or a subset instead.

Also with Azure Managed Disks it’s much easier to copy an image across multiple storage accounts and so on.

On top of it all, there are two ‘flavors’ (AKA performance tiers) for Azure Managed Disks: Premium (SSD-based) and Standard (HDD-based).

Also good to know: you can create 10,000 Azure Managed Disks per subscription, per region and per storage type! For example, you can create up to 10,000 standard managed disks and also 10,000 premium managed disks in a single subscription and region. As a result you can create thousands of VMs in a single subscription.
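Because that quota applies per region AND per storage type, a simple sanity check against a planned deployment can be sketched like this (the region names and planned counts are made up for the example):

```python
# Check a planned deployment against the managed-disk quota described
# above: 10,000 disks per subscription, per region, per storage type.
DISK_QUOTA_PER_TYPE = 10_000

def fits_quota(planned, quota=DISK_QUOTA_PER_TYPE):
    """planned: dict mapping (region, disk_type) -> disk count.
    Returns a dict telling per (region, type) whether it fits."""
    return {key: count <= quota for key, count in planned.items()}

plan = {
    ("westeurope", "Standard"): 2_500,   # fine
    ("westeurope", "Premium"): 12_000,   # exceeds the per-type quota
}
print(fits_quota(plan))
```

The key point the helper encodes: 12,000 premium disks in one region blow the quota even though the combined standard + premium count would be far below 2 × 10,000 spread differently.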

As you can see, there is much more to Azure Managed Disks, all of which has to be taken into account when making a decision.

Recommended resources
For a better understanding of this article I recommend reading these resources:

Monday, August 21, 2017

Azure Active Directory (AAD): Where Is My Data Stored?

A customer wants to use Azure Active Directory (AAD) but needs to know where the data (like user names, credentials and attributes) is stored. In itself a solid question. However, the answer wasn’t easily found. Or rather, it was quite obscure.

The basics
Before the answer can be found (and clarified), one must familiarize him/herself with some Azure ‘slang’. In this posting I limit myself to the terms related to this article.

  • Geo: Abbreviation for geography. At this moment Azure is to be found in 13 Geos, and two more are announced (France & South Africa);
  • Region: Can be looked upon as one HUGE datacenter, hosting many Azure services. For instance, there is an Azure Region in Amsterdam (Netherlands) and one in Dublin (Ireland);
  • Region Pair: Two directly connected Azure Regions, placed within the same geography BUT located more than 300 miles apart (when possible). An Azure Region Pair offers benefits like data residency (except for AAD…), Azure system update isolation, platform-provided replication, physical isolation and region recovery order.

An example of a Geo with its Azure Regions and Region Pair is the Geo Europe. This Geo has two Azure Regions: one in Amsterdam (Netherlands), named West Europe, and the other in Dublin (Ireland), named North Europe. Together they make up the Region Pair for the Geo Europe.
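The Geo → Region → Region Pair hierarchy described above can be captured in a minimal lookup table. Only the Europe Geo from the example is included; the lowercase region identifiers mirror the names Azure tooling commonly uses:

```python
# Minimal illustration of the Geo / Region / Region Pair hierarchy,
# limited to the Europe Geo example from the post.
REGION_PAIRS = {
    "westeurope": "northeurope",   # Amsterdam <-> Dublin
    "northeurope": "westeurope",
}
REGION_GEO = {
    "westeurope": "Europe",    # Amsterdam (Netherlands)
    "northeurope": "Europe",   # Dublin (Ireland)
}

def paired_region(region):
    """Return the Region Pair partner of a region, if known."""
    return REGION_PAIRS.get(region)

def same_geo(region_a, region_b):
    """Region Pairs always live within one Geo."""
    return REGION_GEO.get(region_a) == REGION_GEO.get(region_b)

print(paired_region("westeurope"))            # northeurope
print(same_geo("westeurope", "northeurope"))  # True
```

This also makes the replication remark below concrete: data placed in `westeurope` may end up in `paired_region("westeurope")`, but (AAD excepted) not outside the Geo.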

Azure data storage location by default
By default most Azure services are deployed regionally, enabling the customer to specify the Azure Region where their customer data will be stored. This is the case for VMs, storage and Azure SQL databases.

So when you deploy a set of VMs in the Region West Europe with related storage, that data will be stored in Amsterdam (Netherlands). And yes, some parts of that data will be replicated to North Europe as well, since both Regions are part of the same Region Pair. Reasons for this replication might be of an operational nature and/or the data redundancy options selected by the customer.

This is as expected. However, it gets trickier…

USDS (United States of Data Storage)?
However, there ARE exceptions to the above. In quite a few cases customer data will be stored outside the customer-selected Region (and Region Pair as such).

For instance, there are some Azure regional services – like Azure RemoteApp, Microsoft Cognitive Services, preview, beta or other prerelease services, and Azure Security Center – whose data may be transferred and stored globally by Microsoft. And many times it will end up (in some form) in the USA, or the United States of Data Storage…

How about AAD?
AAD isn’t an Azure service offered locally; it is designed to run globally. And for any Azure service designed to run globally, the customer cannot specify a certain Region where the data related to that service is stored.

And again, Microsoft isn’t very clear about where that data is exactly stored: ‘…Azure Active Directory, which may store Active Directory data globally…’.

To make it even more confusing the same website states: ‘…This does not apply to Active Directory deployments in the United States (where Active Directory data is stored solely in the United States) and in Europe (where Active Directory data is stored in Europe or the United States)…’

Azure services which operate globally are:

  • Content Delivery Network (CDN);
  • Azure Active Directory (AAD);
  • Azure Multi-Factor Authentication (AMFA);
  • Services that provide global routing functions and do not themselves process or store customer data (e.g. Traffic Manager, Azure DNS).

Still not sure where AAD stores its data…
Because Microsoft is a bit elusive about where EXACTLY AAD data is stored, it’s better to look at how AAD is put together technically. Many times the technicians don’t do politics :).

The article Understand Azure Active Directory architecture is quite recent and very informative. It tells about primary and secondary replicas used for storing AAD data. And the latter make it interesting: ‘…which (the secondary replicas) are at data centers that are physically located across different geographies...’.

Basically this tells me that AAD data is replicated globally. It will turn up in the USA (USDS) as well. As a matter of fact, it will turn up in every Region servicing Office 365, simply because without AAD there is no Office 365 consumption.

And for sure, the same article clarifies it even more with the header Data centers: ‘…Azure AD’s replicas are stored in datacenters located throughout the world…’.

When using AAD you know for certain that user data (user names, credentials and metadata, for instance) IS replicated globally.

Do I need to worry?
That depends. Know, however, that Microsoft goes to extreme lengths to secure your data. Physical access to their datacenters is limited to a subset of highly screened people. On top of it all, Microsoft doesn’t allow governments and agencies to access customer data that easily.

And yes, Microsoft offers the Trusted Cloud. Looking at the sheer number of certifications and data residency guarantees, you can rest assured that Microsoft does its utmost to offer the most secure cloud services platform ever built.

Sure, you can look for alternatives, like Amazon AWS S3. However, the metadata related to those ‘buckets’, which also contains customer data, isn’t guaranteed to stay at a certain location either…

Another approach could be using the Azure Germany Geo. Because of its VERY strict privacy laws, the exceptions for data storage for regional and global Azure services DON’T apply there…

Recommended resources
For a better understanding of this article I recommend reading these resources:

Cross Post: Speeding up OpsMgr Dashboards Based On The SQL Visualization Library

Dirk Brinkmann (Microsoft SCOM PFE, based in Germany) has posted an excellent article all about an easy (and undocumented) way to speed up the SCOM/OpsMgr dashboards based on the SQL Visualization Library MP.

Go here to read all about it.

Thank you Dirk for sharing!

Largest Microsoft Ebook Giveaway!

Ever wanted to know anything about the latest Microsoft technologies, but were afraid to BUY an ebook because today’s technologies are changing too fast? So that what you buy today is outdated tomorrow?

Fear no longer! Simply download a FREE Microsoft ebook on the topic you want to know more about and be done with it. Oh, and because it’s FREE, why not download many more Microsoft ebooks?

Want to know more? Hunger for more knowledge? Looking for FREE ebooks, reference guides, Step-By-Step Guides, and other informational resources? Go here and be AMAZED, just like me.

A BIG thanks to Microsoft!

PDF: Overview of Microsoft Azure compliance

When you’re about to use Azure and want to know whether it’s compliant with the regulations your company has to meet, I strongly advise you to download the PDF Microsoft Azure Compliance Offerings.

As Microsoft describes: ‘…Azure compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. Each offering description in this document provides an up to date scope statement indicating which Azure customer-facing services are in scope for the assessment, as well as links to downloadable resources to assist customers with their own compliance obligations. Azure compliance offerings are grouped into four segments: globally applicable, US government, industry specific, and region/country specific…’

Wednesday, July 26, 2017


This blog will be silent for the next few weeks because I am going on holiday, enjoying my family to the fullest.
(Picture from the movie ‘National Lampoon's European Vacation’)

After the holiday ‘I’ll be back’ with quite a few postings, like (but not limited to):

  • The last 2 postings in the series about the future of the System Center stack related to Microsoft’s ‘Mobile First – Cloud First’ strategy;
  • Quite a few postings about Azure (IaaS & management);
  • SCOM updates and the lot.

I wish everybody a nice holiday (if not already enjoying it) and see you all later.


Thursday, July 20, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 05 – SCSM

Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of it.

Other postings in the same series:
01 – Kickoff
02 – SCCM
03 – SCOrch
04 – SCDPM
06 – SCVMM
07 – SCOM

In this fifth posting of the series I’ll write about how System Center Service Manager (SCSM) relates to Microsoft’s Mobile First – Cloud First strategy. Like SCOrch, I think SCSM isn’t going to make it to the cloud…

Ever heard of Service Desk?
The very start of SCSM was a bumpy ride. Originally it was code-named Service Desk and was tested back in 2006, with release scheduled somewhere in 2008. The beta release ran on the 32-bit(!) version of Windows Server 2003, with IIS 6.0, some .NET Framework versions (of course), SQL Server 2005 and SharePoint Server 2007 Enterprise.

Service Desk was really a beast: terrible to install, a disaster to ‘run’ (it was slooooooooooooooow) and filled to the brim with bugs. Totally unworkable. Back then I was part of a test team which put the ‘latest & greatest’ of Microsoft’s products through their paces. The whole team was amazed at the pre-beta level of it. Never before had we bumped into such crappy software. We even wondered whether we had received the proper beta bits…

So none of us was surprised when Microsoft pulled the plug on it and sent the developers back to their drawing boards. In the beginning of 2008, Microsoft officially announced it was delaying the release until 2010, because the beta release had performance and scalability issues. Duh!

Meanwhile a new name was agreed upon: Service Manager.

2010: Say hello to SCSM 2010
In 2010 the totally rewritten SCSM 2010 was publicly released at MMS, Las Vegas. For sure, the code base for SCSM 2010 was totally new, but somehow the developers had succeeded in bringing back some of the issues which plagued Service Desk: performance and scalability issues… Ouch!

Because SCSM 2010 was really a first version (totally rewritten code, remember?), it missed out on a lot of functionality. As a result Microsoft quickly brought out Service Pack 1 for it, somewhere at the end of 2010. For SCSM 2010 SP1 a total of 4 cumulative updates were published, alongside a few hotfixes.

From 2012x to 2016 in a nutshell
Sure, with every new version (2012, 2012 SP1, 2012 R2 and 2016) the performance and scalability issues were partially addressed, but they never really disappeared. As a result, SCSM has a track record of being slow and resource hungry. For SCSM 2016, Microsoft claims that data processing throughput has been increased by a factor of 4.

Nonetheless, the requirements for SCSM 2016 are still something to be taken seriously. For instance, Microsoft recommends physical hosts, 8-core CPUs and so on. The number of required systems can run to 10+(!), especially when you want to use Data Warehouse cubes and the lot. Even for enterprises this is quite an investment for just ONE tool.

Also, with every new version additional functionality was added. For instance, SCSM 2016 introduced an HTML-based Self Service Portal. Unfortunately, the first version of that portal had some serious issues, most of them addressed in Update Rollup #2.

All in all, the evolution from SCSM 2010 up to SCSM 2016 UR#2 has been quite a bumpy ride with many challenges and issues.

Deep integration
Of course, SCSM offers a lot of good stuff as well. It’s just that SCSM is – IMHO – the component of the SC stack with the most challenges. One of the things I like about SCSM is the out-of-the-box integration with other tools and environments.

SCSM can integrate with AD and other System Center stack components (SCOM, SCCM, SCVMM and SCOrch). And – still in preview – you can use the IT Service Management Connector (ITSMC) in OMS Log Analytics to centrally monitor and manage work items in SCSM. As a result, the underlying CMDB is enriched with tons of additional information for the contained CIs.

SCSM & Azure
At this moment – besides the earlier mentioned ITSMC in OMS – there are no other Azure Connectors available, made by Microsoft that is. There are some open source efforts, like the Azure Automation SCSM Connector on GitHub. But as far as I know, it isn’t fully functional.

Other companies, like Gridpro and Cireson, offer their own solutions. But since these companies have to earn a living as well, their solutions don’t come for free, adding additional costs to your SCSM investment. Still, some of their solutions resolve SCSM pain points once and for all. So in many cases these products deserve at least a POC.

But still, the Azure integration is limited. On top of it all, Microsoft itself doesn’t offer any Azure-based SCSM alternatives. The Azure Marketplace offers a few third-party Service Management solutions (like Avensoft nService, for instance), but none of them Microsoft-based.

Of course, you could install SCSM on Azure VMs, but you shouldn’t, since it’s a resource-hungry product, which would bump up your Azure consumption (and thus the monthly bill) BIG time.

No Roadmap?!
Until now Microsoft has been pretty unclear about their future investments in SCSM. There is no roadmap to be found anywhere. So no one knows – outside Microsoft, that is – what will happen with SCSM in the near future. Will there ever be a new version after SCSM 2016? I don’t know for sure. But the telltale signs are there won’t be…

In the last years the online service management solution ServiceNow has seen an enormous push and growth. Not just in numbers but also in products and services.

Basically ServiceNow delivers – among tons of other things – SCSM functionality in the cloud. Fast, and reliable. It just works. Also it integrates with many environments, tools and the lot.

SCSM has a troublesome codebase which isn’t easily converted to Azure without (again :) ) a required rewrite. Looking at where SCSM stands today and the reputation it has, I dare to say it’s the end of the line for SCSM. No follow-up in the cloud, nor a phased migration to it (like SCDPM or SCCM).

Instead Microsoft is silent about the future of SCSM, which in itself says a lot. One doesn’t need to speak in order to get the message across.

Combined with the power of ServiceNow, fully cloud-based, it’s time to move on. When you don’t run SCSM now, stay away from it, because anything you put into that CMDB must be migrated to another Service Management solution sooner or later. Instead it’s better to look for alternatives which use today’s technologies to the fullest, like ServiceNow or Avensoft nService. For sure, there are other offerings as well. POC them, and when they adhere to your company’s standards, use them.

When already running SCSM, upgrade it to the 2016 version. It has Mainstream Support until the 11th of January 2022. Time enough to look for alternatives, whether on-premises or in the cloud. Because SCSM won’t move to the cloud, nor will Microsoft invest heavily in it like it did before it adopted the Mobile First – Cloud First strategy.

So don’t wait until 2022, but move away from SCSM before that year, so you can do things at your own terms and speed, not dictated by an end-of-life date set for an already diminishing System Center stack component.

Coming up next
In the sixth posting of this series I’ll write about SCVMM (System Center Virtual Machine Manager). See you all next time.

Monday, July 17, 2017

Azure Stack and Azure Stack Development Kit Q&A

Since Azure Stack went GA, many questions have come forward. Not only about Azure Stack but also about the Azure Stack Development Kit. I’ll do my best to answer most questions and refer to online resources as well.

01: What’s Azure Stack?
As Microsoft states: ‘Microsoft Azure Stack is a hybrid cloud platform that lets you deliver Azure services from your organization’s datacenter…’. Still it sounds like marketing mumbo jumbo.

Basically it means that with Azure Stack your organization has the same Azure technology available on-premises, deeply integrated with public Azure. Of course, Azure Stack doesn’t offer the same breadth and depth of services as public Azure, but it still packs awesome cloud power. It’s to be expected that with future updates Azure Stack will offer more and more public Azure-based services and technologies, based on the use cases and demands of existing Azure Stack customers.

And because Azure Stack and the public Azure use the same technologies, the end user experience is fully transparent. The same goes for the administration experience. So basically Azure Stack can be looked upon as an extension of Azure.

So yes, one could look at Azure Stack as a kind of private cloud which can be heavily tied into the public Azure, thus creating a super powered hybrid cloud. But there is more.

02: Does Azure Stack require a permanent connection with public Azure?
No, it doesn’t. You can run Azure Stack either in a Connected scenario or Disconnected scenario. In a Connected scenario Azure Stack has a permanent connection with the public Azure. In a Disconnected scenario, Azure Stack doesn’t have a permanent connection.

Even though the first scenario – Connected – makes the most sense, there are enough valid use cases for the Disconnected scenario as well. Think about areas with low internet connectivity combined with a faraway public Azure region. Or how about hospitals, embassies, military installations and bases? The kind of information kept and processed in places like those makes for valid use cases for the Disconnected scenario.

03: Why should companies use Azure Stack while public Azure offers more services and is more powerful?
Good question! Suppose you’ve got a production facility which generates a HUGE amount of data. That data is processed, and the result sets are used further down the production line. In a public Azure setup it would require an enormous data pipeline to Azure in order to get that data across. And when processed, the result sets have to be sent back as well, which is egress traffic = money. On top of it all there is latency, since the data travels between the factories and Azure.

With Azure Stack, that data is processed locally (no data traffic costs, since it’s local LAN, no WAN) and there is no, or only very small, latency.
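A back-of-the-envelope calculation makes the cost argument concrete. The per-GB egress price below is a hypothetical figure for illustration, NOT a real Azure price; only traffic leaving Azure (egress) is billed, ingress is typically free:

```python
# Hypothetical egress price, for illustration only (USD per GB).
HYPOTHETICAL_EGRESS_PRICE_PER_GB = 0.08

def monthly_egress_cost(result_set_gb_per_day,
                        price_per_gb=HYPOTHETICAL_EGRESS_PRICE_PER_GB,
                        days=30):
    """Cost of sending processed result sets back from public Azure
    to the factory floor each month."""
    return result_set_gb_per_day * days * price_per_gb

# Example: 500 GB of result sets sent back per day.
print(monthly_egress_cost(500))
# With Azure Stack the same traffic stays on the local LAN: cost 0.
```

Even at a modest assumed rate, a data-heavy production line racks up a recurring monthly bill that the Azure Stack scenario avoids entirely, on top of the latency win.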

Another valid use case is app development. Here public Azure is used for development and Azure Stack is used for production, or vice versa.

Or how about sensitive data which – based on regulations and law – isn’t allowed to live in the public cloud? Now you can keep the data onsite (Azure Stack) and use apps living in the public Azure.

And these are just some of the valid use cases for Azure Stack. There are many more, believe me.

04: Does Azure Stack offer the same services as the public Azure?
No, it doesn’t. Which makes sense when you compare the size of an average Azure region to an Azure Stack :). However, as stated before, the number of services offered by Azure Stack will grow in the future, based on customer demand and use/business cases for Azure Stack.

For now(*) Azure Stack offers these foundational services:

  • Compute;
  • Storage;
  • Networking;
  • Key Vault.

On top of it, Azure Stack offers these PaaS services(*):

  • App Service
  • Azure Functions
  • SQL and MySQL databases

(*: This is as per the 10th of July 2017. Since Azure Stack is in constant development, chances are that the set of services offered by Azure Stack will have changed over time. Please check with Microsoft for the most recent updates and overview of services offered by Azure Stack.)

05: Can I download Azure Stack and install it on spare hardware I’ve got?
No, you can’t. Because Microsoft works hard to offer you the same Azure experience (pay as you go, consume without worries about the hardware and so on) with Azure Stack, they had to lock down the hardware on which Azure Stack runs.

Therefore Azure Stack is delivered as a whole package, hardware and software integrated into one. For now HPE, Dell EMC and Lenovo deliver Azure Stack with their own hardware. Soon other hardware vendors will follow suit.

06: So I can’t test drive it? How do I know whether Azure Stack works for me?
Sure you can test drive Azure Stack, run a POC with it or use it as a developer environment. For this Microsoft has specifically developed the Azure Stack Development Kit.

You can download it for free and install it on hardware of your choice. Of course there are some requirements to be met for this hardware, but still it’s up to you what vendor to use.

07: What’s Azure Stack Development Kit? Can I use it for production?
As Microsoft states: ‘…It’s a single-node version of Azure Stack, which you can use to evaluate and learn about Azure Stack. You can also use Azure Stack Development Kit as a developer environment, where you can develop using consistent APIs and tooling…’

As such Azure Stack Development Kit isn’t meant for production. It’s meant for POCs and stuff like that. Go here to learn more about it.

08: Do I need to pay for Azure Stack?
Sure you do. But the prices are lower compared to using the public Azure. Which makes sense, because your company pays for the hardware and operating costs. Check out this Microsoft Azure Packaging & Pricing Sheet (*) for more information.

(*: Please know this sheet will be updated in the future. As such, just Google for Microsoft Azure Packaging and Pricing Sheet and you’ll find the latest version of it.)
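To illustrate the pay-as-you-use idea behind those lower prices, here is a small sketch. The meter rates are entirely made up by me for illustration; the official Packaging & Pricing Sheet is the only authoritative source:

```python
# Illustration of Azure Stack's metered, pay-as-you-use billing model:
# lower rates than public Azure, because your company already pays for
# the hardware and operations. All rates are hypothetical placeholders.

# Hypothetical $/vCPU-hour meter rates.
RATES = {"public_azure": 0.05, "azure_stack": 0.008}

def monthly_compute_bill(vcpus: int, hours: float, platform: str) -> float:
    """Metered compute charge for one month on the given platform."""
    return vcpus * hours * RATES[platform]

# Example: a VM with 8 vCPUs running 24x7 for a 30-day month.
for platform in ("public_azure", "azure_stack"):
    bill = monthly_compute_bill(8, 24 * 30, platform)
    print(f"{platform}: ${bill:,.2f}")
```

The point isn’t the exact numbers, but the model: with Azure Stack you keep paying per usage meter, just like public Azure, only the software meter rates are lower since the hardware bill is already yours.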

09: Is Azure Stack Development Kit free?
Yes, Azure Stack Development Kit itself is free. However, the moment you connect it to (one of) your Azure subscriptions and start moving on-premise workloads to the public Azure, you will be charged for it.

10: Do you have some useful links for me?
Sure, hang on. Here are some useful links, all about Azure Stack and/or Azure Stack Development Kit:

Thursday, July 13, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 04 – SCDPM

Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of it.

Other postings in the same series:
01 – Kickoff
02 – SCCM
03 – SCOrch
05 – SCSM
06 – SCVMM
07 – SCOM

In the fourth posting of this series I’ll write about how System Center Data Protection Manager (SCDPM) relates to Microsoft’s Mobile First – Cloud First strategy. Even though it’s a bit ‘clouded’, it’s pretty certain SCDPM will move to the cloud, one way or the other. But before I go there, let’s take a few steps back and take a look at SCDPM itself.

From the very first day it saw the light, SCDPM was different compared to other backup products. For instance, Microsoft positioned it as a RESTORE product, not a backup product. By this Microsoft meant to say that as an SCDPM admin you could easily restore any Microsoft based workload, like SQL, Exchange, SharePoint and so on, WITHOUT having any (deep) understanding of the products involved.

Even though SCDPM’s usability was limited to Microsoft workloads, it offered a solution to the ever growing amount of data to be backed up with a never growing backup window: continuous backup!

Therefore SCDPM offered something new, or at least a refreshed approach to the backup challenges faced by many companies back then.

Unfortunately Microsoft dropped the ball on SCDPM some years later, because further development of new functionalities and capabilities was stopped. As such it was overtaken by many other backup vendors, delivering improved implementations of continuous backup and ease of restore jobs.

On top of it all, SCDPM kept its focus on Microsoft based workloads. Only for a short period was SCDPM capable of backing up VMware based VMs (SCDPM 2012 R2 UR#11), a capability abandoned when SCDPM 2016 went RTM. Sure, one of the reasons being that the VMware components required on the SCDPM server to support VMware backup aren’t yet supported on Windows Server 2016. Nonetheless, the result is the same: SCDPM covers Microsoft based workloads only.

Combined, this has led to an ever shrinking market for SCDPM. With Microsoft’s strong focus on Azure it looks like SCDPM is going to the cloud, one way or the other.

SCDPM & Azure
Valid backup strategies are vital for any company, whether working on-premise, in the cloud or hybrid. Therefore Azure offers different backup services, which can be confusing. Even more so because the starting point for consuming Azure backup services is the same.

It all starts with creating a Recovery Services Vault, which is an online storage entity in Azure used to hold data such as backup copies, recovery points and backup policies. From there one can configure the backup of Azure or on-premise based workloads.

When choosing to backup on-premise based workloads there are three options to choose from:

  1. When you’re already using SCDPM, you have to download and install the Microsoft Azure Recovery Services (MARS) Agent:
    The MARS Agent is installed on the SCDPM server. Now SCDPM will be extended from disk-2-disk backup to disk-2-disk-2-cloud backup. The on-premise backup will be used for short-term retention and Azure will be used for long-term retention.

  2. Of course, the MARS Agent can be used outside SCDPM as well, in which case you have to install and configure it separately on every server/workstation you want to protect. In bigger environments this creates enormous overhead.

    As such this approach should be avoided and is only viable in smaller environments where you have just a few on-premise laptops/workstations to protect and run everything else in the cloud (Azure/AWS).

  3. When you don’t use SCDPM, you have to download and install Microsoft Azure Backup Server (MABS) v2:

    MABS is actually a FREE and customized version of SCDPM with support for both disk-2-disk backup for local copies and disk-2-disk-2-cloud backup for long term retention. And contrary to SCDPM, MABS supports the backup of VMware based VMs!

    Of course, the moment you start using Azure for long term retention, you have to pay for the storage used by your backups. And the moment you restore from Azure to on-premise or to Azure in another region, you have to pay for the egress traffic.

    On top of it, MABS requires a live Azure subscription. The moment the subscription is deactivated, MABS will stop functioning.
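The three options above can be boiled down to a rough decision helper. The rules below (and the function itself) are my own simplification of this post, not an official Microsoft tool, and the threshold of five machines for "smaller environments" is an assumption:

```python
# A rough decision helper mirroring the three on-premise backup options:
# (1) SCDPM + MARS Agent, (2) stand-alone MARS Agent, (3) MABS v2.
# The rules and the small-environment threshold are my own assumptions.

def onprem_backup_option(uses_scdpm: bool, machine_count: int,
                         needs_vmware: bool) -> str:
    """Pick a backup approach for on-premise workloads."""
    if uses_scdpm:
        # Option 1: extend SCDPM with the MARS Agent, turning
        # disk-2-disk backup into disk-2-disk-2-cloud backup
        # (on-premise = short-term, Azure = long-term retention).
        return "SCDPM + MARS Agent"
    if machine_count <= 5 and not needs_vmware:
        # Option 2: stand-alone MARS Agent per machine; only viable
        # for a handful of laptops/workstations, since each one must
        # be installed and configured separately.
        return "Stand-alone MARS Agent"
    # Option 3: MABS v2, the free customized SCDPM, which also supports
    # VMware based VMs (but requires a live Azure subscription).
    return "MABS v2"

print(onprem_backup_option(uses_scdpm=True, machine_count=50, needs_vmware=False))
print(onprem_backup_option(uses_scdpm=False, machine_count=3, needs_vmware=False))
print(onprem_backup_option(uses_scdpm=False, machine_count=200, needs_vmware=True))
```

In other words: keep SCDPM if you already run it, reach for the bare MARS Agent only in tiny environments, and look at MABS v2 everywhere else, especially when VMware based VMs are in play.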

When using a Recovery Services Vault to backup Azure based workloads, you can only backup Azure VMs, by means of an extension installed on the Azure VM. This covers the whole VM and all disks related to that VM. The backup runs only once a day and a restore can only be done at disk level.

Azure Site Recovery
And no, this isn’t everything there is. Another option is Azure Site Recovery.

As Microsoft states: ‘… (it) ensures business continuity by keeping your apps running on VMs and physical servers available if a site goes down. Site Recovery replicates workloads running on VMs and physical servers so that they remain available in a secondary location if the primary site isn't available. It recovers workloads to the primary site when it's up and running again…’

Too many choices to choose from?
As you can see, Azure offers different backup services, aimed at different scenarios. Also SCDPM can be used together with Azure backup, turning SCDPM into a hybrid solution.

And SCDPM can be installed on an Azure VM, and the same goes for MABS, enabling you to back up cloud based workloads running on Azure based VMs.

Even more options to choose from! To make it even more confusing, Azure is in a constant state of (r)evolution. What’s lacking today is in preview tomorrow and in production next week. The same goes for Azure backup services and Site Recovery.

SCDPM is moving to the cloud. Or better, it has already arrived there. One way is using SCDPM in conjunction with the MARS Agent, another way is installing SCDPM on Azure based VMs. Or instead, using the revamped and customized free version of SCDPM, branded MABS. Which can be installed on-premise or on Azure based VMs.

So there are choices to be made. The right choice depends much on the type of workloads your company is running, combined with the location (on-premise, cloud or hybrid) and the Business Continuity and Disaster Recovery (BCDR) strategy in place.

On top of it, the moment of your decision is also important. Simply because Azure backup services are just like Azure itself, changing and growing by the month. This Microsoft Azure document webpage might aid you in making the right decision.

But no matter what the future might bring, one thing is for sure: SCDPM as a local on-premise entity is transforming more and more into a cloud based solution. Of course, when running on-premise or hybrid workloads, there will be a hard requirement for a small on-premise footprint. But more and more the logic, storage and management of it all will move into the cloud.

On top of it all, many backup options will be integrated more and more into specific services. As a result there won’t be 100% coverage offered by SCDPM or the Azure based backup services. In other cases there won’t be an out-of-the-box backup solution available at all. As a result third parties will jump into that gap, created by Microsoft.

A ‘shining’ example is the backup of Office 365. Since it’s lacking by default and isn’t in Microsoft’s pipeline, Veeam jumped into that gap by offering a solution of their own.

So in the end, the technical solution to your company’s BCDR strategy might turn into a hard to manage landscape of different point solutions instead of the ultimate Set & Forget single backup solution…

Coming up next
In the fifth posting of this series I’ll write about SCSM (System Center Service Manager). See you all next time.