Monitoring can be done with many different toolsets, and the process itself can differ just as much. One can simply monitor and respond when something breaks. Or one can try to predict failures before they happen and act in advance, for instance by failing over to another location.
The latter is also known as pro-active monitoring. For a long time it was little more than a marketing slogan, but with the right tools it can actually be done. And in today's world I dare say it has become a hard requirement. Why?
Nowadays, the IT environment has become a mix of on-premises solutions combined with other IT assets residing in the cloud, like (but not limited to) Azure or VMware vCloud Air. Workloads run on top of it all, and many times the end users don't even know where. They just expect it to work and perform accordingly.
This creates opportunities and challenges for IT departments. Opportunities, because the old barriers (buying & installing hardware, for instance) are gone: in the cloud it can be coded. Challenges, because workloads are often hybrid, requiring a multi-tier effort to keep things running smoothly.
Say hello to pro-active monitoring
As such, monitoring has become even more crucial, but it has to be done in a different manner. Instead of focusing only on the ‘now’ of the IT environment, it has become paramount to gain a peek into the future. And not only at the level of the workloads themselves, but also down to the hardware level: CPU, networking & storage.
Sure, when EVERYTHING is in the cloud, those things are covered by the cloud provider. But many times workloads are hybrid, with one or more ‘legs’ in your on-premises environment.
Wouldn’t it be a shame if a failover went wrong because the hosts are over-committed? Not enough storage? Not enough CPU? Ouch!
Gone are the days when monitoring, capacity planning and modelling were separate disciplines. To enable pro-active monitoring, capacity planning and modelling are hard requirements. Without them, it’s back to the old days of monitoring, where one waited until an alert popped up and then responded. Only putting out fires as they happen…
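To make the over-commitment scenario concrete, here is a minimal sketch of the kind of headroom check a capacity-planning tool performs before a failover is attempted. The `Host` structure, the numbers and the 20% headroom factor are all hypothetical, purely for illustration:

```python
# Hypothetical sketch: verify a failover target has headroom before committing.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_ghz_free: float      # unreserved CPU capacity
    ram_gb_free: float       # unreserved memory
    storage_gb_free: float   # unreserved datastore space

def can_fail_over(host: Host, cpu_ghz: float, ram_gb: float,
                  storage_gb: float, headroom: float = 0.2) -> bool:
    """True if the host can absorb the workload and still keep
    `headroom` (20% by default) of the required resources spare."""
    factor = 1.0 + headroom
    return (host.cpu_ghz_free >= cpu_ghz * factor
            and host.ram_gb_free >= ram_gb * factor
            and host.storage_gb_free >= storage_gb * factor)

# 32 GB RAM needed plus 20% headroom is 38.4 GB, but only 36 GB is free:
target = Host("esx-02", cpu_ghz_free=10.0, ram_gb_free=36.0, storage_gb_free=500.0)
print(can_fail_over(target, cpu_ghz=4.0, ram_gb=32.0, storage_gb=200.0))  # False
```

A reactive monitor would only fire an alert once the failover had already failed; a pro-active one runs this kind of check continuously, before the failover is ever needed.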
That’s why I recommend Veeam
That’s why I recommend the Veeam MP. It enables true pro-active monitoring. On top of ‘plain vanilla’ monitoring (which already goes pretty far), it also delivers capacity planning & modelling, enabling organizations to pro-actively monitor their hybrid workloads, whether running on-premises on Hyper-V or VMware, or in the cloud (Azure and/or VMware vCloud Air).
It also supports organizations in their ongoing move to the cloud. Many organizations are in the middle of a ‘lift & shift’, meaning that workloads hosted on-premises are migrated fully to the cloud.
The Veeam MP aids here as well, by analysing on-premises virtual workloads and mapping them against their equivalents in Azure or VMware vCloud Air. This enables a smoother transition to the cloud.
Compared to other MP solutions for monitoring hypervisor-based workloads, the Veeam MP adds much more to the mix. Other solutions only deliver on the ‘putting out fires’ scenario, which is outdated and easily surpassed when the right tools are used.
But the costs…
Yeah, I know. The Veeam MP doesn’t come cheap. But just do some math: how many euros/dollars would your company lose if a core application went down for a few hours during a normal working day?
I know for sure those costs are a multiple of the cost of the Veeam MP. And remember that the Veeam MP delivers an enriched toolset, enabling pro-active monitoring to prevent your core applications from breaking in the first place.
In a setting like that, the investment in the Veeam MP makes sense and has a solid business case.
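That back-of-the-envelope math is easy to sketch. All figures below are made up for illustration; plug in your own revenue and staffing numbers:

```python
# Hypothetical back-of-the-envelope downtime cost calculation.
def downtime_cost(revenue_per_hour: float, hours_down: float,
                  staff_cost_per_hour: float = 0.0, staff_count: int = 0) -> float:
    """Lost revenue plus the cost of staff idling during the outage."""
    return hours_down * (revenue_per_hour + staff_cost_per_hour * staff_count)

# Example: €20,000/hour in revenue, a 3-hour outage, 50 idle staff at €60/hour.
print(downtime_cost(20_000, 3, staff_cost_per_hour=60, staff_count=50))  # 69000.0
```

Even this simplified estimate, which ignores reputational damage and recovery effort, usually dwarfs an annual license fee.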
That’s why I always recommend the Veeam MP to my customers, whether they run Hyper-V or VMware and use Azure and/or VMware vCloud Air.