Monday, September 7, 2015

Comparing SCOM And OMS = Comparing Apples And Oranges

Okay. Running a blog is something I really like. But with it come certain responsibilities, like keeping the blog free of anything that is based on assumptions and lacking proper investigation.

Until recently I lived up to that standard. However, last week I posted an article which fell below it. That posting was about the newest feature in OMS: near real-time performance data collection.

In that posting I assumed this kind of near real-time performance data collection would have a noticeable impact on the performance of the monitored servers. I also compared it to some of the performance collection Rules present in the Windows Server OS MP used by SCOM.

As it turned out, I was wrong on both counts. Both assumptions were based on my SCOM experience. However, OMS is a whole different kind of beast (no pun intended!), even though it runs a Microsoft Monitoring Agent (MMA) and uses Intelligence Packs. So the look & feel might resemble SCOM, but under the covers it works totally differently from an on-premises SCOM solution.

I want to say sorry to all the readers of this blog, Microsoft included. You expect to find information here that is based on facts, not on assumptions. That particular posting failed on that count.

So I’ve pulled the old posting and will soon replace it with a new one, all about the footprint of the OMS Agent on a server collecting near real-time performance data using the default interval of 10 seconds. That posting won’t be based on assumptions but on some serious testing.

During the weekend I had more time to put things to the test. This way I found out that OMS has a significantly smaller footprint on the monitored servers than I previously assumed.

Spoiler Alert
Over the weekend I rolled out two identical servers in my own lab (NRT01 and NRT02), both running Windows Server 2012 R2, with the same disk, CPU and RAM configuration.

In OMS I created a new Workspace (NRTLab), specifically for this test. From this new OMS Workspace I downloaded the Microsoft Monitoring Agent (MMA) and installed it ONLY on the NRT01 server. The NRT02 server is purely a reference server: it has NO MMA whatsoever.
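For readers who prefer to script this step: attaching an already installed MMA to a Workspace can also be done against the agent’s COM configuration class (AgentConfigManager.MgmtSvcCfg) instead of the setup wizard. The sketch below does that from Python through pywin32. Take it as an illustration only: the Workspace ID and key are placeholders, and I’m assuming the agent build in use exposes the AddCloudWorkspace and ReloadConfiguration methods.

```python
# Illustration only: attach an already installed MMA to an OMS Workspace through
# the agent's COM configuration class. Run it on the target server (NRT01) with
# administrative rights. Requires the pywin32 package; the Workspace ID and key
# below are placeholders, not real values.
import win32com.client

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
WORKSPACE_KEY = "<primary key from the OMS portal>"      # placeholder

# The MMA exposes its configuration through the AgentConfigManager.MgmtSvcCfg COM class.
cfg = win32com.client.Dispatch("AgentConfigManager.MgmtSvcCfg")

# Register the Workspace and tell the agent to pick up the new configuration.
cfg.AddCloudWorkspace(WORKSPACE_ID, WORKSPACE_KEY)
cfg.ReloadConfiguration()
```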

In the new OMS Workspace I configured ONLY the collection of the OpsMgr event logs (errors and warnings), the default set of performance counters WITH their default sample interval of 10 seconds and, last but not least, the System Update Assessment Solution.
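To put that 10 second sample interval in perspective, a quick back-of-the-envelope calculation shows what near real-time collection means in terms of sample volume. The counter count in the sketch below is an illustrative assumption on my side, not what OMS collects by default:

```python
# Back-of-the-envelope: sample volume of near real-time performance collection.
SAMPLE_INTERVAL_SECONDS = 10                 # the OMS default used in this test
SECONDS_PER_DAY = 24 * 60 * 60

samples_per_counter_per_day = SECONDS_PER_DAY // SAMPLE_INTERVAL_SECONDS
print(samples_per_counter_per_day)           # 8640 samples per counter instance per day

# With, say, 20 counter instances on one server (an assumed example value):
print(20 * samples_per_counter_per_day)      # 172800 samples per server per day
```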

On both servers I defined a new Data Collector Set in Performance Monitor, aimed at collecting specific performance data (CPU, memory, NIC and process-related items) in order to get a detailed understanding of the footprint of the OMS MMA in general, and of the near real-time performance data collection specifically.
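As a rough cross-check of what those Data Collector Sets record, the agent’s own processes can also be sampled directly. The sketch below (Python with psutil) polls the CPU usage and working set of the MMA processes every 10 seconds. The process names are assumptions on my side (HealthService.exe and MonitoringHost.exe are the usual MMA processes); the actual numbers for the upcoming posting come from the Perfmon Data Collector Sets, not from this script.

```python
# Rough cross-check of the MMA footprint: poll CPU and memory of the agent
# processes every 10 seconds. The process names are assumptions; the real
# measurements in the follow-up posting come from the Perfmon Data Collector Sets.
import time
import psutil

AGENT_PROCESSES = {"healthservice.exe", "monitoringhost.exe"}  # assumed MMA process names
POLL_INTERVAL_SECONDS = 10

def agent_snapshot():
    """Return (total CPU percent, total working set in MB) for the agent processes."""
    cpu_total, mem_total_mb = 0.0, 0.0
    for proc in psutil.process_iter(["name", "cpu_percent", "memory_info"]):
        name = (proc.info["name"] or "").lower()
        if name in AGENT_PROCESSES:
            cpu_total += proc.info["cpu_percent"] or 0.0
            mem_info = proc.info["memory_info"]
            if mem_info is not None:
                mem_total_mb += mem_info.rss / (1024 * 1024)
    return cpu_total, mem_total_mb

if __name__ == "__main__":
    while True:
        cpu, mem = agent_snapshot()
        print(f"{time.strftime('%H:%M:%S')}  CPU: {cpu:5.1f}%  Working set: {mem:7.1f} MB")
        time.sleep(POLL_INTERVAL_SECONDS)
```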

And I must say that I am really IMPRESSED by how small that footprint is. About an hour ago I restarted the Data Collectors on both servers for the last time, so I’ve got multiple sets of test results to ‘read’ and translate into a new blog posting.

So stay tuned!

1 comment:

Pete Zerger said...

Marnix, if you look at the MP source, MS appears to be doing a good job of implementing cookdown, so performance impact is minimized (I think I remember Tao or Stan may have blogged on this).