Monday, November 30, 2015

New Challenges!

You've probably noticed the quiet moments on my blog. Some weeks went by with only a few postings or – even worse – no postings at all. All this has a reason.

The main reason is that life takes its own directions, whether private or work related. And you just have to follow the 'flow'. Some of these shifts in direction are self-initiated, whereas others are more like 'facts of nature' and have to be dealt with as well.

As such, much of the time I normally use for blogging was consumed by many other activities. And when it wasn't, I simply didn't have the right mindset to write a good posting. That way many drafts were made, but they never made it to my blog because they failed the final 'reader test'.

Gladly, the bumpiest parts of this ride are over and a new dawn is about to start!

One of the self-initiated new directions is my move to another company. Tomorrow – December the 1st – I'll start at another company, Didacticum, in the role of senior consultant. I am looking forward to meeting my new colleagues and starting this new chapter of my career tomorrow.

I want to say thanks to Insight24 for the time we spent together and everything I’ve learned. I also wish them the best. And I am sure we’ll meet again since it’s a small world after all.

The moment I've found my new balance, my blog will be back on track again. So stay tuned for many new postings to come!

Friday, November 20, 2015

System Center 2016 TP & Nano Server Monitoring

One of the many new items launched by Microsoft is Nano Server. This is a very small server with many capabilities. However, monitoring it with System Center 2016 TP might be a challenge.

So it’s good news that Microsoft has published a TechNet article, all about monitoring Nano Server with System Center 2016 TP, titled: Monitoring Nano Server.

OMS In Full Swing: The Return Of BlueStripe!

Microsoft means business with OMS and is pushing its development BIG time. This picture, which tells it all, comes from a tweet by Stefan Stranger (@sstranger):

As you can see, I've highlighted APP DISCOVERY & MAPPING. To me this sounds like BlueStripe FactFinder, which Microsoft acquired a few months back.

So OMS is getting bigger & better. Can’t wait to see all the new features in OMS. Awesome!

Technical Preview 4 System Center 2016 Evaluation VHDs Available

Yesterday Microsoft released Technical Preview 4 for System Center 2016, including SCOM TP4.

The TP4 VHDs download locations:

Updated MP: Windows Server 2012x DHCP

Yesterday Microsoft released an update for the Windows Server 2012 DHCP MP, version 6.0.7299.0.

This MP contains a bug fix: ‘…In a DHCP server with multiple scopes, if the first scope exceeds the threshold for minimum available addresses, alerts are sent for all scopes on the DHCP server even for the ones that don't violate the threshold. The fix ensures that alerts are sent only for the scope exceeding the threshold…’

MP can be downloaded from here.

Monday, November 16, 2015

Troubleshooting Empty OM12x Reports – Part 1: Exchange Server 2010 Report MP

When I started out to write a blog post about how to troubleshoot empty OM12x Reports, I quickly found myself writing about the not so much appreciated (this is a BIG understatement…) Exchange Server 2010 Reports MP.

So I decided to ditch that posting, start all over again and make it a two-part series, since there is much to tell. This first posting will be all about the Exchange Server 2010 Reports MP. Simply because this MP is a SNAFU as pure as they come.

Mind you, I am talking about the REPORTING MP. I know the other part is also a SNAFU, but this posting is all about the Reports MP.

Yes, I've blogged quite a few times about this MP and the 'challenges' it creates. I also wrote some postings about how to fix some of its issues, like the empty reports. But as it turned out, that fix doesn't work with OM12x. Hence this posting.

Exchange Server 2010 Reports MP = Empty Reports?
Not ALL the time, but MANY times this MP is the culprit behind empty OM12x Reports. The cause is the poorly written SQL queries that must aggregate the data present in the Exchange Server 2010 datasets in the Data Warehouse database.

Because the underlying queries for these aggregations are poorly written, they time out. As a result, OTHER aggregation jobs no longer get the time to run! The RAW tables get loaded with tons of information that is never aggregated, so that data never makes it into the other datasets like HOURLY or DAILY. The result: EMPTY reports.
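To verify that aggregations are indeed lagging behind, a query along these lines can help. This is a sketch based on commonly published Data Warehouse queries; verify the table and column names against your own DW before relying on it:

```sql
USE OperationsManagerDW
-- Count the aggregations still waiting to be processed ('dirty') per dataset.
-- A large, growing number for the Exchange 2010 datasets confirms the backlog.
-- AggregationTypeId: 20 = hourly, 30 = daily.
SELECT ds.SchemaName,
       ah.AggregationTypeId,
       COUNT(*) AS OutstandingAggregations
FROM StandardDatasetAggregationHistory ah WITH (NOLOCK)
INNER JOIN StandardDataSet ds WITH (NOLOCK) ON ds.DataSetId = ah.DataSetId
WHERE ah.DirtyInd = 1
GROUP BY ds.SchemaName, ah.AggregationTypeId
ORDER BY OutstandingAggregations DESC
```

A healthy Data Warehouse shows few or no outstanding aggregations per dataset; hundreds or more for the Exchange 2010 datasets point at the timeout issue described above.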

Read on to learn how to solve that issue. Of course, you can rewrite the aggregation query as Daniele Grandini once did, but when you don't use the Exchange Server 2010 Reports, it's better to remove that MP and the related datasets.

Hey wait! Exchange Server 2010?! That’s so 2010 man!
I totally agree. But quite a few companies still have Exchange Server 2010 running. Yes, they're moving away from it, to either Exchange Server 2016 (which became GA in the first half of October) or a combination with Office 365 in the mix.

But until that time, Exchange Server 2010 is production so monitoring is required.

Remove those Exchange Server 2010 datasets
As stated before, the Exchange Server 2010 Reports MP creates a bunch of new datasets in the Data Warehouse database. In order to see all the datasets, run this query against the Data Warehouse database:

USE OperationsManagerDW
SELECT ds.DataSetDefaultName
FROM StandardDatasetAggregation sda
INNER JOIN DataSet ds ON ds.DataSetId = sda.DataSetId
ORDER BY ds.DataSetDefaultName

Now you'll see all the datasets in your Data Warehouse database. Except for ONE Exchange Server 2010 dataset (Microsoft.Exchange.2010.Dataset.AlertImpact), you can remove them all. That particular dataset is created by the Exchange Server 2010 MP itself, not by the Exchange Server 2010 Reports MP.

Okay, now that we have a full view of the Exchange Server 2010 datasets to be removed, it's time to continue.

Please make a BACKUP of BOTH SCOM databases (OperationsManager & OperationsManagerDW). This way there is always a way back when things turn sour.
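As a minimal sketch, such backups could look like this in T-SQL. The target paths are assumptions; follow your own backup procedures and locations:

```sql
-- Full backups of both SCOM databases before touching any datasets.
-- D:\Backups\ is just an example path; adjust to your environment.
BACKUP DATABASE OperationsManager
TO DISK = N'D:\Backups\OperationsManager_PreDatasetCleanup.bak'
WITH INIT, CHECKSUM;

BACKUP DATABASE OperationsManagerDW
TO DISK = N'D:\Backups\OperationsManagerDW_PreDatasetCleanup.bak'
WITH INIT, CHECKSUM;
```

Verify the backups (for instance with RESTORE VERIFYONLY) before you continue.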

Also good to know:
When you continue from here, be aware all the steps are UNSUPPORTED by Microsoft. So you’re on your own!

Now follow these steps:

  1. Have you made a verified backup of BOTH SCOM databases?
  2. You KNOW that what you’re about to do is UNSUPPORTED & that you can’t blame me either?
  3. Remove the Exchange Server 2010 Reports MP from SCOM, like you always do, through the Console. Mind you, this step IS supported by Microsoft :);
  4. Start SQL Server Management Studio with an account that has sufficient permissions to modify the OperationsManagerDW database;
  5. Run the second query of this blog posting in order to remove the related Exchange Server 2010 Reports MP data sets, one by one;
  6. Don’t touch the Microsoft.Exchange.2010.Dataset.AlertImpact, see previous explanation;
  7. When you’ve removed ALL Exchange Server 2010 Reports data sets (except the one mentioned in Step 6!), close SQL Server Management Studio and go to the SCOM Console > Tools > Search > Rules;
  8. In the Search box, type Data Set > hit Search > one Rule will be found: Standard Data Warehouse Data Set maintenance rule > click the blue link View Knowledge so the properties of this Rule will be shown;
  9. Go to the tab Overrides > Disable > For a specific object of class: Standard Data Set > select the Microsoft.Exchange.2010.Dataset.AlertImpact data set > OK and save the Override in a dedicated unsealed MP.
  10. Now the default grooming won't run anymore for this particular data set. That isn't necessary anymore anyway, since the Exchange Server 2010 Reports MP was already removed (Step 3).

Now all data should be processed (aggregated) step by step, resulting in FILLED reports. Sweet!
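Step 5 refers to a removal query which isn't reproduced here. As a hypothetical sketch only: a pattern often used for removing a standard dataset calls the StandardDatasetDelete stored procedure in the Data Warehouse. Treat both the procedure name and the join below as assumptions to verify in your own environment first, test it on a restored copy of the DW, and remember this is UNSUPPORTED:

```sql
USE OperationsManagerDW
-- ASSUMPTION: remove ONE Exchange 2010 Reports dataset at a time via the
-- StandardDatasetDelete stored procedure. Verify this procedure exists in
-- your DW before running anything. Repeat per dataset, and NEVER run this
-- against the Microsoft.Exchange.2010.Dataset.AlertImpact dataset (Step 6).
DECLARE @DataSetId uniqueidentifier
SELECT @DataSetId = ds.DataSetId
FROM DataSet ds
WHERE ds.DataSetDefaultName = '<Exchange 2010 Reports dataset name from the first query>'

EXEC StandardDatasetDelete @DataSetId = @DataSetId
```

The placeholder dataset name must be replaced by one of the names returned by the dataset listing query earlier in this posting.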

Close but no cigar?
Please know that it takes time for the data aggregation to catch up (from a few hours to days, depending on the size of your Data Warehouse database and the amount of data present in the RAW tables), so you won't get filled reports right away.

However, when you still have empty reports and nothing seems to be moving for days, you might have other issues with your data aggregation. In the next posting I'll write about how to see whether you're affected and, even better, how to crack it (most of the time).

See you all next time!

Friday, November 13, 2015

Updated MP: SQL Server

November 13th, Update:
I've got confirmation from Microsoft that the issue with the previous version of the SQL Server MP is solved with this latest release. On the SQL Release Services Blog there is even a posting about the latest release of this MP.

Want to know more? Go here.

Yesterday Microsoft released an update for the SQL Server MP. As we all know, the previous version had some serious issues, so it was pulled.

The related MP guide states (page 6):

Since the bug was present in the visualization component, I expect all is okay now. But – like with all other MPs – test it thoroughly before putting it into production.

MP can be downloaded from here.

New KB Article: OM12x Configuration Isn't Updated & Event ID 29181 Is Logged

A few days ago Microsoft published KB3092452 all about OM12x environments not updating configuration changes on one or more OM12x Management Servers.

Also Event ID 29181 is logged in the OpsMgr event log.

KB3092452 is all about this issue, the cause AND the solution.

Data Warehouse: Don’t Forget To Modify The Retention

Even though quite a few postings have been written about it, still many people don't realize the full impact of the default retention settings of the SCOM/OM12x Data Warehouse database. Okay, they know the data is kept for 400 days before it's groomed out. But many times that is where it stops. And that's too bad. Why? Just keep on reading :).

No need to repeat others…
Since Kevin Holman has written an excellent posting about Data Warehouse retention and grooming, there is no need to repost/rewrite it here. And yes, the posting was written for SCOM 2007x, but it applies to OM12x as well.

For me the 3 key takeaways of that posting are:

  1. The Data Warehouse database will be one of the LARGEST databases supported by a company;
  2. You should give STRONG consideration to reducing your warehouse retention to your reporting REQUIREMENTS;
  3. If you don't have any – MAKE SOME!

I would like to follow up on the third item, but I don't know what dataset to 'touch'…
Since many people/companies don't follow up on that advice, the Data Warehouse becomes very big. As a result, backups/restores of that same database become a challenge.

So what to do? Easy. Modify the retention settings in the Data Warehouse database so they adhere to your company policy and compliance requirements.

But as it turns out, all of a sudden some roadblocks pop up, because people have a difficult time deciding what kind of data should be kept for 400 days and what shouldn't.

Some suggestions of what kind of datasets to modify
Suppose your company has the requirement to run reports which show a maximum of ONE year of data for performance and availability.

I've seen many cases like these where the SCOM admins say: okay, based on those requirements there is no need to modify the retention settings, since the difference between 365 days (a year) and 400 days is way too small to go through the hassle.

So far so good. BUT there is one HUGE downside. YES, your company wants to run reports going back a maximum of ONE year. So you must keep that data for at least ONE year. BUT does your company require reports dating a year back containing HOURLY data?

Because that's the default! HOURLY data (performance & availability) is kept for 400 days. And it goes without saying that those very same datasets get VERY big.

Until now I haven't met a single company with that requirement. Yes, they require reports going one year back. But in most cases those reports are allowed to use DAILY aggregations, not HOURLY ones.

So when you ask your company what kind of policy they adhere to when it comes to data retention, make sure they not only say for how many days, but also what kind of data (daily or hourly) it applies to.

And until now, this is what most of the companies I’ve met require:

  • DAILY data: 1 year (leave it to the default of 400 days);
  • HOURLY data: 100 days (a difference of 300 days with the default setting of 400 days!)

And now you've got a WHOLE different story. And while you're at it, don't forget the Alerting dataset, since that data is kept for 400 days as well. And most of the time half of that, or even less, is good enough.
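Kevin's posting (and his dwdatarp.exe tool) is the authoritative route, but as an illustration, a retention change like the one above roughly boils down to the following T-SQL. The dataset name and AggregationTypeId values follow commonly published DW queries; verify them in your own Data Warehouse before changing anything:

```sql
USE OperationsManagerDW
-- Inspect the current retention per dataset and aggregation type.
-- AggregationTypeId: 0 = raw, 20 = hourly, 30 = daily.
SELECT ds.DataSetDefaultName, sda.AggregationTypeId, sda.MaxDataAgeDays
FROM StandardDatasetAggregation sda
INNER JOIN DataSet ds ON ds.DataSetId = sda.DataSetId
ORDER BY ds.DataSetDefaultName, sda.AggregationTypeId

-- Example: cut the HOURLY performance data retention from 400 to 100 days.
UPDATE sda
SET sda.MaxDataAgeDays = 100
FROM StandardDatasetAggregation sda
INNER JOIN DataSet ds ON ds.DataSetId = sda.DataSetId
WHERE ds.DataSetDefaultName = 'Performance data set'
  AND sda.AggregationTypeId = 20
```

Grooming then removes the excess hourly data over the following maintenance cycles, so the database shrink isn't instant.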

Back to Kevin it is
With that information, read Kevin's posting about how to modify the retention settings on the related datasets, and your Data Warehouse database will shrink considerably in size!

In another blog posting I’ll share some tips and queries about how to troubleshoot data aggregation issues in the Data Warehouse database.

Tuesday, November 10, 2015

New MP: Team Foundation Server 2015 MP Version 14.0.24622.0 Released

Yesterday Microsoft released TFS 2015 MP, version 14.0.24622.0. MP can be downloaded from here.

SCOrch: Integration Modules Converted From Microsoft Supported Integration IPs For Orchestrator

Yesterday Microsoft released the Integration Modules converted from Microsoft supported Integration Packs for Orchestrator. They support the migration of Orchestrator runbooks to Azure Automation and Service Management Automation.

The modules include activities to connect to:

  • System Center Virtual Machine Manager
  • System Center Data Protection Manager
  • FTP
  • Exchange Admin
  • Exchange User
  • SharePoint
  • REST
  • Active Directory
  • Azure
  • VMware vSphere
  • HP Operations Manager

Please know these are in BETA! So test them before putting them into production. They can be downloaded from here.

SCOrch Migration Toolkit

Yesterday Microsoft released the System Center Orchestrator Migration Toolkit, enabling you to migrate integration packs, standard activities, and runbooks from System Center 2012 – Orchestrator to Azure Automation and Service Management Automation.  

Tools included:

  • Integration Pack Converter
    This tool converts integration packs that were created using the Orchestrator Integration Toolkit to integration modules based on Windows PowerShell that can be imported into Azure Automation or Service Management Automation. Using the tool’s wizard, you can select the activities in the integration pack that will be converted to cmdlets in the integration module. Placeholders are created for monitor activities that are not supported in Azure Automation or Service Management Automation.

  • Standard Activities module
    Integration module that contains all of the standard activities used in Orchestrator Runbooks that can be imported into Azure Automation or Service Management Automation. This module must be installed in your environment prior to importing any runbook converted with the Runbook Converter.

  • Runbook Converter
    This tool converts Orchestrator runbooks into graphical runbooks that can be imported into Azure Automation.

This toolkit is still in beta, so test it before you put it into production. Toolkit can be downloaded from here.

Monday, November 9, 2015

OM12x Tuning: Battle Those State Changes!

A little bit of history
Before SCOM 2007 became RTM, there were NO Monitors, only Rules. So Alerts were fired, but the monitored objects didn't have a real 'status'. That changed when SCOM 2007 went RTM. Rewritten from the ground up, it introduced many new things, among them the real-time health status of all monitored objects.

Those very same statuses came from Monitors, a whole new concept in SCOM. You can look upon Monitors as 'state machines', allowing an object to reflect its real-time health status. So the primary function of a Monitor is to 'change state'; firing an Alert comes second. When a Monitor changes state from healthy to unhealthy, an Alert can be fired. And when the status of the same object changes back to healthy, the Monitor will close the Alert.

As you can see, Monitors play a significant role in the way SCOM works. This hasn't changed at all in OM12x, for that matter.

So much for the little bit of history.

Too many State Changes are BAD…
Like with all good things in life, too much of anything isn't good. The same goes for OM12x: too many State Changes aren't good at all. This is also known as 'flip-flopping': a certain Monitor for a certain object of a Class changes state way too many times, like 20 (or more) times per hour.

Think about it. Every single State Change has to be calculated by one of the OM12x Management Servers, participating in the All Management Servers Resource Pool. Every State Change has to be written to the OpsMgr database AND the Data Warehouse database.

For ONE object of a Class with ONE flip-flopping Monitor that’s not too big of an issue. But imagine a Management Group with 1000+ monitored servers, 500+ MPs (many among them custom made) where MANY Monitors on MANY objects of MANY Classes are flip-flopping…

OUCH! That's going to hurt, that's going to bite you! So in cases like these you've got to know what's going on in your environment, because chances are your SCOM environment is affected by it, even when you don't think it is.

How do I know I am affected?
Simple. Look for Event IDs like 31551, 31552 & 31553 in the OpsMgr event log of your OM12x Management Servers. Events like these are many times the telltale sign that something isn't okay in your environment, since data can't be written to the Data Warehouse database in a timely fashion.

Yes, there can be a multitude of reasons for these Event IDs, but many times it boils down to too many State Changes taking place in your SCOM environment.

The best way to go about it is to run some queries against the operational database (OperationsManager) in order to get an idea of what kind of State Change data is coming in, and how much of it. These queries enable you to pinpoint, step by step, whether you've got an issue with too many State Changes and, when so, what objects and what Monitors are causing them.

Again, all these queries are run against the OpsMgr database (OperationsManager).

Also know that these queries have been posted before by other people on other blogs. So all credits for these queries go to them or their sources (many times Microsoft CSS).

Query 1: How many State Changes are happening on a day to day basis?
This query shows you how many State Changes are taking place on a day to day basis. This way you can compare days with each other in order to identify potential issues.

SELECT CASE WHEN (GROUPING(CONVERT(VARCHAR(20), TimeGenerated, 102)) = 1)
THEN 'All Days' ELSE CONVERT(VARCHAR(20), TimeGenerated, 102)
END AS DayGenerated, COUNT(*) AS StateChangesPerDay
FROM StateChangeEvent WITH (NOLOCK)
GROUP BY CONVERT(VARCHAR(20), TimeGenerated, 102) WITH ROLLUP
ORDER BY DayGenerated DESC

Example of the output:
Please know that in a SCOM environment which isn't heavily tuned, the number of State Changes per day can differ big time. Also keep in mind what kind of day it was: was it a smooth ride or were there some outages?

Query 2: Noisiest Monitors changing state in the last 7 days
This query will show you the Monitors which generated the most State Changes in the last 7 days.

select distinct top 50 count(sce.StateId) as NumStateChanges,
m.DisplayName as MonitorDisplayName,
m.Name as MonitorIdName,
mt.typename AS TargetClass
from StateChangeEvent sce with (nolock)
join state s with (nolock) on sce.StateId = s.StateId
join monitorview m with (nolock) on s.MonitorId = m.Id
join managedtype mt with (nolock) on m.TargetMonitoringClassId = mt.ManagedTypeId
where m.IsUnitMonitor = 1
  -- Scoped to within last 7 days
AND sce.TimeGenerated > dateadd(dd,-7,getutcdate())
group by m.DisplayName, m.Name,mt.typename
order by NumStateChanges desc

Example of the output:
So now you know what Monitors are causing the most State Changes in your SCOM environment. Mind you, this can be a whole different set of Monitors depending on the configuration of your SCOM MG.

Query 3: What’s the noisiest Monitor PER object/computer in the last 7 days?
This query shows you what Monitor PER object/computer is generating the most State Changes. I’ve seen environments where a single server was good for THOUSANDS of State Changes because the server itself was badly configured…

select distinct top 50 count(sce.StateId) as NumStateChanges,
bme.DisplayName AS ObjectName,
m.DisplayName as MonitorDisplayName,
m.Name as MonitorIdName,
mt.typename AS TargetClass
from StateChangeEvent sce with (nolock)
join state s with (nolock) on sce.StateId = s.StateId
join BaseManagedEntity bme with (nolock) on s.BasemanagedEntityId = bme.BasemanagedEntityId
join MonitorView m with (nolock) on s.MonitorId = m.Id
join managedtype mt with (nolock) on m.TargetMonitoringClassId = mt.ManagedTypeId
where m.IsUnitMonitor = 1
   -- Scoped to specific Monitor (remove the "--" below):
   -- AND m.MonitorName like ('%HealthService%')
   -- Scoped to specific Computer (remove the "--" below):
   -- AND bme.Path like ('%sql%')
   -- Scoped to within last 7 days
AND sce.TimeGenerated > dateadd(dd,-7,getutcdate())
group by s.BasemanagedEntityId,bme.DisplayName,bme.Path,m.DisplayName,m.Name,mt.typename
order by NumStateChanges desc

Example of the output:
So now you know which Monitors on what objects/computers are flip-flopping, enabling you to exercise some good old fashioned tuning and troubleshooting with a maximum set of results!

When to run those queries?
Even when you think you’re in the clear (and you could be!), run these queries at least once per month. This way you know what’s going on in your SCOM MG. This allows you to sniff out potential issues and squash them before they get out of control.

Oh, and while you’re at it, run these queries as well allowing you an even deeper insight into the SCOM database and what’s going on.

A: Performance insertions per day
This query shows you how many performance insertions happen on a day to day basis.

SELECT CASE WHEN (GROUPING(CONVERT(VARCHAR(20), TimeSampled, 102)) = 1)
THEN 'All Days' ELSE CONVERT(VARCHAR(20), TimeSampled, 102)
END AS DaySampled, COUNT(*) AS PerfInsertPerDay
FROM PerformanceDataAllView WITH (NOLOCK)
GROUP BY CONVERT(VARCHAR(20), TimeSampled, 102) WITH ROLLUP
ORDER BY DaySampled DESC

B: Top 20 performance insertions per object/computer per day
This query shows you what objects/computers generate the most performance insertions per day.

select top 20 pcv.ObjectName, pcv.CounterName, count (pcv.countername) as Total
from performancedataallview as pdv, performancecounterview as pcv
where (pdv.performancesourceinternalid = pcv.performancesourceinternalid)
group by pcv.objectname, pcv.countername
order by count (pcv.countername) desc

C: How many Console Alerts are fired per day?
This query shows you how many Console Alerts per day are fired.

SELECT CONVERT(VARCHAR(20), TimeAdded, 102) AS DayAdded, COUNT(*) AS NumAlertsPerDay
FROM AlertView WITH (NOLOCK)
WHERE TimeRaised IS NOT NULL
GROUP BY CONVERT(VARCHAR(20), TimeAdded, 102)
ORDER BY DayAdded DESC

D: Top 20 of Alerts
This query shows you the Top 20 of the most Alerts in the OpsMgr database.

SELECT TOP 20 SUM(1) AS AlertCount, AlertStringName, AlertStringDescription, MonitoringRuleId, Name
FROM AlertView WITH (NOLOCK)
WHERE TimeRaised IS NOT NULL
GROUP BY AlertStringName, AlertStringDescription, MonitoringRuleId, Name
ORDER BY AlertCount DESC

Help! I am NOT allowed to run those queries against the OpsMgr database OR I want to use Reports!
And right you are! In many environments the SCOM databases are only accessible to the DBAs, not the SCOM admins. Or maybe you just want reports which show you exactly that kind of information…

Gladly, there are Reports which do just that! I guess you already know the SCOM Health Check Reports V3? Even though ALL the reports there are really good and very helpful, these reports in particular cover the previously mentioned queries:

  • Alerts – Top 20 Alerts
  • Alerts – Total Daily
  • Misc – Config Churn Overview (drill through)
  • Monitors – Noisiest Monitors
  • Monitors – State Changes per Day
  • Performance – Performance Inserts Daily per Counter
  • Performance – Performance Inserts Daily Total

The message is clear (I hope): run these queries and/or Reports at least once per month in order to know what's happening in your SCOM environment. And when you've imported new MPs or updated existing ones, run those queries a few days before and after, so you get a better understanding of those new/updated MPs.

This way you’re in control and stay on top of it all.

Friday, November 6, 2015

OMS Goes Xplat!

As we all know, Microsoft is 'all in' on their cloud services. Not only is the service offering ever expanding, but its coverage, depth and breadth are growing fast as well.

As of this week OMS supports Linux and Docker container management. These Linux distributions are supported by OMS for now; I am pretty sure more will be added along the way:

With OMS you can collect this kind of data:

  • Syslog: Collect your choice of syslog events from rsyslog and syslog-ng.
  • Performance Metrics: Collect 70+ performance metrics at a 30-second granularity using the new Near Real Time Performance data pipeline. Get metrics from the following objects: System, Processor, Memory & Swap space, Process, Logical Disk (File System) and Physical Disk. A full list of Performance Counters is available.
  • Docker container logs, metrics & inventory: See where your containers and container hosts are, which containers are running or have failed, and the Docker daemon and container logs sent to stdout and stderr. Performance metrics such as CPU, memory, network and storage for the containers and hosts help you troubleshoot and find noisy neighbor containers. Docker version 1.8+ is supported.
  • Alerts from Nagios & Zabbix: The agent can collect alerts from these popular monitoring tools, allowing you to view all your alerts from all your tools in a single pane of glass! Combine this with the existing support for collecting alerts from Operations Manager. Currently Nagios 3+ and Zabbix 2.x are supported.
  • Apache & MySQL performance metrics: Collect performance metrics about your MySQL/MariaDB server performance and databases, as well as Apache HTTP Servers and Virtual Hosts.

A feature which is really awesome is connecting Nagios to OMS in order to create a single pane of glass. So no more 'battles' between different monitoring platforms; instead it's all connected! Awesome!

Monday, November 2, 2015

Cross Post: OMS & Near Real Time Performance Monitoring Exchange

Priscilla 'Nini' Ikhena (Program Manager, Microsoft OMS Log Analytics team) wrote an excellent posting about how to monitor key metrics of your Exchange environment using OMS NRT.

This posting shows how fast OMS is growing in its capabilities and use case scenarios. Want to know more? Go here.


Many years ago the previous CEO of Microsoft had a rant on stage, shouting 'Developers, Developers, Developers, Developers!' Apparently he wanted to get the message across that developers were key to the success of the business and as such should be recognized for the force they represented.

On-prem SCOM and the UI…
Something similar goes for the user interface (UI) of any application, whether it's on-prem, cloud based or hybrid. The presentation of the data has to be relevant and the experience smooth. Of course there are tons of other requirements it has to live up to, but these two are the most important ones.

We all know that the on-prem SCOM UI (the Console) has some challenges here. And the SCOM Web Console? I don't even want to go there.

From Azure Operational Insights to OMS. How about the UI?
So when Microsoft started out on the Azure Operational Insights adventure, now rebranded to Operations Management Suite (OMS), I wondered what it would bring for the UI experience.

And I must say that I am deeply impressed. The OMS web UI runs on a plethora of web browsers AND platforms. And most of the time it's a very smooth ride as well. The information is relevant AND fast.

Back then I ended that same posting with the comment that OMS lacked apps for Android and iOS. About two weeks ago Microsoft ‘fixed’ that as well. Awesome!

Want to know more? Go here.

Soon I’ll write a posting about my personal experiences with the iOS app.

The Power Of The Community & PowerShell

Some history
A long time ago Jason Rydstrand wrote an excellent PS script for creating an HTML-based Health Check Report for a SCOM 2007x environment, to be found here. This PS script enabled one to get a quick overview of the health of a SCOM 2007x environment. It worked like a charm and I used it quite a few times.

When SCOM 2012 saw the light, fellow MVP Scott Moss rewrote that PS script for SCOM 2012x. And again, it worked like a charm and I used it on many occasions.

Time to add my own ‘ingredients’…
However, I personally thought some information was lacking. So I decided to add those sections to the existing PS script in order to obtain a SCOM 2012x Health Check Report with even more information.

Mind you, this isn't any kind of criticism. It's always easier to add some lines of code to an existing PS script than to write it from the ground up. So all respect AND credits go to Jason Rydstrand and Scott Moss for that matter.

What I added/modified
First off, this PS script has a section which mails the report to one or more recipients by using a Gmail account. However, many of my customers don't have internal servers with an internet connection, so this section has been commented out. It's still available, but it won't work without some help from you.

Instead, this PS script creates an HTML file and saves it to a location as defined in the PS code. In this PS script the location is set to 'C:\Server Management'. On many occasions it's better to replace it with a UNC path…

Also, the Report gets a file name based on the date (YYYYMMDD, example: 20151102) AND time (HH-MM-SS, example: 19-31-52) the Report is run. So when run multiple times, every report file gets a unique name and nothing gets overwritten. On top of it all, the file name will also reflect the name of the Management Group this Report is about. Example of a report name: 20151102_19-31-52_SCOM Health Check Report MG OM12.html.

Therefore this PS script is ideal for scheduling it using Task Scheduler or similar tools.

Other stuff I added:

  1. Generic Information section
    This section contains the name of the report file, the location and when it’s created. Also it contains more information about the Management Group: the name of the MG, the number and names of the MS & GW servers, and the number of SCOM Agents. Also the name of the company is shown (replace XYZ in the PS code with the correct company name). Example:
  2. Top 10 Rule Based Closed Alerts
    Personally, not only do the non-closed Alerts help me tune a SCOM environment, but the CLOSED ones do as well. Therefore an additional section has been added showing the Top 10 Closed Alerts, based on Rules. Example:
  3. Top 10 Monitor Based Closed Alerts
    Same story, but now for Monitor based closed Alerts. Example:
  4. Report Creation Time
    I want to know how long it took to have this HTML report created. So I use the .NET Stopwatch class in order to obtain that information. Example:

Where to obtain this PS script?
You can download it from my OneDrive.

Again, this PS script hasn’t been created by myself. Instead Jason Rydstrand wrote the PS script for SCOM 2007 and Scott Moss rewrote it for SCOM 2012x. I just added some additional stuff, that’s all. So all credits go to Jason Rydstrand and Scott Moss.

When you think you can add some useful code as well, feel free to do so. Contact me and I will update this posting accordingly. Sharing is Caring!