Want to know more? Go here.
A few days ago Anders Bengtsson posted a PS script which exports the list of SCOM Agents to an Excel file. To be found here.
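As a minimal sketch of the same idea, assuming the OpsMgr Command Shell (with its Get-Agent cmdlet) is available, the Agent list can also be exported to a CSV file, which Excel opens just fine:

# Minimal sketch, run from the OpsMgr Command Shell; the selected
# property names are typical for the Agent objects it returns, so
# verify them in your environment with Get-Agent | Get-Member
Get-Agent |
  Select-Object Name, Version, PrimaryManagementServerName |
  Export-Csv -Path 'C:\Temp\SCOMAgents.csv' -NoTypeInformation

Anders’ script goes further and produces a real Excel file; this is just the quick-and-dirty variant.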
This posting explains the way Tasks work.
As described before, there are two different kinds of Tasks: Console Tasks and Agent Tasks. The way they operate is also different and important to know.
Console Tasks
These run locally on the system the Console is being run from and use functionality and/or UIs which aren’t typically SCOM based, like the SQL Management Studio UI for instance. Of course, in order for this to work the required applications/features need to be present on that system; otherwise these Tasks will not run. Also, the output created by these very same Tasks isn’t piped back into SCOM.
So Console Tasks extend the SCOM interface in such a manner that the SCOM Console becomes a jumping board to other UIs or functionality which aren’t typically SCOM based.
Another thing to reckon with is the way authorizations are handled. As stated before, the SCOM Console launches another UI and passes on the credentials which were used to start the SCOM Console. Depending on what UI is started, the authorizations set for the account used for launching the SCOM Console, and the way security within the other application has been configured, an additional logon might be required.
Huh? What am I talking about? Let me show an example in order to clarify it. Let’s say I started the SCOM Console with an account which has no permissions in the SQL environment (systemcenter\test). I am in the Database Engine View of the SQL MP in the Monitoring Pane of the SCOM R2 Console:
and select a server on which the SQL MP has detected the SQL Engine. In the Action Pane, under the SQL DB Engine Tasks header, the Console Task SQL Management Studio is displayed:
When I click this link, SQL Management Studio is started, but this message is displayed:
So in order to have this UI connect to a certain SQL DB Engine, I need other authorizations since the test account will not do.
Agent Tasks
Whereas Console Tasks launch UIs or functionality which reside outside the SCOM Console (so the output created afterwards isn’t piped back into SCOM), Agent Tasks launch processes/scripts defined in SCOM (in the MPs, that is), whose output is piped back into SCOM. The strength here is that everything is kept within a single UI, the SCOM Console. In order for these Tasks to run, credentials are required. By default the credentials used by the SCOM Agent are passed on to that Task. However, one can run Agent Tasks with other credentials as well.
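For scripting scenarios the same can be done from the OpsMgr Command Shell. A minimal sketch, assuming the Command Shell cmdlets are loaded; the Task name comes from the example further down this posting, and the FQDN ‘SV02.domain.local’ is hypothetical, so adjust both to your own environment:

# Find the Agent Task by its display name (example from this posting)
$task = Get-Task | Where-Object { $_.DisplayName -eq 'Display Local Users' }

# Find the Windows Computer instance to run the Task against
# ('SV02.domain.local' is a hypothetical FQDN)
$class    = Get-MonitoringClass -Name 'Microsoft.Windows.Computer'
$instance = Get-MonitoringObject -MonitoringClass $class |
            Where-Object { $_.DisplayName -eq 'SV02.domain.local' }

# Run the Task; without extra parameters the predefined Run As Account is used
Start-Task -Task $task -TargetMonitoringObject $instance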
But how does it work exactly? What kind of processes are spawned and where? Let’s take a deeper look into how it works.
For starters, the Health Service process plays a crucial role here (for more detailed information about that process, read this posting of mine). In order to illustrate it, let’s run an Agent Task and go through the nuts and bolts as it happens. In this example I run an Agent Task against a test server of mine, SV02.
I am in the SCOM Console, in the Windows Computers part of the Monitoring Pane:
I select the server (SV02) and check the Action Pane. Under the header Windows Computer Tasks there are multiple Tasks available, among them the Agent Task Display Local Users.
When I click this link the Run Task screen is displayed:
I have highlighted the Task Credentials area since this part plays a very important role in Agent Tasks. The first option, ‘Use the predefined Run As Account’, is always selected by default. Even though it seems self-explanatory enough, some extra explanation is needed here because I have noticed that there is some confusion about it.
Why? Many times people tend to think that the Local System account is being used here. But that isn’t the case. Let’s take a few steps back and look at how the SCOM Agent operates.
Normally the SCOM Agent runs under the Local System account. When I say SCOM Agent, I actually mean the related Health Service, whose process name is HealthService.exe. Taken from my earlier mentioned blog posting:
Typically – you will see a couple MonitoringHost processes executing under the Default Agent Action Account. In addition, the HealthService will launch MonitoringHost processes under any preconfigured Run-As accounts that are executing workflows on the agents, using those credentials. Thus ‘giving’ the HealthService the credential management capability to support the execution of modules running as different users.
So by default, when the option ‘Use the predefined Run As Account’ is chosen, the credentials defined in the Run As Profile ‘Default Action Account’ will be used to run the Agent Task, not the Local System account.
However, certain MPs require additional authorizations in order to function (also depending on how tight security is set in your environment, of course). Take for instance the SQL MP. When this MP is imported, three additional Run As Profiles are added to the list of available Run As Profiles: ‘SQL Server Default Action Account’, ‘SQL Server Discovery Account’ and ‘SQL Server Monitoring Account’.
In this case, when these Profiles do have Run As Accounts configured, an Agent Task based on the SQL MP will use the Run As Account defined in the first Run As Profile, ‘SQL Server Default Action Account’. When this Run As Profile doesn’t have a Run As Account configured, the account defined in the Run As Profile ‘Default Action Account’ will be used instead.
So depending on which MP the Agent Task comes from, either the Default Action Account or the Run As Account defined in the related Run As Profile will be used.
But as you know, you can choose another set of credentials as well. For this, select the option Other in the Run Task screen, type in the required User name and Password, and select the Domain where the account resides.
When you hit the Run button, a flow of processes starts. The SCOM Agent is notified to run a certain Task as defined within the related MP. In order to do this it will spawn an additional MonitoringHost.exe process, using the credentials selected in the Run Task screen. In this example I have entered the credentials for the Test account in order to make it more visible:
When I check the running MonitoringHost.exe processes on the targeted server BEFORE hitting the RUN button, this is what I see:
Now I hit the RUN button and check the running processes again. An additional MonitoringHost.exe process is spawned and, as you can see, it runs under the credentials of the test account:
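A quick sketch of how to check this yourself on the targeted server, using plain WMI (no SCOM cmdlets needed):

# List all running MonitoringHost.exe processes together with the
# account each one runs under
Get-WmiObject -Class Win32_Process -Filter "Name='MonitoringHost.exe'" |
  ForEach-Object {
    $owner = $_.GetOwner()
    '{0,-8} {1}\{2}' -f $_.ProcessId, $owner.Domain, $owner.User
  }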
This process runs for only a couple of seconds. When the Task is finished the process is automatically ended. The Task Output is collected and piped back to SCOM:
When an Agent Task is running, the Run Task screen can be closed at any time. This will not interrupt the running Task; its results can be found back in the Task Status part of the SCOM Console:
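For the record: these results can also be pulled from the OpsMgr Command Shell. A minimal sketch, assuming the Get-TaskResult cmdlet behaves as it does in my R2 environment:

# Show the five most recent Task results, including their status and output
Get-TaskResult |
  Sort-Object TimeFinished -Descending |
  Select-Object -First 5 Status, TimeFinished, Output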
The Details Pane will display the details of the selected Task:
The next posting in this series will be about how to scope the Tasks to the correct group of SCOM Operators.
For now an ‘add-on’ MP has been created by Jimmy Harper which disables these rules and replaces them with the fixed versions. This MP needs to be imported into your SCOM environment, alongside the original MP.
Want to know more and download this MP? Go here.
However, since the monitored Ex2010 environment was still under construction, we ignored it. But as soon as Exchange 2010 was about to go live, we had to take a deeper dive and solve it. Fortunately the team responsible for the Ex2010 implementation found a KB article which described the issue we were experiencing.
As it turned out, ASP.NET impersonation had to be enabled on the RPC virtual directories. Want to know more? Read KB2022687.
Cameron Fuller has posted an excellent article about how to achieve this without taking a deep dive into XML. Of course, one has to adjust some XML code, but that is just as ‘complex’ as checking the engine oil of your car, whereas the original approach is just as challenging as building your own car :).
or:
Thanks Cameron for sharing! Much appreciated! Posting to be found here.
Even though Silect offers tools for MP Life Cycle Management (among other things), the Whitepaper is still valuable when you don’t use their tooling. Many companies forget about MP Life Cycle Management, while MPs make or break any SCOM environment.
Want to know more? The Whitepaper is to be found here.
Tasks are a feature in SCOM which is a bit underestimated. Many times organizations do not utilize it to the fullest extent. Sometimes they forget to look into the Actions Pane under the header “… Tasks”. Or they are a bit frightened, because some Tasks are not to be taken lightly and can cause some serious issues when they are run by persons who do not fully understand what they are doing.
This series of postings will be about Tasks: where they are to be found, why they are present in SCOM, where they come from, what differences there are between Tasks and how to use them. Some Tips and Tricks will be shared as well. Also an approach will be described where people only get to see the Tasks which are directly related to their field of work and responsibilities. And, on top of it all, a simple but handy Task will be authored which enables you to run any Alert shown in the Console against Google as a query. So let’s start!
Q01: Where are the Tasks to be found?
Hmm, anywhere. Or almost anywhere. They are always to be found in the Actions Pane, which resides on the right side of the SCOM Console.
A nice feature about the Task View is that it adapts itself. That is why I wrote “… Tasks” in the introduction of this posting. Those three dots are there for a purpose. Depending on where you are in the Monitoring Pane of SCOM, the Tasks header adjusts itself accordingly. So the Tasks are always relevant: no Task to stop a SQL Server service will be shown while you are viewing a server in a DNS folder of the Monitoring Pane. This enables one to scope the Tasks to the people who know what they are doing. (More about that in a later posting.) Some existing Task Views are: Windows Computer Tasks, Alert Tasks and SQL DB Engine Tasks.
As the header name suggests, all these Tasks are directly related to that topic:
Q02: Nice! But why are they present in SCOM?
Good question! Never take anything for granted. Always keep asking questions. This way you will learn something. SCOM is not just a product which tells you something is broken and ends there. It will also help you find out why it broke and refer to KB articles which might be the answer to the issue(s) you are experiencing. And the help doesn’t end there. No!
It also offers you some functionality right from the Console which will help you start troubleshooting, like pinging a server, starting an RDP session or opening SQL Management Studio, for instance. All these actions are Tasks. So Tasks are there to help you and to use the SCOM Console as a jumping board, enabling you to work faster and stay focused as well.
Of course, I know that some Tasks require a bit of attention from Microsoft and that some Alerts do not display all the required information. But… Microsoft listens and takes feedback seriously. You only have to tell them. How? Go to Connect as described in another blog posting of mine and follow the instructions.
Q03: OK, I see. But where do these Tasks come from?
From the MPs you have imported into your SCOM environment. Many MPs contain Tasks which are directly related to the product/service/application the MP is targeted against. So the DNS MP contains DNS related Tasks, whereas the SQL MP contains SQL related Tasks, and so on. The guide of the related MP will tell you what Tasks are to be found in that MP. So RTFM is the credo here :).
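And when you want a quick overview without reading the guide, a sketch like this from the OpsMgr Command Shell groups all known Tasks by the MP they come from (GetManagementPack() is the SDK method on MP elements; verify the output in your own environment):

# List every Task together with the Management Pack it comes from
Get-Task |
  Select-Object DisplayName,
    @{ Name = 'ManagementPack'; Expression = { $_.GetManagementPack().DisplayName } } |
  Sort-Object ManagementPack |
  Format-Table -AutoSize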
Q04: Are there differences between Tasks?
Yes, there are. The main distinction is between Console Tasks and Agent Tasks. Console Tasks run locally on the computer the Console runs from AND (VERY IMPORTANT TO KNOW!!!) these Tasks run under the credentials which are used to run the Console…
Agent Tasks run remotely on the Agent or the Management Server AND (VERY IMPORTANT TO KNOW!!!) these Tasks use the credentials the SCOM Agent uses, or one can enter other credentials in the Run Task screen:
Huh? How can you see whether a Task is Agent or Console based? The icon will tell you more about it:
This icon tells you it is an Agent Task:
and this icon tells you it is a Console Task:
The next posting in this series will be about how Tasks work. So stay tuned!
Want to see it yourself? Go here.
The Case – The Monitored Cluster
Suppose you run a file server based on a Failover Cluster configuration, consisting of Cluster Node A and Cluster Node B. Cluster Node B is idle and Cluster Node A is the owner of all resources, among them Disks P1 and P2.
This configuration is being monitored by SCOM (R2 ideally). For this the Server OS MP and the Cluster MP (among others) have been imported and configured. Also, the Proxy setting has been enabled on the SCOM R2 Agents running on both Cluster Nodes. So far so good. The Cluster is being monitored and performance collection runs as well.
The Case – Disaster Strikes
Cluster Node A runs well from Monday till Wednesday morning but dies on Wednesday afternoon. Cluster Node B kicks in and becomes the new owner of all the resources, among them Disks P1 and P2.
The Case – The Report and the missing data
After a week someone runs a Report in order to find out more about the % of disk space being used on disks P1 and P2. The Report is targeted at server level. At first glance the Report seems to be just fine. But wait! From Monday till the beginning of Wednesday data is neatly shown, but after that the graph drops to zero! Huh?
The Question
What? Where has it gone? The disks are still in place and available. So why does the graph suddenly drop to zero, or better, to nothingness? Has the Cluster MP turned sour?
The Explanation – Part 1
First of all, the Cluster MP does not collect any performance metrics at all. This is done by the Server OS MP. The Cluster MP covers many health and configuration aspects of the Cluster itself and Alerts when something is not OK.
Time to move on.
The catch here is that Cluster Node B has become the new owner of the disks. So that server will run the collection rules from the moment (*) it became the owner. So when you run a new Report targeted against that server, the graph will start from Wednesday. (* There is a pitfall to reckon with!)
So you end up with two Graphs? One for Cluster Node A and another for Cluster Node B? Yes, you could…
The Report for Cluster Node A displays a normal graph from Monday till Wednesday and after that a flat line. Same goes for the Report when targeted against Cluster Node B: a flat line from Monday till Wednesday and a valid graph from early Thursday till Friday.
What about the pitfall?
Good question! As we all know, monitoring and/or performance collection can only start AFTER the discovery has run and ended successfully. The successful ending is no issue here, but the timing of the discovery run is. Why? Well, the discovery of the Logical Disks runs once every 24 hours:
So in a ‘worst-case’ scenario you miss out on monitoring and performance collection for a maximum of 24 hours! Of course, an override could be used here, targeted against the Group ‘Cluster Roles’, in order to reduce that time. But be smart about it: Discoveries running too often can cause other issues…
The Explanation – Part 2, the Smart Approach
When you are running two-Node Clusters, the above mentioned approach should do. But suppose you are running a Cluster with more than two Nodes? When a failover occurs, there are multiple possible new owners available. So when a Report is to be created, one must know exactly which Cluster Node was the owner of the Resource the Report is about. And not just that, but also when…
This is not viable at all. It would take way too much time. So another approach is required.
The idea here is that you do not target the Cluster Node owning the Resource, but the Resource itself instead. When you select the disk instead of the Cluster Node, you will find two or more paths related to this object, which is logical when a failover has occurred. Referring to the above mentioned example, you could see something like this in the Add Group screen for the Performance Report when adding a new Series to a Chart:
Name | Class                            | Path
P1   | Windows Server 200x Logical Disk | FQDN of Cluster Node A
P1   | Windows Server 200x Logical Disk | FQDN of Cluster Node B
P2   | Windows Server 200x Logical Disk | FQDN of Cluster Node A
P2   | Windows Server 200x Logical Disk | FQDN of Cluster Node B
Add one Series per path into the same Graph. This way you will get a graph which shows all the collected performance data across the different Nodes, without needing to take a deep dive into which Cluster Node owned which Resource and when…
Of course, this graph can have a gap of a maximum of 24 hours…
What is it? A Super Flow is a collection of help files, pictures, diagrams, videos and online resources, all combined in one file/application. This particular Super Flow introduces SCOM to people who are new to the product and have to work with it in an Operator User Role.
Much of the information in this Super Flow is based upon the SCOM Documentation. Its strength is that everything is to be found in one place, without taking a real deep dive. When people still have questions they can go to the Resources tab, which shows them where to get additional information.
The Super Flow is to be found here.
The MP is to be found here. Some instructions are needed though, otherwise it could be easily overlooked:
The Release History describes the additional SQL 2000 MP:
The Download section offers these two sources:
Many thanks to Kenneth van Surksum, who mentioned this MP to me. Thanks! Much appreciated.
Even though it is a great report (Free Space Report), it can time out a lot when targeted against reasonably sized environments. And when it does not, it may run for quite some time (up to an hour or more). Don’t get me wrong here, I am not downplaying the hard work of some much respected SCOM addicts, but just sharing some experiences.
But lucky me! A few weeks ago I got a new colleague who is really into SCOM. He has done many projects as well, and one of his customers used the same report. And there they ran into the same issues as I did.
However, this customer has some SQL gurus who looked at the query and did some magic with it. The results? The Reports are rendered much faster. For instance, the Free Space Report, when targeted against All Windows Computers, runs in a matter of two minutes! No more timeouts…
Which is great!
So why not share this changed XML file?
Again, credits go to Ziemek Borowski and David Allen, because they provided the ideas and the original MPs. And additional credits go to Marinus Witbraad who shares this new report openly with the Community.
This new report can be downloaded from my SkyDrive. Do not forget to remove the old MP before importing the new one. Wait some time for the Report to show up and enjoy life! :)
The good news is that Microsoft will soon release an MP for just that. This MP will be based on the last SQL MP (version 6.0.6648.0) which covered SQL 2000 instances, and it will depend on the Libraries in the latest version of the SQL MP. This separate SQL 2000 MP won’t be developed any further though.
At the moment I have this setup in place at a customer of mine and I must say, it works great. So the SQL 2000 MPs are imported on top of the latest SQL MP which does not cover SQL 2000 anymore.
As you can see, the SQL 2000 MP components have been imported alongside the latest SQL Server MP…
Suppose one does not have the previous version of this MP, and does not have the luxury of waiting until the ‘new’ MP covering SQL 2000 comes out. As a service I have put these two SQL 2000 MP components (Microsoft.SQLServer.2000.Discovery.mp and Microsoft.SQLServer.2000.Monitoring.mp) on my SkyDrive, to be found here.
Normally I would not do this, since Microsoft is the one and only company responsible for offering their MPs. But the SQL 2000 MPs won’t be developed any further AND I get a lot of questions from the Community about where to find the SQL 2000 MP related components.
Also, the above information about the SQL 2000 MP is based on this thread, to be found on the OpsMgr TechNet Forums:
Taken directly from the website:
MP to be downloaded from here. This MP runs in SCOM SP1 and R2 environments.
MP to be downloaded from here.
MP to be downloaded from here.
There are multiple approaches viable here, like a View or a Report. The one which I have found to be most popular, however, is the Report. This Report is created once, published or saved to an MP, and ready to rock and roll any time it is needed. One can even schedule the Report – selecting an Excel file as output – and have it sent out by mail or put on a file share. Since it is an Excel file, one can apply many filters to it in order to drill down into the information.
By default such a Report is not available in SCOM. But with a few mouse clicks – all done from the SCOM Console itself, so no rocket science is required – such a Report is quickly created. This posting will describe how to go about it.
One thing I need to mention: this posting is based on SCOM R2. It should work in SCOM SP1 CU#1 as well though.
First we need to create a Group. This Group is dynamically populated and also has some excluded members: all the SCOM R2 Management Servers. Not the Gateway Servers though, since these are nothing more than Super SCOM Agents. This Group will contain the Class Health Service.
When this Group is created we check its members and wait for about ten minutes (max). This way the newly created Group has a chance to ‘get into the system’, so the Report we are about to create can use it (the Group must be enumerated). Otherwise we end up with an empty Report when we go too fast. So a bit of patience is needed here.
Let’s start.
Procedure 01: Creating the Group
OK, you’re back? Ten minutes have passed? Time to move on to the next stage.
Procedure 02: Building the Report
With this Report you have a good mechanism in place for checking which servers run an Agent, who installed it and when. Also, detailed information about those Agents is displayed.
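By the way, when you only need a quick ad-hoc list instead of a scheduled Report, the OpsMgr Command Shell offers a shortcut. A minimal sketch (property names like InstalledBy and InstallTime are what the Agent objects expose in my R2 environment; verify them with Get-Agent | Get-Member):

# Quick ad-hoc alternative: list all Agents, who installed them and when
Get-Agent |
  Select-Object Name, Version, InstalledBy, InstallTime |
  Sort-Object InstallTime |
  Format-Table -AutoSize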
There are a few things to reckon with:
Let’s start. In this example I will create a SCOM R2 User Role ‘Report Operators’ whose members are only allowed to view the Server OS related Reports. All the other Reports are not to be used by these people. For this, three procedures are required:
Procedure 01: AD – Group creation and population
OK, now we have covered the AD side of it all. Time to move on to the SCOM R2 Console.
Procedure 02: SCOM R2 Console – User Role ‘Report Operator’ Creation
Now we have covered the SCOM R2 side of it all.
Let’s see how far we are. We have created the Global Group ‘GG_SCOM_Report_Operators_ServerOS_Only’. This Global Group has been added to the Domain Local Group ‘DLG_SCOM_Report_Operators_ServerOS_Only’. In the SCOM R2 Console we have added this same group to the User Role ‘Report Operators – Server OS Reports Only’. The user Test is a member of the earlier mentioned Global Group. And we have copied the ID related to this User Role.
Time to move on to the last and most important procedure. Without it, all previous actions are pointless.
Procedure 03: SSRS – Security Configuration
Recap:
In this posting I showed how to scope the available Reports to certain User Roles. It can be done, but it is labor intensive: actions are needed in AD, SCOM R2 and SSRS.
Also know that by accessing the Reporting Server directly (http://servername/myreports), the security which has been set in SCOM and SSRS will be circumvented. So people can still run those ‘forbidden’ Reports.
However, the interface which SCOM R2 offers is not present there, so it can be a challenge for those people to get those Reports running. For instance, compare the SCOM R2 Reporting parameter area for the SQL Report ‘Top 5 Deadlocked Databases’
with the one in the SSRS instance:
I know which one I prefer… :)