Friday, April 29, 2011

Shock & Awe with the new Savision product ‘Vital Signs’ - Part I – Teaser/Introduction

----------------------------------------------------------------------------------
Postings in the same series:
Part II – The Installation
Part III – Sherlock Holmes is back!
----------------------------------------------------------------------------------

Some time ago Savision (the company behind Live Maps) put a new product on the market: Vital Signs.
[image]

What is it and what does it do? As Savision states: ‘…Real-time performance monitoring and troubleshooting for Microsoft Windows Server and SQL Server…’.

But hold on. I have been told it has Dashboards for AD and Exchange as well. And soon there will be a dashboard for Hyper-V! Nice!

Unlike Live Maps, this product doesn’t require SCOM. It can work together with SCOM (or SCSM for that matter) through a connector, but that isn’t a requirement.

Installation Experience
Since my curiosity was aroused, I contacted Savision and they sent me everything I needed, including documentation about how to install it. But I wanted to test it, so I didn’t read anything. No RTFM… Why? Because many times I see people doing the same, so it’s a good test to see how bulletproof the product is.

And without any hassle, it installed itself. No issues whatsoever. Nice!

Configuration Experience
When Vital Signs is installed it requires some configuration tasks. Even though the (web) interface is totally different from Savision Live Maps, it is very intuitive, so one doesn’t need to be a rocket scientist. It uses the same layout as the SCOM/SCSM console, with two wunderbars: Administration and Analysis.

This way one quickly feels at ease with the interface.

First Impressions
After the configuration tasks were finished (and still I hadn’t read a single word of the documentation!) it was time to take a look at what it does. So I added a SQL server and cycled through the available views. Since one picture says more than a thousand words, take a look at these screen dumps:

[screenshot]
As one can see, SQL Server is divided into sections like User Activity, SQL Memory Management, SQL Engine and Concurrency.

Per section there is a broad range of counters available, which can be dragged onto the dashboard on the right:
[screenshot]

And when one takes a closer look, there are also small dashboards available per counter:
[screenshot]

Let’s drag one onto the bigger dashboard:
[screenshot]

But that’s awesome! Customers of mine have often asked for dashboards like these. And not just that, but also for more information about SQL. And look what Vital Signs does! This is great! So now customers can tap into a SQL server and get ALL the information they require. Information which the SQL MP – sadly – doesn’t deliver (yet?).

So what does the SCOM Connector do? As Savision states: ‘…Vital Signs will read alert and incident data from SCOM and SCSM and display them in context of the affected system.  Additionally, tasks are created in SCOM for launching Vital Signs in context of the selected system…’.

Basically it means that one is capable of better troubleshooting: SCOM triggers an Alert and Vital Signs allows one to take a deeper dive into that Alert by showing Tasks which are relevant to it.

In other postings I will take a deeper dive into this new Savision product and show some nice dashboards. I will also cover the installation and related topics. For now I can say that I am very impressed: for a 1.0 version it looks great. Products like these add much value to any SCOM implementation.

Thursday, April 28, 2011

MP Template ‘Windows Service’ in SCOM R2 doesn’t accept wild cards :(

Sometimes, when multiple services require monitoring, it would be nice to create the required Windows Service Monitor only once, edit the XML by adding wildcards and change the Discovery to use a WMI query with wildcards as well.

This way the Discoveries and Monitors will cover all services which require monitoring without too much manual labor.
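
To give an idea of what such a wildcard WMI query looks like: below is a sketch in WQL (the SQL-like query language WMI uses). The service name prefix ‘MyApp’ is made up here, so replace it with the naming convention of the services you actually want to discover.

SELECT * FROM Win32_Service WHERE Name LIKE 'MyApp%'

A query like this can be tested in WBEMTEST against the root/cimv2 namespace before it goes into the Discovery.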

Last week I bumped into a similar situation and tried the approach described in an old posting by Brian Wren. Even though that posting is based on SCOM RTM/SP1, I was hoping to get it working in SCOM R2.

So I started the Windows Service Wizard, found under Authoring > Management Pack Templates.
[screenshot]

Created the Monitor (put in an MP of its own), exported it and adjusted the XML as stated in Brian’s posting. Also adjusted the $MPElement section as advised by Graham Davies.

But as another comment on Brian’s posting already stated, it doesn’t work anymore.
[screenshot]

The services aren’t Discovered, so no monitoring takes place. And the query was correct, because when I ran the same query in WBEMTEST, it showed all related services…

I tried the same approach in one of my test environments, with the same ‘results’: nothing. Time to contact some respected people in the SCOM community, and they told me they had experienced the same. So it simply doesn’t work anymore. Which is too bad.

Luckily another approach is available. It involves real MP authoring (so the MP Authoring Console is required) but it works. Kevin Holman wrote a posting about it; even though its aim is different, it works. Follow it step-by-step (of course, add your own ingredients to the mix!) and you will end up with a working MP.

Posting to be found here.

Whenever someone has another approach, please step forward. I will update the posting accordingly.

Tuesday, April 26, 2011

Whitepaper: Designing Managed Applications

Yesterday Microsoft released a whitepaper describing design principles for managed applications. This is important for LOB app developers, cloud app developers, and those who want to understand more about failure mode analysis.
[image]

Translated: for companies that build in-house applications running in hosted environments, this document provides guidelines for developing these applications in such a way that they can be managed/monitored by Enterprise Management Systems (EMS) like SCOM R2.

The whitepaper can be found here.

Friday, April 22, 2011

How To: Test email Notification Model in SCOM

Today Microsoft released a KB article describing how to test the notification settings after you have configured email notifications for a subscriber or for a subscription.

The nicest thing is that this KB describes how to create a Task that logs an event to the event log. This way you can test the whole flow directly from the SCOM Console, so there is no need to undertake other actions elsewhere.

Want to know more? Check out KB934756.

Thursday, April 21, 2011

Veeam Community Podcast

Rick Vanover, Software Strategy Specialist for Veeam Software, interviewed me today for a Veeam Community Podcast.
[image]

We talked about many things: OpsMgr past, present and future, how technology relates to the business, how to connect OpsMgr to the business, my daily work & challenges, and what I see as an exciting aspect of technology in the future.

The funny thing is that even though the podcast is run by Veeam, there isn’t a single moment of commercial talk in it. Nothing like: ‘First I was unhappy until I bought the Veeam nWorks MP’… :). So it’s a true COMMUNITY podcast about technology and OpsMgr in particular. Nice! Nor is this podcast about me, myself and I. It’s about OpsMgr.
[image]

The podcast needs some editing and is expected to be posted here within two weeks from now. It will be named: Episode 18 - Connecting the dots with OpsMgr.

When it’s posted I will blog about it.

Special thanks to Veeam for creating such a platform for the community.

Wednesday, April 20, 2011

Error Code: 80004005 and the error ‘The Operations Manager Server could not execute WMI Query "(null)" on computer…’ when pushing an Agent to Windows Server

Got this error while trying to push a SCOM R2 Agent to a Windows 2003 Server:

The Operations Manager Server could not execute WMI Query "(null)" on computer xxxxxxxxxxxx

Operation: Agent Install

Install account: xxxxxxxxx

Error Code: 80004005

Error Description: Unspecified error

The first thing I did was check out Kevin Holman’s article ‘Console based Agent Deployment Troubleshooting table’. But this time I couldn’t find a real solution there.

The Issue
So it was time for some more detailed troubleshooting. I started the special logging for Agent installations as stated here.
[screenshot]

When finished, the log file OpsMgrCustom.log had been created. It contained some errors related to the failing Agent push. The first two errors didn’t reveal much information though, and searching for them on the internet didn’t turn up anything solid to go on.

The Cause
But this part of the log file turned out to be the key to success: ‘The Operations Manager Server could not execute WMI Query "Select * from Win32_Environment where NAME='PROCESSOR_ARCHITECTURE'" on computer xxxxxxxxxxxxx. Operation: Agent Install. Install account: xxxxxxxxxxxx. Error Code: 80004005. Error Description: Unspecified error.’

Also, the Management Server from which the Agent was being pushed logged an event in the OpsMgr event log, EventID 10629:
[screenshot]

At first I thought it was a security related issue. So I started an RDP session to the server with the account being used to push the Agent. No issues there. Then I started wbemtest, connected to root/cimv2 and fired off the same query. And to my surprise it came back empty! No data was shown.

Strange. The same WMI query was run on other servers and here data was returned. So something was amiss with WMI. Time to fix it.

The Workaround
First WMI was repaired. But still the query turned out empty. Then WMI was reregistered. Again to no avail.

So it was time to use ‘The Other Bing’ for some more searching. Soon I bumped into this thread on the System Center forums, where someone had bumped into the same issue and where repairing/reregistering WMI didn’t help either:
[screenshot]

But after adding a key in the HKEY_CURRENT_USER hive he got it working. Not a solution, but a workaround. So I logged out of the server, logged on again with the same account being used for pushing the Agent (so the correct hive would be loaded under HKCU) and added the key:
[screenshot]

And now the query, run against WMI, returned data. Time to push the Agent once more:
[screenshot]

Yes! Issue solved! Even though the underlying issue hasn’t been fixed (it’s a workaround), the Agent is pushed nonetheless. Thanks to Peter Fischer who shared this; all credits go to him.

Thursday, April 14, 2011

EventID 31552 in OpsMgr event log of RMS and perfmon Reports fail to show data…

Bumped into this issue a couple of times. Even blogged about it before, to be found here. The issue caused my hair – the little bit that remained – to fall out because it wasn’t a very nice situation at all. Every time, the same culprit was found: the Exchange 2010 Reporting MP.

So why blog about it again? Well, I have seen situations where, upgrading to the newest version of the Exchange 2010 MP, you might bump into the same EventID and situation as well. Unlike the previous posting, this time I will share the Stored Procedure required to get it all running again.

BUT… be careful AND run it only when you know what you’re doing. When in doubt, open a case with PSS. And even when you decide to run the SP, be sure to BACK UP both databases (OpsMgr & Data Warehouse) so there is a way back. Why back up both databases when you run the SP only against the DW? Because when a restore is required and you only restore the DW, the OpsMgr database and the DW will be out of sync. So it’s always better to back up BOTH databases and restore both of them as well.
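
The backups themselves don’t need to be anything fancy; a plain full backup of both databases will do. A minimal T-SQL sketch, assuming the default database names and a made-up backup folder of D:\Backup (adjust both to your own environment):

-- Full backup of the operational database and the Data Warehouse (default names assumed, path is made up)
BACKUP DATABASE OperationsManager TO DISK = N'D:\Backup\OperationsManager.bak' WITH INIT
GO
BACKUP DATABASE OperationsManagerDW TO DISK = N'D:\Backup\OperationsManagerDW.bak' WITH INIT
GO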

A small recap of my previous posting:

The Issue
The performance Reports turn up empty or show no data from a certain point in time until the present. And this behavior will continue, meaning that when you don’t act, the performance Reports won’t show recent data at all. The Reports affected are generally the performance Reports, like the ones from the Windows Server OS Reports, but other performance Reports are affected as well.

When investigating, the OpsMgr event log of the RMS will show EventID 31552 many times:
[screenshot]

Hmm. The message ‘Failed to store data in the Data Warehouse’ tells a lot. The Instance name (Microsoft.Exchange.2010.Reports.Dataset.Availability) tells me exactly which Dataset is the problematic one.

The Cause
As it turns out, data aggregation for Exchange 2010 isn’t working as it should. Daniele Grandini blogged about it in detail. Because of this the performance Reports end up empty. Like stated before, data inserted into the Data Warehouse goes through a series of processes:
[screenshot]

So the data is present but not ready for consumption by the Reports.

The Solution/Workaround
Removing an MP won’t remove its dataset from the Data Warehouse, and the same goes for the Exchange 2010 Reporting MP. Removing it won’t remove the Dataset, so the performance Reports will still end up empty. That is why, once the culprit has been removed, a Stored Procedure must be run against the Data Warehouse (DW) in order to get the flow of data running again.

This is the SP you need to run against the DW:

USE OperationsManagerDW
-- Look up the Id of the problematic dataset by its default name (as shown in EventID 31552)
DECLARE @DatasetId uniqueidentifier
SELECT @DatasetId = DatasetId FROM Dataset WHERE DatasetDefaultName = 'Microsoft.Exchange.2010.Reports.Dataset.Availability'
-- Delete that dataset from the Data Warehouse so the data flow can recover
EXEC StandardDatasetDelete
@DatasetId = @DatasetId

Some explanations are in order here:

  • The name after USE is the name of the Data Warehouse database, which by default is OperationsManagerDW. When your DW has another name, change it accordingly in the SP;
  • The dataset name is the one in the SELECT statement (‘Microsoft.Exchange.2010.Reports.Dataset.Availability’ in this example). When EventID 31552 shows another dataset, adjust it in the SP accordingly; the query sketch below lists all available dataset names.
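
In case you are not sure about the exact name, you can first list all datasets known to the Data Warehouse and pick the one matching the Instance name in the 31552 events. A small sketch, again assuming the default DW name:

USE OperationsManagerDW
-- List every dataset; copy the DatasetDefaultName that matches EventID 31552 into the SP above
SELECT DatasetId, DatasetDefaultName FROM Dataset ORDER BY DatasetDefaultName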

Like stated in the previous posting, after this the data came back in the Performance Reports (it took about 12 to 15 hours for the Reports to catch up). When the Reports were OK again, I imported the Exchange 2010 MP but left out the Reporting MP part of it. Until now all is well.

I hope that the latest Exchange 2010 Report MP does not have this issue. Still testing it.

Monday, April 11, 2011

As Time Moves On…

The Beginning
In May 2008 I started as a junior consultant for a small company in the Netherlands. My main focus would be SCOM. I had already worked with the product a couple of times, passed the exam and read a book about it. Not much baggage, but the best I could get at that moment.

When I started I worked together with a colleague, a senior. He taught me the ropes. But I had questions, and the more he answered, the more new questions arose. More complex ones, but questions nonetheless.

And soon the moment arrived when he told me he had taught me all he knew and that my knowledge of the product already transcended his knowledge and experience.

The Big Switch & Time To Learn
So the pupil became the teacher. But the questions were still there. Soon another colleague pointed me to THE SCOM book, SCOM Unleashed. Wow! I read it twice! Finally a book written by people who work with SCOM on a daily basis. Knowing the ins and outs. The pitfalls, the good, the bad & the ugly. And sharing it. Like being in a desert for a long, long time and suddenly finding an oasis! I drank it all!

In the same time frame I found some good blogs, run by different people like Pete Zerger, Maarten Goet, David Allen, Alexandre Verkinderen, Daniele Grandini and the lot. Awesome.

And these guys were MVPs? What’s that? Soon I learned what MVP meant, and that SCOM Unleashed had been written by MVPs as well. Wow! They seemed like higher beings to me. Never did I think that one day I would be asked to join their league, only to find out they are normal people like you and me, but with the drive to know a great deal about certain products and share it.

MOAB (Mother Of All Blogs)
And soon I found THE blog about SCOM. Very detailed and technical, but explained in such a way that it was easy to understand. This blog was run by PFE (Premier Field Engineer) Kevin Holman. I left a couple of comments on his blog and he even answered them.

From that moment on I started to learn really quickly. I still had many questions I couldn’t find the answer to, or didn’t understand the answer to. So I sent Kevin ‘some’ mails. And he answered them as well. And again, I learned. Not just a little bit, but in enormous leaps. Thanks to Kevin’s blog and his answers to my mails. I almost spammed him!

Since I learned so much new stuff I wanted to share it with the community. So I started a blog, in the same spirit as the SCOM MVPs and PFEs like Kevin Holman, Jimmy Harper and Jonathan Almquist.

And again I was surprised. I had set myself a goal with my blog: if it didn’t attract a certain number of visitors per week after a year, I would pull the plug. Why blog when no one, or hardly anyone, reads it? Apparently because it isn’t that good. But within a couple of months I had already reached the visitor target I had set for a whole year!

And yes, I made mistakes. But MVPs mailed me and pointed out what wasn’t good. So I corrected those postings, always referring to the people who assisted me in getting it right. And Kevin did the same. So thanks to him I not only learned, but made my blog better as well.

A New Role…
In all those years his blog remained number one for me. And now his blog will change. Why? Kevin has accepted a new role within Microsoft. As of today he will operate as a Data Center TSP (Technical Solution Professional), covering the Microsoft System Center suite of products: SCCM, SCOM, Opalis/Orchestrator, DPM, and Avicode.

As a result his blog will reflect that change as well. Like Kevin states on his blog: ‘…covering a lot broader range of products, instead of focusing only on OpsMgr. I also imagine that things will get more deployment and design focused, probably not as rich with “early to market” issues and challenges like I have seen in the past…’

Thank you Kevin for all your valuable postings and answers to my mails. I learned a lot from them, and they have brought me to a level of operations I wouldn’t have reached without them. All the best in your new role. Hopefully we stay in contact.

Me, Myself and I
And me? My role is shifting as well. It is primarily focused on SCOM, but with new assignments it is also moving to the business side of things, in conjunction with more SC products like SCSM, Opalis, cloud based offerings like Windows Intune and SCA, SCVMM and virtualization based on Hyper-V. The organization I work for is changing too and offering me many challenges as well. So time is short, and besides my work I also have a family with whom I want to spend quality time.

So the blog will keep on running and I will keep on working on it. Still loving it! But over the last months I have noticed that the total amount of time available for blogging is under pressure. I am following all kinds of training, some technical, others organizational and personal. I still have the series about upgrading to SQL 2008 R2 in the pipeline: posting 1 has been out there for some time and posting 2 is in production. But the high pace simply isn’t feasible anymore. However, I prefer quality over quantity. And I don’t blog for myself, but for the community. So as long as YOU keep on visiting my blog and keeping me sharp, I will MAKE time to blog.

Thank you all for bringing me so much knowledge, experience and joy. Hopefully we can keep this going for the time to come.

Thursday, April 7, 2011

How To: Configure the SharePoint 2010 MP to support multiple farms

On the Operations Manager Support Forums, Dan Rogers started a thread about how to configure the SharePoint 2010 MP in order to support multiple farms.

Taken directly from the website:

Sometimes the question "how can I monitor multiple farms with the SharePoint 2010 management pack" comes up. The guide says that the MP only supports one farm - but there is a straightforward way to help the MP manage multiple farms.

A file named SharePointMP.config is used to configure the behavior of the SharePoint 2010 management pack. This file should be present on all RMS nodes in the given management group (also in the clustered case).

The challenge with monitoring multiple farms is due to the need to have farm administrator permission to do the discovery and monitoring that is used to monitor SharePoint 2010. The typical configuration for a single farm has the SCOM admin assuming that the run-as profile is involved and directly manipulating this is a complex operation for multiple farms.

The config file and configuration task included in the MP comes to the rescue here.

That file contains a section called Associations. It is this section that should be adjusted for multiple farm support.

Example:

<Association Account="Contoso - SharePoint Farm administrator -3" Type="Agent">
<Machine Name="NT10" />
<Machine Name="NT11" />
<Machine Name="NT12" />
<Machine Name="NT13" />
<Machine Name="NT14" />
<Machine Name="NT15" />

</Association>

<Association Account="Contoso - SharePoint 2010 Farm Administrator 2" Type="Agent">
<Machine Name="NT64" />
<Machine Name="NT65" />
<Machine Name="NT66" />
</Association>

<Association Account="Contoso - SharePoint 2010 Farm Administrator" Type="Agent">
<Machine Name="NT77" />
<Machine Name="NT75" />
<Machine Name="NT76" />
</Association>

After adjusting the SharePoint configuration file, run the configuration task that is included in the management pack per the instructions found in the guide.

The related thread can be found here.

Updated MP: Exchange Server 2010 Monitoring

Yesterday Microsoft released the updated Exchange Server 2010 Monitoring MP, version 14.02.0071.0.

New features & updates in this MP:

  • Capacity planning and performance reports  
    New reports dig deep into the performance of individual servers and provide detailed information about how much capacity is used in each site.

  • SMTP and remote PowerShell availability report   
    The management pack now includes two new availability reports for SMTP client connections and management end points.

  • New Test-SMTPConnectivity synthetic transaction   
    In addition to the inbound mail connectivity tasks for protocols such as Outlook Web App, Outlook, IMAP, POP, and Exchange ActiveSync, the Management Pack now includes SMTP-connectivity monitoring for outbound mail from IMAP and POP clients.

  • New Test-ECPConnectivity view   
    Views for the Exchange Control Panel test task are now included in the monitoring tree.

  • Cross-premises mail flow monitoring and reporting   
    The Management Pack includes new mail flow monitoring and reporting capabilities for customers who use our hosted service.

  • Improved Content Indexing and Mailbox Disk Space monitoring   
    New scripts have been created to better monitor content indexing and mailbox disk space. These new scripts enable automatic repair of indexes and more accurate reporting of disk space issues.

  • The ability to disable Automatic Alert Resolution in environments that include OpsMgr connectors
    When you disable Automatic Alert Resolution, the Correlation Engine won't automatically resolve alerts. This lets you use your support ticketing system to manage your environment.

Other updates and improvements were also added to this version of the Management Pack, including the following:

  • Suppression of alerts when the alerts only occur occasionally was added to many monitors.
  • Most of the event monitors in the Exchange 2010 Management Pack are automatically reset by the Correlation Engine. Automatic reset was added to those event monitors so that issues aren't missed the next time they occur.
  • Monitoring was added for processes that crash repeatedly.
  • Additional performance monitoring was added for Outlook Web App.
  • Monitoring of Active Directory access was improved.
  • Monitoring of anonymous calendar sharing was added.
  • Reliability of database offline alerts was improved.
  • Monitoring for the database engine (ESE) was added.

Kevin Holman also blogged about it, his posting is to be found here. MP to be downloaded from here.

Wednesday, April 6, 2011

Tech-Ed Europe 2012–Date & Location announced

Today Microsoft announced the date & location for Tech-Ed Europe 2012.

As many of us already knew, there won’t be a Tech-Ed Europe 2011. The main reason is that Microsoft received a lot of feedback asking to shift Tech-Ed Europe back to the summer timeframe. And this is going to happen in 2012.
[image]

Organizing a Tech-Ed conference takes much time; having one in 2011 in the current timeframe (Q4) and another in Q2 2012 would be way too much. Therefore Microsoft decided to drop the 2011 edition.

Another thing is the location: Amsterdam, in the country where I live! On one hand that’s nice, on the other hand a pity as well, since I always loved traveling to another country (I attended Tech-Ed 2008 in Barcelona, which was AWESOME!). But the two after that in Berlin (2009 & 2010) were very nice as well.

Anyone looking for a Tech-Ed in 2011? There is one in North America (Atlanta, Georgia) you can attend. Want to know more about that one? Go here.

Tuesday, April 5, 2011

SCOM R2 Agent won’t start. System event log shows EventID 7024 with Description ‘The System Center Management service terminated with service-specific error 123 (0x7B)’.

Cracked a tough one today.

The Case
A SCOM R2 Agent with CU#4 didn’t start at all. The Agent tried to start but died within seconds, showing this message:
[screenshot]

So I checked the OpsMgr event log, but it turned out to be empty. Then I checked the System event log, and it showed EventID 7024:
[screenshot]

Easy One? Bummer!
Hmm, EventID 7024. I know that one. Even blogged about it. And Jimmy Harper, SCOM PFE, also blogged about it. But no matter what I did, nothing helped. Neither my posting nor Jimmy’s helped for that matter.

A bit frustrating it was.

The issue did resemble those postings though, because this SCOM R2 Agent wasn’t able to create a self-signed certificate either. The local certificate store of the computer didn’t contain any SCOM related certificate. So the problem was known, but not the cause…

And somewhere the cause of it all was to be found in the description of the EventID: The System Center Management service terminated with service-specific error 123 (0x7B).

But I needed more information to go on. This was just too shallow.

Final Attempt, let’s take a DEEP dive
So I tried another approach. I started diagnostic tracing on the SCOM R2 Agent, as described in KB942864, hoping to find more information to go on.

After running the command FormatTracing.cmd, many readable log files were created. Most of them were almost empty, but one of them, the file TracingGuidsNative.log, told me that the SCOM R2 Agent tried to access a TMP folder which was totally wrong.

Normally the TEMP and TMP system variables of any Windows system have this format: %SystemRoot%\TEMP. But on this problematic server the TMP system variable looked like this: C:\Program Files\<Other Software>\<Folder>;%SystemRoot%\TEMP.

Windows does not treat this as two separate folders just because there is a semicolon: TMP is a single path, so the whole string is interpreted as one path, like “C:\Program Files\<Other Software>\<Folder>;%SystemRoot%\TEMP”. And a path like that never works.

After having removed the corrupt entry (basically bringing it back to the default value of %SystemRoot%\TEMP) and saving the changes, the SCOM R2 Agent started without any errors at all!

Yeah! Nice one!

Recap
So whenever a SCOM R2 Agent doesn’t start and the event logs of the problematic server don’t give many details, try diagnostic tracing in order to obtain more information. But only do so when you know what you’re doing.

Monday, April 4, 2011

New MP: Forefront Identity Manager (FIM) 2010 MP RTM available

On March 31st a new MP was released by Microsoft which monitors Forefront Identity Manager 2010 (FIM). In Q2 2010 the pre-release was made available for download. Based on the input customers gave, Microsoft has adjusted and fine-tuned the MP.

Taken directly from the website:
[image]

MP can be downloaded from here.

Friday, April 1, 2011

Common Mistakes and Confusion with the IIS MP Version

Suppose you run IIS 6 (on Windows Server 2003) and SCOM, so SCOM monitors IIS. You think all is well and the IIS MP is up to date. In the SCOM Console the version of the IIS 2003 MP is 6.0.6658.0.

But when you visit the website with the latest version of the IIS MP, this is what you find:
[screenshot]

So one tends to think: what happened to IIS 6 and 5.5? When one looks further on the same website, the confusion becomes even bigger; take a close look at the versions of this MP, highlighted in yellow:
[screenshot]

Exactly! Nowhere is version 6.0.6658.0 to be found, the one which is loaded in your environment. So now you’re sure: this is NOT the MP for monitoring IIS 6.0 or older!

But let’s open the guide related to the MP in order to be sure for a full 100%. The guide is also listed on the same webpage so no further searching is needed.

The header Supported Configurations (page 5) tells it all, doesn’t it?
[screenshot]

It still looks like we’re right. But since the document is open, let’s take a deeper look under the header Files in This Management Pack (page 7):
[screenshot]

Now I am lost! So the IIS 7.0 MP, version 6.0.7600.0, contains other MPs as well, targeted at monitoring IIS on Windows Server 2003. But what version are they?

When you open the Catalog for the MPs in the SCOM R2 Console and search for IIS MPs, this is what you’ll find:
[screenshot]

When you download the msi file from the earlier mentioned website, run it and then import the MPs with the Console, you’ll see the same versions for IIS 6.0 and 5.5 (version 6.0.6658.0), the version which isn’t listed on that same website…

Even though it’s April Fools’ Day, this is serious; no joking here.

Basically this is happening:

  • The MPs for IIS 6.0 and 5.5 aren’t updated to version 6.0.7600.0;
  • The latest version of those MPs is 6.0.6658.0;
  • Only the MP for IIS 7.0 is on 6.0.7600.0 level;
  • The IIS 7.0 MP contains all the MPs for covering IIS from 5.5 to 7.0;
  • The website isn’t clear about that at all;
  • The document isn’t very clear about that either.

Hopefully I have made a potentially confusing topic a bit clearer…