Finally the holiday season has started for me. Therefore this blog will be silent until the first week of January.
As such I wish you all a merry Christmas and a happy new year!
Yikes! It seems Microsoft has released a TOTALLY NEW AD MP! Which is quite awesome, since the previous MP had some serious issues. Most of them seem to be fixed in this MP.
The version of this MP is 10.0.1.0. What has changed? A LOT!!! Taken directly from the guide:
Version 10.0.0.0 of the Management Pack for ADDS is an initial release of a new Management Pack for Active Directory® (AD). It is based on the Active Directory Management Pack (AD MP) and includes many changes from the AD MP.
As you can see, this is indeed a whole new MP. And on the outside it seems Microsoft has addressed many pain points of the previous version.
This MP works on DCs running Windows Server 2012, 2012 R2 and 2016. It runs on SCOM 2012 R2 or later.
Want to download this MP? Go here.
Kevin Holman has also written a posting about this new MP.
Issue
Suppose you’ve rolled out a VM with Windows Server 2016 Core and deployed on that same VM SQL Server 2016 (with the command line setup.exe /UIMODE=EnableUIOnServerCore /Action=Install).
Another VM runs Windows Server 2016 with Desktop Experience and is used as a Stepping Stone server, hosting all kinds of Consoles in order to manage the products/services hosted by many other VMs running the Core installation option.
On that server you start SQL Server Management Studio and want to connect to the previously installed SQL instance. However, all you get is this error message: ‘…Cannot connect to [SQL instance]. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 5)…’
Cause
Even when you've configured the SQL instance correctly during installation, so that the account you're using has access permissions, SQL Server and the VM hosting it require additional configuration before the instance can be accessed remotely by SQL Server Management Studio.
Without that additional configuration you can't access the SQL instance remotely.
Solution
Follow these steps and when done correctly, you’ll be able to access the SQL instance remotely by using SQL Server Management Studio.
This line will allow outbound traffic over TCP Port 135:
New-NetFirewallRule -DisplayName "Allow outbound SQL-Transact Traffic (TCP Port 135)" -Direction outbound -LocalPort 135 -Protocol TCP -Action Allow
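The list here doesn't cover the port of the SQL Server database engine itself. Assuming a default instance listening on the default TCP port 1433 (a named instance may use a dynamic port instead, so check SQL Server Configuration Manager or the SQL error log first), two similar rules would do the trick:

```powershell
# Assumption: default instance on TCP 1433; verify the actual port before adding these rules
New-NetFirewallRule -DisplayName "Allow inbound SQL Server Traffic (TCP Port 1433)" -Direction inbound -LocalPort 1433 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound SQL Server Traffic (TCP Port 1433)" -Direction outbound -LocalPort 1433 -Protocol TCP -Action Allow
```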
These two lines will allow SQL Browser traffic over TCP Port 2382:
New-NetFirewallRule -DisplayName "Allow inbound SQL Browser TCP Traffic (TCP Port 2382)" -Direction inbound –LocalPort 2382 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound SQL Browser TCP Traffic (TCP Port 2382)" -Direction outbound –LocalPort 2382 -Protocol TCP -Action Allow
These two lines will allow SQL Browser traffic over UDP Port 1434:
New-NetFirewallRule -DisplayName "Allow inbound SQL Browser UDP Traffic (UDP Port 1434)" -Direction inbound –LocalPort 1434 -Protocol UDP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound SQL Browser UDP Traffic (UDP Port 1434)" -Direction outbound –LocalPort 1434 -Protocol UDP -Action Allow
!!!Only when required!!!
These two lines will allow web traffic over TCP Port 80 (e.g for SSRS instances):
New-NetFirewallRule -DisplayName "Allow inbound HTTP Traffic (TCP Port 80)" -Direction inbound –LocalPort 80 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound HTTP Traffic (TCP Port 80)" -Direction outbound –LocalPort 80 -Protocol TCP -Action Allow
!!!Only when required!!!
These two lines will allow secure web traffic over TCP Port 443 (e.g for SSRS instances):
New-NetFirewallRule -DisplayName "Allow inbound HTTPS Traffic (TCP Port 443)" -Direction inbound -LocalPort 443 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound HTTPS Traffic (TCP Port 443)" -Direction outbound -LocalPort 443 -Protocol TCP -Action Allow
!!!Only when required!!!
These two lines will allow SQL Analysis traffic over TCP Port 2383:
New-NetFirewallRule -DisplayName "Allow inbound SQL Analysis Traffic (TCP Port 2383)" -Direction inbound –LocalPort 2383 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Allow outbound SQL Analysis Traffic (TCP Port 2383)" -Direction outbound –LocalPort 2383 -Protocol TCP -Action Allow
Allow WMI traffic
When installing SCOM 2016 for instance, WMI traffic has to be allowed. By default the Windows Firewall on the SQL box blocks it, stopping the installation of SCOM 2016. With this netsh one-liner WMI traffic is allowed:
netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
No restart is required. Now all required SQL and WMI traffic to the SQL server is allowed.
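For those who prefer to stay within PowerShell, the same predefined rule group can (as far as I know) also be enabled with the NetSecurity cmdlets:

```powershell
# Enables the built-in WMI firewall rule group, like the netsh one-liner above
Enable-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)"
```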
Used resources
I noticed this issue some time ago in my test lab but forgot to blog about it. Nonetheless it can be a nagging issue, while the solution is simple. So here it is.
Issue
A new VM is deployed, based on WS 2016 Std, no GUI. When this VM is added to the domain and restarted, the logon screen defaults to the old (local) credentials AND the old system name. This doesn't work, since one has to use another (AD based) account.
For this the LogonUI.exe screen tells you to hit the Escape key twice in order to enter alternate credentials. However, when connected to the VM in Enhanced Session mode, only the first [Escape] key entry is processed:
I hit the [Escape] key the first time, and now I am told to hit that key a second time:
But now the second entry of the [Escape] key isn’t accepted.
Cause
Somehow when running an enhanced session with the related VM, the second hit of the [Escape] key isn’t passed to the VM.
Resolution
Change the session to Basic session.
You have to log on again and as such hit the [Escape] key two times. This time however, the second [Escape] key entry will be passed to the VM as well, allowing you to enter other user credentials:
When running a test lab on a tight budget it's a challenge to get the most out of the available CPU, RAM and storage. Over the last years I learned some nice tricks in order to run the maximum number of VMs in my test lab, while still having acceptable performance.
Please be reminded, this approach of combined 'tricks' is only viable in test labs and shouldn't be used in any production environment at any time! And no, I am NOT responsible for your test labs in any kind of way…
Some ground rules first
Here are some basics in order to get the most out of the available hardware of your test lab.
Resource saver 01: Differencing Disks
When using differencing disks for ALL the VMs running on your test lab system, you save a LOT of storage. The parent disk contains the server OS and the differencing disk contains the deltas for that particular VM. For instance, the VM running SQL will have a differencing disk containing the SQL installation and DB files, but use the parent disk containing the server OS, which holds between 9 and 14 GB of data.
That parent disk will be used by all other VMs, resulting in MASSIVE disk cost savings per VM.
How to create a parent disk? That’s easy!
Now you’ve got yourself a nice parent disk. Read this posting in order to roll out a VM using this parent disk.
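For reference, a minimal sketch of how a differencing disk on top of such a parent disk can be created with the Hyper-V PowerShell module (the paths and names below are just examples):

```powershell
# Assumption: D:\VHD\WS2016-Parent.vhdx holds a sysprepped Windows Server 2016 installation.
# Mark the parent disk read-only so no VM can accidentally modify it.
Set-ItemProperty -Path 'D:\VHD\WS2016-Parent.vhdx' -Name IsReadOnly -Value $true

# Create a differencing disk for a new VM on top of the parent disk.
New-VHD -Path 'D:\VHD\SQL01.vhdx' -ParentPath 'D:\VHD\WS2016-Parent.vhdx' -Differencing
```

Attach the new differencing disk to the VM instead of a full copy of the OS disk, and the storage savings kick in immediately.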
Resource saver 02: No GUI!
Yes, I know. Many Windows users are used to clicking through windows. Hence the name of the OS! BUT when running Windows Server 2016 Std without a GUI as a parent disk, one saves 4.5 GB compared to a parent disk hosting Windows Server 2016 Std with a GUI (Desktop Experience).
When running MANY VMs, with as many of them as possible using the no-GUI version, one quickly saves tens of GBs!
Besides that, one learns how to work with Windows Server 2016 without a GUI, which is a good thing as well. Ever heard of the utility sconfig? It's powerful and helps one out with the basic configuration stuff:
Resource saver 03: Deduplication
Wow! This feature is totally awesome. And pretty easy to use on your Windows 2016 server hosting all the VMs. Simply add this Role (File Server > Data Deduplication) to your server:
And enable it ONLY for the SSD volumes hosting the VMs and related (meta) data:
Set Data deduplication to General purpose file server and files older than zero (0) days:
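The same configuration can presumably also be done with PowerShell instead of Server Manager; a sketch, assuming "D:" is the SSD volume hosting the VMs:

```powershell
# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the volume; UsageType Default equals 'General purpose file server'
Enable-DedupVolume -Volume 'D:' -UsageType Default

# Deduplicate files regardless of age (0 days)
Set-DedupVolume -Volume 'D:' -MinimumFileAgeDays 0
```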
Once per week, shut down all VMs and run this PS cmdlet per SSD volume for which dedup is enabled and configured: Start-DedupJob -Volume "D:" -Type Optimization -Memory 50
Let it run as long as it takes. With PS cmdlet Get-DedupJob you’ll see the progress of the running dedup job(s).
With the PS cmdlet Get-DedupStatus you’ll see the actual dedup status of the dedup enabled volumes:
When dedup is ready, fire up the VMs and you’re back in business! And of course, all these steps can be scripted with PowerShell as well. And this PS script can be scheduled as required.
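As mentioned, the whole weekly routine can be scripted; a minimal sketch (the volume is an assumption, adjust it to your own lab):

```powershell
# Shut down all running VMs gracefully
Get-VM | Where-Object State -eq 'Running' | Stop-VM

# Run the optimization job and wait for it to finish
Start-DedupJob -Volume 'D:' -Type Optimization -Memory 50 -Wait

# Show the savings, then start the VMs again
Get-DedupStatus -Volume 'D:'
Get-VM | Start-VM
```

Put this in a .ps1 file and schedule it with Task Scheduler, and the weekly dedup run takes care of itself.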
Resource saver 04: Dynamic Memory
With dynamic memory you can squeeze the maximum utilization out of the available RAM. Even 'more' when using Windows Server 2016 WITHOUT a GUI, since this installation option has a far smaller footprint on the available resources.
As such you can run VMs hosting AD domain controllers and DNS while consuming only 675 MB of RAM! And with the dynamic memory config you can set the limit to 1024 MB max.
This way you get the most of the available RAM of your Hyper-V server.
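Configured with PowerShell, such a dynamic memory setup could look like this (the VM name and exact values are just examples):

```powershell
# DC01 starts with 675 MB, can shrink to 512 MB and never grows beyond 1024 MB
Set-VMMemory -VMName 'DC01' -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 675MB -MaximumBytes 1024MB
```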
Recap
Sure, everything can be put into the cloud. But guess what? Running 20+ VMs in Azure isn't cheap. One saves a LOT of money when hosting those same VMs on an oversized desktop as a test lab.
When using it smartly, with all the resource savers I mentioned before, you'll squeeze the max out of it while still having reasonable performance.
And when combined with Splashtop you can remotely wake up the test lab when required (some additional one-time router configuration is needed here). As such this test lab doesn't have to run 24/7 but is only fired up when required.
Background information
Some years ago I bought myself a new system to function as my personal test lab. Since the budget didn't allow for a state-of-the-art system, I had to puzzle a lot. Yes, I needed storage with high IO, a reasonably fast CPU, and fast AND lots of RAM.
But again, the budget was limited. So after a lot of research I spent every euro of the allocated budget and got myself maximum value for money. All based on PC (desktop) hardware and not a single piece of server hardware, because that was way outside the budget. But still, the system I finally got allowed me to build my own test lab, running 16 VMs and still delivering good performance!
Since the system allowed for growth, in the past years I added more RAM, additional SSDs for storage and upgraded the CPU as well. On the server OS side of things the lab ran Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 and now Windows Server 2016.
The NIC ‘issue’
But I was always a bit hesitant to upgrade the parent Windows Server OS since the Intel desktop motherboard (DZ68DB series) in this system has some quirks. The integrated Intel 82579 Gigabit NIC won’t install by default on a Windows Server OS. It requires some additional steps in order to make it work. The reason here is that the driver BLOCKS the installation on any Windows Server OS by default!
In itself understandable, but quite frustrating after having spent all my available budget on my new-to-be test lab!
So with every new Windows Server OS upgrade I went through the same challenge. Of course, I could use another NIC instead. And believe me, I tried! But here another quirk came up: that other NIC (I tried different brands with different chipsets) never worked!
In my other systems the same NIC worked without breaking a sweat, but in the would-be server it was a no-go. No matter what I tried. And believe me, I went deep! So I HAD to make the onboard Intel 82579 Gigabit NIC work, no matter what!
Intel 82579 Gigabit NIC vs ME: 0-1!!!!
When Windows Server 2016 went GA I decided to upgrade my whole lab to this new Server OS. So I had to face the challenge, making the Intel 82579 Gigabit NIC work with Windows Server 2016.
Last weekend it was show time! And to my surprise I finally found out myself how to address it rather quickly and within less than an hour, Windows Server 2016 installed the driver, resulting in a fully functional NIC!
I decided to share this, since the same approach can be used for making any Intel desktop NIC work on Windows Server 2016.
How the west was won
First Windows Server must be put into 'test mode', so that it accepts the installation of unsigned drivers. Follow this procedure:
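The procedure (shown as screenshots in the original posting) presumably boils down to enabling test signing from an elevated prompt and rebooting:

```powershell
# Run from an elevated prompt; allows test-signed/unsigned drivers to be installed
bcdedit /set testsigning on

# Reboot so the setting takes effect
shutdown /r /t 0
```

Once the NIC works, test mode can be left again with bcdedit /set testsigning off, followed by another reboot.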
After the reboot the server is in test mode, as shown in the lower right corner of the desktop.
Now it's time to get the hardware IDs of the Intel NIC. You'll need those IDs later on.
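One way to retrieve those hardware IDs, besides Device Manager, is with PowerShell; a sketch, assuming the NIC shows up as a network class PnP device:

```powershell
# List the hardware IDs of all network class devices
Get-PnpDevice -Class Net |
    Get-PnpDeviceProperty -KeyName 'DEVPKEY_Device_HardwareIds' |
    Select-Object -ExpandProperty Data
```

If the NIC has no driver at all yet, it may show up under a different (unknown) class instead, in which case Device Manager's Details tab (Hardware Ids property) is the safer bet.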
With this information it’s time to ‘hack’ the INF file so the driver will install just fine.
And as stated before, this method can be used with any other Intel NIC. Just be sure to use the correct hardware IDs.
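For illustration only, a hypothetical fragment of what such an INF 'hack' typically looks like: the model line for your hardware ID is copied from the client OS section into the matching Windows Server section of the Intel INF file (the section, description and device names below are made up; use the IDs retrieved from your own system):

```ini
; Hypothetical example - the line below would normally only exist in the
; client OS section and is copied into the server (NTamd64.10.0) section:
[Intel.NTamd64.10.0]
%E1503NC.DeviceDesc% = E1503.10.0, PCI\VEN_8086&DEV_1503
```

Because the INF was modified, its signature no longer matches, which is exactly why the server has to be in test mode before the driver will install.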
A few days ago Microsoft released an update for their Windows Server OS MP, version 6.0.7323.0.
Unfortunately the MP guide for this updated MP is still in review mode:
Apparently the 'author' was a bit busy and forgot to finalize this important document…
But the changes in this MP are:
As such the changes aren't that big. This update is more aimed at aligning this MP with the Windows Server 2016 MP, since both use the same Server OS library MPs.
For a few weeks now the Windows Server 2016 MP (version 10.0.8.0) is available for download.
With the release of this MP, Microsoft breaks with the tradition that a single Windows Server OS MP covers all OS versions under Mainstream Support, since this MP 'only' covers Windows Server 2016 installations, Nano Server included.
The MP can be downloaded from here.
For some months the OMS Gateway with SCOM Support was in public preview.
Now it’s GA with these two significant updates:
You can either download the OMS Gateway from your OMS Workspace or the Azure Portal.
Want to know more? Go here.
I thought I already understood all there was to know about Resource Pools. But heck no! I wish I had known what Kevin has just posted when I wrote the chapter 'Complex Configurations' for the SCOM 2012 Unleashed book.
But back then I didn't know. Now I do. There is much more to Resource Pools than I thought possible. And they can be modified as well!
Totally awesome! Want to know more? Read Kevin’s posting and be amazed, just like me!
All credits go to Kevin Holman for sharing, AND to his colleague Mihai Sarbulescu, who turns out to be the SCOM Resource Pool guru! Chances are this man knows a lot more SCOM stuff as well, so perhaps other mind-blowing postings are to be expected in the near future?
Some background information
In June 2015 Microsoft completed the acquisition of BlueStripe Software. Their flagship product being BlueStripe FactFinder, a dynamic monitoring solution which maps, monitors and troubleshoots distributed applications across heterogeneous operating systems and multiple datacenter and cloud environments.
In itself very impressive, and when combined with SCOM it got even more impressive, since it extended SCOM to an unprecedented level. It enabled SCOM to dynamically discover multi-layered applications, build DAs on the fly, and show real-time performance monitoring in the SCOM Console as well.
Sadly, when BlueStripe Software was acquired by Microsoft, the flagship product was pulled. Only updates for existing customers were available, but that was about it.
OMS it is…
Until now that is. Microsoft and the former BlueStripe people have worked hard in order to fold the BlueStripe FactFinder functionality into OMS as a Solution, branded Service Map, previously called Application Dependency Monitor.
It’s in Public Preview, so you can test drive it for free.
When you want to know more about this new OMS Solution, go here and read the whole article about Service Map, its capabilities and possibilities, written by Nick Burling, Principal Program Manager on the Enterprise Cloud Management Team.
As stated earlier, SCCM uses a new approach for its updates. Three to four times per year an update for SCCM becomes available. As a result, Microsoft now speaks of CaaS, ConfigMgr-as-a-Service.
IMHO, it's a success. But who am I at the end of the day? Only one man with a blog, that's all. And sure, I get positive feedback from my customers when I ask them about their (update) experiences with CaaS.
But still, that only represents a small sample, especially when we talk about SCCM/ConfigMgr.
So gladly Microsoft has published some numbers which are impressive:
Want to know more? Read this posting by Brad Anderson and be – just like me – amazed & impressed.
With SCCM 1511 a whole new update mechanism was introduced. In this new approach the Windows 10 update mechanism (where updates are pushed out in so-called 'rings') is used by SCCM 1511 and later as well.
As such SCCM is growing into a Software-as-a-Service model, titled CaaS, ConfigMgr-as-a-Service. As a result the latest & greatest version of SCCM is dubbed ConfigMgr Current Branch.
For all of my customers this approach works great. No more Googling required in order to see whether their SCCM environment is up to date. Instead the SCCM Console itself tells the admins when an update is available.
And it doesn’t end there. SCCM also aids in rolling out the upgrade! Of course, a backup of the related VMs and SQL database is always advised, but still SCCM itself aids you in upgrading to the Current Branch, by:
As such, rolling out an upgrade of SCCM/ConfigMgr has evolved from a tedious and sometimes even hideous task into a controlled workflow which is pretty solid.
This results in faster adoption of the Current Branch. So in order 'to keep up' one has to invest less time, fewer resources and less budget.
System Center-as-a-Service?
Therefore I am hoping that one day the rest of the System Center 2016 stack will adopt the same approach as used by SCCM today.
It would lessen the administration burden significantly and help companies to grow into the idea that System Center-as-a-Service (SCaaS?) is good, helping them to adopt Azure based workloads and services even faster.
Hopefully Microsoft will choose this approach one day. Please let me know how YOU think & feel about such an approach.
Ever wanted to test drive OMS without having to connect your own environment to it? So you can see what it does, how it works and what kind of services OMS can deliver for your organization?
Especially for this kind of scenario Microsoft has made the Operations Management Suite Experience Center.
What it offers/does? As Microsoft states: ‘…You will log-in as an administrator for an enterprise organization, Contoso. The environment has 500 servers, running on-premises as well as the cloud – in both Azure and AWS. The on-premises system is managed by System Center, and the key workloads being monitoring include; Exchange, SharePoint, SQL, and even MySQL running on Linux…’
With the OMS Experience Center you can test OMS without uploading a single bit of data from your own servers. This will help you to build a proper business case for your organization to start using OMS with its own servers.
Want to test drive OMS? Go to the Operations Management Suite Experience Center and sign up!
Suppose you've got a ConfigMgr 1606 (or older) environment and have heard about the Current Branch 1610 being available. However, as it's rolled out globally, it might take some time before it's available in your region.
As such it might not show up yet in your ConfigMgr environment:
Now there are two things you can do: WAIT until it's available (ConfigMgr will let you know when it's there), OR run a PS script which puts you in the first wave of customers getting the update, AKA the Early Update Ring.
This PS script is made by the Configuration Manager team, so you know it's good. The script can be downloaded from here.
How it works
Easy job!
Last Friday Microsoft released the November Refresh for Azure Stack. Many deployment fixes and Azure PaaS services are added!
You can download it from here.
As an added bonus, the tip from Charles Joy on Twitter: increase your MaxPasswordAge. Your Azure Stack POC environment will last a lot longer now!
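On the POC's domain controller that could, for example, be done with the Active Directory PowerShell module (the domain name and value below are just examples, not the actual Azure Stack POC defaults):

```powershell
# Stretch the maximum password age of the POC domain to 180 days
Set-ADDefaultDomainPasswordPolicy -Identity 'azurestack.local' -MaxPasswordAge '180.00:00:00'
```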
Microsoft released Azure Advisor under public preview.
As Microsoft states: ‘…While it’s easy to start building applications on Azure, making sure that the underlying Azure resources are setup correctly and being used optimally can be a challenging task…’
Therefore Microsoft released Azure Advisor which is ‘…a personalized recommendation engine that provides proactive best practices guidance for optimally configuring your Azure resources…’
What it does? Again, as Microsoft states: ‘…Azure Advisor analyzes your resource configuration and usage telemetry to detect risks and potential issues. It then draws on Azure best practices to recommend solutions that will reduce your cost and improve the security, performance, and reliability of your applications…’
Please know this service is in Public Preview, so you can use it for free. When it will become generally available, and under what pricing, I don't know. Yet, IMHO this service will help many companies utilize their Azure resources in an optimal manner.
Want to know more? Go here.
Many times I am asked the above question. Gladly related to my interest in technologies, that is. Mostly the question comes down to: how do you keep up with all the new technologies and developments?
The answer is quite simple actually. I love to watch videos and with Microsoft Mechanics on Twitter I am quickly informed about ‘the latest & greatest’.
They have tons of videos (many of them only 60 seconds), podcasts, interviews and so on, all about what’s new and how it works.
Of course, in 60 seconds one won't learn the deeper stuff, but you still learn where to place it in the bigger picture. And when requiring more knowledge there are other sites like Microsoft Virtual Academy, Microsoft Channel 9, lots of blogs and so on. And when that's not enough, simply Google it (sorry, I don't use Bing).
And yes, Microsoft Mechanics IS a MARKETING channel. So you've got to cut through the marketing mumbo jumbo. But even when that slicing is done, there is still a lot of worthwhile information to be found there.
Wonder what containers are (besides the obvious that is) and why IT in general and you specifically should know? And what containers can do for you (your company, your customers that is)?
Not wanting to read tons of pages, but under a hundred, and STILL get a good basic understanding of containerized applications (because that's what I am talking about)?
Yeah, I know. Your field is IT Operations. So why should you care about application development? Let alone application life cycle? I mean, you’re NOT a developer. Duh!
Well guess what. The world we know is changing! And not at a light pace, but with the speed of light. As such it comes in handy to know 'a bit' more about the world around you, the new technologies and the new world.
No, it's not there YET. But things are changing already. And my guess is that containers are the next BIG thing and will make the revolution introduced by virtualization look like a walk in the park.
Hopefully I’ve made you curious. If so, READ the FREE e-book, all about Containerized Docker Application Lifecycle With Microsoft Platform & Tools.
And yes, 60 pages in total. Nice isn’t it?
A few days ago Microsoft released an update for the Office 365 MP, version 7.1.5134.0.
Changes in this MP (taken directly from the related MP guide):
Office 365 MP, version 7.1.5134.0 can be downloaded from here.
Update (11-17-2016): Based on some valid feedback from a reader I added a section about costs. Thanks for the feedback, much appreciated.
This kind of question I am asked by many customers today. In their own environment they're running SCOM 2012 R2. They know SCOM 2016 is GA and that OMS also has a lot to offer.
Good bye SCOM & hello OMS?
So why not skip SCOM 2016 altogether and move their monitoring into OMS? Especially since OMS also uses the Microsoft Monitoring Agent (MMA), uses Intelligence Packs (IPs, the OMS equivalent of SCOM MPs) and offers a Gateway as well (AKA the OMS Log Analytics Forwarder).
In itself a logical question, which isn't answered that easily however, simply because it depends on how you're using SCOM today.
SCOM = Monitoring. OMS = Log Analytics +++
To put it simply, SCOM is a purebred monitoring tool with some basic log analytics capabilities. On the other hand, OMS is a super-enhanced log analyzer with some (still basic) monitoring capabilities folded into it.
So when you’re using SCOM in order to monitor workloads, distributed applications and so on, whether on-premise or in the cloud or anything in between, SCOM is still the place to go and the product to use.
Rich Alerting required? SCOM is the product to use
Also when you have SCOM alerting people when something is wrong in the monitored environment, SCOM is still the product to use, because at this moment OMS has only some basic alerting capabilities built into it. Whereas SCOM has predefined Alerts by default (based on the imported MPs), OMS doesn't, so most of the Alerts have to be defined manually by you. Which is quite a challenge, because you have to think up every possible situation requiring an Alert.
Log analytics required? OMS!
However, when you require a powerful log analytics tool with many preconfigured solutions, like security & auditing, SQL Assessment, AD Assessment and so on, OMS is the product to use. Or better, service.
The speed, the dashboarding and the possibilities to 'dig through the collected data' are totally awesome and unmatched by SCOM. And believe me, SCOM will never get to that level, ever.
So when you require hard log analytics capabilities, OMS is the place to be.
SCOM & OMS. Better together
The good thing is, SCOM & OMS can be combined. So you have the power of SCOM (rich monitoring and alerting) and the log analytics power of OMS. You've got the best of both worlds.
As we already know it’s quite easy to attach SCOM to OMS and from there, have a (sub)set of SCOM monitored servers (whether Windows or Linux) uploading data to OMS as well.
So now you have the power of SCOM and OMS. Totally awesome. The fun thing is, you can try this for free. OMS still offers a free data plan. It’s limited in the solutions it has, but still it will give you a good insight of the capabilities and power of OMS.
This brings me to another important topic: costs.
Costs
When your company already has a Software Assurance licensing agreement with Microsoft, chances are it has licenses for the entire System Center suite as part of that SA. In that case, leveraging OMS will result in only an incremental cost on top of your current System Center licenses. Otherwise you will wind up using the ingestion model at $2.30 per GB.
So it's certainly worth the effort to find out whether your company has an SA in place with licenses for the entire System Center suite. When that's the case, you may use OMS at lower costs than expected.
If not, there is still the free data plan available, allowing you to test drive some OMS functionalities for free.
SCOM 2012 R2 or SCOM 2016?
When you’re on SCOM 2012 R2 level I strongly advise to upgrade to SCOM 2016. Why? There are many reasons, this is the Top 3:
The future
Microsoft goes by the mantra ‘Cloud & Mobile First’. So it’s evident that OMS will keep on growing BIG time. Things we’re missing at this moment (like real monitoring, objects and health states included) with rich Alerting, are most likely to be added sometime in the future. Until then however, SCOM is the product delivering this functionality out of the box.
So SCOM still has a valid business case, and will have one for years to come. Nonetheless, it can't and won't hurt to take a look at OMS and start using it (the free data plan is a good start). Also combine it with SCOM and go from there.
What surprises me the most is the pace of growth in OMS. In less than two years, tons of new features are added. And that pace of growth won’t lessen. I know that for sure. So we’ll see new features, improvement of the existing ones and so on.
Recap
When running SCOM 2012 R2 for rich monitoring and Alerting, SCOM is still the product to use. However, this doesn’t exclude the usage or use case scenario’s for OMS.
OMS delivers rich and enhanced log analytics capabilities. Combined with SCOM, you've got yourself a rich monitoring and log analytics platform at hand, allowing you to drill deeper into the very core of your IT assets than you ever imagined.
It will be an exciting journey, starting with SCOM 2016 on-premise and OMS in the cloud.
Note: This article contains copied text from this article written by The Scripting Guys, a Microsoft blog all about PowerShell and OMS.
Last August Microsoft introduced the advanced detection capability in OMS Security. It scans more than seven billion events per day(!) and analyzes them to generate useful detections.
OMS Security advanced detections are provided as a service, which means that customers don’t have to create or maintain the infrastructure and write threat detection rules. Microsoft does it for them on a global scale and brings Microsoft’s vast security knowledge and tools into play.
Microsoft is continuously adding new patterns and new detection types to keep up with the latest attack techniques, and keeps monitoring the detections to reduce false positives as much as it can.
Since yesterday this service is available in Europe as well and is automatically enabled for all OMS Security customers.
Want to know more about this powerful feature, which is RTU (Ready To Use) without requiring any configuration at all, except for rolling out the Microsoft Monitoring Agent to the systems you want to cover, or connecting your SCOM environment to OMS? Go here.
Remark
OMS is growing in capabilities and coverage on an almost weekly, if not daily, basis. Features like this one are really useful and offer good insight into how secure your organization really is and whether there are breaches. Normally it would take a lot of time, resources and money to roll out such a service. And now it's available with just a few mouse clicks, for a very affordable price!
For me this is a typical showcase of the power of the cloud and the services it has to offer.
Since a few days the new Integration Packs (IPs) for Orchestrator 2016 are available for download:
Click on the links of the IPs you require for your environment.
In this posting I'll write about my experiences upgrading one of my SCOM 2012 R2 UR#11 environments to SCOM 2016 GA (SCOM 2016 + UR#01). The SCOM 2012 R2 environment is rather small, but still representative of many SCOM 2012 R2 environments, since it consists of two SCOM Management Servers and a few SCOM Agents.
Because the upgrade process of a SCOM Agent to SCOM 2016 is the same no matter the amount of them, IMHO the upgrade of my SCOM 2012 R2 UR#11 environment is applicable to many SCOM environments. The only thing lacking here is at least one SCOM Gateway Server. Since a Gateway Server is in essence a SCOM Agent with some additional features, upgrading such a server is a walk in the park, especially compared to upgrading a SCOM Management Server.
This is my test lab:
Side note: yes, it's a small environment, but it runs locally on my laptop, alongside a SCCM environment providing FEP functionality for all VMs, an additional SQL server and a DC. So I think it's still quite something for an average notebook.
All servers involved run Windows Server 2012 R2. SQL Server 2012 SP1 x64 is used for the SQL instance hosting the SCOM SQL databases.
There is much to tell, so let’s start.
A 01: Pre-Upgrade Tasks
NEVER EVER skip the Pre-Upgrade Tasks! Preparation is key; otherwise the upgrade is likely to fail, which is bad. And one more thing:
!!!BACKUP!!!
Before you start: BACKUP! ALL SCOM Management Servers, to be more specific. When they run as VMs, make snapshots or clones. And for the SCOM SQL databases: make VALID backups!
Microsoft recently published a TechNet article all about the Pre-Upgrade Tasks. So I won’t repeat them but only highlight some steps here. STILL TAKE CARE TO COVER ALL STEPS AS MENTIONED IN THIS TECHNET ARTICLE!!!
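As a minimal sketch, the database backups can be scripted with sqlcmd. The instance name, database names and backup path below are assumptions; adjust them to your own environment:

```powershell
# Sketch: full backups of both SCOM databases via sqlcmd.
# Instance name, database names and backup path are assumptions.
$instance = 'SQL01\OM'
$backups  = @{
    'OperationsManager'   = 'D:\Backup\OperationsManager.bak'
    'OperationsManagerDW' = 'D:\Backup\OperationsManagerDW.bak'
}
foreach ($db in $backups.Keys) {
    sqlcmd -S $instance -E -Q "BACKUP DATABASE [$db] TO DISK = N'$($backups[$db])' WITH CHECKSUM, INIT"
    # Verify the backup file before trusting it (a backup you can't restore is no backup).
    sqlcmd -S $instance -E -Q "RESTORE VERIFYONLY FROM DISK = N'$($backups[$db])' WITH CHECKSUM"
}
```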
Steps I want to give additional attention are highlighted in yellow:
A 02: Pre-Upgrade Steps I like to add
In addition to the TechNet article mentioned earlier, there are a few pre-upgrade steps I would like to add here.
B 01: Upgrade – First SCOM Management Server
Now it’s time to start the upgrade, and I start with the first SCOM 2012 R2 UR#11 Management Server, also hosting the RMSE (RMS Emulator) role. It also hosts the SCOM Web Console and the Console.
Additional information regarding the required UR level:
For the upgrade itself Microsoft has also recently published an updated TechNet article, to be found here. And as you can see, SCOM 2012 R2 doesn’t have to be on UR#11 level, since the upgrade can also be run from UR#9. My guess is, however, that most SCOM 2012 R2 environments will already be on UR#11. When yours is still on UR#9, you don’t have to roll out UR#11 first. Just make sure you meet all requirements and go through all pre-upgrade tasks successfully; then you’re ready to upgrade to SCOM 2016 just as well.
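Checking what you're currently running is quickly done from the Operations Manager Shell; a sketch:

```powershell
# Sketch: list all Management Servers with their version before starting the upgrade.
Import-Module OperationsManager
Get-SCOMManagementServer |
    Select-Object DisplayName, Version |
    Sort-Object DisplayName
# Note: the Version property reflects the base build; to confirm the exact UR level,
# also check the installed updates list or the UR KB article's file versions.
```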
I won’t post all screenshots, but only the most important ones. And apologies for the lower quality of those screenshots: I recorded the upgrade with the built-in Steps Recorder, not knowing the screens are saved at a lower quality.
And YES, I stopped & disabled all SCOM services on all other SCOM Management Servers which aren’t being upgraded at that moment.
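Stopping and disabling those services can be scripted too; a sketch, where the server name is an assumption and the three service names are the SCOM Data Access, Management Configuration and Management Server services:

```powershell
# Sketch: stop and disable the three SCOM services on the Management Servers
# that are NOT being upgraded yet. Server name(s) are assumptions.
$otherManagementServers = 'SCOM02'
$scomServices = 'OMSDK', 'cshost', 'HealthService'

foreach ($server in $otherManagementServers) {
    foreach ($service in $scomServices) {
        Get-Service -Name $service -ComputerName $server | Stop-Service -Force
        Set-Service -Name $service -ComputerName $server -StartupType Disabled
    }
}
```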
B 02: Upgrade – Second SCOM Management Server
This upgrade runs much faster, since the related SCOM databases have already been successfully upgraded (the upgrade flags set in those databases tell the installer to skip them) and this second SCOM Management Server only runs the Console, NOT the Web Console.
Since I stopped & disabled the SCOM services here before upgrading the first SCOM Management Server, I now start them and set them to start automatically BEFORE upgrading this SCOM Management Server, and STOP & DISABLE them on the FIRST SCOM Management Server, which is already upgraded. DON’T FORGET THIS!!!
Now I start the SCOM services on the first SCOM Management Server and set them to start automatically. I also start the SCOM website in IIS. Now the most crucial SCOM components are upgraded to SCOM 2016 RTM. There are no SCOM Gateway Servers in my environment to upgrade.
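The reverse of the earlier step can be sketched like this (the website name is an assumption; check IIS Manager for the actual name of the SCOM web site in your environment):

```powershell
# Sketch: re-enable and start the SCOM services on the already upgraded
# Management Server, then start the SCOM web site in IIS.
$scomServices = 'OMSDK', 'cshost', 'HealthService'
foreach ($service in $scomServices) {
    Set-Service -Name $service -StartupType Automatic
    Start-Service -Name $service
}

# Website name is an assumption; verify it in IIS Manager first.
Import-Module WebAdministration
Start-Website -Name 'Default Web Site'
```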
B 03: Upgrade – Install SCOM 2016 UR#01
Now it’s time to install SCOM 2016 UR#01. The installation order is the same as for the SCOM 2012 R2 URs:
So I start with the first SCOM Management Server which also hosts the SCOM Web Console and Console. In this case these SCOM 2016 components are touched:
After this I upgrade the second SCOM Management Server, also hosting the Console.
In this case these SCOM 2016 components are touched:
Then I run the SQL query (as stated earlier, only the OpsMgr database is touched):
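Running that UR SQL script can also be done from the command line; a sketch, where the instance name and script path are assumptions (always verify the exact script name and location against the UR#01 KB article):

```powershell
# Sketch: apply the UR SQL script against the operational database only.
# Instance name and script path are assumptions; verify both in the UR KB article.
$instance = 'SQL01\OM'
$script   = 'C:\Program Files\Microsoft System Center 2016\Operations Manager\Server\SQL Script for Update Rollups\update_rollup_mom_db.sql'
sqlcmd -S $instance -E -d OperationsManager -i $script
```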
After that I import the SCOM 2016 core MPs:
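Importing those MPs goes quickly from the Operations Manager Shell; a sketch, where the folder path is an assumption (the MP files come from the UR package):

```powershell
# Sketch: import the updated core MPs shipped with the UR.
# The folder path is an assumption; the .mpb files are extracted there from the UR package.
Import-Module OperationsManager
Get-ChildItem -Path 'C:\Temp\UR01-MPs' -Filter '*.mpb' |
    ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }
```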
And now – except for the SCOM Agents that is (still on SCOM 2012 R2 UR#11 level) – SCOM is on 2016 UR#01 level.
C 01 – Upgrade – The Aftermath
Now it’s time to wrap it all up with these steps:
Wrap up
Since SCOM 2016 isn’t that much of a change compared to SCOM 2012 R2, the upgrade is less likely to fail, especially when you’re sure all components (underlying Windows Server OS and SQL instances included) meet the SCOM 2016 requirements AND you respect the pre-upgrade tasks.
Compared to all other SCOM upgrades I have done before, this was the easiest one. Nonetheless: PREPARATION is KEY!!!
Also: when your current SCOM 2012 R2 environment comes from SCOM 2012 SP1 or even older, THINK TWICE before upgrading to SCOM 2016. Chances are things will break during the upgrade, so seriously consider the side-by-side scenario, in which a new SCOM 2016 environment is rolled out alongside your current SCOM 2012 R2 environment and monitoring is gradually moved to the new SCOM 2016 UR#01 environment.