
Latest articles

Didier Van Hoye
Cloud and Virtualization Architect. Didier is an IT veteran with over 20 years of expertise in Microsoft technologies, storage, virtualization, and networking. Didier primarily works as an expert advisor and infrastructure architect.
  • Didier Van Hoye
  • February 3, 2017

Upgrade your CA to KSP & SHA256. Part II: Move from a CSP to a KSP

Once you have moved to at least Windows Server 2008 R2 you can take this step. Any version below that doesn't allow for it and should be considered end of life. Many haven't made the move from a CSP to a KSP yet, even when they are already running Windows Server 2012 or 2012 R2, for a few reasons. There were some issues with older clients like Windows Server 2003 and Windows XP. These were fixed with a hotfix, but in all seriousness, if you're still on those OS versions you need to move a.s.a.p., and if not, there's nothing we can do to help you. A modern and secure PKI will be the least of your worries, I'm afraid. For a Microsoft reference, see Migrating a Certification Authority Key from a Cryptographic Service Provider (CSP) to a Key Storage Provider (KSP).
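To give you a feel for what the move looks like, here is a minimal sketch of the flow described in that Microsoft reference. The backup folder, the PFX password and the CA name ("MyRootCA") are placeholders for illustration; use your own values and follow the official article for the full procedure.

```powershell
# Sketch of the CSP-to-KSP key migration flow (placeholder paths, password and CA name).

# 1. Stop the CA service and back up the CA certificate and private key.
Stop-Service CertSvc
certutil -p "P@ssw0rd" -backupkey "C:\CABackup"

# 2. Delete the existing CA certificate and the key stored in the legacy CSP.
certutil -delstore My "MyRootCA"
certutil -delkey "MyRootCA"

# 3. Re-import the backed-up PFX into the Key Storage Provider (KSP).
certutil -p "P@ssw0rd" -csp "Microsoft Software Key Storage Provider" -importpfx "C:\CABackup\MyRootCA.p12"

# 4. Point the CA configuration at the KSP and at SHA256.
certutil -setreg ca\csp\Provider "Microsoft Software Key Storage Provider"
certutil -setreg ca\csp\ProviderType 0
certutil -setreg ca\csp\CNGPublicKeyAlgorithm RSA
certutil -setreg ca\csp\CNGHashAlgorithm SHA256

# 5. Start the CA again.
Start-Service CertSvc
```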
Read more
  • Didier Van Hoye
  • January 31, 2017

Upgrade your CA to KSP & SHA256. Part I: Setting the Stage

Many Certificate Authority servers that were installed on Windows Server 2003 never got upgraded until Microsoft ended support for Windows Server 2003. Some of those are still out there running today. A massive number of them were set up in an era when Wi-Fi became very popular in the SME market and CA servers were deployed to easily secure access to it. To be fair, a lot of administrators didn't wait for Windows Server 2003 support to expire and made sure their CA was more or less up to date by upgrading it in place. That alone is something to commend. However, the operating system version only introduces the capability of using modern, more secure providers and algorithms. It doesn't upgrade the ones used by the PKI automatically for you. So many of these upgraded PKI servers are still using an old cryptographic provider, the "Microsoft Strong Cryptographic Provider", and an old hash algorithm (SHA1) that's been deprecated (see SHA1 Deprecation: What You Need to Know) or even banned.
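If you want a quick idea of where your own CA stands before planning the move, the CA's CSP configuration and its certificate tell the story. A small sketch of the checks (output names are what you would typically see, not guaranteed):

```powershell
# Show which provider and hash algorithm the CA is currently configured to use.
# On an un-migrated CA you will typically see "Microsoft Strong Cryptographic Provider" and SHA1.
certutil -getreg ca\csp\Provider
certutil -getreg ca\csp\HashAlgorithm      # legacy CSP hash setting
certutil -getreg ca\csp\CNGHashAlgorithm   # only present once the CA uses a KSP

# The CA certificate itself shows the signature algorithm (e.g. sha1RSA vs. sha256RSA).
certutil -ca.cert ca.cer
certutil -dump ca.cer | Select-String 'Algorithm'
```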
Read more
  • Didier Van Hoye
  • September 19, 2016

Windows Server 2016 Hyper-V Backup Rises to the Challenges

In Windows Server 2016, Microsoft improved Hyper-V backup to address many of the concerns raised in our previous post, Hyper-V backup challenges Windows Server 2016 needs to address:
  • They avoid the need for agents by making the APIs remotely accessible. It's all WMI calls directly to Hyper-V (see the sketch after this list).
  • They implemented their own change block tracking (CBT) mechanism for Windows Server 2016 Hyper-V to reduce the amount of data that needs to be copied during every backup. Any backup vendor can leverage it, which takes the responsibility of building CBT away from the backup vendors and makes it easier for them to support new Hyper-V releases faster. It also avoids the need to insert drivers into the IO path of the Hyper-V hosts. Sure, testing & certification still has to happen, as all vendors can now be impacted by a bug Microsoft introduced.
  • They are no longer dependent on the host VSS infrastructure. This eliminates the storage overhead as well as the storage fabric IO overhead and the performance issues that come with host-level VSS snapshots of the entire LUN/CSV for even a single VM. It helps avoid the need for hardware VSS providers delivered by storage vendors and delivers better results with storage solutions that don't offer hardware providers. Storage vendors and backup vendors can still integrate this with their snapshots for speedy and easy backups and restores. But as the backup work at the VM level is separated from an (optional) host VSS snapshot, the performance hit is smaller and the total duration is significantly reduced.
  • It's efficient in regard to the amount of data that needs to be copied to the backup target and stored there. This reduces the capacity needed and, for some vendors, the almost hard dependency on deduplication to make the solution feasible in regard to cost.
  • These capabilities are available to anyone (backup vendors, storage vendors, home-grown PowerShell scripts …) who wishes to leverage them and don't prevent them from implementing synthetic full backups, merging backups as they age, etc. It's capable enough to allow great backup solutions to be built on top of it.
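As a small illustration of that agentless, WMI-driven model, the sketch below queries the Hyper-V WMI v2 namespace on a host and takes a per-VM checkpoint rather than a host-level VSS snapshot of a whole CSV. "HV01" and "DC01" are placeholder host and VM names; a real backup product drives the underlying APIs itself, this is only meant to show where the plumbing lives.

```powershell
# The agentless model: talk to the Hyper-V WMI v2 namespace on the host, locally or remotely.
$vms = Get-CimInstance -ComputerName HV01 -Namespace root\virtualization\v2 -ClassName Msvm_ComputerSystem |
    Where-Object Caption -eq 'Virtual Machine'
$vms | Select-Object ElementName, EnabledState

# The same plumbing surfaces through the Hyper-V module, e.g. a checkpoint of a single VM
# instead of a host-level VSS snapshot of the entire LUN/CSV.
Checkpoint-VM -ComputerName HV01 -Name DC01 -SnapshotName 'Backup reference (demo)'
```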
Read more
  • Didier Van Hoye
  • September 12, 2016

Hyper-V backup challenges Windows Server 2016 needs to address

Personally, I have been very successful at providing good backup designs for Hyper-V in both small and larger environments, with budgets that range from "make do" to "well-funded". How does one achieve this? Two factors. The first factor is knowing the strengths and limitations of the various Hyper-V versions when you design the backup solution. Bar the ever better scalability, performance and capabilities of each new version of Hyper-V, the improvements in backup from 2012 to 2012 R2, for example, were a prime motivator to upgrade. The second factor of success is that I demand a mandate and control over the infrastructure stack to do so. In many cases you are not that lucky and can't change much in already existing environments. Sometimes not even in new environments, when the gear and solutions have already been chosen and purchased and the design is deployed before you get involved.
Read more
  • Didier Van Hoye
  • August 25, 2016

Don’t Fear but Respect Redirected IO with Shared VHDX

When we got Shared VHDX in Windows Server 2012 R2 we were quite pleased as it opened up the road to guest clustering (Failover clustering in virtual machines) without needing to break through the virtualization layer with iSCSI or virtual Fibre Channel (vFC).
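To make that concrete, here's a minimal sketch of wiring up a shared VHDX for two guest-cluster nodes on a Cluster Shared Volume. The VM names, path and size are placeholders for illustration.

```powershell
# Create a shared VHDX on a CSV and attach it to both guest-cluster nodes.
New-VHD -Path 'C:\ClusterStorage\Volume1\GuestClusterData.vhdx' -Fixed -SizeBytes 100GB

foreach ($node in 'FileNode1', 'FileNode2') {
    Add-VMHardDiskDrive -VMName $node `
        -Path 'C:\ClusterStorage\Volume1\GuestClusterData.vhdx' `
        -SupportPersistentReservations   # this switch is what makes the VHDX shareable
}
```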
Read more
  • Didier Van Hoye
  • August 19, 2016

Musings on Windows Server Converged Networking & Storage

Too many people still perceive Windows Server as "just" an operating system (OS). It's so much more. It's an OS, a hypervisor, and a storage platform with a highly capable networking stack. Both virtualization and cloud computing are driving the convergence of all these roles forward fast, with intent and purpose. We'll position the technologies & designs that convergence requires and look at their implications for a better overall understanding of this trend.
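As a taste of what that convergence looks like in practice, here is a sketch of a typical Windows Server 2016 converged design: a Switch Embedded Teaming (SET) vSwitch over two physical NICs, with host vNICs for management and SMB/storage traffic carved out of it. The adapter, switch and vNIC names are placeholders.

```powershell
# One SET team / vSwitch over the physical NICs, host vNICs for management and storage on top.
New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'NIC1', 'NIC2' `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

Add-VMNetworkAdapter -ManagementOS -SwitchName 'ConvergedSwitch' -Name 'Management'
Add-VMNetworkAdapter -ManagementOS -SwitchName 'ConvergedSwitch' -Name 'SMB1'
Add-VMNetworkAdapter -ManagementOS -SwitchName 'ConvergedSwitch' -Name 'SMB2'
```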
Read more
  • Didier Van Hoye
  • June 27, 2016

Windows Server 2016 Makes a 100% In-Box, High-Performance VDI Solution a Realistic Option

With Windows Server 2016 we have gained some very welcome capabilities to do cost-effective VDI deployments using all in-box technologies. The main areas of improvement are storage, RemoteFX, and Discrete Device Assignment (DDA) for hardware pass-through to the VM. Let's take a look at what's possible now and think out loud about the possible solutions as well as their benefits and drawbacks.
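For the Discrete Device Assignment part, a hedged sketch of handing a GPU to a VM on Windows Server 2016 looks roughly like the following. "VDI01" is a placeholder VM name, the display-device lookup is simplified, and GPUs typically also need MMIO space configured per the vendor's guidance.

```powershell
# Sketch: pass a GPU through to a VM with Discrete Device Assignment (Windows Server 2016).
$vmName = 'VDI01'
Set-VM -VMName $vmName -AutomaticStopAction TurnOff

# Find the GPU's location path and detach it from the host.
$gpu = Get-PnpDevice -Class Display | Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Hand the device to the VM.
Add-VMAssignableDevice -VMName $vmName -LocationPath $locationPath
```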
Read more
  • Didier Van Hoye
  • April 28, 2016

A closer look at NUMA Spanning and virtual NUMA settings

With Windows Server 2012, Hyper-V became truly NUMA-aware. A virtual NUMA topology is presented to the guest operating system. By default, the virtual NUMA topology is optimized to match the NUMA topology of the physical host. This enables Hyper-V to deliver optimal performance for virtual machines with high-performance, NUMA-aware workloads where large numbers of vCPUs and lots of memory come into play. A great and well-known example of this is SQL Server.
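A quick look at the relevant knobs, as a sketch; the VM name "SQL01" is a placeholder:

```powershell
# Inspect NUMA spanning at the host level and the virtual NUMA topology a VM gets.
Get-VMHost | Select-Object NumaSpanningEnabled
Get-VMProcessor -VMName 'SQL01' |
    Select-Object MaximumCountPerNumaNode, MaximumCountPerNumaSocket

# Disabling NUMA spanning forces VMs to fit within a single physical NUMA node
# (restarting the Hyper-V Virtual Machine Management service is typically needed for it to take effect).
Set-VMHost -NumaSpanningEnabled $false
```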
Read more
  • Didier Van Hoye
  • March 22, 2016

Need Hard Processor Affinity for Hyper-V?

The need, or perceived need, for hard CPU processor affinity stems from a desire to offer the best possible guaranteed performance. The use cases for this do exist, but the problems they try to solve or the needs they try to meet might be better served by a different design or architecture, such as dedicated hardware. This is especially true when the requirement is limited to a single virtual machine, or only a few, needing lots of resources and high performance, mixed into an environment where maximum density is a requirement. In such cases, the loss of flexibility for the Hyper-V CPU scheduler in selecting where to source the time slices of CPU cycles is detrimental. The high performance requirements of such VMs also mean turning off NUMA spanning. Combining processor affinity and high performance with maximum virtual machine density is a complex order to fulfill, no matter what.
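Hyper-V does not expose hard processor affinity, but the scheduler's resource controls get you part of the way toward guaranteed capacity. A sketch, with "HeavyVM" as a placeholder name:

```powershell
# No hard affinity in Hyper-V, but CPU resource controls can guarantee capacity for a demanding VM.
# Reserve/Maximum are percentages of the VM's allotted vCPU capacity.
Set-VMProcessor -VMName 'HeavyVM' -Reserve 100 -Maximum 100 -RelativeWeight 200
Get-VMProcessor -VMName 'HeavyVM' | Select-Object Count, Reserve, Maximum, RelativeWeight
```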
Read more