Microsoft SQL Server is the backbone of many businesses, but when it comes to high availability, which path should you take: Always On Availability Groups (AG) or Failover Cluster Instances (FCI)?
Part 2 discussed adding the Linux server to Veeam managed servers and adding immutable repositories, along with incompatible configs that Veeam will block if you try to use them. Today, you will take a deeper look at the entire configuration: how immutability works, scenarios involving bad actors, how their attacks can succeed, and how to test that everything is working as intended.
Part 1 discussed what a hardened repository was, how Veeam B&R V11 had everything to achieve that, and how to set up Linux for those purposes. Now, you will learn how to add the Linux server to Veeam managed servers and how to add immutable repositories. You’ll also learn about incompatible configs that Veeam will block if you try to use them.
Veeam Backup & Replication V11 introduced the ability to build your own immutable, hardened backup repository. There’s no more need for compatible third-party solutions like WORM disk storage. Now, you can do it with any server whose storage meets the requirements, running one of several supported Linux distros with XFS.
No one needs to be told that the efficiency and variety of security measures keeping your systems up and running have improved considerably in recent years. However, you can’t be ready for everything, so it’s good to make sure your backup’s got your back, in this case for Office 365 users.
One vital element of cybersecurity is keeping all your IT resources up-to-date. In your Azure Kubernetes Service (AKS), for instance, Microsoft patches your nodes at night. Sometimes, you may not even know it. Sometimes, for the patch to activate, you need to reboot the node. But checking everything manually each time a patch comes out is a hassle.
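The manual check itself is simple: AKS Linux nodes are Ubuntu-based by default, and on Ubuntu a pending patch leaves a marker file behind at `/var/run/reboot-required` (this is the same sentinel that tools like kured watch to automate node reboots). Below is a minimal Python sketch of that per-node check; the marker paths follow the standard Ubuntu convention, and the script is illustrative rather than a full automation.

```python
from pathlib import Path

# On Ubuntu-based nodes, unattended upgrades drop this marker file
# when an installed patch requires a reboot to take effect.
REBOOT_MARKER = Path("/var/run/reboot-required")
PKGS_MARKER = Path("/var/run/reboot-required.pkgs")


def needs_reboot(marker: Path = REBOOT_MARKER) -> bool:
    """Return True if the node has a patch waiting on a reboot."""
    return marker.exists()


def pending_packages(pkgs: Path = PKGS_MARKER) -> list[str]:
    """List the packages that triggered the reboot requirement, if recorded."""
    if not pkgs.exists():
        return []
    return [line.strip() for line in pkgs.read_text().splitlines() if line.strip()]


if __name__ == "__main__":
    if needs_reboot():
        pkgs = ", ".join(pending_packages()) or "unknown packages"
        print(f"Reboot required ({pkgs})")
    else:
        print("No reboot pending.")
```

Run per node (e.g., via a DaemonSet or node shell), this answers the "did last night's patch need a reboot?" question without guesswork; kured essentially wraps this check in a controlled drain-and-reboot loop.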
Getting comprehensive information on each and every virtual machine (VM) is a pain. To make things easier, Microsoft introduced Azure Resource Graph. You can now use a simple PowerShell script and receive contextual VM data at scale across your given set of subscriptions. No need to surf through every single one separately anymore.
Finally, you won’t have to use Fling software to use the Advanced VMware Cross vCenter vMotion capability (XVM). The feature has been introduced as part of vSphere in the recent vCenter Server 7 U1c update. Now, you can bulk-migrate VMs between vCenter Server instances without requiring both of them to be part of the same SSO domain.
The issue of providing block-level storage for Windows Subsystem for Linux (WSL) is quite mysterious. Presently, iSCSI isn’t available in WSL out-of-the-box. But the good news is that you can enable it, although it’s not a straightforward path. WSL2, having a real custom Linux kernel inside, can help you initiate and complete the process.
Azure DevOps is a very powerful product that encompasses the whole application lifecycle, including DevOps capabilities. Release Pipelines is one of the features that helps automate the testing and delivery of applications to end-users across multiple stages. But to use it properly, you should know how to couple it with PowerShell Tasks.
As you remember from my previous article, I have been interested in testing the performance levels of two virtual SAN configurations from different vendors. I got my results, but this experience prompted me to continue. Here, I’ve chosen to try another configuration for performance comparison, albeit with a slightly different list of participants.
Since VMware vSAN needs no introduction, I’d like to say a few words about its companion, Ceph. Basically, it is an object-based software storage platform. I know that doesn’t sound epic at all, but Ceph is also completely fault-tolerant, uses off-the-shelf hardware, and is extremely scalable. The most interesting thing is that some Ceph releases support erasure-coded data pools, making them less resource-hungry than traditional replicated pools. In practice, that means the following: when you store an object in a Ceph storage cluster, the algorithm divides it into data and coding chunks stored on different OSDs (that way, the system can lose an OSD without actually losing the data).
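To make the data-and-coding-chunks idea concrete, here is a toy Python sketch of the simplest possible erasure code: k data chunks plus one XOR parity chunk. Ceph’s erasure-coded pools use more general codes (e.g., Reed–Solomon via the jerasure plugin, with configurable k and m), but the recovery principle is the same: losing any single chunk, data or parity, still lets you rebuild the object.

```python
from functools import reduce


def split_with_parity(obj: bytes, k: int) -> list[bytes]:
    """Split an object into k equal data chunks plus one XOR parity chunk.

    Pads with zero bytes so all chunks are the same length; in a real
    cluster each chunk would be placed on a different OSD.
    """
    size = -(-len(obj) // k)              # ceiling division: chunk size
    obj = obj.ljust(size * k, b"\0")      # pad so chunks divide evenly
    data = [obj[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data)
    return data + [parity]


def rebuild(chunks: list) -> bytes:
    """Recover one lost chunk (marked None) by XOR-ing the survivors,
    then reassemble the original (padded) object from the data chunks."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    assert len(missing) <= 1, "a single XOR parity survives only one loss"
    if missing:
        survivors = [c for c in chunks if c is not None]
        chunks[missing[0]] = reduce(
            lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors
        )
    return b"".join(chunks[:-1])          # drop the parity chunk
```

With k = 3, a 12-byte object costs 16 bytes on disk instead of the 24 bytes a two-way replicated pool would need, which is exactly why erasure coding is the less resource-hungry option; the price is the extra CPU work on write and on recovery.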
That’s when I thought that, theoretically, Ceph could make a good virtualization platform (with proper configuration, of course), so I had to see whether it would be justified in terms of time and resources spent. Naturally, I could hardly have done that without a credible comparison, hence VMware vSAN (with a similar configuration, of course; otherwise it would make no sense).
So, shall we?