Microsoft SQL Server is the backbone of many businesses, but when it comes to high availability, which path should you take: Always On Availability Groups (AG) or Failover Cluster Instances (FCI)?
Microsoft is always looking for newer and faster ways for its customers to embrace cloud services. Even if you are not moving any workloads to Azure, Microsoft wants your company to integrate with the cloud. Software-as-a-Service (SaaS) platforms are a simple way to start, even for the Internet of Things (IoT), which is why the Redmond company is announcing IoT Central.
I have previously set up VPN access on Windows Server 2012 R2 on a few occasions, but I hadn't yet tested it on the newly released Windows Server 2016. The Remote Access role provides VPN connectivity that protects the network connection between a remote client and the server, shielding both sides from attacks and data sniffing, because the VPN protocol runs a tunnel inside a standard data connection.
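To make the tunnelling idea concrete, here is a minimal Python sketch (not from the article) that wraps a plain TCP connection in TLS, which is conceptually how SSL-based VPN protocols such as SSTP carry traffic through an encrypted tunnel over an ordinary connection; the endpoint name is a placeholder, not a real service.

```python
import socket
import ssl

# Hypothetical VPN endpoint, for illustration only.
VPN_HOST = "vpn.example.com"

context = ssl.create_default_context()

with socket.create_connection((VPN_HOST, 443)) as raw_tcp:
    # wrap_socket() layers an encrypted channel over the plain TCP
    # connection; anything sent through `tunnel` crosses the wire as
    # ciphertext, so a sniffer between the two sides sees nothing useful.
    with context.wrap_socket(raw_tcp, server_hostname=VPN_HOST) as tunnel:
        tunnel.sendall(b"traffic that would otherwise travel in the clear")
```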
The goal of this article is to eliminate any possible confusion in the process of configuring the StarWind Virtual Tape Library together with Microsoft System Center Data Protection Manager. Integrating with SCDPM gives you a consolidated view of alerts across all your DPM 2016 servers. Alerts are grouped by disk or tape, data source, protection group, and replica volume, which simplifies troubleshooting. The console complements this grouping by separating issues that affect only one data source from problems that impact multiple data sources, and by distinguishing backup failures from infrastructure problems.
On September 28th, Microsoft released Virtual Network peering to general availability (GA). This new functionality gives you the opportunity to connect two virtual networks in Azure directly, using the Azure datacenter network. Bye bye, site-to-site (S2S) VPN between virtual networks 🙂
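For reference, here is a hedged sketch of what creating one side of a peering looks like with the azure-mgmt-network Python SDK; the subscription, resource group, and VNet names below are placeholders, and older SDK releases expose create_or_update rather than begin_create_or_update.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SubResource, VirtualNetworkPeering

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Peer vnet-a to vnet-b over the Azure backbone, no S2S VPN gateway needed.
peering = VirtualNetworkPeering(
    remote_virtual_network=SubResource(
        id="/subscriptions/<subscription-id>/resourceGroups/rg-demo"
           "/providers/Microsoft.Network/virtualNetworks/vnet-b"
    ),
    allow_virtual_network_access=True,  # let VMs in each VNet reach the other
)

client.virtual_network_peerings.begin_create_or_update(
    "rg-demo", "vnet-a", "vnet-a-to-vnet-b", peering
).result()

# Peerings are directional: repeat the call from vnet-b back to vnet-a.
```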
These days, more and more companies need high-quality, reliable, and efficient server hardware. Home labs, used by IT enthusiasts and professionals for software development and testing, studying for IT certifications, and building virtual environments, have become popular as well. Small companies are also interested in cheap, compact servers whose production workload amounts to a couple of virtual machines or networking applications. Supermicro has held one of the leading positions in server development for a long time, with products ranging from high-end clusters to microservers. Recently the company released two compact servers: the SuperServer E200-8D and its younger sibling, the SuperServer E300-8D.
Running Linux virtual machines in a cloud service like Amazon Web Services (AWS) or Microsoft Azure has become very common practice. You can find pretty much anything you need in the way of Linux applications, including appliances and even PaaS technologies that run Linux in the backend. Now Ubuntu has decided to go a bit further than that and offers an OS version with a kernel customized for AWS.
The past month could be characterized as something of a performance-and-upgrades challenge, since one of the constant calls I hear is "application X is going too slow". Of course, a month ago it was fine, but today it isn't, and normally this comes down to increasing load. One common fix for increasing load is to add more vCPU and RAM, but that can often cause its own set of problems, especially when NUMA boundaries are crossed and vCPU contention pushes things a little too far.
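To see why crossing a NUMA boundary hurts, consider a toy sizing check in Python; the host topology below (two 10-core sockets with 128 GB of RAM per node) is an assumed example, not a figure from the article.

```python
# Assumed host topology: 2 sockets x 10 cores, 128 GB of RAM per NUMA node.
CORES_PER_NODE = 10
RAM_PER_NODE_GB = 128

def fits_in_one_numa_node(vcpus: int, ram_gb: int) -> bool:
    """True if the VM can be scheduled entirely within a single NUMA node."""
    return vcpus <= CORES_PER_NODE and ram_gb <= RAM_PER_NODE_GB

# An 8 vCPU / 96 GB VM stays node-local; bump it to 12 vCPUs and it spans
# two nodes, so some memory accesses become remote and slower, which is
# how "just add more vCPU" can make an application slower, not faster.
print(fits_in_one_numa_node(8, 96))   # True
print(fits_in_one_numa_node(12, 96))  # False
```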
It is not new that the VMware vCenter Server Appliance (VCSA) is a very popular option, especially for small businesses, which can save money on an additional Windows Server license. It is a prepackaged, preconfigured virtual appliance with a PostgreSQL database, the vCenter Server 6.5 components, and (in case you deploy the "all-in-one" VM) the Platform Services Controller, which contains all of the services necessary for running vCenter Server, such as vCenter Single Sign-On, the License Service, and the VMware Certificate Authority.
Previously, we discussed how data might be classified and segregated so that policies could be developed to place data on infrastructure in a deliberate manner – that is, in a way that optimizes data access, storage resources and services, and storage costs over the useful life of the data itself. From the standpoint of cognitive data management, data management policies constitute the instructions or programs that the cognitive engine processes to place and move data on and within the infrastructure over time.
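As an illustration of "policies as programs", here is a small Python sketch of the kind of placement rule a cognitive engine might evaluate; the tier names, classification labels, and age thresholds are invented for the example, not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    classification: str  # e.g. "critical" or "routine" (invented labels)
    age_days: int

def placement_policy(asset: DataAsset) -> str:
    """Map a data asset to a storage tier based on its class and age."""
    if asset.classification == "critical" and asset.age_days < 30:
        return "performance-tier"  # optimize access for hot, important data
    if asset.age_days < 365:
        return "capacity-tier"     # balance access speed against storage cost
    return "archive-tier"          # minimize cost over the data's remaining life

print(placement_policy(DataAsset("quarterly-sales.db", "critical", 7)))
# -> "performance-tier"
```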
In my previous blog post, I covered the SharePoint 2016 installation process. The next logical step was to configure the SharePoint app catalog so that I could add the K2 for SharePoint app, and since I had covered this process earlier on my personal blog, I expected it to be a small task. Indeed, following the steps from my old post, I created the catalog in just minutes, but, alas, I ran into loads of warnings while running the K2 for SharePoint AppDeployment.exe. I sorted most of them out, but after seeing additional warnings telling me that extra configuration was required just because I was using HTTP instead of HTTPS, I decided it was better to re-create my app catalog using HTTPS.