Microsoft SQL Server is the backbone of many businesses, but when it comes to high availability, which path should you take: Always On Availability Groups (AG) or Failover Cluster Instances (FCI)?
Most enterprise businesses have heard of Microsoft Exchange Server. It is Microsoft's platform for email, scheduling, and tools for custom collaboration and messaging applications, and it is installed on Windows Server operating systems. Its main aim is not just to let workers inside an organization communicate but to collaborate. You can install Exchange Server 2016 on Windows Server 2016 in two ways: through the graphical Setup wizard or via an unattended command-line installation.
In this article, I'll focus on the GUI-based installation.
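For comparison, the unattended route boils down to a single command run from an elevated prompt in the folder containing the extracted Exchange setup files. This is only a sketch; the Mailbox role is an assumed choice for illustration:

```shell
# Unattended Exchange 2016 install of the Mailbox role.
# Run from an elevated prompt in the extracted setup folder.
# The /IAcceptExchangeServerLicenseTerms switch is required in unattended mode.
.\Setup.exe /mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms
```

Setup will then run its prerequisite checks and install the role without further prompts.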
This is the third article in my series on reviewing and understanding Azure Stack (previous articles: Azure Stack Release and Deployment Models). Now that we have a good grasp of the platform, how the Integrated Systems come into play, and the features available, I want to extend the discussion to two other matters important to existing Azure customers: how the support model will work with hardware and software from different vendors in place, and how Azure Stack interconnects with the already existing Azure Pack solution.
As you may know, VMware Labs posts a handful of utilities for VMware administrators to make managing a vSphere virtualization infrastructure easier. These tools are developed by VMware engineers, the community, and open-source contributors. Today I would like to highlight some of the latest tools available to download and implement.
You may have heard it called software-defined storage (SDS), referring to a software stack that dedicates an assemblage of commodity storage hardware to a virtualized workload, or hyper-converged infrastructure (HCI), referring to a hardware appliance with a software-defined storage stack and perhaps a hypervisor pre-configured and embedded. Either way, this "revolutionary" approach to building storage was widely hailed as the best hope for bending the storage cost curve once and for all. With storage spending accounting for a sizable percentage (often more than 50%) of a medium-to-large organization's annual IT hardware budget, you probably welcomed the idea of an SDS/HCI solution when it surfaced in the trade press, in webinars, and at conferences and trade shows a few years ago.
Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show. As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used. Assets that can be converted quickly to cash (preferably with minimal loss in value) in order to meet immediate and short-term obligations are considered "liquid." When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency, manageability, resiliency, or scalability. High-liquidity storage supports any workload operating under any OS, hypervisor, or container technology, accessed via any protocol (network file systems, object storage, block network, etc.), without sacrificing data protection, capacity scaling, or performance optimization.
This article describes the deployment of a Ceph cluster on a single instance, commonly called "Ceph-all-in-one." As you may know, Ceph is a unified software-defined storage system designed for performance, reliability, and scalability. With Ceph, you can build an environment of practically any size: you can start with a single node and scale out with no hard limit. I will show you how to build a Ceph cluster on top of one virtual machine (or instance). You should never use such a scenario in production; it is for testing purposes only. This series of articles will guide you through the deployment and configuration of different Ceph cluster builds.
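One detail worth noting up front: Ceph's defaults assume replicas live on separate hosts, so a single-node cluster never reaches a healthy state unless those defaults are relaxed. A minimal sketch of the relevant ceph.conf settings for an all-in-one test cluster (values shown are assumptions for a throwaway lab, not production guidance):

```ini
[global]
# Keep only one copy of each object, since there is only one node:
osd pool default size = 1
# Place replicas per-OSD instead of per-host, so a single host satisfies CRUSH:
osd crush chooseleaf type = 0
```

With these in place before the monitor and OSDs are created, a one-node cluster can reach HEALTH_OK.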
Sooner or later, every IT professional comes to the idea of having a lab. There are a million reasons why you would need one: learning new technologies, improving skills, trying crazy ideas you would never dare to try in the production network, you name it. Even though it is a work-related activity for most home labbers, for many of us it is just another hobby. That's why people spend so many hours of their personal time building a homelab, investing significant funds in new hardware, thoroughly planning its setup, looking for help in online communities, or sharing their experience to help others. There is a whole universe of home labbers, and I am happy to be part of this community. In this post, I would like to share my experience with the three generations of home labs I have had so far, along with my thoughts about the next generation.
A lot of the time, I see and speak to people asking about DR solutions when what they really want is HA with a few backups, so I wanted to use a blog article to go through some of the technical terms used in conjunction with DR. When people say "I want DR," I ask them what sort of disasters they are looking to protect against, and most of the time the response is "I want to keep working if my hypervisor crashes." That, however, is a high-availability requirement, not disaster recovery.
These days, almost every company that stores its own data on-premises, be it virtual machines, databases, or just plain files, needs to be sure that this data won't be lost to an accident or human error. Moreover, backup solutions are usually expected to be flexible, feature-rich, and to use storage-optimization technologies for better cost-efficiency. The additional requirement of long-term data retention significantly increases the overall cost of the final solution, which has driven demand for public clouds such as Amazon Web Services and Microsoft Azure as backup repositories.
In my previous post, I covered Azure Stack finally being released to the Integrated Systems (Dell EMC, HPE, and Lenovo). That article clarified what Azure Stack actually is and the features and functionality available. In this second part, we will go further, reviewing how much it is going to cost, the deployment alternatives, Azure integration, and the disconnected scenario.