Google and Rackspace are collaborating on a new server design based on IBM POWER9 processors. The design is expected to be shared as part of the Open Compute Project, and the hardware will use the 48V Open Compute rack specification developed jointly by Google and Facebook.
Learn More
Here is a part of a series about the Microsoft Resilient File System (ReFS), first introduced in Windows Server 2012. It describes an experiment conducted by StarWind engineers to see ReFS in action. This part focuses on the FileIntegrity feature of the file system: its theoretical application and its practical performance under a real virtualization workload. The feature is responsible for data protection in ReFS and is essentially the reason for the “resilient” in its name. Its goal is to avoid the common errors that typically lead to data loss. In theory, ReFS can detect and correct data corruption without disturbing the user or disrupting the production process.
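Under the hood this protection comes from checksums (integrity streams): metadata, and optionally file data, is checksummed on write and verified on read, and a corrupt block is repaired from a healthy mirror copy where one exists. As a purely conceptual sketch of that verify-and-repair cycle (this is not ReFS’s actual logic, and all names here are illustrative):

```python
import hashlib

def checksum(block: bytes) -> str:
    # ReFS uses its own internal checksums; SHA-256 here is purely illustrative.
    return hashlib.sha256(block).hexdigest()

def read_with_integrity(block: bytes, stored_sum: str, mirror=None) -> bytes:
    """Verify a block on read; repair from a mirror copy when corruption is found."""
    if checksum(block) == stored_sum:
        return block                 # checksum matches: data is intact
    if mirror is not None and checksum(mirror) == stored_sum:
        return mirror                # transparent repair from the healthy copy
    raise IOError("corruption detected and no healthy copy available")

# A silent bit flip is caught on read and repaired from the mirror copy:
data = b"payload"
stored = checksum(data)
assert read_with_integrity(b"paYload", stored, mirror=data) == data
```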
Learn More
This is a short overview of the Microsoft Resilient File System, or ReFS. It introduces the subject and gives a brief look at its main characteristics and theoretical use. As part of a series of posts dedicated to ReFS, it serves as an introduction to the practical posts; all the experiments showing how ReFS actually performs are also listed in the blog. ReFS looks like a strong replacement for NTFS, and its resilience is most valuable in cases where data loss is critically unacceptable. The file system works together with Microsoft Storage Spaces Direct to perform automatic corruption repairs without any user intervention.
Learn More
Personally, I am getting rather tired of the dismissive tone adopted by virtualization and cloud vendors when you raise the issue of disaster recovery. We previously discussed the limited scope of virtual systems clustering and failover: active-passive and active-active server clusters with data mirroring are generally inadequate for recovery from interruption events with a footprint larger than a given equipment rack or subnetwork. Extending mirroring and cluster failover over distances greater than 80 kilometers is a dicey strategy, especially given the impact of latency and jitter on data transport over WAN links, which can create data deltas that prevent successful application or database recovery altogether.
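To put a rough number on that distance limit (my own back-of-the-envelope figures, not the author’s): signals in fiber propagate at roughly 200,000 km/s, so a synchronous write over an 80 km link pays at least a 0.8 ms round trip before any switching, storage, or protocol overhead is counted:

```python
# Back-of-the-envelope propagation math for synchronous replication.
# The fiber speed is an assumed rule-of-thumb value, not a measurement.
SPEED_IN_FIBER_KM_S = 200_000  # roughly 2/3 the speed of light in a vacuum

def replication_penalty(distance_km):
    one_way_ms = distance_km / SPEED_IN_FIBER_KM_S * 1000
    rtt_ms = 2 * one_way_ms              # a synchronous write waits for the ack
    max_serial_writes = 1000 / rtt_ms    # ceiling for strictly serialized writes
    print(f"{distance_km:>5} km: RTT >= {rtt_ms:.2f} ms, "
          f"<= {max_serial_writes:,.0f} serialized sync writes/s")

for km in (10, 80, 400, 1000):
    replication_penalty(km)
```

Add WAN queuing and jitter on top of that floor and the replica inevitably falls behind the primary, which is exactly the kind of data delta the author warns about.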
Learn More
Snapshots in VMware vSphere often cause configuration and performance problems unless they are used properly: for live backup of virtual machines, or for temporarily preserving a VM configuration before an update.
In large infrastructures, however, their use is unavoidable. At some point you may need to delete or consolidate virtual machine snapshots (the Delete All button in Snapshot Manager), an operation that is time-consuming and demanding in terms of storage performance. It is therefore useful to know in advance how long it will take.
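One practical way to learn that is to time a consolidation on representative VMs during a test window. A minimal pyVmomi sketch along those lines (the vCenter address, credentials, and VM name are placeholders, and error handling is omitted):

```python
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Locate the VM by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")

# RemoveAllSnapshots_Task is the API call behind Snapshot Manager's "Delete All".
start = time.time()
WaitForTask(vm.RemoveAllSnapshots_Task())
print(f"Delete All took {time.time() - start:.0f} s")

Disconnect(si)
```

The elapsed time scales mainly with how much data has accumulated in the snapshot delta disks, so measure with deltas of realistic size.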
Learn More
According to IDC's quarterly report, EMC's Data Domain leads the purpose-built backup appliance market.
Learn More
In part three of this multi-part blog series, we covered how to configure OMS to collect data through integration with System Center Operations Manager (SCOM) and through direct connections to individual servers. We also covered how Microsoft Operations Management Suite uses Solutions to deliver insights into your log data, providing a cost-effective, all-in-one cloud management solution so you can better protect guest workloads in Azure, AWS, Windows Server, Linux, VMware, and OpenStack.
Learn More
In part two of this multi-part blog series, we covered how to configure OMS to collect data through integration with System Center Operations Manager (SCOM) and through direct connections to individual servers. We also covered how Microsoft Operations Management Suite uses Solutions to deliver insights into your log data, providing a cost-effective, all-in-one cloud management solution so you can better protect guest workloads in Azure, AWS, Windows Server, Linux, VMware, and OpenStack.
Learn More
The need, or perceived need, for hard CPU processor affinity stems from a desire to offer the best possible guaranteed performance. Use cases for this do exist, but the problems they try to solve, or the needs they try to meet, might be better served by a different design or architecture, such as dedicated hardware. This is especially true when the requirement is limited to a single virtual machine, or only a few, needing lots of resources and high performance while mixed into an environment where maximum density is a requirement. In such cases, the loss of flexibility in the Hyper-V CPU scheduler's choice of where to source the time slices of CPU cycles is detrimental. The high performance requirements of such VMs also mean turning off NUMA spanning. Combining processor affinity and high performance with maximum virtual machine density is a tall order to fulfill, no matter what.
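Hyper-V does not expose hard per-VM processor affinity, so there is nothing to configure there; but to make the concept itself concrete, this is the kind of hard pinning an operating system exposes to a process (a Linux-only call, shown purely as an illustration):

```python
import os

# Hard CPU affinity at the OS level (Linux): pin this process to CPUs 0 and 1.
# The scheduler may never run it anywhere else, even when those CPUs are busy
# and others sit idle, which is exactly the loss of flexibility described above.
os.sched_setaffinity(0, {0, 1})  # pid 0 means the calling process
print("pinned to CPUs:", sorted(os.sched_getaffinity(0)))
```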
Learn More
The new, significantly improved Raspberry Pi 3 was released this month. Improvements include a quad-core 64-bit ARM processor, an upgraded graphics processor, and a built-in wireless adapter. To meet its storage needs, Western Digital has released a specialized low-profile hard drive called PiDrive.
Learn More