Storage Spaces Direct optimized pool
Storage Spaces Direct can optimize a storage pool to balance data equally across the set of physical
drives that make up the pool. Over time, as physical drives are added or removed, or as data is written
or deleted, the distribution of data among those drives can become uneven. In some cases, this might
result in certain physical drives becoming full while other drives in the same pool have much lower
consumption.
Similarly, if you add new storage to the pool, optimizing the existing data to use the new storage
results in better storage efficiency across the pool and, potentially, improved performance from the
additional physical storage throughput. Optimizing the pool is a maintenance task that the
administrator performs.
When the optimize pool command is run, Storage Spaces Direct moves data among the physical
drives in the pool. The data movement is a background operation, designed to minimize the impact
on foreground or tenant workloads.
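For example, an administrator could start and monitor the rebalance with the Storage module cmdlets in Windows PowerShell. This is a minimal sketch; the pool friendly name shown here is an assumption, so substitute the name from your own deployment.

# List the non-primordial pool that Storage Spaces Direct created.
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus, Size, AllocatedSize

# Start the background rebalance of data across the physical drives in the pool.
# "S2D on Cluster01" is an example friendly name.
Optimize-StoragePool -FriendlyName "S2D on Cluster01"

# The optimization runs as a background storage job; check its progress.
Get-StorageJob | Select-Object Name, JobState, PercentComplete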
Failure scenarios
Storage Spaces Direct addresses various failure scenarios. To understand how this works, you first
need to review some basic information about virtual drives.
A virtual drive consists of extents, each of which is 1 GB in size. A 100 GB virtual drive therefore
consists of one hundred 1 GB extents. If the virtual drive is mirrored (using ResiliencySettingName),
there are multiple copies of each extent. The number of copies of an extent (obtained by using
NumberOfDataCopies) can be two or three. For example, a 100 GB mirrored virtual drive with three
data copies consumes 300 extents. The placement of extents is governed by the fault domain, which in
Storage Spaces Direct is the storage node (StorageScaleUnit). As shown in Figure 2-61, the three copies of an
extent (A) are placed on three different storage nodes; for example, nodes 1, 2, and 3 in the figure.
Another extent (B) of the same virtual drive might have its three copies placed on other nodes; for
example, nodes 1, 3, and 4, and so on. This means that a virtual drive has its extents distributed across
all storage nodes and that the copies of each extent are placed on different nodes. Figure 2-61 depicts a
four-node deployment with a mirrored virtual drive with three copies and an example layout of
extents.
Figure 2-61: A four-node deployment
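To see these settings on an existing virtual drive, you could query it with Get-VirtualDisk. This is a minimal sketch; the friendly name "Volume01" is an assumption.

# Show the resiliency setting, the number of data copies, and the fault domain that
# governs extent placement; FootprintOnPool reflects the total pool capacity consumed
# by all copies (for example, roughly 300 GB for a 100 GB three-copy mirror).
Get-VirtualDisk -FriendlyName "Volume01" |
    Select-Object FriendlyName, ResiliencySettingName, NumberOfDataCopies, FaultDomainAwareness, Size, FootprintOnPool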
Next, let’s take a look at various failure scenarios and examine how Storage Spaces handles them.