.. |_| unicode:: 0xA0 
   :trim:

.. _Planning Node Hardware Configurations:

Planning Node Hardware Configurations
-------------------------------------

Virtuozzo Infrastructure Platform works on top of commodity hardware, so you can create a cluster from regular servers, disks, and network cards. Still, to achieve optimal performance, a number of requirements must be met and a number of recommendations should be followed.

.. _System Limits:

System Limits
~~~~~~~~~~~~~

The table below lists the current hardware limits for Virtuozzo Infrastructure Platform servers:

========  =================  ================
Hardware  Theoretical        Certified
========  =================  ================
RAM       64 TB              1 TB
CPU       5120 logical CPUs  384 logical CPUs
========  =================  ================

.. note:: A logical CPU is a core (thread) in a multicore (multithreading) processor.

Virtuozzo Infrastructure Platform management supports the following web browsers:

- Firefox, the current version and two preceding versions;
- Chrome, the current version and two preceding versions;
- Safari, the current version and two preceding versions;
- Microsoft Edge, the current version;
- Internet Explorer 11.

.. _Hardware Requirements:

Hardware Requirements
~~~~~~~~~~~~~~~~~~~~~

A Virtuozzo Infrastructure Platform cluster consists of a single management node and multiple storage and compute nodes. The two node types have different hardware requirements.

The following table lists the minimal and recommended hardware for the management node:

+-----------------+--------------------------------------------------------+-----------------------------------------------------------+
| Type            | Minimal                                                | Recommended                                               |
+=================+========================================================+===========================================================+
| CPU             | 64-bit x86 processors with AMD-V or Intel VT           | 64-bit x86 processors with AMD-V or Intel VT              |
|                 | hardware virtualization extensions enabled;            | hardware virtualization extensions enabled;               |
|                 |                                                        |                                                           |
|                 | 16 logical CPUs, including:                            | 32 logical CPUs (same as minimal plus more cores for      |
|                 |                                                        | workload)                                                 |
|                 | - 1.5 cores for management services,                   |                                                           |
|                 | - 4 cores for compute services,                        |                                                           |
|                 | - 2 cores for networking,                              |                                                           |
|                 | - 3.2 cores for storage services,                      |                                                           |
|                 | - and the rest for workload                            |                                                           |
+-----------------+--------------------------------------------------------+-----------------------------------------------------------+
| RAM             | 32GB, including:                                       | 64GB or more (same as minimal plus more memory for        |
|                 |                                                        | workload)                                                 |
|                 | - 1.2 GB for management services,                      |                                                           |
|                 | - 12 GB for compute services,                          |                                                           |
|                 | - 4.5 GB for storage services,                         |                                                           |
|                 | - and the rest for workload.                           |                                                           |
+-----------------+--------------------------------------------------------+-----------------------------------------------------------+
| Storage         | System+Metadata: 200GB SATA HDD                        | System+Metadata+Cache: One recommended enterprise-grade   |
|                 |                                                        | SSD with power loss protection; 200GB or more capacity;   |
|                 | Storage: 100GB SATA HDD                                | and 75 MB/s sequential write performance per serviced     |
|                 |                                                        | HDD. For example, a node with 10 HDDs will need an SSD    |
|                 |                                                        | with at least 750 MB/s sequential write speed             |
|                 |                                                        |                                                           |
|                 |                                                        | Storage: Four or more HDDs or SSDs; 1 DWPD endurance      |
|                 |                                                        | minimum, 10 DWPD recommended                              |
+-----------------+--------------------------------------------------------+-----------------------------------------------------------+
| Network         | 1GbE or faster network interface for storage and       | Two bonded 10GbE network interfaces for storage traffic;  |
|                 | a VLAN-tagged 1GbE for other traffic                   | two bonded VLAN-tagged 1GbE or 10GbE for other traffic    |
+-----------------+--------------------------------------------------------+-----------------------------------------------------------+

The following table lists the minimal and recommended hardware for a single compute or storage node:

+-----------------+---------------------------------------------------+-----------------------------------------------------------+
| Type            | Minimal                                           | Recommended                                               |
+=================+===================================================+===========================================================+
| CPU             | 64-bit x86 processors with AMD-V or Intel VT      | 64-bit x86 processors with AMD-V or Intel VT              |
|                 | hardware virtualization extensions enabled;       | hardware virtualization extensions enabled;               |
|                 |                                                   |                                                           |
|                 | 8 logical CPUs, including:                        | 32 logical CPUs (same as minimal plus more cores for      |
|                 |                                                   | workload)                                                 |
|                 | - 0.5 cores for management services,              |                                                           |
|                 | - 0.5 cores for compute services,                 |                                                           |
|                 | - 2 cores for networking,                         |                                                           |
|                 | - 2.2 cores for storage services, and             |                                                           |
|                 | - the rest for workload                           |                                                           |
+-----------------+---------------------------------------------------+-----------------------------------------------------------+
| RAM             | 8GB, including:                                   | 64GB or more (same minimal requirements plus more memory  |
|                 |                                                   | for workload)                                             |
|                 | - 0.2 GB for management services,                 |                                                           |
|                 | - 0.2 GB for compute services,                    |                                                           |
|                 | - 4 GB for storage services, and                  |                                                           |
|                 | - the rest for workload                           |                                                           |
+-----------------+---------------------------------------------------+-----------------------------------------------------------+
| Storage         | System: 100GB SATA HDD                            | System: 100GB SATA HDD                                    |
|                 |                                                   |                                                           |
|                 | Metadata: 100GB SATA HDD (on the first five nodes | Metadata+Cache: One or more recommended enterprise-grade  |
|                 | in the cluster)                                   | SSDs with power loss protection; 100GB or more capacity;  |
|                 |                                                   | and 75 MB/s sequential write performance per serviced     |
|                 | Storage: 100GB SATA HDD                           | HDD. For example, a node with 10 HDDs will need an SSD    |
|                 |                                                   | with at least 750 MB/s sequential write speed (on the     |
|                 |                                                   | first five nodes in the cluster)                          |
|                 |                                                   |                                                           |
|                 |                                                   | Storage: Four or more HDDs or SSDs; 1 DWPD endurance      |
|                 |                                                   | minimum, 10 DWPD recommended                              |
+-----------------+---------------------------------------------------+-----------------------------------------------------------+
| Network         | Two 1GbE or faster network interfaces for         | Two bonded 10GbE network interfaces for storage traffic;  |
|                 | storage and other traffic                         | two bonded 1GbE or 10GbE for other traffic                |
+-----------------+---------------------------------------------------+-----------------------------------------------------------+
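
As a quick sanity check for the "75 MB/s of sequential write performance per serviced HDD" rule in the tables above, the required SSD write speed can be derived from the HDD count. A minimal sketch; the function name and sample disk count are hypothetical:

.. code-block:: python

   def required_ssd_write_mbps(hdd_count, per_hdd_mbps=75):
       """Minimum sequential write speed (MB/s) a caching SSD should
       sustain to service ``hdd_count`` HDDs, per the rule above."""
       return hdd_count * per_hdd_mbps

   # A node with 10 HDDs needs an SSD sustaining at least 750 MB/s:
   print(required_ssd_write_mbps(10))  # 750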

.. _Hardware Recommendations:

Hardware Recommendations
~~~~~~~~~~~~~~~~~~~~~~~~

The following recommendations explain the benefits of specific hardware mentioned in the requirements tables and will help you configure the cluster hardware in an optimal way:

.. _Storage Cluster Composition Recommendations:

Storage Cluster Composition Recommendations
*******************************************

Designing an efficient storage cluster means finding a compromise between performance and cost that suits your purposes. When planning, keep in mind that a cluster with many nodes and few disks per node offers higher performance, while a cluster with the minimum number of nodes (five) and many disks per node is cheaper. See the following table for more details.

+------------------------+-----------------------------------------------+----------------------------------------------+
| Design considerations  | Minimum nodes (5), many disks per node        | Many nodes, few disks per node               |
+========================+===============================================+==============================================+
| Optimization           | Lower cost.                                   | Higher performance.                          |
+------------------------+-----------------------------------------------+----------------------------------------------+
| Free disk space to     | More space to reserve for cluster rebuilding  | Less space to reserve for cluster rebuilding |
| reserve                | as fewer healthy nodes will have to store the | as more healthy nodes will have to store the |
|                        | data from a failed node.                      | data from a failed node.                     |
+------------------------+-----------------------------------------------+----------------------------------------------+
| Redundancy             | Fewer erasure coding choices.                 | More erasure coding choices.                 |
+------------------------+-----------------------------------------------+----------------------------------------------+
| Cluster balance and    | Worse balance and slower rebuilding.          | Better balance and faster rebuilding.        |
| rebuilding performance |                                               |                                              |
+------------------------+-----------------------------------------------+----------------------------------------------+
| Network capacity       | More network bandwidth required to maintain   | Less network bandwidth required to maintain  |
|                        | cluster performance during rebuilding.        | cluster performance during rebuilding.       |
+------------------------+-----------------------------------------------+----------------------------------------------+
| Favorable data type    | Cold data (e.g., backups).                    | Hot data (e.g., virtual environments).       |
+------------------------+-----------------------------------------------+----------------------------------------------+
| Sample server          | Supermicro SSG-6047R-E1R36L (Intel Xeon       | Supermicro SYS-2028TP-HC0R-SIOM (4 x Intel   |
| configuration          | E5-2620 v1/v2 CPU, 32GB RAM, 36 x 12TB HDDs,  | E5-2620 v4 CPUs, 4 x 16GB RAM, 24 x 1.9TB    |
|                        | 1 x 500GB system disk).                       | Samsung SM863a SSDs).                        |
+------------------------+-----------------------------------------------+----------------------------------------------+

.. note::

   #. These considerations only apply if the failure domain is host.

   #. The speed of rebuilding in the replication mode does not depend on the number of nodes in the cluster.

   #. Virtuozzo Infrastructure Platform supports hundreds of disks per node. If you plan to use more than 36 disks per node, contact sales engineers who will help you design a more efficient cluster.
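
The rebuild-related rows in the table above can be made concrete with a rough model. The sketch below is a back-of-the-envelope estimate, not the product's actual rebuild logic, and all figures in it are hypothetical:

.. code-block:: python

   def rebuild_hours(node_capacity_tb, total_nodes, node_link_mbps):
       """Roughly estimate rebuild time after one node fails, assuming
       its data is re-replicated evenly across the survivors and each
       survivor's network link is the bottleneck (sketch assumptions)."""
       survivors = total_nodes - 1
       data_mb = node_capacity_tb * 1_000_000  # decimal TB -> MB
       return data_mb / (survivors * node_link_mbps) / 3600

   # 100 TB per node over 10GbE (~1,250 MB/s): five nodes rebuild in
   # about 5.6 hours, twenty nodes in about 1.2 hours.
   print(round(rebuild_hours(100, 5, 1250), 1))   # 5.6
   print(round(rebuild_hours(100, 20, 1250), 1))  # 1.2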

.. _General Hardware Recommendations:

General Hardware Recommendations
********************************

- At least five nodes are required for a production environment. This is to ensure that the cluster can survive the failure of two nodes without data loss.

- One of the strongest features of Virtuozzo Infrastructure Platform is scalability: the bigger the cluster, the better it performs. It is recommended to create production clusters from at least ten nodes for improved resiliency, performance, and fault tolerance.

- Even though a cluster can be created on top of varied hardware, using similar hardware in each node will yield better cluster performance, capacity, and overall balance.

- Any cluster infrastructure must be tested extensively before it is deployed to production. Such common points of failure as SSD drives and network adapter bonds must always be thoroughly verified.

- Running Virtuozzo Infrastructure Platform on top of SAN/NAS hardware with its own redundancy mechanisms is not recommended for production, as doing so may negatively affect performance and data availability.

- To achieve best performance, keep at least 20% of cluster capacity free.

- During disaster recovery, Virtuozzo Infrastructure Platform may need additional disk space for replication. Make sure to reserve at least as much space as any single storage node has (a capacity sketch follows this list).

- If you plan to use Acronis Backup Gateway to store backups in the cloud, make sure the local storage cluster has plenty of logical space for staging (keeping backups locally before sending them to the cloud). For example, if you perform backups daily, provide enough space for at least 1.5 days' worth of backups. For more details, see the `Administrator's Guide <https://dl.acronis.com/u/storage2/html/VirtuozzoStorage_2_admins_guide_en-US/exporting-storage-cluster-dat/connecting-abc-via-abgw.html#connecting-to-public-cloud-storage-via-virtuozzo-backup-gateway>`__.
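
The free-space recommendations above can be combined into a single back-of-the-envelope reserve estimate, as in the sketch below. Treating the reserves as additive is conservative (in practice the 20% performance headroom may overlap the rebuild reserve), and all names and figures are illustrative:

.. code-block:: python

   def space_to_reserve_tb(cluster_tb, largest_node_tb, daily_backup_tb=0.0):
       """Free space to keep, per the rules above: 20% of capacity for
       performance, one node's worth for rebuilds, and 1.5 days' worth
       of backups for Backup Gateway staging (if used)."""
       return 0.20 * cluster_tb + largest_node_tb + 1.5 * daily_backup_tb

   # Five 100 TB nodes and 2 TB of new backups per day:
   print(space_to_reserve_tb(500, 100, 2))  # 203.0 TB kept free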

.. _Storage Hardware Recommendations:

Storage Hardware Recommendations
********************************

- It is possible to use disks of different sizes in the same cluster. However, keep in mind that, given the same IOPS, smaller disks offer higher performance per terabyte of data than bigger ones. It is recommended to group disks with the same IOPS per terabyte in the same tier.

- Using the recommended SSD models may help you avoid loss of data. Not all SSD drives can withstand enterprise workloads and may break down in the first months of operation, resulting in TCO spikes.

  - SSD memory cells can withstand a limited number of rewrites. An SSD drive should be viewed as a consumable that you will need to replace after a certain time. Consumer-grade SSD drives can withstand a very low number of rewrites (so low, in fact, that these numbers are not shown in their technical specifications). SSD drives intended for Virtuozzo Infrastructure Platform clusters must offer at least 1 DWPD endurance (10 DWPD is recommended). The higher the endurance, the less often SSDs will need to be replaced, improving TCO (see the endurance sketch after this list).

  - Many consumer-grade SSD drives can ignore disk flushes and falsely report to operating systems that data was written while it in fact was not. Examples of such drives include OCZ Vertex 3, Intel 520, Intel X25-E, and Intel X-25-M G2. These drives are known to be unsafe in terms of data commits; they should not be used with databases, and they may easily corrupt the file system in case of a power failure. For these reasons, use enterprise-grade SSD drives that obey the flush rules (for more information, see http://www.postgresql.org/docs/current/static/wal-reliability.html). Enterprise-grade SSD drives that operate correctly usually have the power loss protection property in their technical specification. Some of the market names for this technology are Enhanced Power Loss Data Protection (Intel), Cache Power Protection (Samsung), Power-Failure Support (Kingston), and Complete Power Fail Protection (OCZ).

  - Consumer-grade SSD drives usually have unstable performance and are not suited for sustained enterprise workloads. For this reason, pay attention to sustained load tests when choosing SSDs. We recommend the following enterprise-grade SSD drives, which are the best in terms of performance, endurance, and investment: Intel S3710, Intel P3700, Huawei ES3000 V2, Samsung SM1635, and Sandisk Lightning.

- The use of SSDs for write caching improves random I/O performance and is highly recommended for all workloads with heavy random access (e.g., iSCSI volumes).

- Running metadata services on SSDs improves cluster performance. To also minimize CAPEX, the same SSDs can be used for write caching.

- If capacity is the main goal and you need to store non-frequently accessed data, choose SATA disks over SAS ones. If performance is the main goal, choose SAS disks over SATA ones.

- The more disks per node, the lower the CAPEX. As an example, a cluster created from ten nodes with two disks in each will be less expensive than a cluster created from twenty nodes with one disk in each.

- Using SATA HDDs with one SSD for caching is more cost effective than using only SAS HDDs without such an SSD.

- Use HBA controllers as they are less expensive and easier to manage than RAID controllers.

- Disable all RAID controller caches for SSD drives. Modern SSDs deliver good performance that a RAID controller's write and read caches can actually reduce. It is recommended to disable caching for SSD drives and leave it enabled only for HDD drives.

- If you use RAID controllers, do not create RAID volumes from HDDs intended for storage (you can still do so for system disks). Each storage HDD needs to be recognized by Virtuozzo Infrastructure Platform as a separate device.

- If you use RAID controllers with caching, equip them with backup battery units (BBUs) to protect against cache loss during power outages.
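
The endurance and per-terabyte figures above are easy to work out numerically. A minimal sketch; the function names are hypothetical, and the five-year service life is an assumption of the example, not a product requirement:

.. code-block:: python

   def lifetime_writes_tb(capacity_tb, dwpd, years=5):
       """Total writes (TB) an SSD rated at ``dwpd`` drive writes per
       day can absorb over an assumed five-year service life."""
       return capacity_tb * dwpd * 365 * years

   def iops_per_tb(iops, capacity_tb):
       """Per-terabyte performance used to group disks into tiers."""
       return iops / capacity_tb

   # 1 DWPD vs 10 DWPD on a 1 TB SSD over five years:
   print(lifetime_writes_tb(1, 1), lifetime_writes_tb(1, 10))  # 1825 18250

   # Two disks with identical 200 IOPS but different sizes belong in
   # different tiers: 50 IOPS/TB vs 200 IOPS/TB.
   print(iops_per_tb(200, 4), iops_per_tb(200, 1))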

.. _Network Hardware Recommendations:

Network Hardware Recommendations
********************************

- Use separate networks (and, ideally, separate network adapters) for internal and public traffic. Doing so will prevent public traffic from affecting cluster I/O performance and also prevent possible denial-of-service attacks from the outside.

- Network latency dramatically reduces cluster performance. Use quality network equipment with low latency links. Do not use consumer-grade network switches.

- Do not use desktop network adapters like Intel EXPI9301CTBLK or Realtek 8129 as they are not designed for heavy load and may not support full-duplex links. Also use non-blocking Ethernet switches.

- To avoid intrusions, Virtuozzo Infrastructure Platform should be on a dedicated internal network inaccessible from outside.

- Use one 1 Gbit/s link for every two HDDs on the node (rounded up). For one or two HDDs on a node, two bonded network interfaces are still recommended for high network availability. The reason for this recommendation is that 1 Gbit/s Ethernet networks can deliver 110-120 MB/s of throughput, which is close to the sequential I/O performance of a single disk. Since several disks on a server can deliver higher total throughput than a single 1 Gbit/s Ethernet link, networking may become a bottleneck (see the link-count sketch after this list).

- For maximum sequential I/O performance, use one 1 Gbit/s link per hard drive, or one 10 Gbit/s link per node. Even though I/O operations are most often random in real-life scenarios, sequential I/O is important in backup scenarios.

- For maximum overall performance, use one 10 Gbit/s link per node (or two bonded for high network availability).

- It is not recommended to configure 1 Gbit/s network adapters to use non-default MTUs (e.g., 9000-byte jumbo frames). Such settings require additional configuration of switches and often lead to human error. 10 Gbit/s network adapters, on the other hand, need to be configured to use jumbo frames to achieve full performance.
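
The link-per-HDD rule above reduces to a one-line calculation. A minimal sketch with a hypothetical function name:

.. code-block:: python

   import math

   def gbe_links_needed(hdd_count):
       """One 1 Gbit/s link for every two HDDs, rounded up, but never
       fewer than the two links recommended for high availability."""
       return max(2, math.ceil(hdd_count / 2))

   # A 1 Gbit/s link delivers ~110-120 MB/s, about one HDD's sequential
   # throughput, so several disks quickly outgrow a single link:
   for hdds in (2, 6, 12):
       print(hdds, "HDDs ->", gbe_links_needed(hdds), "x 1 Gbit/s links")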

.. _Hardware and Software Limitations:

Hardware and Software Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Hardware limitations:

- Each management node must have at least two disks (one system+metadata, one storage).

- Each compute or storage node must have at least three disks (one system, one metadata, one storage).

- Five servers are required to test all the features of the product.

- The system disk must have at least 100 GB of space.

Software limitations:

- The maintenance mode is not supported. Use SSH to shut down or reboot a node.

- One node can be a part of only one cluster.

- Only one S3 cluster can be created on top of a storage cluster.

- Only predefined redundancy modes are available in the management panel.

- Thin provisioning is always enabled for all data and cannot be configured otherwise.

.. note:: For network limitations, see :ref:`Network Limitations`.

.. _Minimum Configuration:

Minimum Configuration
~~~~~~~~~~~~~~~~~~~~~

The minimum configuration described in the following table will let you evaluate the features of the storage cluster:

+-------------------+-----------------+-----------------+--------------------------+--------------------------------------------------------+
| Node #            | 1st disk role   | 2nd disk role   | 3rd and other disk roles | Access points                                          |
+===================+=================+=================+==========================+========================================================+
| 1                 | System          | Metadata        | Storage                  | iSCSI, Object Storage private, S3 public, NFS, ABGW    |
+-------------------+-----------------+-----------------+--------------------------+--------------------------------------------------------+
| 2                 | System          | Metadata        | Storage                  | iSCSI, Object Storage private, S3 public, NFS, ABGW    |
+-------------------+-----------------+-----------------+--------------------------+--------------------------------------------------------+
| 3                 | System          | Metadata        | Storage                  | iSCSI, Object Storage private, S3 public, NFS, ABGW    |
+-------------------+-----------------+-----------------+--------------------------+--------------------------------------------------------+
| 4                 | System          | Metadata        | Storage                  | iSCSI, Object Storage private, ABGW                    |
+-------------------+-----------------+-----------------+--------------------------+--------------------------------------------------------+
| 5                 | System          | Metadata        | Storage                  | iSCSI, Object Storage private, ABGW                    |
+-------------------+-----------------+-----------------+--------------------------+--------------------------------------------------------+
| 5 nodes           | |_|             |5 MDSs in total  | 5 or more CSs in total   | Access point services run on five nodes in total       |
| in total          |                 |                 |                          |                                                        |
+-------------------+-----------------+-----------------+--------------------------+--------------------------------------------------------+

.. note:: SSD disks can be assigned system, metadata, and cache roles at the same time, freeing up more disks for the storage role.

Even though five nodes are recommended for the minimum configuration, you can start evaluating Virtuozzo Infrastructure Platform with just one node and add more nodes later (e.g., a three-node installation will let you evaluate the high availability feature). At the very least, a Virtuozzo Infrastructure Platform cluster must have one metadata service and one chunk service running. A single-node installation will let you evaluate services such as iSCSI, ABGW, etc. However, such a configuration will have two key limitations:

#. Just one MDS will be a single point of failure. If it fails, the entire cluster will stop working.

#. Just one CS will be able to store just one chunk replica. If it fails, the data will be lost.

.. important:: If you deploy Virtuozzo Infrastructure Platform on a single node, you must take care of making its storage persistent and redundant to avoid data loss. If the node is physical, it must have multiple disks so you can replicate the data among them. If the node is a virtual machine, make sure that this VM is made highly available by the solution it runs on.

.. note:: Acronis Backup Gateway works with the local object storage in the staging mode. This means that the data to be replicated, migrated, or uploaded to a public cloud is first stored locally and only then sent to the destination. It is vital that the local object storage is persistent and redundant so that the local data does not get lost. There are multiple ways to ensure the persistence and redundancy of the local storage: you can deploy Acronis Backup Gateway on multiple nodes and select a good redundancy mode, or, if the gateway is deployed on a single node in Virtuozzo Infrastructure Platform, you can make its storage redundant by replicating it among multiple local disks. If your entire Virtuozzo Infrastructure Platform installation is deployed in a single virtual machine with the sole purpose of creating a gateway, make sure this VM is made highly available by the solution it runs on.

.. _Recommended Configuration:

Recommended Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended configuration will help you create clusters for production environments:

+--------------------+------------------+---------------------+-----------------------+--------------------------------------+
| Node #             | 1st disk role    | 2nd disk role       | 3rd and other         | Access points                        |
|                    |                  |                     | disk roles            |                                      |
+====================+==================+=====================+=======================+======================================+
| Nodes              | System           | SSD; metadata,      | Storage               | iSCSI, Object Storage private,       |
| 1 to 5             |                  | cache               |                       | S3 public, ABGW                      |
+--------------------+------------------+---------------------+-----------------------+--------------------------------------+
| Nodes 6+           | System           | SSD; cache          | Storage               | iSCSI, Object Storage private, ABGW  |
+--------------------+------------------+---------------------+-----------------------+--------------------------------------+
| 5 or               | |_|              | 5 MDSs              | 5 or more CSs         | All nodes run required access points |
| more               |                  | in total            | in total              |                                      |
| nodes              |                  |                     |                       |                                      |
| in total           |                  |                     |                       |                                      |
+--------------------+------------------+---------------------+-----------------------+--------------------------------------+

Even though a production-ready cluster can be created from just five nodes with recommended hardware, it is still recommended to enter production with at least ten nodes if you are aiming to achieve significant performance advantages over direct-attached storage (DAS) or improved recovery times.

.. important:: To ensure high availability of metadata, at least five metadata services must be running per cluster in any production environment. In this case, if up to two metadata services fail, the remaining ones will still control the cluster.
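
The arithmetic behind this rule can be sketched as follows, assuming (an assumption of this sketch, consistent with the figures above) that a majority of the metadata services must remain available for the cluster to stay under control:

.. code-block:: python

   def mds_failures_tolerated(total_mds):
       """Failures survivable while a majority of MDSs stays up.
       Assumes majority-based availability (sketch assumption)."""
       return (total_mds - 1) // 2

   print(mds_failures_tolerated(5))  # 2: five MDSs survive two failures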

Following are a number of more specific configuration examples that can be used in production. Each configuration can be extended by adding chunk servers and nodes.

HDD Only
********

This basic configuration requires a dedicated disk for each metadata server.

**Nodes 1-5 (base)**

========  =========  ============
Disk No.  Disk Type  Disk Role(s)
========  =========  ============
1         HDD        System 
2         HDD        MDS 
3         HDD        CS
...
---------------------------------
N         HDD        CS
========  =========  ============

**Nodes 6+ (extension)**

========  =========  ============
Disk No.  Disk Type  Disk Role(s)
========  =========  ============
1         HDD        System 
2         HDD        CS 
3         HDD        CS
...
---------------------------------
N         HDD        CS
========  =========  ============

HDD + System SSD (No Cache)
***************************

This configuration is good for creating capacity-oriented clusters.

**Nodes 1-5 (base)**

========  =========  ============
Disk No.  Disk Type  Disk Role(s)
========  =========  ============
1         SSD        System, MDS
2         HDD        CS
3         HDD        CS
...
---------------------------------
N         HDD        CS
========  =========  ============

**Nodes 6+ (extension)**

========  =========  ============
Disk No.  Disk Type  Disk Role(s)
========  =========  ============
1         SSD        System 
2         HDD        CS 
3         HDD        CS
...
---------------------------------
N         HDD        CS
========  =========  ============

HDD + SSD
*********

This configuration is good for creating performance-oriented clusters.

**Nodes 1-5 (base)**

========  =========  ============
Disk No.  Disk Type  Disk Role(s)
========  =========  ============
1         HDD        System
2         SSD        MDS, cache
3         HDD        CS 
...
---------------------------------
N         HDD        CS
========  =========  ============

**Nodes 6+ (extension)**

========  =========  ============
Disk No.  Disk Type  Disk Role(s)
========  =========  ============
1         HDD        System 
2         SSD        Cache
3         HDD        CS
...
---------------------------------
N         HDD        CS
========  =========  ============

SSD Only
********

This configuration does not require SSDs for cache.

When choosing hardware for this configuration, keep in mind the following:

- Each Virtuozzo Infrastructure Platform client will be able to obtain up to about 40K sustainable IOPS (read + write) from the cluster.
- If you use the erasure coding redundancy scheme, each erasure coding file, e.g., a single VM's or container's virtual hard disk, will get up to 2K sustainable IOPS. That is, a user working inside a VM or container will have up to 2K sustainable IOPS per virtual hard disk at their disposal. Multiple VMs and containers on a node can utilize more IOPS, up to the client's limit (see the IOPS sketch after this list).
- In this configuration, network latency defines more than half of overall performance, so make sure that the network latency is minimal. One recommendation is to have no more than one 10 Gbit/s switch between any two nodes in the cluster.
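
The per-client IOPS budget above can be translated into a number of virtual disks. A minimal sketch using only the figures stated in this list; the function name is hypothetical:

.. code-block:: python

   def full_speed_disks_per_client(client_iops=40_000, per_disk_iops=2_000):
       """How many erasure-coded virtual disks one client can drive at
       their full 2K IOPS before hitting the 40K per-client limit."""
       return client_iops // per_disk_iops

   print(full_speed_disks_per_client())  # 20 virtual disks per client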

**Nodes 1-5 (base)**

========  =========  ============
Disk No.  Disk Type  Disk Role(s)
========  =========  ============
1         SSD        System, MDS
2         SSD        CS
3         SSD        CS 
...
---------------------------------
N         SSD        CS
========  =========  ============

**Nodes 6+ (extension)**

========  =========  ============
Disk No.  Disk Type  Disk Role(s)
========  =========  ============
1         SSD        System 
2         SSD        CS
3         SSD        CS
...
---------------------------------
N         SSD        CS
========  =========  ============

HDD + SSD (No Cache), 2 Tiers
*****************************

In this configuration example, tier 1 is for HDDs without cache and tier 2 is for SSDs. Tier 1 can store cold data (e.g., backups), tier 2 can store hot data (e.g., high-performance virtual machines).

**Nodes 1-5 (base)**

========  =========  ============  ====
Disk No.  Disk Type  Disk Role(s)  Tier
========  =========  ============  ====
1         SSD        System, MDS
2         HDD        CS            1
3         SSD        CS            2
...
---------------------------------------
N         HDD/SSD    CS            1/2
========  =========  ============  ====

**Nodes 6+ (extension)**

========  =========  ============  ====
Disk No.  Disk Type  Disk Role(s)  Tier
========  =========  ============  ====
1         SSD        System
2         HDD        CS            1
3         SSD        CS            2
...
---------------------------------------
N         HDD/SSD    CS            1/2
========  =========  ============  ====

HDD + SSD, 3 Tiers
******************

In this configuration example, tier 1 is for HDDs without cache, tier 2 is for HDDs with cache, and tier 3 is for SSDs. Tier 1 can store cold data (e.g., backups), tier 2 can store regular virtual machines, and tier 3 can store high-performance virtual machines.

**Nodes 1-5 (base)**

=================  ==================  ========================  ==========
Disk No.           Disk Type           Disk Role(s)              Tier
=================  ==================  ========================  ==========
1                  HDD/SSD             System
2                  SSD                 MDS, T2 cache
3                  HDD                 CS                        1
4                  HDD                 CS                        2
5                  SSD                 CS                        3
...
---------------------------------------------------------------------------
N                  HDD/SSD             CS                        1/2/3
=================  ==================  ========================  ==========

**Nodes 6+ (extension)**

========  =========  ============  =====
Disk No.  Disk Type  Disk Role(s)  Tier
========  =========  ============  =====
1         HDD/SSD    System
2         SSD        T2 cache
3         HDD        CS            1
4         HDD        CS            2
5         SSD        CS            3
...
----------------------------------------
N         HDD/SSD    CS            1/2/3
========  =========  ============  =====

.. _Raw Disk Space Considerations:

Raw Disk Space Considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When planning the Virtuozzo Infrastructure Platform infrastructure, keep in mind the following to avoid confusion:

- The capacity of HDDs and SSDs is measured and specified with decimal, not binary prefixes, so "TB" in disk specifications usually means "terabyte". The operating system, however, displays drive capacity using binary prefixes, meaning that its "TB" is actually "tebibyte", a noticeably larger unit. As a result, disks may show less capacity than marketed by the vendor. For example, a disk with 6 TB in its specifications may be shown to have 5.45 TB of actual disk space in Virtuozzo Infrastructure Platform.

- Virtuozzo Infrastructure Platform reserves 5% of disk space for emergency needs.

Therefore, if you add a 6 TB disk to a cluster, the available physical space should increase by about 5.2 TB.
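
Both figures above can be reproduced with a short conversion. A minimal sketch with a hypothetical function name:

.. code-block:: python

   def usable_tib(marketed_tb, reserve=0.05):
       """Convert a vendor-marketed decimal capacity to the binary units
       the OS reports, then subtract the 5% emergency reserve."""
       tib = marketed_tb * 10**12 / 2**40  # decimal TB -> binary TiB
       return tib * (1 - reserve)

   print(round(6 * 10**12 / 2**40, 2))  # ~5.46, the "5.45 TB" shown by the OS
   print(round(usable_tib(6), 2))       # 5.18 TiB added to the cluster, i.e., ~5.2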

