When changing the performance level of a boot volume, which two performance levels can you select?
Balanced and Higher Performance. (The Lower Cost performance level is not available for boot volumes.)
Overview
This page discusses the many factors that determine the performance of the block storage volumes that you attach to your virtual machine (VM) instances. Before you begin, review the storage options and performance limits described in the following sections.
Choose a storage option
To choose a block storage option that is appropriate for your workload, consider factors such as machine type support, disk size, and performance limits.

Disk types
You can provision several different types of block storage for your instances. When you configure a zonal or regional persistent disk, you can select one of the following disk types. If you create a disk in the Google Cloud console, the default disk type is pd-balanced. If you create a disk using the gcloud CLI or the Compute Engine API, the default disk type is pd-standard.
Performance limits
The following tables show performance limits for persistent disks. For information about local SSD performance limits, see Local SSD performance.
The following table shows maximum sustained IOPS for zonal persistent disks:

The following table shows maximum sustained throughput for zonal persistent disks:

* Persistent disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.

Regional persistent disks are supported only on E2, N1, N2, and N2D machine type VMs.

The following table shows maximum sustained IOPS for regional persistent disks:

The following table shows maximum sustained throughput for regional persistent disks:

* Persistent disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.
‡ Requires at least 64 vCPUs and an N1 or N2 machine type. May be lower when the disk is in unreplicated mode.
§ Requires at least 64 vCPUs and an N1 or N2 machine type.

Attaching a disk to multiple virtual machine instances in read-only mode or in multi-writer mode does not affect aggregate performance or cost. Each machine gets a share of the per-disk performance limit. Persistent disks created in multi-writer mode have specific IOPS and throughput limits. To learn how to share persistent disks between multiple VMs, see Sharing persistent disks between VMs.

Persistent disk I/O operations share a common path with vNIC network traffic within your VM's hypervisor. Therefore, if your VM has significant network traffic, the actual read bandwidth and IOPS consistency might be less than the listed maximum limits. Some variability in the performance limits is to be expected, especially when operating near the maximum IOPS limits with an I/O size of 16 KB. For a summary of bandwidth expectations, see Bandwidth summary table.

Configure your persistent disks and instances
Persistent disk performance scales with the size of the disk and with the number of vCPUs on your VM instance. Performance scales until it reaches either the limits of the disk or the limits of the VM instance to which the disk is attached. The machine type and the number of vCPUs on the instance determine the VM instance limits. The following tables show performance limits for zonal persistent disks.

Performance by machine type and vCPU count
The following tables show how zonal persistent disk performance varies according to the machine type and number of vCPUs on the VM to which the disk is attached.

A2 standard VMs
A2 ultra VMs
C2 VMs
C2D VMs
E2 VMs
*E2 shared-core machine types run two vCPUs simultaneously, shared on one physical core, for a specific fraction of time.
N1 VMs
N2 VMs
N2D VMs
M1 VMs
M2 VMs
T2D VMs
T2A VMs
Example
Consider a 1,000 GB zonal SSD persistent disk attached to a VM with an N2 machine type and 4 vCPUs. The read limit based solely on the size of the disk is 30,000 IOPS, because SSD persistent disks can reach up to 30 IOPS per GB of disk space. However, because the VM has only 4 vCPUs, the read limit is restricted to 15,000 IOPS.

Review performance
You can review persistent disk performance metrics in Cloud Monitoring, Google Cloud's integrated monitoring solution. To learn more, see Reviewing persistent disk performance metrics.

Optimize disk performance
To increase disk performance, start by checking whether the disk size or the machine type of the VM is the bottleneck, and resize whichever one is limiting performance.
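The example above can be sketched as a small helper that takes the lower of the two limits. This is a simplified sketch: the 30 IOPS/GB rate and the 15,000 IOPS instance limit for a 4-vCPU N2 VM come from the example text, not from the full limit tables.

```python
# Sketch: effective read IOPS limit for a zonal SSD persistent disk.
# Assumptions (from the example above): SSD PDs reach up to 30 read IOPS
# per GB, and a 4-vCPU N2 VM has a 15,000 read IOPS instance limit.
SSD_READ_IOPS_PER_GB = 30

def effective_read_iops(disk_size_gb: int, instance_read_iops_limit: int) -> int:
    """Return the lower of the disk-size-based limit and the VM's instance limit."""
    disk_limit = disk_size_gb * SSD_READ_IOPS_PER_GB
    return min(disk_limit, instance_read_iops_limit)

# 1,000 GB SSD PD on a 4-vCPU N2 VM (instance limit 15,000 IOPS):
print(effective_read_iops(1_000, 15_000))  # 15000: the VM, not the disk, is the cap
```

With a smaller disk, the disk-size limit dominates instead: a 100 GB SSD PD on the same VM is capped at 3,000 IOPS by its size.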
After you ensure that any bottlenecks are not due to the disk size or machine type of the VM, your app and operating system might still need some tuning. See Optimizing persistent disk performance and Optimizing local SSD performance.

Other factors that affect performance
Your virtual machine (VM) instance has a network egress cap that depends on the machine type of the VM. Compute Engine stores data on persistent disks with multiple parallel writes to ensure built-in redundancy. Also, each write request has some overhead that uses additional write bandwidth. The maximum write traffic that a VM instance can issue is the network egress cap divided by a bandwidth multiplier that accounts for the replication and overhead.

The network egress caps are listed in the Maximum egress bandwidth (Gbps) column in the machine type tables for the general-purpose, compute-optimized, memory-optimized, and accelerator-optimized machine families. The bandwidth multiplier is approximately 1.16x at full network utilization, meaning that 16% of bytes written are overhead. For regional disks, the bandwidth multiplier is approximately 2.32x to account for additional replication overhead.

When persistent disks compete with network egress bandwidth, 60% of the maximum network egress bandwidth, defined by the machine type, is allocated to persistent disk writes. The remaining 40% is available for all other network egress traffic. Refer to egress bandwidth for details about other network egress traffic.

The following example shows how to calculate persistent disk's maximum write bandwidth on an N1 VM instance. The bandwidth allocation is the portion of network egress bandwidth allocated to persistent disk. The maximum write bandwidth is persistent disk's maximum write bandwidth adjusted for overhead.
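The write-bandwidth calculation described above can be sketched in Python. The 2 Gbps egress cap for a 1-vCPU N1 VM is an assumed value taken from the machine type tables referenced above; the 1.16x multiplier and the 60% allocation under contention come from the text.

```python
# Sketch of persistent disk's maximum write bandwidth, per the text above.
# Assumption: a 1-vCPU N1 VM has a 2 Gbps maximum egress bandwidth
# (from the machine type tables). 1 Gbps = 125 MB/s.
EGRESS_CAP_GBPS = 2.0            # assumed cap for a 1-vCPU N1 VM
ZONAL_MULTIPLIER = 1.16          # replication + overhead for zonal PDs
REGIONAL_MULTIPLIER = 2.32       # additional replication for regional PDs
PD_SHARE_UNDER_CONTENTION = 0.6  # 60% of egress goes to PD writes

def max_write_mbps(egress_gbps: float, multiplier: float, contended: bool) -> float:
    """Maximum PD write bandwidth in MB/s, with or without network contention."""
    allocation = egress_gbps * (PD_SHARE_UNDER_CONTENTION if contended else 1.0)
    return allocation / multiplier * 125  # convert Gbps to MB/s

print(round(max_write_mbps(EGRESS_CAP_GBPS, ZONAL_MULTIPLIER, False)))  # ~216 MB/s
print(round(max_write_mbps(EGRESS_CAP_GBPS, ZONAL_MULTIPLIER, True)))   # ~129 MB/s
```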
You can calculate the maximum persistent disk bandwidth using the following formulas:

N1 VM with 1 vCPU
Persistent disk's bandwidth allocation at full network utilization is 2 Gbps × 0.6 = 1.2 Gbps (150 MB/s).
Persistent disk's maximum write bandwidth with no network contention is 2 Gbps / 1.16 ≈ 1.72 Gbps (about 216 MB/s).
Persistent disk's maximum write bandwidth at full network utilization is 1.2 Gbps / 1.16 ≈ 1.03 Gbps (about 129 MB/s).

Note that the network egress limits provide an upper bound on performance. Other factors may limit performance below this level. See the following sections for information on other constraints.

For standard persistent disks, simultaneous reads and writes share the same resources. While your instance is using more read throughput or IOPS, it is able to perform fewer writes. Conversely, instances that use more write throughput or IOPS are able to perform fewer reads. Persistent disks cannot simultaneously reach their maximum throughput and IOPS limits for both reads and writes.

Note that throughput = IOPS × I/O size. To take advantage of maximum throughput limits for simultaneous reads and writes on SSD persistent disks, use an I/O size such that read and write IOPS combined don't exceed the IOPS limit.

Instance IOPS limits for simultaneous reads and writes
Instance throughput limits (MB per second) for simultaneous reads and writes
* For SSD persistent disks, the max read throughput and max write throughput are independent of each other, so these limits are constant. The IOPS numbers in this table are based on an 8 KB I/O size. Other I/O sizes, such as 16 KB, might have different IOPS numbers but maintain the same read/write distribution.

Persistent disks can be up to 64 TB in size, and you can create single logical volumes of up to 257 TB using logical volume management inside your VM. A larger volume size impacts performance in several ways.
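The relationship throughput = IOPS × I/O size described above can be sketched as follows. The 15,000 IOPS figure is a hypothetical combined read+write budget used only for illustration, not a published limit.

```python
# Sketch: relate throughput to IOPS via I/O size (throughput = IOPS * I/O size),
# as described in the text above.

def throughput_mbps(iops: int, io_size_kb: int) -> float:
    """Throughput in MB/s for a given IOPS rate and I/O size (1 MB = 1024 KB here)."""
    return iops * io_size_kb / 1024

# With a hypothetical combined read+write budget of 15,000 IOPS, a 16 KB
# I/O size yields the following sustained throughput:
print(throughput_mbps(15_000, 16))  # 234.375 MB/s
```

This is why picking an I/O size matters: at a fixed IOPS budget, a larger I/O size raises throughput until you hit the throughput limit instead.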
Multiple disks of the same type
If you have multiple disks of the same type attached to a VM instance in the same mode (for example, read/write), the performance limits are the same as the limits of a single disk that has the combined size of those disks. If you use all the disks at 100%, the aggregate performance limit is split evenly among the disks regardless of relative disk size.

For example, suppose you have a 200 GB standard disk and a 1,000 GB standard disk. If you do not use the 1,000 GB disk, then the 200 GB disk can reach the performance limit of a 1,200 GB standard disk. If you use both disks at 100%, then each has the performance limit of a 600 GB standard persistent disk (1,200 GB / 2 disks = 600 GB per disk).

Multiple disks of different types
If you have multiple disks of different types attached to a single VM, then the SSD per-VM limit determines the total performance limit for the VM. This total performance limit is shared among all disks attached to the VM.

For example, suppose you have one 5,000 GB standard disk and one 1,000 GB SSD disk attached to an N2 VM with one vCPU. The read IOPS limit for the standard disk is 3,000 and the read IOPS limit for the SSD disk is 15,000. Because the limit of the SSD disk determines the overall limit, the total read IOPS limit for your VM is 15,000. This limit is shared among all attached disks.

Persistent disks have higher latency than locally attached disks such as local SSDs because they are network-attached devices. They can provide very high IOPS and throughput, but you must make sure that sufficient I/O requests are issued in parallel. The number of I/O requests issued in parallel is referred to as the I/O queue depth. To ensure that you are issuing enough I/O requests in parallel, follow the recommendations to use a high I/O queue depth.
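The same-type sharing rule above can be sketched as a helper that computes the size of the single disk whose limits each actively used disk effectively receives. The 200 GB / 1,000 GB values mirror the example in the text.

```python
# Sketch of the shared-limit behavior for multiple disks of the same type:
# the aggregate limit is that of one disk with the combined size, and under
# full use it splits evenly across the active disks regardless of their sizes.

def per_disk_equivalent_size(disk_sizes_gb: list[int], disks_in_use: int) -> float:
    """Size of the single disk whose limits each actively used disk gets."""
    combined = sum(disk_sizes_gb)
    return combined / disks_in_use

# 200 GB + 1,000 GB standard disks attached to the same VM:
print(per_disk_equivalent_size([200, 1_000], 1))  # 1200.0 - only one disk in use
print(per_disk_equivalent_size([200, 1_000], 2))  # 600.0  - both disks at 100%
```

Note that this even split applies per disk, not proportionally to size: the small disk gets the same share as the large one when both are fully utilized.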
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2022-10-20 UTC.

Which elastic performance option should you choose for the block volume?
Balanced: the default performance level for new and existing block and boot volumes, it provides a good balance between performance and cost savings for most workloads. With this option, you are purchasing 10 VPUs per GB/month.
How do I change the boot volume in Oracle Cloud?
Using the Console:
1. Open the navigation menu and click Compute. Under Compute, click Instances.
2. Click the instance that you want to reattach the boot volume to.
3. Under Resources, click Boot Volume.
4. Click the Actions menu, and then click Attach Boot Volume. Confirm when prompted.

What are boot volumes?
A boot volume is the portion of a hard drive that contains the operating system and its supporting file system; it is the drive from which the computer boots.
What is the difference between boot volume and block volume?
Block volume: a detachable block storage device that allows you to dynamically expand the storage capacity of an instance.
Boot volume: a detachable boot volume device that contains the image used to boot a Compute instance. See Boot Volumes for more information.