When changing the performance level of a boot volume, which two performance levels can you select?


Overview

This page discusses the many factors that determine the performance of the block storage volumes that you attach to your virtual machine (VM) instances. Before you begin, consider the following:

  • Persistent disks are networked storage and generally have higher latency compared to physical disks or local SSDs. To reach the maximum performance limits of your persistent disks, you must issue enough I/O requests in parallel. To check if you're using a high enough queue depth to reach your required performance levels, see I/O queue depth.

  • Make sure that your application is issuing enough I/Os to saturate your disk.

  • For workloads that primarily involve small (from 4 KB to 16 KB) random I/Os, the limiting performance factor is random input/output operations per second (IOPS).

  • For workloads that primarily involve sequential or large (256 KB to 1 MB) random I/Os, the limiting performance factor is throughput.
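
Because throughput = IOPS × I/O size (a relationship this page uses again later), the I/O size decides which limit a workload hits first. A back-of-the-envelope sketch in Python, with illustrative numbers rather than values from any specific disk:

```python
# throughput (MB/s) = IOPS * I/O size in bytes / 1e6
def throughput_mb_s(iops: int, io_size_bytes: int) -> float:
    return iops * io_size_bytes / 1e6

# Small-block random workload: 4 KB I/Os at 15,000 IOPS
print(throughput_mb_s(15_000, 4 * 1024))     # ~61 MB/s -> IOPS-bound
# Large-block sequential workload: 1 MB I/Os at 1,200 IOPS
print(throughput_mb_s(1_200, 1024 * 1024))   # ~1,258 MB/s -> throughput-bound
```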

Choose a storage option

To choose a block storage option that is appropriate for your workload, consider factors such as machine type support, disk size, and performance limits.

Disk types

You can provide several different types of block storage for your instances to use. When you configure a zonal or regional persistent disk, you can select one of the following disk types. If you create a disk in the Google Cloud console, the default disk type is pd-balanced. If you create a disk using the gcloud CLI or the Compute Engine API, the default disk type is pd-standard.

  • Standard persistent disks (pd-standard) are suited for large data processing workloads that primarily use sequential I/Os.
  • Balanced persistent disks (pd-balanced) are an alternative to SSD persistent disks that balance performance and cost. With the same maximum IOPS as SSD persistent disks and lower IOPS per GB, a balanced persistent disk offers performance levels suitable for most general-purpose applications at a price point between that of standard and SSD persistent disks.
  • SSD persistent disks (pd-ssd) are suited for enterprise applications and high-performance database needs that require lower latency and more IOPS than standard persistent disks provide. SSD persistent disks are designed for single-digit millisecond latencies; the observed latency is app specific.
  • Extreme persistent disks (pd-extreme) offer consistently high performance for both random access workloads and bulk throughput. They are designed for high-end database workloads, such as Oracle or SAP HANA. Unlike other disk types, you can provision your desired IOPS. For more information, see Extreme persistent disks.
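
If you create disks programmatically, the disk type is passed as a zonal URL. A minimal sketch using the google-cloud-compute Python client; the project, zone, and disk name below are placeholders for illustration:

```python
from google.cloud import compute_v1

def create_pd(project: str, zone: str, name: str, size_gb: int, disk_type: str) -> None:
    """Create a zonal persistent disk of the given type (a sketch, not production code)."""
    disk = compute_v1.Disk()
    disk.name = name
    disk.size_gb = size_gb
    # Disk type is a zonal URL: pd-standard, pd-balanced, pd-ssd, or pd-extreme.
    disk.type_ = f"zones/{zone}/diskTypes/{disk_type}"
    # For pd-extreme you can also set disk.provisioned_iops.
    client = compute_v1.DisksClient()
    client.insert(project=project, zone=zone, disk_resource=disk).result()

# Hypothetical example: a 500 GB balanced PD in us-central1-a.
# create_pd("my-project", "us-central1-a", "app-data", 500, "pd-balanced")
```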

Performance limits

The following tables show performance limits for persistent disks. For information about local SSD performance limits, see Local SSD performance.

Zonal persistent disks

The following table shows maximum sustained IOPS for zonal persistent disks:

Metric | Zonal standard PD | Zonal balanced PD | Zonal SSD PD | Zonal extreme PD | Zonal SSD PD (multi-writer mode)
Read IOPS per GB | 0.75 | 6 | 30 | — | 30
Write IOPS per GB | 1.5 | 6 | 30 | — | 30
Read IOPS per instance | 7,500* | 80,000* | 100,000* | 120,000* | 100,000*
Write IOPS per instance | 15,000* | 80,000* | 100,000* | 120,000* | 100,000*

For extreme persistent disks, you provision IOPS directly, so per-GB rates don't apply.

The following table shows maximum sustained throughput for zonal persistent disks:

Metric | Zonal standard PD | Zonal balanced PD | Zonal SSD PD | Zonal extreme PD | Zonal SSD PD (multi-writer mode)
Throughput per GB (MB/s) | 0.12 | 0.28 | 0.48 | — | 0.48
Read throughput per instance (MB/s) | 1,200* | 1,200* | 1,200* | 2,200* | 1,200*
Write throughput per instance (MB/s) | 400* | 1,200* | 1,200* | 2,200* | 1,200*

* Persistent disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.

Regional persistent disks

Regional persistent disks are supported only on E2, N1, N2, and N2D machine type VMs. The following table shows maximum sustained IOPS for regional PDs:

Metric | Regional standard PD | Regional balanced PD | Regional SSD PD
Read IOPS per GB | 0.75 | 6 | 30
Write IOPS per GB | 1.5 | 6 | 30
Read IOPS per instance | 7,500* | 60,000* | 100,000*‡
Write IOPS per instance | 15,000* | 30,000* | 80,000*§

The following table shows maximum sustained throughput for regional persistent disks:

Metric | Regional standard PD | Regional balanced PD | Regional SSD PD
Throughput per GB (MB/s) | 0.12 | 0.28 | 0.48
Read throughput per instance (MB/s) | 1,200* | 1,200* | 1,200*
Write throughput per instance (MB/s) | 200* | 600* | 600*

* Persistent disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.

‡ Requires at least 64 vCPUs and an N1 or N2 machine type. The limit might be lower when the disk is in unreplicated mode.

§ Requires at least 64 vCPUs and an N1 or N2 machine type.

Attaching a disk to multiple virtual machine instances in read-only mode or in multi-writer mode does not affect aggregate performance or cost. Each machine gets a share of the per-disk performance limit. Persistent disks created in multi-writer mode have specific IOPS and throughput limits. To learn how to share persistent disks between multiple VMs, see Sharing persistent disks between VMs.

Persistent disk I/O operations share a common path with vNIC network traffic within your VM's hypervisor. Therefore, if your VM has significant network traffic, the actual read bandwidth and IOPS consistency might be less than the listed maximum limits. Some variability in the performance limits is to be expected, especially when operating near the maximum IOPS limits with an I/O size of 16 KB. For a summary of bandwidth expectations, see Bandwidth summary table.

Configure your persistent disks and instances

Persistent disk performance scales with the size of the disk and with the number of vCPUs on your VM instance.

Performance scales until it reaches either the limits of the disk or the limits of the VM instance to which the disk is attached. The machine type and the number of vCPUs on the instance determine the VM instance limits.

The following tables show performance limits for zonal persistent disks.

Performance by machine type and vCPU count

The following tables show how zonal persistent disk performance varies according to the machine type and number of vCPUs on the VM to which the disk is attached.

A2 standard VMs

pd-standard

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
a2-highgpu-1g | 15,000 | 5,000 | 400 | 800
a2-highgpu-2g | 15,000 | 7,500 | 400 | 1,200
a2-highgpu-4g | 15,000 | 7,500 | 400 | 1,200
a2-highgpu-8g | 15,000 | 7,500 | 400 | 1,200
a2-megagpu-16g | 15,000 | 7,500 | 400 | 1,200

pd-balanced

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
a2-highgpu-1g | 15,000 | 15,000 | 800 | 800
a2-highgpu-2g | 20,000 | 20,000 | 1,200 | 1,200
a2-highgpu-4g | 50,000 | 50,000 | 1,200 | 1,200
a2-highgpu-8g | 80,000 | 80,000 | 1,200 | 1,200
a2-megagpu-16g | 80,000 | 80,000 | 1,200 | 1,200

pd-ssd

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
a2-highgpu-1g | 15,000 | 15,000 | 800 | 800
a2-highgpu-2g | 25,000 | 25,000 | 1,200 | 1,200
a2-highgpu-4g | 60,000 | 60,000 | 1,200 | 1,200
a2-highgpu-8g | 100,000 | 100,000 | 1,200 | 1,200
a2-megagpu-16g | 100,000 | 100,000 | 1,200 | 1,200

A2 ultra VMs

pd-standard

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
a2-ultragpu-1g | 15,000 | 5,000 | 400 | 800
a2-ultragpu-2g | 15,000 | 7,500 | 400 | 1,200
a2-ultragpu-4g | 15,000 | 7,500 | 400 | 1,200
a2-ultragpu-8g | 15,000 | 7,500 | 400 | 1,200

pd-balanced

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
a2-ultragpu-1g | 15,000 | 15,000 | 800 | 800
a2-ultragpu-2g | 20,000 | 20,000 | 1,200 | 1,200
a2-ultragpu-4g | 50,000 | 50,000 | 1,200 | 1,200
a2-ultragpu-8g | 80,000 | 80,000 | 1,200 | 1,200

pd-ssd

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
a2-ultragpu-1g | 15,000 | 15,000 | 800 | 800
a2-ultragpu-2g | 25,000 | 25,000 | 1,200 | 1,200
a2-ultragpu-4g | 60,000 | 60,000 | 1,200 | 1,200
a2-ultragpu-8g | 100,000 | 100,000 | 1,200 | 1,200

C2 VMs

pd-standard

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
4 | 4,000 | 3,000 | 240 | 240
8 | 4,000 | 3,000 | 240 | 240
16 | 4,000 | 3,000 | 240 | 240
30 | 8,000 | 3,000 | 240 | 240
60 | 15,000 | 3,000 | 240 | 240

pd-balanced

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
4 | 4,000 | 4,000 | 240 | 240
8 | 4,000 | 4,000 | 240 | 240
16 | 4,000 | 8,000 | 480 | 600
30 | 8,000 | 15,000 | 480 | 600
60 | 15,000 | 15,000 | 800 | 1,200

pd-ssd

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
4 | 4,000 | 4,000 | 240 | 240
8 | 4,000 | 4,000 | 240 | 240
16 | 4,000 | 8,000 | 480 | 600
30 | 8,000 | 15,000 | 480 | 600
60 | 15,000 | 30,000 | 800 | 1,200

C2D VMs

pd-standard

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
2 | 4,590 | 3,060 | 245 | 245
4 | 4,590 | 3,060 | 245 | 245
8 | 4,590 | 3,060 | 245 | 245
16 | 4,590 | 3,060 | 245 | 245
32 | 8,160 | 3,060 | 245 | 245
56 | 8,160 | 3,060 | 245 | 245
112 | 15,300 | 3,060 | 245 | 245

pd-balanced

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
2 | 4,590 | 4,080 | 245 | 245
4 | 4,590 | 4,080 | 245 | 245
8 | 4,590 | 4,080 | 245 | 245
16 | 4,590 | 8,160 | 245 | 326
32 | 8,160 | 15,300 | 245 | 612
56 | 8,160 | 15,300 | 245 | 612
112 | 15,300 | 30,600 | 408 | 1,224

pd-ssd

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
2 | 4,590 | 4,080 | 245 | 245
4 | 4,590 | 4,080 | 245 | 245
8 | 4,590 | 4,080 | 245 | 245
16 | 4,590 | 8,160 | 245 | 326
32 | 8,160 | 15,300 | 245 | 612
56 | 8,160 | 15,300 | 245 | 612
112 | 15,300 | 30,600 | 408 | 1,224

E2 VMs

pd-standard

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
e2-medium* | 10,000 | 1,000 | 200 | 200
2-7 | 15,000 | 3,000 | 240 | 240
8-15 | 15,000 | 5,000 | 400 | 800
16 or more | 15,000 | 7,500 | 400 | 1,200

pd-balanced

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
e2-medium* | 10,000 | 12,000 | 200 | 200
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 20,000 | 20,000 | 1,000 | 1,200
32 or more | 50,000 | 50,000 | 1,000 | 1,200

pd-ssd

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
e2-medium* | 10,000 | 12,000 | 200 | 200
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 25,000 | 25,000 | 1,000 | 1,200
32 or more | 60,000 | 60,000 | 1,000 | 1,200

* E2 shared-core machine types run two vCPUs that share one physical core, each for a specific fraction of time.

N1 VMs

pd-standard

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
1 | 15,000 | Up to 3,000 | 204 | 240
2-7 | 15,000 | 3,000 | 240 | 240
8-15 | 15,000 | 5,000 | 400 | 800
16 or more | 15,000 | 7,500 | 400 | 1,200

pd-balanced

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
1 | 15,000 | 15,000 | 204 | 240
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 20,000 | 20,000 | 1,200 | 1,200
32-63 | 50,000 | 50,000 | 1,200 | 1,200
64 or more | 80,000 | 80,000 | 1,200 | 1,200

pd-ssd

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
1 | 15,000 | 15,000 | 204 | 240
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 25,000 | 25,000 | 1,200 | 1,200
32-63 | 60,000 | 60,000 | 1,200 | 1,200
64 or more | 100,000 | 100,000 | 1,200 | 1,200

N2 VMs

pd-standard

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
2-7 | 15,000 | 3,000 | 240 | 240
8-15 | 15,000 | 5,000 | 400 | 800
16 or more | 15,000 | 7,500 | 400 | 1,200

pd-balanced

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 20,000 | 20,000 | 1,200 | 1,200
32-63 | 50,000 | 50,000 | 1,200 | 1,200
64 or more | 80,000 | 80,000 | 1,200 | 1,200

pd-ssd

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 25,000 | 25,000 | 1,200 | 1,200
32-63 | 60,000 | 60,000 | 1,200 | 1,200
64 or more | 100,000 | 100,000 | 1,200 | 1,200

pd-extreme

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
n2-standard-64 | 120,000 | 120,000 | 2,200 | 2,200
n2-highmem-64 | 120,000 | 120,000 | 2,200 | 2,200
n2-highmem-80 | 120,000 | 120,000 | 2,200 | 2,200

N2D VMs

pd-standard

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
2-7 | 15,000 | 3,000 | 240 | 240
8-15 | 15,000 | 5,000 | 400 | 800
16 or more | 15,000 | 7,500 | 400 | 1,200

pd-balanced

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 20,000 | 20,000 | 1,200 | 1,200
32-63 | 50,000 | 50,000 | 1,200 | 1,200
64 or more | Up to 80,000 | Up to 80,000 | 1,200 | 1,200

pd-ssd

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 25,000 | 25,000 | 1,200 | 1,200
32-63 | 60,000 | 60,000 | 1,200 | 1,200
64 or more | Up to 100,000 | Up to 100,000 | 1,200 | 1,200

M1 VMs

pd-standard

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
m1-megamem-96 | 15,000 | 7,500 | 400 | 1,200
m1-ultramem-40 | 15,000 | 7,500 | 400 | 1,200
m1-ultramem-80 | 15,000 | 7,500 | 400 | 1,200
m1-ultramem-160 | 15,000 | 7,500 | 400 | 1,200

pd-balanced

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
m1-megamem-96 | 80,000 | 80,000 | 1,200 | 1,200
m1-ultramem-40 | 60,000 | 60,000 | 1,200 | 1,200
m1-ultramem-80 | 70,000 | 70,000 | 1,200 | 1,200
m1-ultramem-160 | 70,000 | 70,000 | 1,200 | 1,200

pd-ssd

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
m1-megamem-96 | 90,000 | 90,000 | 1,200 | 1,200
m1-ultramem-40 | 60,000 | 60,000 | 1,200 | 1,200
m1-ultramem-80 | 70,000 | 70,000 | 1,200 | 1,200
m1-ultramem-160 | 70,000 | 70,000 | 1,200 | 1,200

pd-extreme

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
m1-megamem-96 | 90,000 | 90,000 | 2,200 | 2,200

M2 VMs

pd-standard

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
m2-megamem-416 | 15,000 | 7,500 | 400 | 1,200
m2-ultramem-208 | 15,000 | 7,500 | 400 | 1,200
m2-ultramem-416 | 15,000 | 7,500 | 400 | 1,200
m2-hypermem-416 | 15,000 | 7,500 | 400 | 1,200

pd-balanced

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
m2-megamem-416 | 40,000 | 40,000 | 1,200 | 1,200
m2-ultramem-208 | 60,000 | 60,000 | 1,200 | 1,200
m2-ultramem-416 | 40,000 | 40,000 | 1,200 | 1,200
m2-hypermem-416 | 40,000 | 40,000 | 1,200 | 1,200

pd-ssd

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
m2-megamem-416 | 40,000 | 40,000 | 1,200 | 1,200
m2-ultramem-208 | 60,000 | 60,000 | 1,200 | 1,200
m2-ultramem-416 | 40,000 | 40,000 | 1,200 | 1,200
m2-hypermem-416 | 40,000 | 40,000 | 1,200 | 1,200

pd-extreme

Machine type | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
m2-ultramem-208 | 60,000 | 60,000 | 2,200 | 2,200
m2-ultramem-416 | 40,000 | 40,000 | 1,200 | 2,200
m2-hypermem-416 | 40,000 | 40,000 | 1,200 | 2,200

T2D VMs

pd-standard

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
1 | 15,000 | 3,000 | 204 | 240
2-7 | 15,000 | 3,000 | 240 | 240
8-15 | 15,000 | 5,000 | 400 | 800
16 or more | 15,000 | 7,500 | 400 | 1,200

pd-balanced

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
1 | 15,000 | 15,000 | 204 | 240
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 20,000 | 20,000 | 1,200 | 1,200
32-63 | 50,000 | 50,000 | 1,200 | 1,200
64 or more | Up to 80,000 | Up to 80,000 | 1,200 | 1,200

pd-ssd

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
1 | 15,000 | 15,000 | 204 | 240
2-7 | 15,000 | 15,000 | 240 | 240
8-15 | 15,000 | 15,000 | 800 | 800
16-31 | 25,000 | 25,000 | 1,200 | 1,200
32-63 | 60,000 | 60,000 | 1,200 | 1,200
64 or more | Up to 100,000 | Up to 100,000 | 1,200 | 1,200

T2A VMs

pd-standard

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
1 | 15,000 | 1,000 | 204 | 240
2-3 | 15,000 | 2,400 | 240 | 240
4-7 | 15,000 | 3,000 | 240 | 240
8-15 | 15,000 | 5,000 | 400 | 800
16 or more | 15,000 | 7,500 | 400 | 1,200

pd-balanced

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
1 | 20,000 | 20,000 | 204 | 240
2-7 | 20,000 | 20,000 | 240 | 240
8-15 | 25,000 | 25,000 | 800 | 800
16-31 | 25,000 | 25,000 | 1,200 | 1,200
32-47 | 60,000 | 60,000 | 1,200 | 1,200
48 | 80,000 | 80,000 | 1,800 | 1,800

pd-ssd

VM vCPU count | Maximum write IOPS | Maximum read IOPS | Maximum write throughput (MB/s) | Maximum read throughput (MB/s)
1 | 20,000 | 20,000 | 204 | 240
2-7 | 20,000 | 20,000 | 240 | 240
8-15 | 25,000 | 25,000 | 800 | 800
16-31 | 25,000 | 25,000 | 1,200 | 1,200
32-47 | 60,000 | 60,000 | 1,200 | 1,200
48 | 80,000 | 80,000 | 1,800 | 1,800

Example

Consider a 1,000 GB zonal SSD persistent disk attached to a VM with an N2 machine type and 4 vCPUs. The read limit based solely on the size of the disk is 30,000 IOPS, because SSD persistent disks can reach up to 30 read IOPS per GB of disk space. However, because the VM has only 4 vCPUs, the read limit is restricted to 15,000 IOPS.
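
The same calculation as a minimal Python sketch, with the per-GB rate and per-instance cap taken from the tables above:

```python
# Effective read IOPS limit = min(size-based disk limit, per-instance limit)
PD_SSD_READ_IOPS_PER_GB = 30      # zonal SSD PD, from the IOPS table
INSTANCE_READ_IOPS_CAP = 15_000   # N2 VM with 2-7 vCPUs, pd-ssd table

disk_size_gb = 1_000
size_based_limit = disk_size_gb * PD_SSD_READ_IOPS_PER_GB  # 30,000 IOPS
effective_limit = min(size_based_limit, INSTANCE_READ_IOPS_CAP)
print(effective_limit)  # 15,000: the VM, not the disk, is the bottleneck
```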

Review performance

You can review persistent disk performance metrics in Cloud Monitoring, Google Cloud's integrated monitoring solution.

To learn more, see Reviewing persistent disk performance metrics.

Optimize disk performance

To increase disk performance, start with the following steps:

  • Resize your persistent disks to increase the per-disk IOPS and throughput limits (see the sketch after this list). Persistent disks do not have any reserved, unusable capacity, so you can use the full disk without performance degradation. However, certain file systems and applications might perform worse as the disk becomes full, so consider increasing the size of your disk before it fills up.

  • Change the machine type and number of vCPUs on the instance to increase the per-instance IOPS and throughput limits.
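
As a hedged example, resizing a disk with the google-cloud-compute Python client might look like the following; the project, zone, and disk name are placeholders, and the same operation is available in the console and the gcloud CLI:

```python
from google.cloud import compute_v1

def grow_disk(project: str, zone: str, disk_name: str, new_size_gb: int) -> None:
    """Increase a zonal persistent disk's size (disks can grow, never shrink)."""
    client = compute_v1.DisksClient()
    request = compute_v1.DisksResizeRequest(size_gb=new_size_gb)
    op = client.resize(
        project=project,
        zone=zone,
        disk=disk_name,
        disks_resize_request_resource=request,
    )
    op.result()  # block until the resize operation completes

# Hypothetical values for illustration:
# grow_disk("my-project", "us-central1-a", "data-disk", 2_000)
```

After the disk itself grows, you still need to extend the partition and file system inside the guest OS so the new capacity is usable.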

After you ensure that any bottlenecks are not due to the disk size or machine type of the VM, your app and operating system might still need some tuning. See Optimizing persistent disk performance and Optimizing local SSD performance.

Other factors that affect performance

Network egress caps on write throughput

Your virtual machine (VM) instance has a network egress cap that depends on the machine type of the VM.

Compute Engine stores data on persistent disks with multiple parallel writes to ensure built-in redundancy. Also, each write request has some overhead that uses additional write bandwidth.

The maximum write traffic that a VM instance can issue is the network egress cap divided by a bandwidth multiplier that accounts for the replication and overhead.

The network egress caps are listed in the Maximum egress bandwidth (Gbps) column in the machine type tables for general purpose, compute-optimized, memory-optimized, and accelerator-optimized machine families.

The bandwidth multiplier is approximately 1.16x at full network utilization, meaning that 16% of bytes written are overhead. For regional disks, the bandwidth multiplier is approximately 2.32x to account for additional replication overhead.

In a situation where persistent disks compete with network egress bandwidth, 60% of the maximum network egress bandwidth, defined by the machine type, is allocated to persistent disk writes. The remaining 40% is available for all other network egress traffic. Refer to egress bandwidth for details about other network egress traffic.

The following example shows how to calculate the maximum persistent disk write bandwidth for an N1 VM instance. The bandwidth allocation is the portion (60%) of network egress bandwidth allocated to persistent disk writes. The two maximum write bandwidth columns show the egress cap and the bandwidth allocation, respectively, divided by the 1.16 overhead multiplier.

VM vCPU count | Network egress cap (MBps) | Bandwidth allocation (MBps) | Maximum write bandwidth (MBps) | Maximum write bandwidth at full network utilization (MBps)
1 | 250 | 150 | 216 | 129
2-7 | 1,250 | 750 | 1,078 | 647
8-15 | 2,000 | 1,200 | 1,724 | 1,034
16+ | 4,000 | 2,400 | 3,448 | 2,069

You can calculate the maximum persistent disk bandwidth using the following formulas:

N1 VM with 1 vCPU:

The network egress cap is 2 Gbps / 8 bits = 0.25 GB per second = 250 MB per second.

Persistent disk's bandwidth allocation at full network utilization is 250 MB per second * 0.6 = 150 MB per second.

Persistent disk's maximum write bandwidth with no network contention is:

  • Zonal disks: 250 MB per second / 1.16 ≈ 216 MB per second
  • Regional disks: 250 MB per second / 2.32 ≈ 108 MB per second

Persistent disk's maximum write bandwidth at full network utilization is:

  • Zonal disks: 150 MB per second / 1.16 ≈ 129 MB per second
  • Regional disks: 150 MB per second / 2.32 ≈ 65 MB per second
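
The same arithmetic in a short Python sketch; the multipliers and the 60% allocation are the figures quoted above:

```python
# Maximum PD write bandwidth derived from the network egress cap.
ZONAL_MULTIPLIER = 1.16     # replication + overhead, zonal disks
REGIONAL_MULTIPLIER = 2.32  # doubled for cross-zone replication
PD_SHARE = 0.6              # PD writes get 60% of egress under contention

def max_write_mb_s(egress_cap_mb_s: float, regional: bool = False,
                   full_network_utilization: bool = False) -> float:
    multiplier = REGIONAL_MULTIPLIER if regional else ZONAL_MULTIPLIER
    cap = egress_cap_mb_s * PD_SHARE if full_network_utilization else egress_cap_mb_s
    return cap / multiplier

print(round(max_write_mb_s(250)))                                 # ~216 (zonal, idle network)
print(round(max_write_mb_s(250, full_network_utilization=True)))  # ~129 (zonal, busy network)
print(round(max_write_mb_s(250, regional=True)))                  # ~108 (regional, idle network)
```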

Note that the network egress limits provide an upper bound on performance. Other factors may limit performance below this level. See the following sections for information on other constraints.

Simultaneous reads and writes

For standard persistent disks, simultaneous reads and writes share the same resources. When your instance uses more read throughput or IOPS, it can perform fewer writes; conversely, instances that use more write throughput or IOPS can perform fewer reads.

Persistent disks cannot simultaneously reach their maximum throughput and IOPS limits for both reads and writes.

Note that throughput = IOPS * I/O size. To take advantage of maximum throughput limits for simultaneous reads and writes on SSD persistent disks, use an I/O size such that read and write IOPS combined don't exceed the IOPS limit.
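
As a hedged sketch of that sizing check (the 60,000 IOPS cap matches the 32+ vCPU SSD PD column in the table below):

```python
# Check whether a mixed read/write workload fits under a shared IOPS cap.
IOPS_LIMIT = 60_000          # e.g. SSD PD on a 32+ vCPU VM
io_size = 16 * 1024          # 16 KB I/Os

read_iops, write_iops = 30_000, 30_000
assert read_iops + write_iops <= IOPS_LIMIT, "exceeds the shared IOPS limit"

# Throughput implied by that split: throughput = IOPS * I/O size
read_mb_s = read_iops * io_size / 1e6    # ~491 MB/s
write_mb_s = write_iops * io_size / 1e6  # ~491 MB/s
print(read_mb_s, write_mb_s)
```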

Instance IOPS limits for simultaneous reads and writes

Standard PD read | Standard PD write | SSD PD (8 vCPUs) read | SSD PD (8 vCPUs) write | SSD PD (32+ vCPUs) read | SSD PD (32+ vCPUs) write | SSD PD (64+ vCPUs) read | SSD PD (64+ vCPUs) write
7,500 | 0 | 15,000 | 0 | 60,000 | 0 | 100,000 | 0
5,625 | 3,750 | 11,250 | 3,750 | 45,000 | 15,000 | 75,000 | 25,000
3,750 | 7,500 | 7,500 | 7,500 | 30,000 | 30,000 | 50,000 | 50,000
1,875 | 11,250 | 3,750 | 11,250 | 15,000 | 45,000 | 25,000 | 75,000
0 | 15,000 | 0 | 15,000 | 0 | 60,000 | 0 | 100,000

Instance throughput limits (MB per second) for simultaneous reads and writes

Standard PD read | Standard PD write | SSD PD (6-14 vCPUs) read | SSD PD (6-14 vCPUs) write | SSD PD (16+ vCPUs) read | SSD PD (16+ vCPUs) write
1,200 | 0 | 800* | 800* | 1,200* | 1,200*
900 | 100 | 800* | 800* | 1,200* | 1,200*
600 | 200 | 800* | 800* | 1,200* | 1,200*
300 | 300 | 800* | 800* | 1,200* | 1,200*
0 | 400 | 800* | 800* | 1,200* | 1,200*

* For SSD persistent disks, the max read throughput and max write throughput are independent of each other, so these limits are constant.

The IOPS numbers in this table are based on an 8 KB I/O size. Other I/O sizes, such as 16 KB, might have different IOPS numbers but maintain the same read/write distribution.

Logical volume size

Persistent disks can be up to 64 TB in size, and you can create single logical volumes of up to 257 TB using logical volume management inside your VM. A larger volume size impacts performance in the following ways:

  • Not all local file systems work well at this scale. Common operations, such as mounting and file system checking, might take longer than expected.
  • Maximum persistent disk performance is achieved at smaller sizes. Disks take longer to fully read or write with this much storage on one VM. If your application supports it, consider using multiple VMs for greater total-system throughput.
  • Snapshotting large amounts of persistent disk might take longer than expected to complete and might provide an inconsistent view of your logical volume without careful coordination with your application.

Multiple disks attached to a single VM instance

Multiple disks of the same type

If you have multiple disks of the same type attached to a VM instance in the same mode (for example, read/write), the performance limits are the same as the limits of a single disk that has the combined size of those disks. If you use all the disks at 100%, the aggregate performance limit is split evenly among the disks regardless of relative disk size.

For example, suppose you have a 200 GB standard disk and a 1,000 GB standard disk. If you do not use the 1,000 GB disk, then the 200 GB disk can reach the performance limit of a 1,200 GB standard disk. If you use both disks at 100%, then each has the performance limit of a 600 GB standard persistent disk (1,200 GB / 2 disks = 600 GB disk).
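
A small sketch of that split for same-type disks in the same mode, with sizes in GB and the standard PD per-GB rate from the table above:

```python
# Several same-type disks behave like one disk of the combined size;
# under full utilization the aggregate limit is split evenly per disk.
disk_sizes_gb = [200, 1_000]
combined_gb = sum(disk_sizes_gb)                # behaves like one 1,200 GB disk
per_disk_share_gb = combined_gb / len(disk_sizes_gb)

STANDARD_READ_IOPS_PER_GB = 0.75                # zonal standard PD
print(combined_gb * STANDARD_READ_IOPS_PER_GB)        # aggregate: 900 read IOPS
print(per_disk_share_gb * STANDARD_READ_IOPS_PER_GB)  # each busy disk: 450 (a 600 GB disk's limit)
```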

Multiple disks of different types

If you have multiple disks of different types attached to a single VM, then the SSD per-VM limit determines the total performance limit for the VM. This total performance limit is shared between all disks attached to the VM.

For example, suppose you have one 5,000 GB standard disk and one 1,000 GB SSD disk attached to an N2 VM with one vCPU. The read IOPS limit for the standard disk is 3,000 and the read IOPS limit for the SSD disk is 15,000. Because the limit of the SSD disk determines the overall limit, the total read IOPS limit for your VM is 15,000. This limit is shared between all attached disks.
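
The mixed-type case from this example, sketched the same way with limits from the N2 tables above:

```python
# With multiple disk types attached, the larger (SSD) per-VM limit sets the
# total for the VM, and that total is shared across all attached disks.
standard_disk_read_cap = 3_000   # 5,000 GB pd-standard, capped by the VM
ssd_disk_read_cap = 15_000       # 1,000 GB pd-ssd, capped by the VM

vm_total_read_iops = max(standard_disk_read_cap, ssd_disk_read_cap)
print(vm_total_read_iops)        # 15,000, shared between both disks
```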

Low I/O queue depth

Persistent disks have higher latency than locally attached disks such as local SSDs because they are network-attached devices. They can provide very high IOPS and throughput, but you must make sure that sufficient I/O requests are done in parallel. The number of I/O requests done in parallel is referred to as the I/O queue depth. To ensure that you are issuing enough I/O requests in parallel, follow the recommendations to use a high I/O queue depth.
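
As a rough illustration of queue depth (not a benchmarking tool; for real measurements use a purpose-built I/O benchmark), the following sketch keeps many 4 KB reads in flight against a device or file path, which is hypothetical here:

```python
import os
import random
from concurrent.futures import ThreadPoolExecutor

PATH = "/dev/sdb"    # hypothetical attached persistent disk (or a large file)
BLOCK = 4096         # 4 KB random-read I/O size
QUEUE_DEPTH = 32     # number of I/O requests kept in flight
TOTAL_READS = 10_000

def read_at(offset: int) -> int:
    # Each worker opens its own descriptor so preads don't share file state.
    fd = os.open(PATH, os.O_RDONLY)
    try:
        return len(os.pread(fd, BLOCK, offset))
    finally:
        os.close(fd)

fd = os.open(PATH, os.O_RDONLY)
device_size = os.lseek(fd, 0, os.SEEK_END)
os.close(fd)
offsets = [random.randrange(0, device_size // BLOCK) * BLOCK
           for _ in range(TOTAL_READS)]

# With QUEUE_DEPTH workers, up to 32 reads are outstanding at once; a
# single-threaded loop (queue depth 1) would leave the disk idle between
# requests and never approach the IOPS limits above. Note that the page
# cache can absorb reads here; real benchmarks bypass it with direct I/O.
with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
    list(pool.map(read_at, offsets))
```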

What's next

  • Benchmark your persistent disks and local SSDs.
  • Optimize your persistent disk and local SSD performance.
  • Learn about persistent disk and local SSD pricing.
  • Learn how to review your project log entries using the Logs Explorer.


Which elastic performance option should you choose for the block volume?

Balanced: The default performance level for new and existing block and boot volumes. It provides a good balance between performance and cost savings for most workloads. With this option, you purchase 10 VPUs per GB per month.

How do I change the boot volume in Oracle Cloud?

Using the Console:

1. Open the navigation menu and click Compute. Under Compute, click Instances.
2. Click the instance that you want to reattach the boot volume to.
3. Under Resources, click Boot Volume.
4. Click the Actions menu, and then click Attach Boot Volume. Confirm when prompted.

What are boot volumes?

A boot volume is the portion of a hard drive that contains the operating system and its supporting file system.

What is the difference between boot volume and block volume?

Block volume: A detachable block storage device that allows you to dynamically expand the storage capacity of an instance. Boot volume: A detachable boot volume device that contains the image used to boot a Compute instance. See Boot Volumes for more information.