Demonstrating Linear Scalability of Cohesity Data Platform

June 27, 2017

Products, TechTalks

Many enterprise storage systems claim to scale out linearly, but in reality their performance hits a ceiling as more nodes are added. As a result, businesses are hesitant to buy scale-out systems, worried that as their enterprise grows, the system might not actually grow with it. Typically, the primary cause of this performance degradation is system bottlenecks that appear only at higher node counts.

Cohesity is a LIMITLESS scale-out system, built on web-scale principles pioneered over the past decade at the world’s largest web companies. Our goal is to show that Cohesity has no performance bottlenecks as the cluster grows ever larger.

Test Setup

Cohesity is a hyperconverged platform that consolidates all secondary data and its associated workflows, including backups, test/dev copies, files, objects, and analytics. The IO patterns for these workflows fall broadly into two buckets: large sequential reads/writes and random reads/writes.

  1. Large sequential RW: Data protection and analytics exhibit this IO pattern. To test scalability for these workflows, we use a sequential inline-dedup (IDD) read/write workload. The expectation is a linear increase in throughput as the cluster size scales.
  2. Random RW: Test/dev and file/object workflows exhibit this IO pattern. We use an IDD random read/write workload to test scalability for these uses. The expectation is a linear increase in IOPS as the cluster size scales.

We simulated the workloads using fio, writing four 2GB files per node, and scaled our cluster from 8 nodes to 256 nodes. We used a 1MB block size for sequential reads and writes and a 4KB block size for random reads and writes. In addition, we ran our performance tests on the Azure cloud using Cohesity Cloud Edition, which runs the same software as our on-premises Cohesity C2000 hyperconverged nodes or Cisco UCS nodes.
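For readers who want to reproduce a comparable load, the setup above can be sketched as an fio job file. This is an illustrative fragment, not our exact test harness: the directory path and job names are placeholders, and the per-node job would be pointed at a Cohesity-backed mount.

```ini
; Illustrative fio job file approximating the described workloads
; (assumed path /mnt/cohesity is a hypothetical mount point)
[global]
directory=/mnt/cohesity
nrfiles=4          ; four files per node
size=2G            ; 2GB per file
direct=1
group_reporting

[seq-write]        ; large sequential pattern (backup/analytics)
rw=write
bs=1M

[seq-read]
rw=read
bs=1M

[rand-rw]          ; random pattern (test/dev, file/object)
rw=randrw
bs=4k
```

Running `fio jobfile.fio` on each node and summing the reported MB/sec (sequential) or IOPS (random) across nodes gives the per-cluster-size data points plotted below.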

Test Results

The top two charts demonstrate that for the IDD sequential read/write workload, relative throughput (MB/sec) increases linearly with cluster size.

The bottom two charts show the relative scalability of random reads/writes. Adding Cohesity nodes increases IOPS linearly, so more random read/write workloads (e.g., more test/dev VMs) can be run on Cohesity as nodes are added.


As these scalability tests demonstrate, Cohesity offers limitless scalability in its distributed storage platform. Businesses can rest assured that Cohesity will scale with their growing storage demands without compromising performance.

  1. Ben Price | UC, Santa Barbara

    Our plans for cluster build-out with additional nodes make this essential to continued service expansion. We are excited for the upcoming ability to share our cluster administratively using multi-tenancy with everyone on campus later in August.

    June 28, 2017 at 2:13 pm
  2. Jake Martinez | Burris Logistics

    True test

    June 29, 2017 at 11:10 am
  3. Pinchas Zerbib | Meridian Capital Group


    June 30, 2017 at 10:20 am
  4. Eric Duncan | UT Medical Center

    That’s actually reassuring. As we continue to move storage to Cohesity, I am not worried about performance.

    July 4, 2017 at 7:51 am
  5. Eric Zuspan | MultiCare

    Quite true

    July 4, 2017 at 7:19 pm
  6. Ciro Petitto | Intesasanpaolo

    True Test

    July 7, 2017 at 7:59 am
  7. Binyomin Kolodny | Meridian Capital Group

    It always amazes me when the promises turn out to be true, and in Cohesity’s case it has.

    July 11, 2017 at 9:53 am
  8. Mike Franklin | ARIT, UCSB

    Just added another node!

    July 11, 2017 at 2:15 pm