Enterprise Cassandra Deployments

Successfully deploying Cassandra in the cloud takes a certain amount of skill and knowledge. In this blog I will share best practices for successful Cassandra deployments in public clouds such as Amazon Web Services (AWS) and Google Cloud Platform (GCP). I will also discuss how you can protect and recover the data in your Cassandra cluster when it is deployed in the public cloud.

Cassandra Deployments in the Public Cloud

There are two main ways to have successful enterprise Cassandra deployments in AWS:

  • Use a ‘managed’ Cassandra deployment. This is a reasonable approach; the only downside is the inability to customize data placement beyond what the vendor typically provides.
  • Manage the cluster yourself if you want full control over your Cassandra deployment and the ability to customize it to scale and perform according to your business needs.

Please refer to the documentation from DataStax or Apache Cassandra for detailed Cassandra deployment best practices. The following best practices are reiterated here to simplify deployment of a scalable Cassandra cluster in AWS.

Storage

Both AWS and GCP offer two storage options for their compute instances. For AWS, the options are Elastic Block Storage (EBS) and ephemeral storage. In GCP, the options are Persistent Disks and Local SSD. Each option has its own trade-offs; pick the one that best suits your deployment.

Persistent Storage

EBS volumes have performance characteristics similar to network storage such as a NAS or SAN. Because EBS storage is multitenant in nature, its performance can be inconsistent, which is a poor fit for a distributed, log-structured database such as Cassandra. If you want to ensure reliable performance from EBS volumes, use “EBS-optimized instances” with “provisioned IOPS.” Since IO traffic to EBS volumes goes over the instance network, EBS optimization provides consistent performance but does not guarantee increased performance. To get the most out of EBS volumes, use instances that support multiple volumes and separate the volumes used for the data and the commit log.

If you have to use EBS volumes, go with “EBS-optimized instances” with “provisioned IOPS,” and split the data and commit log onto different volumes.
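The data/commit-log split above boils down to two lines in cassandra.yaml. A minimal sketch, assuming two EBS volumes are already attached and mounted at /mnt/cassandra-data and /mnt/cassandra-commitlog (both mount points, and the package-default config path, are assumptions; adjust for your install):

```shell
# Assumed config path (Debian/Ubuntu package default); verify for your install.
CFG=/etc/cassandra/cassandra.yaml
if [ -f "$CFG" ]; then
  # Relocate the data directory entry onto the dedicated data volume...
  sudo sed -i 's|- /var/lib/cassandra/data|- /mnt/cassandra-data/data|' "$CFG"
  # ...and the commit log onto its own volume, keeping their IO separate.
  sudo sed -i 's|^commitlog_directory:.*|commitlog_directory: /mnt/cassandra-commitlog/commitlog|' "$CFG"
fi
```

A rolling restart of each node is required for the new directories to take effect.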

Ephemeral Storage

Ephemeral storage is a great option for downstream Cassandra environments such as test/dev and staging. Since ephemeral volumes are typically SSDs, there is no need to separate the data and commit log onto separate drives. To optimize performance, choose EC2 instances with multiple ephemeral storage drives and stripe them together into one logical volume, then direct both the data and commit log to that volume. Remember that when you stop the EC2 instance, the data in ephemeral storage is deleted.

Stripe all Ephemeral drives into a single volume to get maximum performance.
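One way to stripe the ephemeral drives is a RAID0 array with mdadm; a sketch follows. The device names are assumptions, so check `lsblk` on your instance type first, and note this destroys whatever is on those drives:

```shell
# Assumed ephemeral device names -- verify with `lsblk` before running.
DEVICES="/dev/nvme1n1 /dev/nvme2n1"
NUM=$(echo $DEVICES | wc -w)
if [ -b /dev/nvme1n1 ] && [ -b /dev/nvme2n1 ] && command -v mdadm >/dev/null 2>&1; then
  # Stripe (RAID0) all ephemeral drives into one logical device.
  sudo mdadm --create /dev/md0 --level=0 --raid-devices="$NUM" $DEVICES
  sudo mkfs.ext4 /dev/md0
  sudo mkdir -p /var/lib/cassandra
  # Data and commit log both live on the striped volume.
  sudo mount /dev/md0 /var/lib/cassandra
fi
```

Since the underlying drives are ephemeral, treat the whole stripe as disposable and rely on replication (and backups) for durability.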

Data Distribution

Cassandra uses a snitch to determine the topology of the cluster: the snitch identifies the data center and rack for each node. The replication strategy then uses this topology to determine the placement of data across the nodes in the cluster.

AWS

  1. EC2Snitch works well for single-region deployments.
  2. Ec2MultiRegionSnitch works well for clusters distributed across multiple AWS regions. Note that a multi-DC, active-active configuration makes your cluster resilient to region failures; for a multi-DC cluster, use LOCAL_QUORUM consistency to improve performance.
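A multi-DC keyspace plus local-quorum reads can be sketched as below. The keyspace name and data center names are assumptions (with Ec2MultiRegionSnitch, data center names default to the EC2 region):

```shell
# Assumed keyspace and DC names; with Ec2MultiRegionSnitch the DC names
# default to the EC2 region (e.g. us-east, us-west).
CQL="CREATE KEYSPACE IF NOT EXISTS app
     WITH replication = {'class': 'NetworkTopologyStrategy',
                         'us-east': 3, 'us-west': 3};"
if command -v cqlsh >/dev/null 2>&1; then
  cqlsh -e "$CQL"
  # LOCAL_QUORUM confines quorum traffic to the local DC; drivers
  # also expose this as a per-query consistency level.
  cqlsh -e "CONSISTENCY LOCAL_QUORUM; SELECT now() FROM system.local;"
fi
```

With three replicas per DC, LOCAL_QUORUM needs only two local replicas to answer, so cross-region latency stays off the read/write path.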

Google Cloud Platform (GCP)

  1. Use GoogleCloudSnitch for Cassandra deployments in GCP.
  2. GoogleCloudSnitch treats regions as Cassandra data centers and availability zones as racks.
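The snitch is a one-line setting in cassandra.yaml; a sketch follows, assuming the package-default config path (GoogleCloudSnitch shown, with Ec2Snitch or Ec2MultiRegionSnitch substituted on AWS):

```shell
# Assumed config path; GoogleCloudSnitch for GCP, Ec2Snitch or
# Ec2MultiRegionSnitch for AWS.
CFG=/etc/cassandra/cassandra.yaml
if [ -f "$CFG" ]; then
  sudo sed -i 's|^endpoint_snitch:.*|endpoint_snitch: GoogleCloudSnitch|' "$CFG"
  # A rolling restart is needed for the new snitch to take effect.
fi
```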

Other Features

Both AWS and GCP offer a series of operational capabilities that seem to make it easier to maintain high uptime on an Apache Cassandra cluster. These capabilities have to be chosen carefully depending on your deployment needs.

  1. Load Balancing: Network-level load balancing is offered by both AWS (Elastic Load Balancer) and GCP. It is not recommended to put a load balancer in front of Cassandra. It is better to use a client connector that can handle this; consider open source utilities such as Presto and Astyanax.
  2. Autoscaling: Both AWS and GCP offer autoscaling for compute instances. Although this functionality looks enticing, putting all C* nodes in one autoscaling group will not produce the desired result. Instead, create a separate autoscaling group of one node for every node in your cluster. This ensures that any node that goes down is automatically brought back up, that cluster data placement is not disturbed when nodes go down, and that replacement nodes coming up do not cause excessive data movement.
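The one-group-per-node pattern can be sketched with the AWS CLI as below. The node names, availability zones, and launch template name are assumptions for illustration:

```shell
# Assumed node names, AZs, and launch template; one ASG pinned to
# min=max=desired=1 per Cassandra node, so a dead node is replaced
# in place instead of reshuffling the ring.
NODES="cass-1:us-east-1a cass-2:us-east-1b cass-3:us-east-1c"
for n in $NODES; do
  NAME=${n%%:*}   # e.g. cass-1
  AZ=${n##*:}     # e.g. us-east-1a
  if command -v aws >/dev/null 2>&1; then
    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name "asg-$NAME" \
      --launch-template "LaunchTemplateName=cassandra-node" \
      --min-size 1 --max-size 1 --desired-capacity 1 \
      --availability-zones "$AZ"
  fi
done
```

Each group can never scale past one instance, so autoscaling acts purely as node replacement rather than elastic capacity.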

Scalable Backups and Reliable Point-in-Time Recovery

Backing up your data is a must-have for enterprise organizations. The impact of data loss is so high that no enterprise wants to risk relying on replication alone. Protecting data in a public cloud environment may seem daunting, but several options are available that make it easier than in an on-premises deployment. A simple option is to use the cloud provider's native snapshots:

Storage Snapshots

I have seen a few customers use snapshots of EBS volumes to protect the data in their cluster against logical and/or operational errors. This solution works, but with severe limitations. On top of the overhead of maintaining the snapshot scripts, there is a large overhead in storage costs: taking volume snapshots does not allow for incremental backups of your data, which rapidly bloats the amount of data stored in snapshots and drives up the cost of the backup solution.

Amazon S3 or Google Cloud Storage

Using an S3 bucket as the secondary storage enables enterprises to use low-cost, high-throughput storage while reducing the overall operating costs of storing backup data. Here are a few recommendations when using S3:

  1. S3 Region: Amazon S3 buckets are region-specific. A bucket provides better performance characteristics to other infrastructure in the same region. Consequently, it is important to ensure the S3 bucket used as the secondary storage is in the same region as the production database being protected and the Datos IO cluster.
  2. Storage Class and Lifecycle Policy: Amazon S3 allows users to create lifecycle policies on buckets. These policies dictate the performance and response-time characteristics of objects in the bucket. STANDARD_IA is intended for longer-lived and less frequently accessed data, and is not suitable as secondary storage. Additionally, do not turn on any lifecycle policies on the buckets used for the secondary storage. All buckets should be in the standard storage class.
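A basic snapshot-to-S3 flow can be sketched as below. The bucket name, region, keyspace, snapshot tag, and data path are all assumptions:

```shell
# Assumed bucket, region, keyspace, and data path -- adjust for your cluster.
BUCKET=s3://my-cassandra-backups
if command -v nodetool >/dev/null 2>&1; then
  # Take a consistent on-disk snapshot of the keyspace on this node.
  nodetool snapshot -t nightly my_keyspace
fi
if command -v aws >/dev/null 2>&1; then
  # Copy only the snapshot directories (they sit under each table's
  # snapshots/ subdirectory) into the same-region bucket.
  aws s3 sync /var/lib/cassandra/data/my_keyspace "$BUCKET"/my_keyspace \
    --exclude "*" --include "*/snapshots/nightly/*" --region us-east-1
fi
```

This has to run on every node (snapshots are per-node), which is exactly the orchestration burden a purpose-built backup product removes.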

Conclusion

As enterprises leverage the agility provided by public cloud environments, it’s important to understand the various facets of Cassandra deployments to ensure the efficient use of your money and time.
