
Scaling Percona XtraDB Cluster with ProxySQL in Kubernetes


In my previous post I looked at how to run Percona XtraDB Cluster under the Docker Swarm orchestration system, and today I want to review how we can do it in the more advanced Kubernetes environment.

There are already existing posts on this topic from Patrick Galbraith ( https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/mysql-galera ) and Raghavendra Prabhu ( https://github.com/ronin13/pxc-kubernetes ). For this post, I will show how to run as many nodes as I want, see what happens when we add or remove nodes dynamically, and handle incoming traffic with ProxySQL (which routes queries to one of the working nodes). I also want to see if we can reuse Kubernetes' ReplicationController infrastructure to scale the cluster to a given number of nodes.

These goals should be easy to accomplish using our existing Docker images for Percona XtraDB Cluster ( https://hub.docker.com/r/percona/percona-xtradb-cluster/ ), and I will again rely on a running discovery service (right now the images work only with etcd).

The process of setting up Kubernetes can be pretty involved (but it can be done; check out the Kubernetes documentation to see how: http://kubernetes.io/docs/getting-started-guides/ubuntu/ ). It is much more convenient to use a cloud that already supports it (Google Cloud, for example). I will use Microsoft Azure and follow this guide: http://kubernetes.io/docs/getting-started-guides/coreos/azure/ . Unfortunately, the scripts from the guide install a previous version of Kubernetes (1.1.2), which does not support ConfigMap. To compensate, I will duplicate the ENVIRONMENT variable definitions for the Percona XtraDB Cluster and ProxySQL pods. This can be done more cleanly in recent versions of Kubernetes.
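On Kubernetes 1.2 or later, the duplicated ENVIRONMENT variables could instead live in a single ConfigMap that both pod templates reference. A minimal sketch of what that would look like (the ConfigMap name is illustrative, not part of the original setup):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pxc-config   # hypothetical name
data:
  DISCOVERY_SERVICE: "172.18.0.4:4001"
  CLUSTER_NAME: "k8scluster2"
```

Each container would then pull the shared values in with `configMapKeyRef` instead of hardcoding them:

```yaml
env:
- name: CLUSTER_NAME
  valueFrom:
    configMapKeyRef:
      name: pxc-config
      key: CLUSTER_NAME
```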

After getting Kubernetes running, starting Percona XtraDB Cluster with ProxySQL is easy using the following pxc.yaml file (which you can also find with our Docker sources https://github.com/percona/percona-docker/tree/master/pxc-57/kubernetes ):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pxc-rc
  labels:
    app: pxc-app
spec:
  replicas: 3 # tells deployment to run N pods matching the template
  selector:
    app: pxc-app
  template: # create pods using pod definition in this template
    metadata:
      name: pxc
      labels:
        app: pxc-app
    spec:
      containers:
      - name: percona-xtradb-cluster
        image: perconalab/percona-xtradb-cluster:5.6test
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
        - containerPort: 4567
        - containerPort: 4568
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "Theistareyk"
        - name: DISCOVERY_SERVICE
          value: "172.18.0.4:4001"
        - name: CLUSTER_NAME
          value: "k8scluster2"
        - name: XTRABACKUP_PASSWORD
          value: "Theistare"
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        emptyDir: {}
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: proxysql-rc
  labels:
    app: proxysql-app
spec:
  replicas: 1 # tells deployment to run N pods matching the template
  selector:
    front: proxysql
  template: # create pods using pod definition in this template
    metadata:
      name: proxysql
      labels:
        app: pxc-app
        front: proxysql
    spec:
      containers:
      - name: proxysql
        image: perconalab/proxysql
        ports:
        - containerPort: 3306
        - containerPort: 6032
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "Theistareyk"
        - name: DISCOVERY_SERVICE
          value: "172.18.0.4:4001"
        - name: CLUSTER_NAME
          value: "k8scluster2"
        - name: MYSQL_PROXY_USER
          value: "proxyuser"
        - name: MYSQL_PROXY_PASSWORD
          value: "s3cret"
---
apiVersion: v1
kind: Service
metadata:
  name: pxc-service
  labels:
    app: pxc-app
spec:
  ports:
  # the port that this service should serve on
  - port: 3306
    targetPort: 3306
    name: "mysql"
  - port: 6032
    targetPort: 6032
    name: "proxyadm"
  # label keys and values that must match in order to receive traffic for this service
  selector:
    front: proxysql

Here is the command to start the cluster:

kubectl create -f pxc.yaml

The command will start three pods with Percona XtraDB Cluster and one pod with ProxySQL.

Percona XtraDB Cluster nodes will register themselves in the discovery service, and we will need to add them to ProxySQL (this can be done automatically with scripting; for now it is a manual task):
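The add_cluster_nodes.sh script ships with the ProxySQL image. If you wanted to script the registration yourself, the idea is to read the node IPs from etcd and feed INSERT statements to the ProxySQL admin interface on port 6032. A rough sketch, assuming the default admin/admin credentials and hostgroup 0 (the etcd key layout and hostgroup are assumptions, not taken from the image):

```shell
#!/bin/sh
# Turn a newline-separated list of node IPs into ProxySQL admin SQL.
# Hostgroup 0 is assumed here; adjust to your routing rules.
nodes_to_sql() {
    while read -r ip; do
        [ -n "$ip" ] || continue
        printf "INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '%s', 3306);\n" "$ip"
    done
    printf "LOAD MYSQL SERVERS TO RUNTIME;\nSAVE MYSQL SERVERS TO DISK;\n"
}

# Fetch node IPs from the discovery service and apply them (key layout is an assumption):
# curl -s http://172.18.0.4:4001/v2/keys/pxc-cluster/k8scluster2 \
#   | jq -r '.node.nodes[].key' | awk -F/ '{print $NF}' \
#   | nodes_to_sql \
#   | mysql -h 127.0.0.1 -P 6032 -u admin -padmin
```

The LOAD/SAVE statements at the end make the new server list active at runtime and persist it to ProxySQL's on-disk database.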

kubectl exec -it proxysql-rc-4e936 add_cluster_nodes.sh

Increasing the cluster size can be done with the scale command:

kubectl scale --replicas=6 -f pxc.yaml
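After scaling, you can confirm that the new nodes joined the cluster by checking the wsrep_cluster_size status variable through the ProxySQL endpoint (or any node). A small sketch; the helper just pulls the value out of the tab-separated output of the mysql client:

```shell
#!/bin/sh
# Extract the value from `SHOW STATUS LIKE 'wsrep_cluster_size'` output,
# which looks like:
#   Variable_name	Value
#   wsrep_cluster_size	6
cluster_size() {
    awk '$1 == "wsrep_cluster_size" {print $2}'
}

# Usage against the ProxySQL service (credentials from the pxc.yaml above):
# mysql -h 10.23.123.236 -P 3306 -u proxyuser -ps3cret \
#   -e "SHOW STATUS LIKE 'wsrep_cluster_size'" | cluster_size
```

With all six replicas joined, this should report 6.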

ProxySQL gives you a single connection point into the cluster. You can find its address this way:

kubectl describe -f pxc.yaml

Name:            pxc-service
Namespace:       default
Labels:          app=pxc-app
Selector:        front=proxysql
Type:            ClusterIP
IP:              10.23.123.236
Port:            mysql 3306/TCP
Endpoints:       <none>
Port:            proxyadm 6032/TCP
Endpoints:       <none>
SessionAffinity: None

It exposes the endpoint IP address 10.23.123.236 and two ports: 3306 for the MySQL connection and 6032 for the ProxySQL admin connection.

So you can see that scaling Percona XtraDB Cluster with ProxySQL in Kubernetes is pretty easy. In the next post, I want to run benchmarks in different Docker network environments.
