Now that you have a fully functioning Cassandra cluster, you can move on to launching Pithos, which will provide the S3 API and use Cassandra as the object store. Pithos is a daemon that “provides an S3-compatible frontend to a Cassandra cluster.” So if you run Pithos in your Kubernetes cluster and point it to your running Cassandra cluster, you can expose an S3-compatible interface.
To that end, I created a Docker image for Pithos, runseb/pithos, on Docker Hub. It's an automated build, so you can check out the Dockerfile there. The image contains the default configuration file; you will want to edit it to set your own access keys and bucket store definitions.
You will now launch Pithos as a Kubernetes replication controller and expose a service with an external load-balancer created on GCE. The Cassandra service that you launched allows Pithos to find Cassandra by using DNS resolution.
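For reference, the parts of the Pithos configuration that you would typically edit look roughly like the sketch below. The structure follows the upstream Pithos YAML format as I understand it; the tenant, region, and keyspace names are placeholders, and the Cassandra address assumes the service created earlier is named cassandra in the default namespace:
keystore:
  keys:
    AKIAIOSFODNN7EXAMPLE:                 # access key handed out to S3 clients
      tenant: test@example.com            # placeholder tenant name
      secret: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'

bucketstore:
  default-region: myregion                # placeholder region name
  cassandra:
    cluster: cassandra.default.svc.cluster.local   # Cassandra service DNS name
    keyspace: storage                     # placeholder keyspace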
However, you need to set up the proper database schema for the object store. This is done through a bootstrapping process. To do it, you need to run a nonrestarting pod that installs the Pithos schema in Cassandra. Use the YAML file from the example directory that you cloned earlier:
$ kubectl create -f ./pithos/pithos-bootstrap.yaml
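If you are curious what such a nonrestarting pod looks like, here is a minimal sketch. It is not the exact content of pithos-bootstrap.yaml; the image and the schema-install command shown are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: pithos-bootstrap
spec:
  restartPolicy: Never               # run once and do not restart on completion
  containers:
  - name: bootstrap
    image: runseb/pithos             # image from Docker Hub mentioned above
    command: ["pithos", "-a", "install-schema"]   # assumed schema-install action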
Wait for the bootstrap to complete (i.e., for the pod to reach the Succeeded state). Then launch the replication controller. For now, you will launch only one replica. Using a replication controller makes it easy to attach a service and expose it via a public IP address.
$ kubectl create -f ./pithos/pithos-rc.yaml
$ kubectl create -f ./pithos/spithos.yaml
$ kubectl get services --selector="name=pithos"
NAME LABELS SELECTOR IP(S) PORT(S)
pithos name=pithos name=pithos 10.19.251.29 8080/TCP
104.197.27.250
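The spithos.yaml service definition is what requests the external load-balancer from GCE. A minimal sketch of such a service, assuming the Pithos pods carry the label name=pithos and listen on port 8080, could look like this (the file in the repository may differ slightly):
apiVersion: v1
kind: Service
metadata:
  name: pithos
  labels:
    name: pithos
spec:
  type: LoadBalancer        # ask GCE for an external load-balancer IP
  selector:
    name: pithos            # send traffic to pods labeled name=pithos
  ports:
  - port: 8080              # port the Pithos daemon listens on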
Since Pithos will serve on port 8080 by default, make sure that you open the firewall for the public IP of the load-balancer (an example rule is shown below). Once the Pithos pod is in the Running state, you are done: you have built an S3-compatible object store backed by Cassandra, running in Docker containers managed by Kubernetes. Congratulations!
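On GCE, opening port 8080 can be done with a firewall rule similar to the following; the rule name is arbitrary:
$ gcloud compute firewall-rules create pithos-8080 --allow tcp:8080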
Discussion
The setup is interesting, but you need to be able to use it and confirm that it is indeed S3 compatible. To do this, you can try well-known S3 utilities like s3cmd or boto.
For example, start by installing s3cmd and creating a configuration file:
$ cat ~/.s3cfg
[default]
access_key = AKIAIOSFODNN7EXAMPLE
secret_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
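The access and secret keys must match the ones defined in the Pithos configuration. In addition, s3cmd needs to point at the Pithos endpoint rather than at AWS; assuming the external IP from the earlier listing and plain HTTP on port 8080, the extra settings would look roughly like this:
host_base = 104.197.27.250:8080
# point bucket requests at the same bare IP to force path-style access
host_bucket = 104.197.27.250:8080
use_https = False
You can then verify that the store answers S3 calls, for example by creating a bucket and listing it:
$ s3cmd mb s3://mybucket
$ s3cmd ls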