Expanding the PVC size of a StatefulSet on k8s 1.9 - amazon-ec2

I have a StatefulSet running Kafka. I need to expand the disk size, and I have tried without success to use the automatic resize feature of k8s 1.9.
Here : https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
I did activate the feature gate and admission plugin, and I think it works because I can successfully change the size of the PVC after the modification.
But nothing happened after I modified the size of the PVC from 50Gi to 250Gi.
The capacity did change everywhere in the PVC, but not on AWS: the EBS volume is still 50 GB, and a df -h in the pod still shows 50 GB.
Did I miss something? Do I have to manually resize on AWS?
Thank you.

That is an alpha feature which has some problems and limitations.
Try to find some information in these GitHub issues, which are related to your problem:
Support automatic resizing of volumes
[pvresize]Display of pvc capacity did not make corresponding changes when pv resized
Also, check this comment; it can be useful:
#discordianfish please try EBS PVC resize with 1.10. Currently the user experience of resizing volumes with file systems is not ideal. You will have to edit the pvc and then wait for FileSystemResizePending condition to appear on PVC and then delete and recreate the pod that was using the PVC. If there was no pod using the PVC, then once condition FileSystemResizePending appears on PVC then you will have to start a pod using it for file system resize to finish.
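In practice that flow looks roughly like the sketch below (the PVC and pod names data-kafka-0 / kafka-0 are placeholders, and the ExpandPersistentVolumes feature gate plus the PersistentVolumeClaimResize admission plugin must be enabled on the cluster):
# Bump the requested size on the PVC (e.g. 50Gi -> 250Gi)
$ kubectl patch pvc data-kafka-0 -p '{"spec":{"resources":{"requests":{"storage":"250Gi"}}}}'
# Wait until the FileSystemResizePending condition appears on the PVC
$ kubectl get pvc data-kafka-0 -o yaml | grep -A 3 conditions
# Delete the pod using the PVC; the StatefulSet recreates it and the
# file system resize completes when the volume is mounted again
$ kubectl delete pod kafka-0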

I made the feature work, but in a very, very dirty way (rough commands sketched below):
Modify the size of the PVC
Modify the size of the EBS manually
Force unmount the volume on AWS
The pod crashes and is rescheduled by the StatefulSet; when the pod is up again, the volume and partition have the correct size.
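Roughly, that sequence looks like this (a sketch only; the volume ID and the PVC/pod names are placeholders):
# 1. Bump the PVC size
$ kubectl patch pvc data-kafka-0 -p '{"spec":{"resources":{"requests":{"storage":"250Gi"}}}}'
# 2. Resize the EBS volume manually
$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 250
# 3. Force-detach the volume so the pod crashes and gets rescheduled by the StatefulSet
$ aws ec2 detach-volume --volume-id vol-0123456789abcdef0 --force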

Related

How to mount PVC on pod in OpenShift Online 3

I just ported my application to OpenShift Online 3 (from version 2), and now I'm struggling to understand how to manage persistent, "shared" data that is not wiped after each build.
After reading the documentation about Persistent Volume Claims, I created a new PVC inside my project, of type RWO, using the Web dashboard. At this point I tried to understand how to access this storage from inside each pod, or if I needed to do something to mount it, and I ended up doing this:
$ oc volume dc/myapp --add --type=persistentVolumeClaim --claim-name=pvcname --mount-path=/usr/share/data
After this, it looks like the new configuration was successfully registered:
$ oc volume dc --all
deploymentconfigs/myapp
pvc/pvcname (allocated 1GiB) as volume-jh1jf
mounted at /usr/share/data
I could also see the new /usr/share/data directory from inside the pods created by the new builds.
However, after making this change, all deployments started failing with this error:
Failed to attach volume "pvc-0b747c80-a687-11e7-9eb0-122631632f42" on node "ip-172-31-48-134.ec2.internal" with: Error attaching EBS volume "vol-0008c8127ff0f4617" to instance "i-00195cc4e1d31f8ce": VolumeInUse: vol-0008c8127ff0f4617 is already attached to an instance status code: 400, request id: 722f3797-f486-4739-ab4e-fe1826ae53af. The volume is currently attached to instance "i-089e2a60e525f447c"
from which it looks like my latest change had the effect of attaching the volume to a specific instance. But then how can I mount the volume to my pods so that it survives each build and deploy?
Because you are using an EBS volume type, you must set the deployment strategy on the deployment config to Recreate instead of Rolling. This is because an EBS volume can only be mounted on a single node in the cluster at a time. This means you cannot use a rolling deployment, nor scale your application above 1 replica, as both result in more than one instance and there is no guarantee they will be deployed to the same node.
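A minimal way to switch the strategy, assuming the deployment config is named myapp as in the question, is to patch it (the same change can be made with oc edit dc/myapp or in the Web dashboard):
$ oc patch dc/myapp -p '{"spec":{"strategy":{"type":"Recreate"}}}'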

Cannot get Fabric8 to fully launch in AWS using stackpoint

I have been trying to spin up a Kubernetes/Fabric8 installation on AWS using Stackpoint as described in this video: https://www.youtube.com/watch?v=lNRpGJTSMKA
My problem is that three of the apps won't start because no volumes are available, and I cannot see how to resolve those PV requests. For example, Gogs is reporting the following error:
Unable to mount volumes for pod "gogs-2568819805-bcw8e_default(03d618b9-7477-11e6-8c6b-0a945216fb91)": timeout expired waiting for volumes to attach/mount for pod "gogs-2568819805-bcw8e"/"default". list of unattached/unmounted volumes=[gogs-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "gogs-2568819805-bcw8e"/"default". list of unattached/unmounted volumes=[gogs-data]
I am pretty sure this is very simple, but I cannot see how to connect the dots here from the various K8s and Fabric8 docs. I can create a new EBS volume in AWS easily enough, but cannot see how to then update this running stack to attach it to these services. Any help would be greatly appreciated!
Sorry about that, what version of gofabric8 are you using? We're currently adding persistent volume support for the core platform apps, although the integration with stackpoint isn't there quite yet. Hopefully soon though.
For now you should be able to disable the PV claims by passing --pv=false during the deploy, i.e. gofabric8 deploy --pv=false. We'll look at using this as the default until the integration is there and we can leverage AWS persistent volumes.
We just shipped functionality that allows you to create and manage AWS volumes for Kubernetes. You get a volume, PV, and claim - just name the claim to be what is required by Fabric8. Eventually, you'll be able to use dynamic volume creation.
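If you want to wire an existing EBS volume up by hand in the meantime, a minimal sketch of a PV plus matching claim looks like this (the volume ID is a placeholder, and the claim name must match whatever the Fabric8 app expects, e.g. gogs-data from the error above):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gogs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder: your EBS volume ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gogs-data                     # must match the claim name the app requires
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi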

How Do I Make A Persistent Volume Accessible to Multiple Kubernetes Pods?

I've done quite a bit of research and have yet to find an answer to this. Here's what I'm trying to accomplish:
I have an ELK stack container running in a pod on a k8s cluster in GCE - the cluster also contains a PersistentVolume (format: ext4) and a PersistentVolumeClaim.
In order to scale the ELK stack to multiple pods/nodes and keep persistent data in ElasticSearch, I either need to have all pods write to the same PV (using the node/index structure of the ES file system), or have some volume logic to scale up/create these PVs/PVCs.
Currently what happens is if I spin up a second pod on the replication controller, it can't mount the PV.
So I'm wondering if I'm going about this the wrong way, and what is the best way to architect this solution to allow for persistent data in ES when my cluster/nodes autoscale.
Persistent Volumes have access semantics. On GCE, I'm assuming you are using a Persistent Disk, which can either be mounted as writable to a single pod or to multiple pods as read-only. If you want multi-writer semantics, you need to set up NFS or some other storage that lets you write from multiple pods.
In case you are interested in running NFS - https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/nfs/README.md
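As a rough sketch, an NFS-backed volume with multi-writer access could look like this (server address, export path, and sizes are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-shared-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany          # multiple pods may mount this read-write
  nfs:
    server: 10.0.0.2         # placeholder NFS server address
    path: /exports/elk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elk-shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi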
FYI: We are still working on supporting auto-provisioning of PVs as you scale your deployment. As of now it is a manual process.

High availability issue with rethinkdb cluster in kubernetes

I'm setting up a RethinkDB cluster inside Kubernetes, but it doesn't work as expected for the high-availability requirement, because when a pod goes down, Kubernetes creates another pod, which runs another container from the same image; the old mounted data (which is already persisted on the host disk) is erased, and the new pod joins the cluster as a brand-new instance. I'm running k8s on CoreOS v773.1.0 stable.
Please correct me if I'm wrong, but that way it seems impossible to set up a database cluster inside k8s.
Update: As documented here http://kubernetes.io/v1.0/docs/user-guide/pod-states.html#restartpolicy, if RestartPolicy: Always is set, the container will be restarted if it exits with a failure. Does "restart" mean it brings up the same container, or does it create another one? Or is it because I stopped the pod via the command kubectl stop po that it didn't restart the same container?
That's how Kubernetes works, and other solutions probably work the same way. When a machine dies, the container on it will be rescheduled to run on another machine, and that other machine has no state from the container. Even when it is the same machine, the container is created as a new one instead of restarting the exited container (with the data inside it).
To persist data, you need some kind of external storage (NFS, EBS, EFS, ...). In the case of k8s, you may want to look into this https://github.com/kubernetes/kubernetes/blob/master/docs/design/persistent-storage.md This GitHub issue also has a lot of information https://github.com/kubernetes/kubernetes/issues/6893
And indeed, that's the way to achieve HA in my opinion. Containers are all stateless; they don't hold anything inside them. Any configuration they need should be stored outside, in something like Consul or etcd. By separating things like this, it's easier to restart a container.
Try using PetSets http://kubernetes.io/docs/user-guide/petset/
That allows you to name your (pet) pods. If a pod is killed, then it will come back with the same name.
A summary of the PetSet feature is as follows:
Stable hostname
Stable domain name
Multiple pets of a similar type will be named with a "-n" suffix (rethink-0, rethink-1, ... rethink-n, for example)
Persistent volumes
Now apps can cluster/peer together
When a pet pod dies, a new one will be started and will assume all the same "state" (including disk) of the previous one.
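For illustration, a minimal sketch of a PetSet with a volume claim template for this case, assuming the alpha apps/v1alpha1 API of that release and a pre-existing headless service named rethinkdb (names, image tag, and sizes are placeholders):
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: rethinkdb
spec:
  serviceName: rethinkdb            # headless service assumed to exist
  replicas: 3
  template:
    metadata:
      labels:
        app: rethinkdb
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: rethinkdb
        image: rethinkdb:2.3         # placeholder image tag
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
Each pet then gets its own claim (data-rethinkdb-0, data-rethinkdb-1, ...) that is re-attached when the pod is recreated, so the on-disk state survives.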

How to set disk size for Kubernetes minions on AWS?

I successfully deployed Kubernetes on AWS using the "getting started on AWS EC2" guide (http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html), but the disk size of all the minions (Kubernetes hosts) is 8 GB. I would like to increase the disk size, but I haven't found a way to do it.
I can change the VM size by setting MINION_SIZE (e.g. export MINION_SIZE=m3.medium) prior to installing, but the disk size is still 8gb.
From the Kubernetes install instructions for other cloud providers, there's an option to set MINION_DISK_SIZE to control the disk size. I tried that with the AWS EC2 installation, and the variable is ignored.
I also poked around the config files, but I didn't see anything obvious.
Any suggestions on how to set the disk size for minions when installing Kubernetes on AWS ec2?
I recently stumbled upon the same issue. Have a look at BLOCK_DEVICE_MAPPINGS in kubernetes/cluster/aws/util.sh. You can modify it to have something more appropriate for an EBS-only minion.
For example:
[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":80}}]
AWS docs: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
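A rough sketch of the workflow, assuming you run the cluster scripts from a local checkout (the exact variable assignment differs between releases):
# Locate the mapping in the AWS cluster scripts
$ grep -n "BLOCK_DEVICE_MAPPINGS" kubernetes/cluster/aws/util.sh
# Edit it so the root EBS volume is created with, e.g., 80 GB:
# [{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":80}}]
# Then bring the cluster up as usual
$ kubernetes/cluster/kube-up.sh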
I have faced this very issue and tried the currently accepted answer, but it looks like Kubernetes is changing quite fast, which may make that answer outdated soon as well.
To date, I've tested the solution below, which may or may not become the definitive solution in the future:
There is a PR on Kubernetes' GitHub project that implements an easy way to ignore the SSD storage by setting KUBE_AWS_STORAGE=ebs before running kubernetes/cluster/kube-up.sh.
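In other words, something along these lines (this only works if your checkout includes that PR):
$ export KUBE_AWS_STORAGE=ebs
$ kubernetes/cluster/kube-up.sh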
Hope it is helpful!
