When I create a Python environment from the dashboard in Jelastic, go to the CLI, and run lsblk,
I can see that / is mounted. Is this a Python container or CentOS?
If it is a container, how can they mount /? In Kubernetes we cannot mount /.
If it is CentOS, why do I not have full permissions?
Jelastic runs entirely on containers; no VMs are used there for now. Inside a container it is possible to mount / if you have enough privileges for that, and this works for both Kubernetes and Jelastic. The Python image also runs inside a container. By default you connect over SSH as the jelastic user, which has limited permissions; that is why you cannot mount /. If you contact your hosting service provider, I believe they will be able to give you root access.
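A quick way to check which user you are connected as, and to confirm the missing privilege, is the following (a minimal sketch; exact output depends on the image):

id                    # a non-root uid/gid explains the limited permissions
whoami                # typically prints "jelastic" here
mount -o remount /    # as an unprivileged user this fails with "permission denied"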
I'm trying to run ElasticSearch in Azure Container Instances. I've created the container like this using the Azure CLI:
az container create --image elasticsearch:7.4.2 --name $containerGroupName -g $resourceGroup --ip-address public --dns-name-label <mydns> --memory 8 --cpu 2 --ports 9200
The container ends up in a waiting state. When I check the logs in the Azure portal, I see the following error:
ERROR: [2] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Apparently, this has something to do with virtual memory: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
But how do I change it inside the container? I cannot connect to it since the container is not running.
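For reference, on a regular Linux Docker host the linked docs fix this on the host itself, not inside the container:

sudo sysctl -w vm.max_map_count=262144

ACI does not expose the underlying host, so that route is not available here.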
Very late to the party, but hopefully this will help someone in the future. Elasticsearch, if deployed stand-alone, requires the environment variable discovery.type to be set. Since Azure does not allow variables with a . (dot) at the moment, you'll have to create a custom Docker image and host it, e.g. on Azure Container Registry or Docker Hub, ...
Content can be as little as this:
FROM elasticsearch:7.8.1
ENV discovery.type=single-node
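Building on that, a minimal sketch of publishing the image and pointing ACI at it (the registry name myregistry is an assumption; pulling from a private ACR also needs --registry-username/--registry-password on az container create):

docker build -t myregistry.azurecr.io/elasticsearch-single:7.8.1 .
az acr login --name myregistry
docker push myregistry.azurecr.io/elasticsearch-single:7.8.1
az container create --image myregistry.azurecr.io/elasticsearch-single:7.8.1 --name $containerGroupName -g $resourceGroup --ip-address public --dns-name-label <mydns> --memory 8 --cpu 2 --ports 9200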
I realize that ACI is not the best option for running Elasticsearch. I only need it temporarily for a proof of concept while I'm waiting for a stable Elasticsearch environment elsewhere.
Eventually I got it running by picking an older image of ElasticSearch:
az container create --image elasticsearch:5.6.14-alpine --name $containerGroupName -g $resourceGroup --ip-address public --dns-name-label <mydns> --memory 4 --cpu 2 --ports 9200
The solution to this is simply adding this line to the bound elasticsearch.yml config file:
discovery.type: single-node
For this to persist even when you restart the container, the config directory of the Elasticsearch container needs to be bound to an Azure file share directory, where you make the permanent changes.
Create an Azure file share and create an elasticsearch folder with a config folder inside it: https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-create-file-share
Deploy an elasticsearch container with the file share volume mounted, and bind /mnt/elasticsearch/config to the folder you created on the new file share (template tags: "mountPath": "/mnt/elasticsearch/config", "shareName": "elasticsearch/config"): https://docs.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
Before the container errors out and shuts itself down because of the max virtual memory error, copy the entire /usr/share/elasticsearch/config folder and its files to the file share folder. As soon as you get terminal access, run cp -R /usr/share/elasticsearch/config /mnt/elasticsearch/config
In your file share folder, you should now have a config directory with files and other folders created by the Elasticsearch startup process. One of these files is elasticsearch.yml. Open it, add the line discovery.type: single-node, and save it.
Finally, change the mounting and binding location so that Elasticsearch starts up and reads the now-modified configuration. Bind /usr/share/elasticsearch/config to the file share's /elasticsearch/config folder (template tags: "mountPath": "/usr/share/elasticsearch/config", "shareName": "elasticsearch/config"). More info here: https://www.elastic.co/guide/en/elasticsearch/reference/master/docker.html#docker-configuration-methods
Start the container with the new mount locations, and the max virtual memory error should be gone.
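As a CLI alternative to the template tags, az container create can mount the file share directly; a sketch, with the storage account name, key, and share name as placeholders:

az container create --image elasticsearch:7.4.2 --name $containerGroupName -g $resourceGroup \
  --ip-address public --dns-name-label <mydns> --memory 8 --cpu 2 --ports 9200 \
  --azure-file-volume-account-name <storageAccount> \
  --azure-file-volume-account-key <storageKey> \
  --azure-file-volume-share-name <shareName> \
  --azure-file-volume-mount-path /usr/share/elasticsearch/config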
Please only use this trick for proofs of concept or very low log volumes. Network storage like Azure Files is generally not suitable for Elasticsearch and can even crash it.
I have .NET Framework and .NET Core containers and I would like to run them in Kubernetes. I have Docker Desktop for Windows installed, with Kubernetes enabled alongside it. How can I run these Windows containers in Kubernetes?
This documentation specifies how to create a Windows node on Kubernetes, but it is very confusing: I am on a Windows machine, yet I see Linux-based commands in there (and no mention of which OS you need to run them all on). I am on a Windows 10 Pro machine. Is there a way to run these containers on Kubernetes?
When I try to create a Pod with Windows containers, it fails with the following error message: "Failed to pull image 'imagename'; rpc error: code = Unknown desc = image operating system 'windows' cannot be used on this platform"
Welcome to Stack Overflow, Srinath!
To my knowledge you can't run Windows containers on a local version of Kubernetes at this moment. When you enable the Kubernetes option in your Docker Desktop for Windows installation, the Kubernetes cluster simply runs inside a Linux VM (with its own Docker runtime, for Linux containers only) on the Hyper-V hypervisor.
The other solution for you is to use a managed version of Kubernetes with Windows nodes from any of the popular cloud providers. I think Azure is relatively easy to start with (if you don't have an Azure subscription, create a free trial account, valid for 12 months).
I would suggest using the older way to run Kubernetes on Azure, a service called Azure Container Service, aka ACS, for one reason: I have verified it works well with Windows containers, especially for testing purposes (I could not achieve the same with its successor, called AKS):
Run the following commands in Azure Cloud Shell, and your cluster will be ready to use in a few minutes.
az group create --name azEvalRG --location eastus
az acs create -g azEvalRG -n k8s-based-win -d k8s-based-win --windows --agent-count 1 -u azureuser --admin-password 'YourSecretPwd1234$' -t kubernetes --location eastus
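Once the cluster is up, fetch the credentials and check that the Windows agent node has registered; a sketch reusing the names from the commands above:

az acs kubernetes get-credentials -g azEvalRG -n k8s-based-win
kubectl get nodes -o wide    # the agent node should report a Windows OS image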
I have dockerized the Zeppelin application. Now I want to see what OS privileges Docker and Zeppelin each have.
I have tried some commands, as shown below, but they are not giving me the expected output.
docker service ls
docker service inspect --pretty redis
Also, what is the command to list all the services that could possibly be available in Docker?
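For context, docker service ls and docker service inspect only apply to Swarm services. To see what privileges a single container is running with, you would typically inspect the container itself; a sketch, where the container name zeppelin is an assumption:

docker ps                                                      # list running containers
docker inspect --format '{{.HostConfig.Privileged}}' zeppelin  # true only if started with --privileged
docker inspect --format '{{.HostConfig.CapAdd}}' zeppelin      # extra Linux capabilities granted
docker exec zeppelin id                                        # OS user the Zeppelin process runs as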
I am using Docker on Windows. Using Kitematic, I have created an Ubuntu container. This Ubuntu image has PostgreSQL installed on it.
I am wondering if there is any way to access the PostgreSQL configuration files in the container from the host (the Windows machine).
Where exactly does the container store its file system on the host machine?
I assume it would be part of an image file in VMDK format.
Please correct me if I'm wrong.
Wondering if there is any way to access the PostgreSQL configuration files in the container from the host (the Windows machine)
That is not how Docker would allow you to modify a file in a container.
For that, you should mount a host (Windows) folder when starting your container (docker run -v).
See "Mount a host directory as a data volume"
docker run -d -P --name web -v /c/Users/<myACcount>/src/webapp:/opt/webapp training/webapp python app.py
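If you only need to read a config file out of an already-running container, docker cp works too; a sketch, where the container name and the PostgreSQL config path are assumptions that vary by image and version:

docker cp my-ubuntu:/etc/postgresql/9.6/main/postgresql.conf .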
Issue 247 mentions ~/Library/Application Support/Kitematic for App data, and ~/Kitematic "for easy access to volume data".
I'm very new to Kubernetes and trying to conceptualize it, as well as set it up locally in order to try developing something on it.
There's a complication, though: I am running on a Windows machine.
Their "getting started" documentation on GitHub says you have to run Linux to use Kubernetes.
Since Docker runs on Windows, I was wondering whether it is possible to run a Kubernetes instance as a container in Docker on Windows and use it to manage the rest of the cluster in that same Docker instance.
From reading the setup instructions, it seems like Docker, Kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to:
Start Docker, boot the 'default' machine.
Create a Kubernetes container, configured to communicate with the existing Docker 'default' machine.
Use Kubernetes to manage the existing Docker.
Pipe dream? Wrongheaded foolishness? I see there are some options for running it in a Vagrant instance. Does that mean Docker, etcd, and Kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and boot2docker VMs to run anything Docker related.
There is no "Docker for Windows" (not yet).
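For reference, creating and targeting such a VM looks like this (a sketch using the VirtualBox driver):

docker-machine create --driver virtualbox default
eval $(docker-machine env default)    # point the docker CLI at the VM (bash syntax)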
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow the instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you bring up a containerized cluster.
If you're able to run Docker on Windows, it would probably work. I haven't tried it on Windows personally.