How to build a Cassandra cluster with Docker on a Windows machine?

I want to build a Cassandra cluster with Docker. The documentation already tells you how to do this, so that is not the problem I have.
However, I am currently using Docker on Windows 10, which obviously cannot execute the nested command in docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-cassandra)" cassandra:tag; this results in an empty seed list for the container.
How can I nest a command like this on Windows, or, if that is not possible, what is a workaround?

I managed to fix it thanks to a docker-compose.yml by Jason Giedymin. It should work in v1 as well as v2 of docker-compose. Done this way, you let Docker do the linking from the get-go and tell each Cassandra node about the other seeds via the environment variable the image already gives you.
The sleep 30 part is pretty smart as well, as it makes sure that the second container doesn't try to connect to a container that isn't fully up yet.
One thing I would recommend, though, is using external_links instead of links. That way other containers don't depend on all of the Cassandra containers being up in order to start and work, which would otherwise defeat the purpose of a distributed database.
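For illustration, a minimal sketch of that approach (this is not Jason's exact file; the service names, image tag, and sleep value are placeholders of mine):
version: '2'
services:
  cassandra-1:
    image: cassandra:3.11
    environment:
      - CASSANDRA_BROADCAST_ADDRESS=cassandra-1
  cassandra-2:
    image: cassandra:3.11
    environment:
      - CASSANDRA_BROADCAST_ADDRESS=cassandra-2
      # The official image reads CASSANDRA_SEEDS and writes it into cassandra.yaml.
      - CASSANDRA_SEEDS=cassandra-1
    depends_on:
      - cassandra-1
    # Give the seed node a head start before this one tries to join the ring.
    command: bash -c 'sleep 30; exec /docker-entrypoint.sh cassandra -f'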
I still don't know how to nest Windows cmd commands into each other, so I would still be thankful for some tips.
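Two untested sketches for the Windows side. PowerShell supports $(...) substitution inside double-quoted strings, so the original command should work there almost verbatim:
docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' some-cassandra)" cassandra:tag
Plain cmd.exe has no $(...), but FOR /F can capture a command's output (write %%i instead of %i inside a .bat file):
FOR /F %i IN ('docker inspect --format "{{ .NetworkSettings.IPAddress }}" some-cassandra') DO docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS=%i cassandra:tag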

Related

How to execute script by host after starting docker container

I have a docker-compose.yml file and I start a container with a DB via the
docker-compose up -d db command.
I need to execute a script on the host machine that, briefly speaking, exports a dump into the DB in the container.
So, now it looks like:
docker-compose up -d db
./script.sh
But I want to combine these two commands into one.
My question is "Is it possible?"
I found out that Docker Compose doesn't support this feature.
I know that I can create another script with these commands in it, but I want to leave only
docker-compose up -d db
UPD: I would like to mention that I am using the mcr.microsoft.com/mssql/server:2017-latest image.
Also, I have to say one more time that I need to execute the script on the host machine itself.
You can't use the Docker tools to execute commands on the host system. A general design point around Docker is that containers shouldn't be able to affect the host.
Nothing stops you from writing your own shell script that runs on the host and does the steps you need:
#!/bin/sh
# Start only the db service in the background.
docker-compose up -d db
# Block until the database is accepting connections, then run the import.
./wait-for.sh localhost 1433
./script.sh
(The wait-for.sh script is the same as described in the answers to Docker Compose wait for container X before starting Y that don't depend on Docker health checks.)
For your use case it may be possible to run the data importer in a separate container. A typical setup could look like this; note that the importer will run every time you run docker-compose up. You may want to actually build this into a separate image.
version: '3.8'
services:
  db: { same: as you have currently }
  importer:
    image: mcr.microsoft.com/mssql/server:2017-latest
    entrypoint: ./wait-for.sh db 1433 -- ./script.sh
    working_dir: /import
    volumes: ['.:/import']
The open-source database containers also generally support putting scripts in /docker-entrypoint-initdb.d that get executed the first time the container is launched, but the SQL Server image doesn't seem to support this; questions like How can I restore an SQL Server database when starting the Docker container? have a complicated setup to replicate this behavior.
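For comparison, with an image that does support that convention (the official postgres image, for instance), a bind mount is all it takes; the ./init-scripts path below is a placeholder of mine:
version: '3.8'
services:
  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      # *.sql and *.sh files in here run once, on first start with an empty data directory.
      - ./init-scripts:/docker-entrypoint-initdb.d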

Run a PowerShell script on Azure AKS nodes

I have a PowerShell script that I want to run on some Azure AKS nodes (running Windows) to deploy a security tool. The software vendor does not provide a DaemonSet for this. How would I get it done?
Thanks a million
Abdel
A similar question has been asked here. User philipwelz wrote:
Hey,
although there could be ways to do this, I would recommend that you don't. The reason is that your AKS setup should not allow executing scripts inside a container directly on the AKS nodes. This would imply a huge security issue IMO.
I suggest finding a way to execute your script directly on your nodes, for example with PowerShell remoting or any other way that suits you.
BR,
Philip
This user is right: you should avoid executing scripts on your AKS nodes. In your situation, if you want to deploy Prisma Cloud, you need to follow the official doc. You are right that the install scripts work only on Linux:
Install scripts work on Linux hosts only.
But for Windows and macOS hosts you have specific YAML files:
For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy it with kubectl, as described in the following procedure.
The entire procedure is described in detail in the document I have quoted. Pay attention to step 3 and step 4. As you can see, there is no need to run any PowerShell script:
STEP 3:
Generate a defender.yaml file, where:
The following command connects to Console (specified in [--address](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#)) as user <ADMIN> (specified in [--user](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#)), and generates a Defender DaemonSet YAML config file according to the configuration options passed to [twistcli](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#). The [--cluster-address](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#) option specifies the address Defender uses to connect to Console.
$ <PLATFORM>/twistcli defender export kubernetes \
--user <ADMIN_USER> \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
--cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME>
- <PLATFORM> can be linux, osx, or windows.
- <ADMIN_USER> is the name of a Prisma Cloud user with the System Admin role.
and then STEP 4:
kubectl create -f ./defender.yaml
I think that the above answer is not completely correct.
The twistcli command does not export a DaemonSet for Windows nodes. The PLATFORM option is for choosing the OS of the computer the command will run on.
After testing, I have come to the conclusion that there is no Docker image of Prisma Cloud for Windows Kubernetes nodes, as it is deployed as a service on the Windows OS, not as a container (as on Linux). Wrapping up: the DaemonSet does not work on Windows hosts.
I believe the only solution is this -> Windows
This is the PowerShell script that Wytrzymały Wiktor mentioned.
Unfortunately, this cannot be automated easily, as you have to deploy an Azure VM per AKS cluster (on the same network), RDP into the AKS Windows node, and run the script.
If anyone has another suggestion or solution, feel free to share.
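For what it's worth, once such a jump-box VM exists, the interactive RDP step could plausibly be replaced with PowerShell remoting, assuming WinRM is reachable on the node (the IP address, credentials, and script name below are hypothetical):
$cred = Get-Credential
$session = New-PSSession -ComputerName 10.240.0.35 -Credential $cred   # hypothetical Windows node IP
Copy-Item -Path .\deploy-security-tool.ps1 -Destination C:\ -ToSession $session
Invoke-Command -Session $session -ScriptBlock { & 'C:\deploy-security-tool.ps1' }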

Docker communication to hardware on host

I'm trying to learn some things about Docker. Right now I'm asking myself if a Docker container can communicate with something like a camera on the host.
Can anyone answer that, and explain how, please?
Since Docker 1.2.0, you can use the --device flag with docker run to access host devices.
It is used like this:
docker run ... --device=/dev/ttyUSB0 ...
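For the camera case specifically, a webcam typically shows up as a Video4Linux device on a Linux host, so something along these lines should expose it to the container (assuming /dev/video0 exists on your host):
docker run --device=/dev/video0:/dev/video0 ubuntu ls -l /dev/video0
Inside the container, any V4L2-capable tool can then read from /dev/video0 as usual.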

How do I run the Hetionet v1.0 docker container?

I'm trying to run the Hetionet v1.0 docker container mentioned in this SO post.
1. I've set up a DigitalOcean droplet with Docker.
2. I ran docker pull dhimmel/hetionet and it worked.
3. Now I run docker run dhimmel/hetionet and the following happens (and never returns to the interactive shell prompt).
If that completed successfully, I think the last thing I'm supposed to do is run sh ~/run-docker.sh. Furthermore, nothing is live at my droplet's ip_address:7474.
The error in the screenshot above looks a lot like it could be related to some redundant #Path("/") annotation buried in the Docker container, as described in this SO post's comment, but I'm not sure.
Is the output from running docker run dhimmel/hetionet supposed to hang my shell? I'm running a 2 GB Memory / 40 GB Disk Droplet on Ubuntu 16.04 with Docker 1.12.5.
Thanks for your interest in the Hetionet Docker.
The output in 3 is expected. It looks like a Docker container successfully launched, downloaded the Hetionet database, and launched the Neo4j server. I'll look into fixing the warnings, but they're not errors, as Neo4j is still launching.
For production, we use a more advanced Docker run command. Depending on your use case, you may want to use the development docker run command:
docker run \
--publish=7474:7474 \
--publish=7687:7687 \
--volume=$HOME/neo4j/hetionet-data:/data \
--volume=$HOME/neo4j/hetionet-logs:/var/lib/neo4j/logs \
dhimmel/hetionet
Both the production and development commands map ports. This makes the Neo4j server running inside your Docker container available at http://localhost:7474/. This is most likely what you want. If you're doing this on DigitalOcean, you would replace http://localhost with the IP address of your droplet.
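As a quick sanity check once the container is up (assuming curl is available on the droplet):
curl http://localhost:7474/
This should return a small JSON discovery document from Neo4j rather than a connection-refused error.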
For an interactive shell session in a dhimmel/hetionet container, you can use:
docker run --interactive --tty dhimmel/hetionet bash
However, that command does not launch the Neo4j server; it just lets you explore the image.
Does this clear things up?

Is it possible to run kubernetes as a docker container?

I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a complication, though: I am running on a Windows machine.
The kubernetes "getting started" documentation on GitHub says you have to run Linux to use it.
As docker runs on windows, I was wondering if it was possible to create a kubernetes instance as a container in windows docker and use it to manage the rest of the cluster in the same windows docker instance.
From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to:
1. Start Docker and boot the 'default' machine.
2. Create a kubernetes container, configured to communicate with the existing Docker 'default' machine.
3. Use kubernetes to manage the existing Docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a Vagrant instance. Does that mean docker, etcd, & kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and a boot2docker VM to run anything Docker related.
There is no "Docker for Windows" (not yet).
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu) instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized cluster.
If you're able to run Docker on Windows, it would probably work. I haven't tried it on Windows personally.
