I have a PowerShell script that I want to run on some Azure AKS nodes (running Windows) to deploy a security tool. The software vendor does not provide a DaemonSet for this. How would I get it done?
Thanks a million
Abdel
A similar question has been asked here. User philipwelz wrote:
Hey,
Although there could be ways to do this, I would recommend that you don't. The reason is that your AKS setup should not allow executing scripts inside a container directly on the AKS nodes. This would imply a huge security issue IMO.
I suggest finding a way to execute your script directly on your nodes, for example with PowerShell remoting or any other way that suits you.
BR,
Philip
This user is right: you should avoid executing scripts on your AKS nodes. In your situation, if you want to deploy Prisma Cloud, you need to follow the vendor's documentation quoted below. You are right that install scripts work only on Linux:
Install scripts work on Linux hosts only.
But for the Windows and Mac software you have specific YAML files:
For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy it with kubectl, as described in the following procedure.
The entire procedure is described in detail in the document I have quoted. Pay attention to step 3 and step 4. As you can see, there is no need to run any PowerShell script:
STEP 3:
Generate a defender.yaml file, where:
The following command connects to Console (specified in --address) as user <ADMIN> (specified in --user), and generates a Defender DaemonSet YAML config file according to the configuration options passed to twistcli. The --cluster-address option specifies the address Defender uses to connect to Console. (See https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html for details.)
$ <PLATFORM>/twistcli defender export kubernetes \
--user <ADMIN_USER> \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
--cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME>
- <PLATFORM> can be linux, osx, or windows.
- <ADMIN_USER> is the name of a Prisma Cloud user with the System Admin role.
and then STEP 4:
kubectl create -f ./defender.yaml
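Once the DaemonSet is created, you can verify the rollout with something like the following (the twistlock namespace is the documented default; adjust if you customized it):
kubectl get daemonset,pods -n twistlock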
I think that the above answer is not completely correct.
The twistcli command does not export a DaemonSet for Windows nodes. The <PLATFORM> option is for choosing the OS of the machine on which the twistcli command itself runs, not the OS of the target nodes.
After testing, I have concluded that there is no Docker image of Prisma Cloud Defender for Windows Kubernetes nodes; on Windows it is deployed as a service on the host OS, not as a container (as on Linux). Wrapping up, the DaemonSet does not work on Windows hosts.
I believe the only solution is this -> Windows
This is the PowerShell script that Wytrzymały Wiktor has mentioned.
Unfortunately this cannot be automated easily, as you have to deploy an Azure VM per AKS cluster (in the same network), then RDP to the AKS Windows node and run the script.
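One option worth testing to avoid manual RDP is the Azure CLI run-command feature against the VM scale set that backs the Windows node pool. A rough sketch (resource names and the script filename are placeholders you would substitute):
# Find the node resource group and the VMSS backing the Windows node pool
az aks show -g <RG> -n <CLUSTER> --query nodeResourceGroup -o tsv
az vmss list -g <NODE_RESOURCE_GROUP> -o table
# Run the PowerShell script on every instance of the Windows pool
for id in $(az vmss list-instances -g <NODE_RESOURCE_GROUP> -n <WINDOWS_VMSS> --query "[].instanceId" -o tsv); do
  az vmss run-command invoke -g <NODE_RESOURCE_GROUP> -n <WINDOWS_VMSS> \
    --instance-id $id --command-id RunPowerShellScript \
    --scripts @deploy-security-tool.ps1
done
Note that changes applied this way do not persist across node image upgrades or scale-out, so new nodes would need the script re-run.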
If anyone has another suggestion or solution, feel free to share.
I have got an assignment. The assignment is "Write a shell script to install and configure docker swarm (one master/leader and one node) and automate the process using Jenkins." I am new to this technology and finding it difficult to proceed. Can anyone explain the step-by-step process of how to proceed?
@Rajnish Kumar Singh, have you tried to check resources online? I understand you are very new to this technology, but googling some keywords like
what is docker swarm
what is jenkins
etc. would definitely help.
Having said that, basically you need to do the below set of steps to complete your assignment.
Prerequisites
2 or more Ubuntu 20.04 servers
(You can use any Linux distro like Ubuntu, Red Hat, etc., but make sure your install and execution commands change accordingly.
Here we need two nodes, mainly to configure the manager and worker node cluster.)
Eg :
manager --- 132.92.41.4
worker --- 132.92.41.5
You can create these nodes with any public cloud provider, e.g. as AWS EC2 instances or GCP VMs.
Next, you need to do the below set of steps:
Configure Hosts
Install Docker-ce
Docker Swarm Initialization
You can refer to this article for more info: https://www.howtoforge.com/tutorial/ubuntu-docker-swarm-cluster/
This completes the first part of your assignment.
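For reference, the core Swarm commands behind that last step look roughly like this (IPs follow the example above; the worker token is printed by the init command):
# On the manager (132.92.41.4):
sudo docker swarm init --advertise-addr 132.92.41.4
# On the worker (132.92.41.5), run the join command the manager printed:
sudo docker swarm join --token <WORKER_TOKEN> 132.92.41.4:2377
# Back on the manager, verify both nodes are listed:
sudo docker node ls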
Next, you can create one small shell script and include all those install and configuration commands in that script. Basically, a shell script is a collection of Linux commands. Instead of running each command separately, you run the script alone and all the setup is done for you.
You can create a small script using the touch command:
touch docker-swarm-install.sh
Set the proper permissions on the script to make it executable:
chmod +x docker-swarm-install.sh
Next, include in the script all the install and configuration commands which you used earlier to do the Docker Swarm setup (you can refer to the link shared above); a minimal sketch follows.
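A minimal sketch of such a script for the manager node (the install step uses Docker's convenience script; a worker variant would end with the join command shown earlier instead of init):
#!/bin/bash
set -e
# Install Docker CE via Docker's convenience script
curl -fsSL https://get.docker.com | sudo sh
sudo systemctl enable --now docker
# Initialize the swarm (manager node only)
sudo docker swarm init --advertise-addr 132.92.41.4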
Now, when your script is ready, you can configure this script in a Jenkins job; whenever the Jenkins job is run, the script will execute and the Docker Swarm cluster will be created.
You need a Jenkins server. Jenkins is open-source software; you can install it on any public cloud instance (AWS EC2).
Reference : https://devopsarticle.com/how-to-install-jenkins-on-aws-ec2-ubuntu-20-04/
Next, once the installation is completed, you need to configure a job in Jenkins.
Reference : https://www.toolsqa.com/jenkins/jenkins-build-jobs/
Add your 'docker-swarm-install.sh' as a build step in the created job.
Reference : https://faun.pub/jenkins-jobs-hands-on-for-the-different-use-cases-devops-b153efb483c7
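In a freestyle job this can be an 'Execute shell' build step whose body simply runs the script (assuming the job's workspace contains it, e.g. via an SCM checkout):
chmod +x docker-swarm-install.sh
./docker-swarm-install.sh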
If all the setup is successful, then when you run your Jenkins job, your Docker Swarm cluster will be created.
I have .NET Framework and .NET Core containers and I would like to run them in Kubernetes. I have Docker Desktop for Windows installed, with Kubernetes enabled. How can I run these Windows containers in Kubernetes?
This documentation specifies how to create a Windows node on Kubernetes, but it is very confusing, as I am on a Windows machine and I see Linux-based commands in there (and no mention of which OS you need to run all of those on). I am on a Windows 10 Pro machine. Is there a way to run these containers on Kubernetes?
When I try to create a Pod with Windows containers, it fails with the following error message: "Failed to pull image 'imagename'; rpc error: code = Unknown desc = image operating system 'windows' cannot be used on this platform"
Welcome to StackOverflow, Srinath.
To my knowledge you can't run Windows containers on a local version of Kubernetes at this moment. When you enable the Kubernetes option in your Docker Desktop for Windows installation, the Kubernetes cluster simply runs inside a Linux VM (with its own Docker runtime for Linux containers only) on the Hyper-V hypervisor.
The other solution for you is to use, for instance, a managed version of Kubernetes with Windows nodes from one of the popular cloud providers. I think Azure is relatively easy to start with (if you don't have an Azure subscription, create a free trial account, valid for 12 months).
I would suggest using an older way to run Kubernetes on Azure, a service called Azure Container Service (ACS), for one reason: I have verified it to work well with Windows containers, especially for testing purposes (I could not achieve the same with its successor, AKS).
Run the following commands in Azure Cloud Shell, and your cluster will be ready to use in a few minutes.
az group create --name azEvalRG --location eastus
az acs create -g azEvalRG -n k8s-based-win -d k8s-based-win --windows --agent-count 1 -u azureuser --admin-password 'YourSecretPwd1234$' -t kubernetes --location eastus
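Once it finishes, fetching credentials and checking the nodes should look roughly like this (ACS has since been deprecated, so flags may differ on current CLI versions):
az acs kubernetes get-credentials -g azEvalRG -n k8s-based-win
kubectl get nodes -o wide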
I want to be able to either run a Windows Container as a domain user
Example (no idea on how to run as a different user)
docker run -it microsoft/nanoserver powershell
Or, alternatively, being able to run a PowerShell script in the container as a domain user. I would have to pass -e to docker run .. but that is OK.
The reason for this is to run something like the following (where the application uses domain resources like SQL and file shares):
dotnet app.dll
The answer to your question eventually found its way to the container docs and is brand new.
Please refer to this link until it is published on the MSDN container site.
https://github.com/Microsoft/Virtualization-Documentation/blob/live/virtualization/windowscontainers/manage-containers/manage-serviceaccounts.md
Edit: the live link has moved again:
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
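In short, the linked docs describe group Managed Service Accounts (gMSA): you generate a credential spec JSON on a domain-joined host and pass it to the container, which can then access domain resources as that account. A rough sketch (the account name WebApp01 is a placeholder):
# On a domain-joined host, generate the credential spec using the
# CredentialSpec PowerShell module from the docs: New-CredentialSpec -AccountName WebApp01
# Then start the container with that credential spec
# (on older Windows builds the container hostname must match the gMSA name):
docker run -it --hostname WebApp01 --security-opt "credentialspec=file://WebApp01.json" microsoft/nanoserver powershell
Processes in the container that run as Local System or Network Service then authenticate to SQL and file shares as the gMSA, which covers the scenario you describe.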
I am looking for best practices for front-end development on OSX with Docker, and I have found a number of projects on GitHub. Here they are:
docker-osx-dev
boot2docker-xhyve
coreos-xhyve
docker-unison
hodor
The fact is I need two-way syncing of files between the host system and the virtual container via a mounted (synced) folder, and IO performance should be close to native. Therefore I don't consider shared-folder filesystems like vboxsf and vmhgfs. I also need some build tools (gulp etc.) with a working watcher inside the shared folder.
What do you think about xhyve (with NFS) instead of VirtualBox? Has anyone tried Unison, and what performance does Docker provide with it?
Finally, I have a special task: I want to run app.js via Node.js using a host-to-container ENV, if that is possible. In other words, I have to add an ENV variable for the Node.js PATH (within the virtual container) to my ~/.bash_profile. Is there any chance to pass NODE_PATH through from host to container at all?
Thanks.
I'm not sure if "best practice" here is asking for opinions (which is against SO policy); note that this also heavily depends on your toolchain.
I'm not a fan of boot2docker as it works to date (although it may improve, and it may be the best approach in the long term, as it is the official approach maintained by the Docker team).
EDIT: boot2docker was discontinued and replaced by Docker Machine which does pretty much the same thing but in a more generic way, allowing you to manage Docker daemons locally, in LAN or in the cloud.
As for me, I'm on Windows, but I face the same (even more) difficulties as OSX devs. As I'm using Hyper-V, boot2docker (VirtualBox) can't run, so I have to roll my own. Also, the last time I tried boot2docker it ran Tiny Core Linux, which is another Linux distribution I'd have to learn while my focus is CoreOS in the cloud, so I'd rather just focus on CoreOS.
The target for setting up your dev is as follows:
Have SSH access with mounting rights to a Docker host (either in a VM or on the LAN): this is CoreOS on Hyper-V for me.
Have a native docker client & export DOCKER_HOST=<ip or hostname here>
Mount a /mnt/from/host working directory into your Docker host for live reload: this works through mount.cifs on CoreOS with a systemd unit for me.
Make a dev.Dockerfile for your dev requirements: if you're a Node developer, start from the node image, npm install gulp/browserify/.. whatever you need as a base image for your projects, and docker build -f dev.Dockerfile -t my_dev_container . (see the sketch below)
docker run -it -v /mnt/from/host/:/src/app/ my_dev_container
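A minimal sketch of that dev.Dockerfile for a Node project (package choices are just examples):
cat > dev.Dockerfile <<'EOF'
# Base image with the runtime your project needs
FROM node:latest
# Global build tooling for the watcher/build steps (examples only)
RUN npm install -g gulp browserify
# Source is live-mounted here by the docker run command above
WORKDIR /src/app
CMD ["bash"]
EOF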
You are now in a terminal with a fully isolated environment which can be put under source control & replicated between project members and has full live reload abilities.
Drawbacks: if you rely on a REPL or IntelliSense from your IDE, you'll have to have an IDE that can use the remote server, or you have to run your IDE within the dev container (Cloud9, or use an X server).
Of course if you live in a terminal and are fluent in vim, you are good to go.
I am currently trying to deploy SmartFoxServer 2X on EC2 using dotCloud. I have been able to detect the private IP of the Amazon web instance, and using the dotCloud tools I have been able to determine the correct port. However, I have difficulty installing the server proper via the command line so that I can log into it using the AdminTool.
My postinstall is fairly straightforward:
./SFS2X/sfs2x-service start-launchd
I find that on 'dotcloud push' there is a fair amount of promising output in my Cygwin terminal, but the push hangs after saying that the sfs2x-service has been launched correctly, until it times out.
Consequently, my question is: has anyone found a way to install SFS2X on EC2 via dotCloud successfully? I managed to have partial success with SFS Pro, with a complete push to dotCloud, by calling ./jre/bin/java -jar installer.jar in my postinstall. Do I need to do extra legwork and build an installer JAR for SFS2X? What would be the best way to do this?
I do understand that there is a standard approach to deployment with SFS2X using RightScale on EC2, however I am interested in deployment using the dotcloud platform.
Thanks in advance.
The reason it is hanging is that you are trying to start your process in the postinstall, and this is not the correct place to do that. The postinstall script is supposed to finish; if it doesn't, the deployment will time out and then get cancelled.
Once the postinstall script is finished, it will then finish the rest of your deployment.
See this page for more information about dotCloud postinstall script:
http://docs.dotcloud.com/0.9/guides/hooks/#post-install
Pay attention to this warning at the end.
Warning:
If your post-install script returns an error (non-zero exit code), or if it runs for more than 10 minutes, the platform will consider that your build has failed, and the new version of your code will not be deployed.
Instead of putting this in the postinstall script, you should add it as a background process, so that it starts up once the deployment process is complete.
See this page for more information on adding background processes to dotCloud services:
http://docs.dotcloud.com/0.9/guides/daemons/
TL;DR: You need to create a supervisord.conf file, and add it to the root of your project, and add your service to that.
Example (you will need to change it to fit your situation):
[program:smartfoxserver]
command = /home/dotcloud/current/SFS2X/sfs2x-service start-launchd
Also, make sure you have the correct dotCloud service specified in your dotcloud.yml in order to have the correct binary and libraries installed for your smartfoxserver application; a minimal sketch follows.
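For example, a dotcloud.yml along these lines (the service name is arbitrary and the type is an assumption; pick the dotCloud service type that bundles the Java runtime SFS2X needs):
smartfox:
  type: java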