Windows containers 2016: run PowerShell as a domain user - windows

I want to be able to run a Windows container as a domain user.
Example (no idea how to run this as a different user):
docker run -it microsoft/nanoserver powershell
Alternatively, I could run a PowerShell script in the container as a domain user; I would have to pass -e to docker run, but that is OK.
The reason for this is to run something like the following (the application uses domain resources such as SQL Server and file shares):
dotnet app.dll

The answer to your question eventually found its way into the container docs and is brand new.
Please refer to this link until it is published on the MSDN container site:
https://github.com/Microsoft/Virtualization-Documentation/blob/live/virtualization/windowscontainers/manage-containers/manage-serviceaccounts.md
--- edit: ---
The live link has moved again:
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
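In short, the documented mechanism is a group Managed Service Account (gMSA) plus a credential spec. A minimal sketch based on that doc, assuming a gMSA named WebApp01 already exists in the domain and the container host is domain-joined (names are illustrative):
# On the domain-joined container host, in PowerShell:
Install-Module CredentialSpec
New-CredentialSpec -AccountName WebApp01
# Run the container with the generated credential spec; processes running as
# Local System or Network Service inside the container authenticate as the gMSA:
docker run --security-opt "credentialspec=file://WebApp01.json" -it microsoft/nanoserver powershell
The application inside (e.g. dotnet app.dll) can then reach domain resources such as SQL Server and file shares without credentials baked into the image.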

Related

Run a PowerShell script on Azure AKS nodes

I have a PowerShell script that I want to run on some Azure AKS nodes (running Windows) to deploy a security tool. There is no DaemonSet for this from the software vendor. How would I get it done?
Thanks a million
Abdel
A similar question has been asked here. User philipwelz has written:
Hey,
although there could be ways to do this, I would recommend that you don't. The reason is that your AKS setup should not allow executing scripts inside a container directly on the AKS nodes. This would imply a huge security issue, IMO.
I suggest finding a way to execute your script directly on your nodes, for example with PowerShell remoting or any other way that suits you.
BR,
Philip
This user is right. You should avoid executing scripts on your AKS nodes. In your situation, if you want to deploy Prisma Cloud, you need to follow this doc. You are right that the install scripts work only on Linux:
Install scripts work on Linux hosts only.
But for Windows and macOS, there are specific YAML files:
For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy it with kubectl, as described in the following procedure.
The entire procedure is described in detail in the quoted document. Pay attention to step 3 and step 4. As you can see, there is no need to run any PowerShell script:
STEP 3:
Generate a defender.yaml file, where:
The following command connects to Console (specified in --address) as user <ADMIN> (specified in --user), and generates a Defender DaemonSet YAML config file according to the configuration options passed to twistcli. The --cluster-address option specifies the address Defender uses to connect to Console.
$ <PLATFORM>/twistcli defender export kubernetes \
--user <ADMIN_USER> \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
--cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME>
- <PLATFORM> can be linux, osx, or windows.
- <ADMIN_USER> is the name of a Prisma Cloud user with the System Admin role.
and then STEP 4:
kubectl create -f ./defender.yaml
I think that the above answer is not completely correct.
The twistcli command does not export a DaemonSet for Windows nodes. The "PLATFORM" option selects the OS of the computer the command will run on.
After testing, I have concluded that there is no Docker image of Prisma Cloud Defender for Windows Kubernetes nodes: on Windows it is deployed as a service on the host OS, not as a container (as on Linux). Wrapping up, the DaemonSet does not work on Windows hosts.
I believe the only solution is this -> Windows
This is the PowerShell script that Wytrzymały Wiktor mentioned.
Unfortunately, this cannot be automated easily, as you have to deploy an Azure VM per AKS cluster (on the same network), RDP into the AKS Windows node, and run the script.
If anyone has another suggestion or solution, feel free to share.
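For completeness, one possible alternative on newer clusters: a Windows HostProcess DaemonSet can run a PowerShell script directly on each Windows node without RDP. A sketch only, assuming your Kubernetes/AKS version supports HostProcess containers and assuming a hypothetical image that contains the install.ps1 script (names are illustrative):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-script
spec:
  selector:
    matchLabels:
      app: node-script
  template:
    metadata:
      labels:
        app: node-script
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      hostNetwork: true                  # required for HostProcess pods
      securityContext:
        windowsOptions:
          hostProcess: true              # run directly on the node, not in a sandbox
          runAsUserName: "NT AUTHORITY\\SYSTEM"
      containers:
      - name: script
        image: yourregistry/node-script:latest   # hypothetical image containing install.ps1
        command:
        - powershell.exe
        - -Command
        - '& "$env:CONTAINER_SANDBOX_MOUNT_POINT\install.ps1"'
Note that the security caveat from the accepted answer still applies: a HostProcess pod has full access to the node.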

Permissions to run a Docker container on a self-hosted Windows Azure DevOps agent

In an Azure DevOps build pipeline running on a self-hosted Windows agent, I am trying to execute a tool that runs a Docker container.
Unfortunately, I get this error:
Failed to start: failed to create container: Error response from daemon: CreateFile c:\Users\BUILDAGENT\.aerokube\selenoid: Access is denied.
The build agent is configured with its own local Windows user, "BUILDAGENT", so it has permissions on the C:\Users\BUILDAGENT\ folder.
Looking at the process manager, I see that, except for com.docker.service, the Docker processes are running as the user that launched Docker Desktop (my coworker).
If I restart Windows and relaunch Docker myself, the settings selected by my coworker ("Disk Image Location", for instance) are not restored...
Is there a way to make Docker run as a daemon on startup under a specific user (a service or system user, not mine or my coworker's)?
Once this is done, I guess I just have to give that specific user permissions on the C:\Users\BUILDAGENT\ folder to solve my issue, right?
Update:
I added my BUILDAGENT user to the docker-users group, and that solves the permission issue, but I would still like to run Docker as a service instead of logging in as my local user to launch it with its GUI...
but I would still like to run Docker as a service instead of logging in as my local user to launch it with its GUI
You could try creating a scheduled task that runs Docker under that specific user when the PC starts.
Please check this thread, How to create an automated task using Task Scheduler on Windows 10, for more details.
That way, Docker will start automatically every time you start your computer.
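For example, a PowerShell sketch of such a task (the Docker Desktop path is the default install location; the BUILDAGENT account and password placeholder are illustrative):
$action  = New-ScheduledTaskAction -Execute 'C:\Program Files\Docker\Docker\Docker Desktop.exe'
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName 'Start Docker Desktop' -Action $action -Trigger $trigger -User 'BUILDAGENT' -Password '<password>' -RunLevel Highest
Keep in mind Docker Desktop is a GUI application; only com.docker.service is a real Windows service, so this is a workaround rather than a true service install.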

WebLogic in Docker: how to start managed servers automatically

I am learning Docker and creating an image for an Oracle WebLogic 12.2.1.4 server.
My image is ready and working fine. It contains
an admin server
two managed servers
When I run my image with docker run -d -p 7001:7001 --name WL oracle/weblogic-12.2.1.4.0:1.0, the admin server starts automatically, because I added the following line at the end of my Dockerfile:
CMD /u01/oracle/user_projects/domains/$DOMAIN_NAME/startWebLogic.sh
But I have to start the managed servers manually: I need to log in to the container and start them by hand:
docker exec -it WL /bin/bash
./startManagedWebLogic.sh MANAGED_SERVER_1 http://localhost:7001 &
./startManagedWebLogic.sh MANAGED_SERVER_2 http://localhost:7001 &
This is not what I want. I want the managed servers to start automatically after the admin server is up and running.
I was thinking about creating a new bash script, copying it into the image, and using it to boot up the admin and managed servers. Like this:
start-wls-domain.sh
#!/bin/bash
/u01/oracle/user_projects/domains/$DOMAIN_NAME/startWebLogic.sh &
# there are more sophisticated ways to check the status of the admin server, but this is okay for a test
sleep 60
./startManagedWebLogic.sh MANAGED_SERVER_1 http://localhost:7001 &
./startManagedWebLogic.sh MANAGED_SERVER_2 http://localhost:7001 &
This script can be called from the Dockerfile with the CMD instruction.
But with this solution, I lose the ability to see the output in the default Docker log; docker logs WL -f will display nothing.
Another issue with this bash-script solution: when the script finishes, the container stops running. Do I need an infinite loop at the end of the script?
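(A common shell answer to that last question, sketched with the same paths as the script above and untested here: end the script with wait instead of an infinite loop. PID 1 then blocks on its children, and since background jobs inherit stdout, docker logs should in principle still show their output.)
#!/bin/bash
# start-wls-domain.sh (sketch)
/u01/oracle/user_projects/domains/$DOMAIN_NAME/startWebLogic.sh &
sleep 60  # crude readiness wait; polling the admin server port would be more robust
./startManagedWebLogic.sh MANAGED_SERVER_1 http://localhost:7001 &
./startManagedWebLogic.sh MANAGED_SERVER_2 http://localhost:7001 &
wait      # block while the servers run, keeping the container alive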
If possible I would like to have a solution without start-wls-domain.sh.
What is the best and easiest way to start WebLogic managed servers automatically within a Docker container?
I followed the suggestions and ran the different servers in different containers. That way I was able to start the servers properly.
I published the solution on GitHub, here.

EC2 user-data not starting my application

I am using the user data of my EC2 instances to boot up my auto-scaled instances and run the application, a Node.js application.
But it is not working properly. I have debugged it and checked the instance's console output, which says:
pm2 command not found
After a lot of reading and investigating, I found that the command is not on root's PATH.
When the EC2 user data runs, the PATH is
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
After SSH-ing in as ec2-user, it is
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
After sudo su, it is
/root/.nvm/versions/node/v10.15.3/bin:/sbin:/bin:/usr/sbin:/usr/bin
The command is found only with the last PATH.
So what is the right way (or script) to run the command as root during instance launch from user data?
Though starting your application with user data is not recommended at all: as per the AWS documentation, there is no assurance that the instance will come up only after successful execution of the user data. Even if the user data fails, the instance will still spin up.
For your problem, I assume that if you give the complete absolute path of the binary, it will work:
/root/.nvm/versions/node/v10.15.3/bin/pm2
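For example, a user-data sketch (the application path is illustrative):
#!/bin/bash
# user data runs as root with a minimal PATH, so call pm2 by its full path
/root/.nvm/versions/node/v10.15.3/bin/pm2 start /home/ec2-user/app/app.js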
A better solution for this approach is to create a service file for your application startup and start the application with systemd.
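A minimal sketch of such a unit, reusing the nvm path from the question (the app path and service name are illustrative; save as /etc/systemd/system/myapp.service, then run systemctl enable --now myapp):
[Unit]
Description=My Node.js application
After=network.target

[Service]
Type=simple
# node/pm2 were installed via nvm under /root, so this unit runs as root by default
ExecStart=/root/.nvm/versions/node/v10.15.3/bin/node /home/ec2-user/app/app.js
Restart=on-failure

[Install]
WantedBy=multi-user.target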

Trouble running a build over the Web interface

I set up a CI server for Xamarin.Forms using TeamCity on a Mac mini. When I run the build command from the terminal as root, it builds successfully, but when I try to trigger a build from the Web UI, it fails with the following error:
/Library/Frameworks/Mono.framework/External/xbuild/Xamarin/iOS/Xamarin.iOS.Common.targets(0,0):
Tool exited with code: 1. Output: mdimport will not import on behalf
of root user. Exiting.
Amr, I cannot speak to the Mac, but on Windows, TeamCity installs by default under the system account, which prevents any programs/tools installed under a specific user account from running from the TeamCity Web UI. On Windows, I had to change the account under which the TeamCity server service runs. I'm guessing you would have to do the same on the Mac.
Stop the TeamCity server service, change the service user from the system account to your user, then start the service again.
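On Windows that change can also be scripted; a sketch assuming the default service name TeamCity (check yours with sc query) and an illustrative account:
sc stop TeamCity
sc config TeamCity obj= ".\youruser" password= "yourpassword"
sc start TeamCity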
This happens when you do:
sudo mdimport
but not:
mdimport
So make sure that you own the current folder and that you have read, write, and execute permissions on it.
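For example, from the build directory:
ls -ld .                   # confirm who owns the current folder
sudo chown "$(whoami)" .   # take ownership if it belongs to root
chmod u+rwx .              # ensure read, write, and execute for the owner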
The solution is to install TeamCity in the recommended directory, which is /Library/TeamCity.

Resources