SSH server evaluates environment variables as the wrong user - Windows

On Windows, we can read environment variables with $env:{VARIABLE_NAME}. If a variable's value references other variables, which is commonplace in PATH, like %WINDIR%\System32, this command prints the expanded result, like C:\Windows\System32 for the previous example. This expansion is done during shell startup, I suppose.
But an inconsistency appears when the SSH server is involved.
I recently installed pnpm, a high-performance alternative to npm, on my Windows server. It is installed to %LocalAppData%, and it adds its program root to the user-level PATH by referring to PNPM_HOME, another user-level variable it creates. When I tried to call pnpm from a remote SSH session, it could not be resolved. I then ran $env:PATH in the session and received the following output:
...;$PNPM_HOME$;...
The referenced environment variable was not expanded, while everything works as expected when running the same command on the server through Remote Desktop.
I looked into the relevant documentation and came up with a plausible explanation: all Windows services run as the SYSTEM user, and so does the SSH server. When an SSH session is established, the SSH server reads the registry and expands the embedded environment variables in its own context instead of the remote user's. SYSTEM has no PNPM_HOME, so this PATH segment fails to expand, and the problem ensues.
To corroborate this presumption, I added a new segment $USERNAME$ to PATH, which was expanded to 'SYSTEM' in the SSH session.
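If my theory holds, the mechanism is roughly that an expandable registry value like PATH leaves unknown %VAR% references unexpanded. A minimal Python simulation of the behavior I suspect (the paths and environments are made up for illustration; real expansion is done by the Windows ExpandEnvironmentStrings API):

```python
import re

def expand(value, env):
    """Mimic REG_EXPAND_SZ-style expansion: replace %VAR% with its
    value from env, leaving unknown variables untouched."""
    return re.sub(r"%([^%]+)%",
                  lambda m: env.get(m.group(1), m.group(0)),
                  value)

# Hypothetical environments: the logged-in user has PNPM_HOME,
# the SYSTEM account does not.
user_env = {"PNPM_HOME": r"C:\Users\me\AppData\Local\pnpm"}
system_env = {}

print(expand("%PNPM_HOME%", user_env))    # C:\Users\me\AppData\Local\pnpm
print(expand("%PNPM_HOME%", system_env))  # %PNPM_HOME%
```

The second call shows the failure mode I observed: an expansion done in a context where the referenced variable does not exist leaves the segment unusable.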
Now, here's my question: if my assumption is correct, how can I resolve this problem without changing the environment variables? If not, what is the actual cause?
By the way, should this be submitted as a bug to OpenSSH?

Related

Volume mount and environment variable access for a Windows service running in a container in Kubernetes

An application that runs as a Windows service in a Windows container (without any overrides for the run-as user - a New-Service -Name x -BinaryPathName someservice.exe step in the container build) expects an environment variable set on the container and also needs to read a file mounted into the container. I know that applications run directly via the entrypoint are able to read the env variable and the mounted file, but as a service I am getting errors indicating it cannot.
Are environment variables scoped to user by default, would something like a RunAs configuration in security context be needed, or some other mechanism? Or would there be any limitations on access to file mounts by the service?
edit
Investigated the environment variables a bit more; it seems like this might be the part that's missing. I tried to echo a var at each specific scope:
PS C:\dir> echo ([System.Environment]::GetEnvironmentVariable("varname","User"))
PS C:\dir> echo ([System.Environment]::GetEnvironmentVariable("varname","Machine"))
PS C:\dir> echo ([System.Environment]::GetEnvironmentVariable("varname","Process"))
expected_value
So I suspect the service doesn't have access to the Process scope. I am going to try rescoping the variable:
[System.Environment]::SetEnvironmentVariable('varname',$env:varname,[System.EnvironmentVariableTarget]::Machine)
The issue was due to environment variable scoping - the environment variables passed to the container are Process-scoped. However, a Windows service started in the container does not have access to that scope (the entrypoint process does).
To resolve this, I set the entrypoint to a PowerShell wrapper for the app that copies the Process-scoped environment variable to the Machine scope before starting the Windows service:
[System.Environment]::SetEnvironmentVariable('varname',$env:varname,[System.EnvironmentVariableTarget]::Machine)
# start service command.
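As an aside, the scoping behavior can be pictured as three separate maps, with a service's startup environment built only from the persisted scopes. A small Python sketch of the idea (the dicts stand in for the registry and the process environment block; the variable name is illustrative):

```python
# Stand-ins for the three Windows scopes: Machine and User are persisted
# in the registry; Process is the running process's environment block.
machine, user = {}, {}
process = {"varname": "expected_value"}  # value passed to the container

# A service started later builds its environment from the persisted
# scopes only, so the Process-scoped value is invisible to it.
service_env = {**machine, **user}
print(service_env.get("varname"))  # None

# Copying the value up to Machine scope (what SetEnvironmentVariable
# with the Machine target does in the wrapper) makes it visible.
machine["varname"] = process["varname"]
service_env = {**machine, **user}
print(service_env["varname"])  # expected_value
```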

Android studio not running .sh script with Jsch

Whenever I try to execute an sh script via JSch, nothing happens; however, when I execute it through a normal ssh session it works fine. I haven't been able to get a single sh file to run, regardless of the contents of the sh file.
I have tried
channelssh.setCommand("/home/exiatron00/Desktop/bash test.sh");
channelssh.setCommand("/home/exiatron00/Desktop/./test.sh");
channelssh.setCommand("/home/exiatron00/Desktop/test.sh");
I don't see anything wrong with your command, so I would have to assume it's your setup.
Are you sure you're even logging into your server? I would check your last logs to make sure you are connecting at all.
Are you on the same network as the machine you're attempting to connect to? If you aren't, your machine may be hidden behind NAT.

Jenkins job with batch stage has a zombie environment variable

I have a python command line program that can retrieve passwords for me.
I can run it from the command line and get the passwords.
Example command:
passwordmanpro_cli javaprops OpenWeatherMap_DEV
The response comes back fine.
I have the Jenkins agent installed on the machine. I have a "Execute Windows Batch Command" step which calls exactly the same commandline.
This used to work without an issue. A month ago it stopped working, with an error:
ERROR Success not returned from passwordmanagerpro
Using URL - https://icsecpws.cc.ic.ac.uk:443/restapi/json/v1/resources (AUTHTOKEN ommitted - 36)
ResponseCode - 200
resJSON - {'operation': {'name': 'Authentication', 'result': {'status': 'Failed', 'message': 'User is not allowed to access from this host'}}}
IP Being used to send message might be: 155.198.31.184
Something is different from how this request is sent via a windows terminal and how it is sent from a windows batch step in Jenkins.
I have verified the windows batch command is being run as a correct user.
I have verified the username and password credentials being passed are the same
I have verified the source IP by adding debug lines in python.
The password manager we are using has an API user set up, and you enter an IP address the API is allowed to be called from. Apparently the restriction is based not on the source IP but on a reverse hostname lookup.
Can anyone give me advice on how to debug this further?
Update 1
More debugging has revealed that the cause is that the environment is being set wrong. The host machine is Windows 10. I have an environment variable called PASSMANCLI_AUTHTOKEN and another called PASSMANCLI_URL.
In control panel I am setting both variables system wide. NOT for a particular user.
What is really strange is that PASSMANCLI_URL is being changed and picked up OK, but the PASSMANCLI_AUTHTOKEN variable is not. I have added a "set" Windows batch command to the config and confirmed that the PASSMANCLI_AUTHTOKEN value is NOT coming from the system setting but from somewhere else. I am wondering if Jenkins does anything special with this.
BTW: I have also used the whoami command as part of the project config and confirmed Jenkins is running as the same user.
Update 2
I have gone through the entire Windows registry, looked at all Environment entries, and deleted PASSMANCLI_AUTHTOKEN. I have confirmed it is not in the environment console. I have restarted the Jenkins agent and the entire server. I then ran my Jenkins job with a single command "set" and it reads back the OLD value of the token!
Update 3
I have created a brand new Jenkins job with 1 step which is the windows batch command. It simply has "set" so displays the environment variables. I can see PASSMANCLI_AUTHTOKEN is still being set even though I have completely wiped it from the machine.
Update 4
I thought it might be something to do with the way the Jenkins runner uses Java. (Our Jenkins runner is using 32-bit Java 8.) I wrote a Java program which runs
processBuilder.command("cmd.exe", "/c", "set");
and outputs the result.
I checked the output and the variable is NOT set, as expected.
I still don't know where this variable is coming from when executed via Jenkins.
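One mechanism worth ruling out: a process inherits its parent's environment snapshot, so a batch step launched by the agent sees whatever the agent captured at startup, not the current registry state. A small Python sketch of that inheritance (the token value is a placeholder, not a real token):

```python
import os
import subprocess
import sys

# Simulate an agent process that captured a since-deleted variable
# when it started.
agent_env = os.environ.copy()
agent_env["PASSMANCLI_AUTHTOKEN"] = "stale-token"

# A child step (like a Jenkins batch step running `set`) inherits the
# agent's snapshot, even if the variable is gone from the registry.
result = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['PASSMANCLI_AUTHTOKEN'])"],
    env=agent_env, capture_output=True, text=True,
)
print(result.stdout.strip())  # stale-token
```

Since I restarted both the agent and the server, the stale snapshot would have to live somewhere between the service and the step, e.g. node-level environment variables configured in Jenkins itself or an environment-injecting plugin, rather than in the machine's registry.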

GUI programs won't open in an ssh server. ssh -X and downloading XQuartz have not helped

So I use a remote server for some of my schoolwork and have no trouble logging onto the machine and navigating. The problem arises when I attempt to run a program with a GUI called ds9. It's used for image processing, but I don't think that is relevant. Anyway, I've tried ssh -X username@university.edu, I've downloaded XQuartz, and I've made sure XQuartz's security preferences are all checked. Still, I receive the same error message: Application initialization failed: no display name and no $DISPLAY environment variable
Unable to initialize window system.
I would be extremely grateful if anybody could identify the issue.
It may have happened that you set a wrong DISPLAY environment variable at login time on the server. In general, ssh -X sets the value to something like DISPLAY=localhost:10.0 (a tunnel set up by ssh between the server and your local machine).
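A forwarded display value follows a predictable host:display[.screen] shape, so you can sanity-check what the session actually has. A rough Python check (the pattern is a heuristic, not the full X11 display grammar):

```python
import re

def looks_like_forwarded_display(value):
    """Heuristic: ssh -X typically sets DISPLAY to host:display[.screen],
    e.g. localhost:10.0. An empty or missing value means X11 forwarding
    was not set up for the session."""
    return bool(value) and re.fullmatch(r"[\w.-]*:\d+(\.\d+)?", value) is not None

print(looks_like_forwarded_display("localhost:10.0"))  # True
print(looks_like_forwarded_display(""))                # False
```

If `echo $DISPLAY` on the server prints nothing at all, also check that X11Forwarding is enabled in the server's sshd_config and that xauth is installed there.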

Error when accessing UNC paths through powershell script when remoting

I am trying to execute a program inside of a PowerShell script. The PS script is being called from a C# method using Runspaces. The program tries to make an update to a config file on a remote server. When I run this whole thing, I get the following error:
System.UnauthorizedAccessException: Access to the path \\some path is denied.
The PS script is on a remote server. If I run the PS script directly on the server then the PS script and the program inside of it runs fine and is able to access the remote system.
Has anyone run into this before? I was told that this is failing because I am running it through Visual Studio and C# and that I won't be allowed to access network resources through a powershell script that is being run through a C# class. Someone else told me that the permissions that I am using to start the PS script in the runspace are not translating to the program that I am calling within the script.
Other ideas and possible solutions?
Thanks
It looks like you're trying to modify a file on a UNC path on a secondary server. This won't work due to the age-old "double hop" problem. You are on machine A, executing a remote script on B that tries to modify a file on C. Your authentication from A to B cannot be reused to connect from B to C. This is a design limitation of NTLM (Windows integrated authentication).
However, all is not lost: use CredSSP authentication when connecting with PowerShell remoting from A to B, and then you can connect to C without a problem.
References:
http://tfl09.blogspot.ca/2013/02/powershell-remoting-double-hop-problem.html
http://www.ravichaganti.com/blog/?p=1230
