Cygwin bash 'Cannot allocate memory' error

I have kind of a bizarre problem. I have a directory on a network share that contains the Cygwin binaries. This Cygwin dir used to be on a Dell PowerEdge box and was mounted as a virtual directory in IIS8. A webservice would then execute a call to a bash script like so:
\\poweredgeshare\cygwin\bin\bash --login -c "//poweredgeshare/mytools/myshellscript.sh 2>&1 | tee -a //poweredgeshare/logs/logoutput.txt"
This worked great until we got a new Dell EqualLogic fileserver with an FS7600 front end and tried to execute that call from IIS8 on a Windows Server 2012 box. With that combination, I now get the following error:
/usr/bin/bash: //fs7600/mytools/myshellscript.sh: Cannot allocate memory
This WORKS if I mount that exact same fs7600 share on my Windows 8 box's IIS8 as a virtual directory. All web calls function as expected. It just blows up after I deploy it to the production (Server 2012) machine.
I have found that if I remove the 2>&1 | tee -a and just use > instead, it works in all cases. So there is something about the pipe to tee that is causing an issue.
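In other words, the -c argument becomes something like this (the exact log path in the redirect is illustrative):
//fs7600/mytools/myshellscript.sh > //fs7600/logs/logoutput.txt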
I am not a shell script expert - I didn't write this, I'm just trying to debug it and understand why the exact same Cygwin binaries calling the exact same shell scripts, just on a new filesystem, blow up.
So to sum up what works and what doesn't:
- old fileshare works on both the dev (Windows 8) and production (Server 2012) environments
- new fileshare works on dev, but NOT on Server 2012
Does anyone have any thoughts as to why this might be happening?

Related

Why are my bash scripts refusing to run until I type 'exit' ever since I used 'script /dev/null'?

I am working on a cluster, the 'home' machine of which runs Linux version 2.6.38.4 on Debian 4.3.2-1.1. Other machines on the cluster run more recent versions of Linux (3.x.x) but on differing flavours (some Red Hat, some Debian, etc.).
As usual, I transferred to chos 8 on one of these machines and set a script running in a screen, but when I went to reattach it, the server began denying that there were any sockets available. I followed advice I found online and typed 'script /dev/null' in order to retrieve my screens, but it keeps happening. Also, when I start a new screen now, the command prompt is preceded by '(base)'.
Now, if I try to run a bash script on anything other than the home machine, the scripts won't run until I follow the command with 'exit', as follows:
bash ~/DAPHNIA/Scripts/compare_BUSCO_depths.sh 2 21 3 3 2-WGS_Clone_21_CGCTATGT-GTGTCGGA_L001;
exit;
The contents of the script don't seem to matter - this irritating quirk now happens regardless of the script being run.
Does anybody have any idea a) what has caused this, and b) how I can fix it, please?

Running Cygwin application as a Windows Service

I am working on WinDRBD (https://github.com/LINBIT/windrbd) a port of the Linux DRBD driver to Microsoft Windows.
We want to run the user mode helper as a Windows Service (DRBD sometimes calls user space applications with call_usermodehelper(), which we emulate with a daemon that retrieves those requests from the kernel driver, runs them and returns the exit status to the kernel).
When we run the daemon in a Cygwin shell, everything works fine. However, when running the daemon as a Windows Service it seems that Cygwin cannot find its installation directory (which is C:\cygwin64 on my machines).
The registry entry (HKLM/Software/CygWin/setup/rootdir) points to the correct location, but I am not sure whether it can also be accessed by the Windows Service.
/bin/sh isn't found by the service; however, /cygdrive/c/cygwin64/bin/sh exists, so when I run the shell with that path it can start (and also finds the DLLs it requires to run). However, the shell complains with:
bash.exe: warning: could not find /tmp, please create!
even though /tmp definitely exists when running Cygwin the normal way.
Has anyone ever tried to run a Cygwin-compiled EXE as a Windows Service? Here is the output of sc query windrbdum:
SERVICE_NAME: windrbdum
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4  RUNNING
                                (STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0
(um is for user mode).
Thanks for any insights,
Johannes
As matzeri pointed out, cygrunsrv is the Cygwin tool when it comes to running Cygwin binaries as a service under Windows. It serves both as a wrapper (that handles the Windows-specific service API and event handling) and as a tool to install, remove, start and stop services (this can still be done with the sc utility, like sc start <servicename>).
To install a service, I do:
cygrunsrv.exe -I windrbdlog -p /cygdrive/c/windrbd/usr/sbin/windrbd.exe \
-a log-server \
-1 /cygdrive/c/windrbd/windrbd-kernel.log \
-2 /cygdrive/c/windrbd/windrbd-kernel.log
where windrbdlog is the Windows name of the service, /cygdrive ... is the full path to the Cygwin application (no need to code any Windows Service API calls there, it's just a Cygwin/POSIX executable), log-server is the argument to the binary (so what is being started is windrbd log-server) and -1 and -2 are redirects for stdout and stderr. Exactly what I need, thanks to matzeri for pointing me to cygrunsrv.
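Once installed, the service can also be managed with cygrunsrv itself; a minimal sketch, assuming the service name windrbdlog from above:
cygrunsrv -S windrbdlog    # start the service
cygrunsrv -E windrbdlog    # stop the service
cygrunsrv -R windrbdlog    # remove (uninstall) the service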

TortoiseSVN post-commit hook fails on Win 7

I want to update a remote staging server automatically after committing on my dev box.
I'm trying to set up a TortoiseSVN post-commit hook on Win 7 64-bit.
I have TortoiseGit installed on the system, which comes with a bunch of useful commands like 'ssh'.
I created a test.bat script that contains:
ssh -l {username} -i "C:\Users\{path-to-ssh-key.pem}" {server_address} ./svnup
This script runs 'svn up' on the remote staging server. The test.bat file works fine when launched manually, but it does not work in the post-commit configuration. A blank console screen appears and then TortoiseSVN shows an error:
Error: The hook script returned an error:
Error: 0 [main] ssh 2040 fhandler_base::dup: dup(some disk file) failed, handle 0, Win32 error 6
Error: dup() in/out/err failed
Could you advice?
UPD: I updated the batch script to use the full path.
"C:\Program Files (x86)\Git\bin\ssh.exe" -l {username} -i "C:\Users\{path-to-ssh-key.pem}" {server_address} ./svnup
But the error is still there; it just shows a different number now:
Error: The hook script returned an error:
Error: 0 [main] ssh.exe" 6976 fhandler_base::dup: dup(some disk file) failed, handle 0, Win32 error 6
Error: dup() in/out/err failed
Your hook probably cannot find ssh.
Using the full path name can help.
If that doesn't help, changing the working directory to the location of ssh can help.
In the worst case you could add the location of ssh to the path from within the batch file. This will only affect the path during execution; I believe a new shell is created by TortoiseSVN each time it's called.
As mentioned in the svn book:
For security reasons, the Subversion repository executes hook programs with an empty environment—that is, no environment variables are set at all, not even $PATH (or %PATH%, under Windows). Because of this, many administrators are baffled when their hook program runs fine by hand, but doesn't work when run by Subversion. Be sure to explicitly set any necessary environment variables in your hook program and/or use absolute paths to programs.
Which means your hook script doesn't know where to find ssh and what the current directory is (so using relative paths most likely won't work either).
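For example, a hook batch file along those lines might look roughly like this (a sketch; the Git path is the one from the update above, and both lines are placeholders to adapt):
set "PATH=C:\Program Files (x86)\Git\bin;%PATH%"
"C:\Program Files (x86)\Git\bin\ssh.exe" -l {username} -i "C:\Users\{path-to-ssh-key.pem}" {server_address} ./svnup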
The solution is to use plink.exe instead of the TortoiseGit ssh.exe.
And this will work:
c:\plink.exe -ssh -batch -l {username} -i "C:\Users\{path-to-ssh-key.pem}" {server_address} ./svnup

Executing Perl Script From Linux Box Using SSH Causing "The local device name is already in use"

I have a Perl script which maps two drives, and then proceeds to copy files from one of the drives to the other. The Perl script is located on a Windows box, but we are SSHing from a Linux box into the Windows box to execute the script. When I run the script directly from the Windows box, everything works without issue: the drives are mapped and the files are copied over successfully. When I attempt to execute the script from my Linux box via SSH, the script fails and I get the following output:
The local device name is already in use.
Error mapping source \\xxx.xxx.net\localdirectory
This error occurs when attempting to map the first drive; I don't know if it would fail on the second drive as well since it has not made it that far.
I have several other Perl scripts that are executed this same way (via ssh from Linux to Windows box) and they execute without issue, this is the only one that maps a drive though. This is the code I am using to execute the script:
#!/bin/sh
ssh -t -t user@server "cd /Path/to/Perl/Script; /cygdrive/C/Perl/bin/perl.exe Script.pl"
What user is your ssh daemon running as? Presumably System. That user doesn't have authority to map network drives, as far as I recall. Can you not just do this on the Linux box directly using samba?
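If you do try it from the Linux side, a minimal sketch might look like this (the mount point and options are placeholders, using the share name from the error message above):
sudo mount -t cifs //xxx.xxx.net/localdirectory /mnt/staging -o username={username}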
In case anyone needs this in the future, we were able to get it working. The issue was due to the SVCCopSSH account being used for the CopSSH service on our Windows machine. We had to disable the CopSSH service, set the Log On to the network account we were using to SSH from Linux to Windows, and restart the service. This fixed all issues we were having.

Why can't Perl launch telnet under Windows?

I had enabled the Telnet Client feature on Windows 2008, and tried to launch it from a Perl script:
perl -e "system('c:\windows\system32\telnet localhost')"
Then I got an error like this:
'c:\windows\system32\telnet' is not recognized as an internal or external command,
operable program or batch file.
I could run it from a 'cmd' terminal, or, if I copied telnet.exe to a local dir, it could be launched. I examined the permissions of telnet.exe under c:\windows\system32 and found nothing special.
Could anybody help me on this case? Thanks a lot!
I ran into the exact same problem recently, trying to launch telnet and the rdp client programmatically. The problem is related to an aspect of windows "design" called file system redirection:
"Windows on Windows 64-bit (WOW64) provides file system redirection. In a 64-bit version of Windows Server 2003 or of Windows XP, the %WinDir%\System32 folder is reserved for 64-bit applications. When a 32-bit application tries to access the System32 folder, access is redirected to the following folder: %WinDir%\SysWOW64. By default, file system redirection is enabled."
http://www.codeproject.com/tips/55290/Disabling-Windows-file-system-redirection-on-a-CFi.aspx
I think you have to specify the full name of the program, that is telnet.exe. But you'd be better off using the Net::Telnet module, or something like Expect.pm that handles interactive sessions programmatically.
Hi, you are using Perl, so I was wondering why you don't use Net::Telnet instead of the Windows telnet.exe, which AFAIK is not friendly for programming.
On my computer the following code works (Windows 7):
$telnet = $ENV{'WINDIR'} . '\system32\telnet.exe';
system("$telnet 192.168.1.1");
It must be either a) permissions, b) an incorrect path, c) you need .exe at the end, or d) you need to capitalise the "c:".
You might want to verify under what account Perl executes and check if that account has permissions to run executables like telnet.
Just to verify the theory, I'd run Perl under a high privileged account (like admin) to check if it runs telnet and then tweak the account it is running under.
Hope this helps!
Read about console host.
ConHost represents a permanent change in the way that console application I/O is handled.
See also a related post on SysInternals forums.
I do not have access to any Windows Server 2008 machines right now, so I cannot test this, but can you check what happens when you run wperl -e "system('c:\windows\system32\telnet localhost')" from the equivalent of Start -> Run on that OS.
The answer is to use sysnative instead of system32: a 32-bit application's access to the system32 directory is redirected, so it cannot see the 64-bit telnet.exe there. By using the sysnative alias it will work.
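Applied to the original one-liner, that would look something like this (a sketch; it assumes a 32-bit Perl running on 64-bit Windows, since the sysnative alias only exists for 32-bit processes):
perl -e "system('c:\windows\sysnative\telnet localhost')"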
