<no location info>: error: can't find file - macos

I am using GHC version 8.2.2 on a Mac and currently have a problem compiling a file:
My terminal:
$ ls
try.hs
$ ghc -o try try.hs
<no location info>: error: can't find file: try.hs
Terminal after ls -l:
total 0
-rw-rw-r--@ 1 <> <> 0 Mar 23 15:54 try.hs
Terminal after ls -l@:
total 0
-rw-rw-r--@ 1 <> <> 0 Mar 23 15:54 try.hs
	com.apple.TextEncoding	15
	com.apple.metadata:_kMDItemUserTags	42
	com.apple.metadata:kMDLabel_z4p7jqbpj7dblx5lt33gtc742u	105

I suspect you have symlinked a non-existent file to try.hs. Here is a sample of how things look in my test directory, where I can reproduce the same error as you:
% ls try.hs
try.hs
% ghc try.hs
<no location info>: error: can't find file: try.hs
% ls -l
total 0
lrwxrwxrwx. 1 <redacted> <redacted> 5 Mar 23 10:57 try.hs -> wrong
As you can see by the l at the beginning of the permissions and the -> wrong after the file name, here try.hs is a symlink to wrong. But there is no file named wrong.
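If you want to confirm that from the shell, a couple of standard commands will do (a quick sketch, nothing GHC-specific):
% readlink try.hs       # prints the link target, e.g. "wrong"
% test -e try.hs || echo "dangling symlink"    # test -e follows the link, so it fails for a dangling one
% ls -lL try.hs         # -L dereferences; for a dangling link ls reports "No such file or directory"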

Gearman worker in shell hangs as a zombie

I have a Gearman worker in a shell script started with perp in the following way:
runuid -s gds \
/usr/bin/gearman -h 127.0.0.1 -t 1000 -w -f gds-rel \
-- xargs /home/gds/gds-rel-worker.sh < /dev/null 2>/dev/null
The worker only does some input validation and calls another shell script run.sh that invokes bash, curl, Terragrunt, Terraform, Ansible and gcloud to provision and update resources in GCP like this:
./run.sh --release 1.2.3 2>&1 >> /var/log/gds-release
The script is intended to run unattended. The problem I have is that after the job finishes successfully (that is, both shell scripts run.sh and gds-rel-worker.sh complete), the Gearman job remains executing, because the child process becomes a zombie (see the last line below).
root 144748 1 0 Apr29 ? 00:00:00 perpboot -d /etc/perp
root 144749 144748 0 Apr29 ? 00:00:00 \_ tinylog -k 8 -s 100000 -t -z /var/log/perp/perpd-root
root 144750 144748 0 Apr29 ? 00:00:00 \_ perpd /etc/perp
root 2492482 144750 0 May14 ? 00:00:00 \_ tinylog (gearmand) -k 10 -s 100000000 -t -z /var/log/perp/gearmand
gearmand 2492483 144750 0 May14 ? 00:00:08 \_ /usr/sbin/gearmand -L 127.0.0.1 -p 4730 --verbose INFO --log-file stderr --keepalive --keepalive-idle 120 --keepalive-interval 120 --keepalive-count 3 --round-robin --threads 36 --worker-wakeup 3 --job-retries 1
root 2531800 144750 0 May14 ? 00:00:00 \_ tinylog (gds-rel-worker) -k 10 -s 100000000 -t -z /var/log/perp/gds-rel-worker
gds 2531801 144750 0 May14 ? 00:00:00 \_ /usr/bin/gearman -h 127.0.0.1 -t 1000 -w -f gds-rel -- xargs /home/gds/gds-rel-worker.sh
gds 2531880 2531801 0 May14 ? 00:00:00 \_ [xargs] <defunct>
So far I have traced the problem to run.sh, because if I replace its call with something simpler (e.g. echo "Hello"; sleep 5) the worker does not hang. Unfortunately, I have no clue what is causing the problem. The script run.sh is rather long and complex, but has been working without a problem so far. Tracing the worker process with strace I see this:
getpid() = 2531801
write(2, "gearman: ", 9) = 9
write(2, "gearman_worker_work", 19) = 19
write(2, " : ", 3) = 3
write(2, "gearman_wait(GEARMAN_TIMEOUT) ti"..., 151) = 151
write(2, "\n", 1) = 1
sendto(5, "\0REQ\0\0\0'\0\0\0\0", 12, MSG_NOSIGNAL, NULL, 0) = 12
recvfrom(5, "\0RES\0\0\0\n\0\0\0\0", 8192, MSG_NOSIGNAL, NULL, NULL) = 12
sendto(5, "\0REQ\0\0\0\4\0\0\0\0", 12, MSG_NOSIGNAL, NULL, 0) = 12
poll([{fd=5, events=POLLIN}, {fd=3, events=POLLIN}], 2, 1000) = 1 ([{fd=5, revents=POLLIN}])
sendto(5, "\0REQ\0\0\0'\0\0\0\0", 12, MSG_NOSIGNAL, NULL, 0) = 12
recvfrom(5, "\0RES\0\0\0\6\0\0\0\0\0RES\0\0\0(\0\0\0QH:terra-"..., 8192, MSG_NOSIGNAL, NULL, NULL) = 105
pipe([6, 7]) = 0
pipe([8, 9]) = 0
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fea38480a50) = 2531880
close(6) = 0
close(9) = 0
write(7, "1.2.3\n", 18) = 6
close(7) = 0
read(8, "which: no terraform-0.14 in (/us"..., 1024) = 80
read(8, "Identity added: /home/gds/.ssh/i"..., 1024) = 54
read(8, 0x7fff6251f5b0, 1024) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=2531880, si_uid=1006, si_status=0, si_utime=0, si_stime=0} ---
read(8,
So the worker continues reading standard output even though the child has finished successfully and presumably closed it. Any ideas how to catch what causes this problem?
I was able to solve it. The script run.sh was starting ssh-agent, which opens a socket and keeps running; since Gearman redirects all output, the worker continued reading the open file descriptor even after the script had completed successfully.
I found it by examining the open file descriptors of the Gearman worker process after it hung:
# ls -l /proc/2531801/fd/*
lr-x------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/0 -> /dev/null
l-wx------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/1 -> 'pipe:[9356665]'
l-wx------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/2 -> 'pipe:[9356665]'
lr-x------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/3 -> 'pipe:[9357481]'
l-wx------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/4 -> 'pipe:[9357481]'
lrwx------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/5 -> 'socket:[9357482]'
lr-x------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/8 -> 'pipe:[9369888]'
Then I identified the processes using the inode of the pipe behind file descriptor 8, which the Gearman worker continued reading:
# lsof | grep 9369888
gearman 2531801 gds 8r FIFO 0,13 0t0 9369888 pipe
ssh-agent 2531899 gds 9w FIFO 0,13 0t0 9369888 pipe
And finally I listed the files opened by ssh-agent and found what was behind file descriptor 3:
# ls -l /proc/2531899/fd/*
lrwx------. 1 root root 64 May 17 11:14 /proc/2531899/fd/0 -> /dev/null
lrwx------. 1 root root 64 May 17 11:14 /proc/2531899/fd/1 -> /dev/null
lrwx------. 1 root root 64 May 17 11:14 /proc/2531899/fd/2 -> /dev/null
lrwx------. 1 root root 64 May 17 11:14 /proc/2531899/fd/3 -> 'socket:[9346577]'
# lsof | grep 9346577
ssh-agent 2531899 gds 3u unix 0xffff89016fd34000 0t0 9346577 /tmp/ssh-0b14coFWhy40/agent.2531898 type=STREAM
As a solution I added a kill of the ssh-agent before exiting the run.sh script, and now no jobs hang because of a zombie process.
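A minimal sketch of that fix, assuming run.sh starts the agent itself (the trap and the key path are illustrative, not the original script):
#!/bin/bash
# Starting the agent exports SSH_AUTH_SOCK and SSH_AGENT_PID.
eval "$(ssh-agent -s)" > /dev/null
# Kill the agent on every exit path, so no leftover process keeps the
# worker's redirected pipes open after the job finishes.
trap 'ssh-agent -k > /dev/null 2>&1' EXIT
ssh-add /path/to/key        # hypothetical key path
# ... terraform / ansible / gcloud provisioning steps ...
Using a trap on EXIT ensures the agent dies even when the script aborts early, not just on the success path.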

How can I find the newest directory?

When starting the Tcl script, a directory is created via a bash command. At the end of my script I want to read the name of the newest directory, but my script does not find the newest directory, only the 2nd newest.
bind pub "-|-" !aa pub:aaa

proc pub:aaa {nick host handle channel arg} {
    set home "/home/user"
    set bb [exec bash -c "start.sh"]
    after 3000
    set latest [exec bash -c "ls -td $home/jpg/*/ | head -n1"]
    putnow "PRIVMSG $channel :$latest"
}
Before starting, the directory contains the following folders:
drwxr-xr-x 2 user user 4096 Jun 24 18:30 aaa
drwxr-xr-x 2 user user 4096 Jun 24 18:14 bbb
After starting, it contains the following folders:
drwxr-xr-x 2 user user 4096 Jun 24 18:30 aaa
drwxr-xr-x 2 user user 4096 Jun 24 18:14 bbb
drwxr-xr-x 2 user user 4096 Jun 24 18:35 ccc
The output is:
<#testbot> aaa
It should be:
<#testbot> ccc
It finds the directory that was created while the Tcl script was not running.
How can I display the newest, newly created directory?
regards
Instead of trying to exec out to a shell to find the most recently modified directory, I'd do it in pure Tcl:
proc latest_directory {path {time mtime}} {
    set dirs {}
    foreach dir [glob -nocomplain -type d $path/*] {
        file stat $dir s
        lappend dirs $s($time) $dir
    }
    if {[llength $dirs] == 0} {
        error "No directories found in $path"
    } else {
        return [lindex [lsort -integer -decreasing -stride 2 $dirs] 1]
    }
}
# Then in pub:aaa
set latest [latest_directory $home/jpg]
As for why you're not getting ccc... it's hard to say for sure without seeing your start.sh script, but if it ends up running things in the background that continue after it exits, maybe the directory takes more than 3 seconds to appear? (See the sketch below.)
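If start.sh does launch background work, one hypothetical fix is to have it wait for its own children before exiting, so the directory already exists when the Tcl proc looks for it (a sketch with placeholder command names):
#!/bin/bash
# hypothetical start.sh skeleton
create_jpg_dir &            # whatever creates /home/user/jpg/<name>/
other_background_task &
wait                        # block until all background jobs have finished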

mkdir recursion with permissions

It's a simple question. I'm writing a bash script, called from cron, that groups files into a tar file and classifies them into a directory structure.
These directories need a special owner and permissions, so I call the mkdir command through su:
#!/bin/bash
... # shortened code
PERMS=750
DIR=/home/luser/0/01/012/0123
OWNER=luser
... # shortened code
su -c "mkdir -m $PERMS -p $DIR" $OWNER
Output of ll -R /home/luser/0:
/home/luser/0:
total 4
drwxr-xr-x 3 luser luser 4096 Jan 7 18:13 01
/home/luser/0/01:
total 4
drwxr-xr-x 3 luser luser 4096 Jan 7 18:13 012
/home/luser/0/01/012:
total 4
drwxr-x--- 2 luser luser 4096 Jan 7 18:13 0123
/home/luser/0/01/012/0123:
total 0
Only the deepest directory has its permissions (750) set correctly.
I don't know in advance how deep the last directory will be, and fixing permissions afterwards for everything under the home directory is too hard (too many files).
PS: I've googled about this, but found nothing.
You can restrict the permissions on the parent directories via umask: mkdir -m applies the mode only to the final directory, while the intermediate directories created by -p get the default mode filtered through the umask. Setting the umask to the octal complement of the desired permissions makes the intermediates match; the tr call below maps each octal digit d to 7-d, so 750 becomes 027. Here is an example:
PERMS=750
UMASK=$(echo "$PERMS" | tr "01234567" "76543210")
DIR=/home/luser/0/01/012/0123
OWNER=luser
su -c "umask $UMASK; mkdir -m $PERMS -p $DIR" $OWNER
In action:
> PERMS=750
> UMASK=$(echo "$PERMS" | tr "01234567" "76543210")
> (umask $UMASK; mkdir -m $PERMS -p 1/2/3/4)
> ll -R .
.:
drwxr-x--- 3 luser luser 4096 Jan 7 1:38 1/
./1:
drwxr-x--- 3 luser luser 4096 Jan 7 1:38 2/
./1/2:
drwxr-x--- 3 luser luser 4096 Jan 7 1:38 3/
./1/2/3:
drwxr-x--- 2 luser luser 4096 Jan 7 1:38 4/
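To double-check the modes numerically (GNU stat shown, as an illustration):
> stat -c '%a %n' 1 1/2 1/2/3 1/2/3/4
750 1
750 1/2
750 1/2/3
750 1/2/3/4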

Why does `Dir[directory_path].empty?` return `false` all the time?

Dir[directory_path].empty? returns false all the time. The behavior is the same whether or not I run irb as root:
$ ll
total 12
drwxrwxrwx 2 ndefontenay ndefontenay 4096 Aug 12 12:11 ./
drwxrwxrwx 4 ndefontenay ndefontenay 4096 Aug 5 11:45 ../
-rw-rw-r-- 1 ndefontenay ndefontenay 8 Aug 12 12:11 test
$ irb
> Dir["/opt/purge_entitlement/in"].empty?
=> false
> exit
$ sudo irb
> Dir["/opt/purge_entitlement/in"].empty?
=> false
If someone could shed some light on this problem, it would be pretty helpful.
Dir[].empty? returns false all the time
It should: Dir[...] is a glob, and a pattern with no wildcards simply matches the directory itself whenever it exists, so the result is always the one-element array ["/opt/purge_entitlement/in"].
This is not an answer to your question, but to avoid the problem of matching the directory itself, glob the directory's contents instead of its path; the result is empty exactly when the directory has no (non-hidden) entries:
Dir.glob("/opt/purge_entitlement/in/*").empty?

How to get last modified file in a directory to pass to system commands using Ruby?

I'm trying to do some dev-ops. I need to grab the last modified file in a directory and pass its filename to another command.
If I had this list of files, as output by ls -la, in Ruby:
-rw-r--r-- 1 163929215 2012-11-26 00:02 appname_20121126_000002.tgz
-rw-r--r-- 1 164051752 2012-11-27 00:02 appname_20121127_000002.tgz
-rw-r--r-- 1 164160113 2012-11-28 00:02 appname_20121128_000002.tgz
-rw-r--r-- 1 164284597 2012-11-29 00:02 appname_20121129_000004.tgz
-rw-r--r-- 1 164342795 2012-11-30 00:02 appname_20121130_000003.tgz
-rw-r--r-- 1 164448312 2012-12-01 00:02 appname_20121201_000003.tgz
-rw-r--r-- 1 164490727 2012-12-02 00:02 appname_20121202_000002.tgz
-rw-r--r-- 1 164546124 2012-12-03 00:02 appname_20121203_000001.tgz
-rw-r--r-- 1 164594711 2012-12-04 00:02 appname_20121204_000002.tgz
How could I scan this with Ruby and pull the last file?
Is something like this even possible?
There's no need to shell out to ls and parse its output at all. Ruby's standard library gives you methods to fetch directory contents and examine file mtimes. Here's a Ruby method that returns the name of the file in a directory with the latest mtime:
def last_modified_in dir
  Dir.glob( File.join( dir, '*' ) ).
    select { |f| File.file? f }.
    sort_by { |f| File.mtime f }.
    last
end
irb> system 'mkdir -p /tmp/foo'
irb> system 'rm /tmp/foo/*'
irb> ('a'..'c').each { |f| system "touch /tmp/foo/#{f}"; sleep 1; }
irb> puts last_modified_in '/tmp/foo'
# => /tmp/foo/c
