Why does Heroku deny access to /dev/stdout and /dev/stderr? - heroku

I'd like to configure an application to write logs to /dev/stdout so they end up in Heroku's logplex. However, I get Permission denied when trying to write to /dev/stdout.
I'm not looking for workarounds; I can cook those up aplenty. I'd really just like to know why Heroku denies opening and writing to /dev/stdout, if anybody has insight into that. Thanks!

The reason for the Permission Denied error is that the /dev/stdout symlink dangles rather than terminating in an allocated pseudo tty in /dev/pts:
~ $ ls -l /dev/stdout /proc/self/fd/1 /dev/pts/0
lrwxrwxrwx 1 root root 15 2014-03-29 17:21 /dev/stdout -> /proc/self/fd/1
lrwx------ 1 u59417 59417 64 2014-03-29 17:25 /proc/self/fd/1 -> /dev/pts/0
ls: cannot access /dev/pts/0: No such file or directory
Since the dangling symlink terminates in a directory (/dev/pts) to which the user doesn't have write permission, the OS denies creating the non-existent /dev/pts/0.
Regarding why /dev/pts/0 doesn't exist: Heroku's virtualization is based on LXC (see https://devcenter.heroku.com/articles/dynos#technologies). This doesn't preclude pseudo tty allocation but in this case the pseudo tty is probably allocated in the host OS, and the guest container isn't inheriting it in /dev/pts (whether intentionally or inadvertently, which would be a related question).
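Given that diagnosis, an application can probe whether the device is actually usable before committing its log path to it. A minimal sketch in plain sh; the probe and the fallback message are illustrative, not part of any Heroku API:

```shell
#!/bin/sh
# Try to open /dev/stdout for append. On Heroku this open() fails with
# EACCES because the symlink chain dangles, even though file
# descriptor 1 itself is perfectly writable.
if ( : >> /dev/stdout ) 2>/dev/null; then
    echo "log target: /dev/stdout"
else
    # Fall back to the already-open file descriptor 1 instead.
    echo "log target: use file descriptor 1 directly"
fi
```

On an ordinary Linux box the open succeeds and the first branch is taken; in a Heroku dyno, per the answer above, it would take the fallback branch.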

Heroku is running Ubuntu. Ubuntu blocks access to /dev/stdout, etc., by default (AppArmor). Heroku encourages you to write to STDOUT / STDERR directly with your application's stream, as opposed to manually writing to the STDOUT device in /dev.
UPDATE: This is incorrect -- @Aron Griffis got it right above -- thanks!

Related

Find who is holding cryptsetup/LUKS encrypted home (some KDE/X vs common sense madness)

I'm fighting a ridiculous, not-so-easy-to-debug case with my cryptsetup/LUKS encrypted home directory.
The setup: I have a partition dedicated to my user's home directory, encrypted with cryptsetup/LUKSv2 (let's call this user "crypted"). The directory is automatically mounted on user logon with the pam_mount module and unmounted as soon as the last session of this user is closed. This seems to work pretty well, even for a KDE/Plasma session started by SDDM.
That is, unless another user (let's call them "plane") logs into a KDE/Plasma session while the user with the encrypted (and mounted) home is still active. If so, pam_mount fails to unmount the encrypted home on "crypted" user logout, giving me:
(mount.c:72): Device sdaX_dmc is still in use
(mount.c:72): ehd_unload: Device or resource busy
(mount.c:887): unmount of /dev/sdaX failed
cryptsetup close sdaX_dmc gives the same error, preventing me from freeing the device.
This lasts until the "plane" user logs out and closes their KDE/Plasma session. Only then am I able to close the encrypted device and log in as "crypted" again.
So, OK, not a problem, I thought, and tried to find the culprit using lsof while "plane" was still logged in and the "crypted" user's logout had failed to unmount, but:
lsof | grep '/home/<mountpoint>'
lsof | grep 'sdaX_dmc'
gave me exactly nothing. No process is accessing this directory.
Then I tried:
ofl /home/<mountpoint>
with no success.
SDDM itself is not the problem, as I'm able to unmount the "crypted" user's home while SDDM is active and after an SDDM restart.
Any ideas how to find the process that is accessing/holding another user's home directory? It looks like some KDE/Wayland/X11 component is responsible.
Have you tried
lsof +D '/home/<mountpoint>'
I get a report that looks like this (giving process and user):
root@OasisMega1:~# lsof +D .
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
Xorg 1793 root mem REG 8,3 1310728 12320774 ./.cache/mesa_shader_cache/index
mate-term 2918 root cwd DIR 8,3 4096 12320769 .
mate-term 2918 root mem REG 8,3 10974 12323714 ./.config/dconf/user
mate-term 2918 root mem REG 8,3 2 12321632 ./.cache/dconf/user
bash 8829 root cwd DIR 8,3 4096 12320769 .
lsof 8851 root cwd DIR 8,3 4096 12320769 .
lsof 8852 root cwd DIR 8,3 4096 12320769 .
root@OasisMega1:~#
Maybe something you aren't expecting is keeping the device busy.
Similarly, examining the device directly:
lsof `df . | grep '/dev/' | awk '{ print $1 }' `
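The path-based greps in the question can miss holders whose open files live elsewhere on the same filesystem. A rough sketch that sweeps the whole filesystem instead, assuming fuser (psmisc) and lsof are installed; MNT=/tmp is a hypothetical stand-in for the encrypted home's mountpoint:

```shell
#!/bin/sh
# Hypothetical mountpoint; substitute the real encrypted home.
MNT=${MNT:-/tmp}
# Every process with any file open on the filesystem containing $MNT
# (fuser -m checks by mount, not by path, so it catches more):
fuser -vm "$MNT" 2>/dev/null
# The backing block device, then open files queried on it directly:
DEV=$(df -P "$MNT" | awk 'NR==2 { print $1 }')
echo "device: $DEV"
lsof "$DEV" 2>/dev/null
```

If both of these come up empty, the holder may be a kernel-side reference (e.g. a loop device or an open file description inherited by an out-of-session daemon) rather than a process with a visible open file.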

How can I run a command in a separate terminal using sudo without further user interaction

I am trying to automate the running of several tasks, but I need to run them as sudo.
I want to run them in separate terminals so I can watch the output of each.
Here is a sort of minimal example I have set up (because what I am trying to do is more complicated).
Set up two files -- note that data is readable by root only and contains 3 lines of example text:
-rw------- 1 root root 33 Nov 15 09:29 data
-rwxrwxrwx 1 root root 11 Nov 15 09:30 test.sh*
test.sh looks like:
#!/bin/bash
cat data
read -p "Press enter to continue"
Also, I have a user-level variable called "SESSION_MANAGER" that is set up in the bash startup files... which seems to cause some issues (see later example).
So now I want to spawn various terminals running this script. I tried the following:
Attempt 1
xfce4-terminal -e './test.sh'
output:
cat: data: Permission denied
Press enter to continue
Attempt 2 - using sudo at the start
~/src/sandbox$ sudo xfce4-terminal -e './test.sh'
Failed to connect to session manager: Failed to connect to the session manager: SESSION_MANAGER environment variable not defined
(xfce4-terminal:6755): IBUS-WARNING **: The owner of /home/openbts/.config/ibus/bus is not root!
output:
this is some data
more data
end
Press enter to continue
here you can see that the contents of the data file printed OK, but I had some issue with the session manager variable.
Attempt 3 - using sudo in the command
~/src/sandbox$ xfce4-terminal -e 'sudo ./test.sh'
output:
[sudo] password for openbts:
this is some data
more data
end
Press enter to continue
here you can see that everything was well... but I had to enter my password again, which somewhat kills my automation :(
Attempt 4 - start as root
~/src/sandbox$ sudo su
root#openbts:/home/openbts/src/sandbox# xfce4-terminal -e './test.sh'
Failed to connect to session manager: Failed to connect to the session manager: SESSION_MANAGER environment variable not defined
output:
this is some data
more data
end
Press enter to continue
Here, again the output looks good, but I have this SESSION_MANAGER issue... Also the new xfce4-terminal comes out with a messed-up font/look -- I guess this is the root user's settings.
Questions
How can I run multiple instances of test.sh, each in a new terminal, without having to enter passwords (or interact at all)? I can enter the password once at the start of the process (in the original terminal).
As you can see, I got this sort of working by going in with sudo su, but the issue there is the SESSION_MANAGER variable -- not sure if that actually breaks anything, but it's very messy looking -- and also the xfce4-terminal looks bad (I guess I can change the root settings to match my user's settings). So how can I avoid the SESSION_MANAGER issue when running as root?
If you change user ID before you launch your separate terminal, you will see the session-manager issue. So the solution is to run sudo inside the terminal.
To avoid typing passwords for sudo, you can add
yourname ALL=(ALL) NOPASSWD: ALL
to /etc/sudoers (at least on Slackware). You could also try to set the permissions on the files correctly so you would not need root all the time.
Note that adding that line has security implications; you might want to allow just cat without password (in your example), or make even more elaborate rules for sudo. The line I gave is just an example. Personally, I would look at file-permissions.
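Such a narrower rule can be sketched concretely. The username and script path below are taken from the question's prompts, and the drop-in filename is illustrative:

```
# /etc/sudoers.d/test-sh -- always edit with: visudo -f /etc/sudoers.d/test-sh
openbts ALL=(root) NOPASSWD: /home/openbts/src/sandbox/test.sh
```

One caveat: with a rule like this the script itself must not be writable by the user (the question's test.sh is mode 777), otherwise anyone who can edit the script effectively has NOPASSWD: ALL anyway.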

Where to find sshd logs on MacOS sierra

I want to set up a pseudo-distributed HBase environment on my macOS Sierra (10.12.4) machine, which requires ssh to be installed and passwordless login via ssh localhost to work. But sometimes I run into errors when I use ssh to log in. That is the background; the actual question is: where can I find sshd's debug logs, so I can work out why login failed?
As far as I know, macOS already has sshd installed and uses launchd to manage it, and I know one way to output debug logs is sshd -E /var/log/sshd.log. But when I reviewed the /etc/ssh/sshd_config configuration, there are two lines:
#SyslogFacility AUTH
#LogLevel INFO
I guessed these two lines configure logging, so I removed the # before them, set LogLevel to DEBUG3, and then restarted sshd:
$ launchctl unload -w /System/Library/LaunchDaemons/ssh.plist
$ launchctl load -w /System/Library/LaunchDaemons/ssh.plist
And then I set log path in /etc/syslog.conf:
auth.*<tab>/var/log/sshd.log
<tab> means tab character here, and reloaded the config:
$ killall -HUP syslogd
But no sshd.log file appears in the /var/log folder when I execute ssh localhost. I also tried configuring /etc/asl.conf:
> /var/log/sshd.log format=raw
? [= Facility auth] file sshd.log
The result was the same. Can someone help me?
Apple, as usual, decided to re-invent the wheel.
In super-user window
# log config --mode "level:debug" --subsystem com.openssh.sshd
# log stream --level debug 2>&1 | tee /tmp/logs.out
In another window
$ ssh localhost
$ exit
Back in Super-user window
^C (interrupt)
# grep sshd /tmp/logs.out
2019-01-11 08:53:38.991639-0500 0x17faa85 Debug 0x0 37284 sshd: (libsystem_network.dylib) sa_dst_compare_internal <private>#0 < <private>#0
2019-01-11 08:53:38.992451-0500 0xb47b5b Debug 0x0 57066 socketfilterfw: (Security) [com.apple.securityd:unixio] open(/usr/sbin/sshd,0x0,0x1b6) = 12
...
...
In super-user window, restore default sshd logging
# log config --mode "level:default" --subsystem com.openssh.sshd
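A non-interactive variant of the same query is possible with log show instead of log stream, which avoids the tee-and-interrupt dance when you only need a window of past activity. A sketch only (macOS 10.12+; predicate syntax per the log(1) man page, untestable off-platform):

```shell
# Query the unified log store after the fact: last ten minutes of
# messages from the sshd process, including info- and debug-level.
log show --last 10m --predicate 'process == "sshd"' --info --debug
```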
You can find it in /var/log/system.log. Better if you filter by "sshd":
cat /var/log/system.log | grep sshd
Try this
cp /System/Library/LaunchDaemons/ssh.plist /Library/LaunchDaemons/ssh.plist
Then
vi /Library/LaunchDaemons/ssh.plist
And add your -E as shown below
<array>
<string>/usr/sbin/sshd</string>
<string>-i</string>
<string>-E</string>
<string>/var/log/system.log</string>
</array>
And lastly restart sshd; now you will see sshd logs in /var/log/system.log:
launchctl unload /System/Library/LaunchDaemons/ssh.plist && launchctl load -w /Library/LaunchDaemons/ssh.plist
I also had an ssh issue that I wanted to debug further and was not able to figure out how to get the sshd debug logs to appear in any of the usual places. I resorted to editing the /System/Library/LaunchDaemons/ssh.plist file to add a -E <log file location> parameter (/tmp/sshd.log, for example). I also edited /etc/ssh/sshd_config to change the LogLevel. With these changes, I was able to view the more verbose logs in the specified log file.
I don't have much experience with MacOS so I'm sure there is a more correct way to configure this, but for lack of a better approach this got the logs I was looking for.
According to Apple's developer website, logging behavior has changed in macOS 10.12 and up:
Important:
Unified logging is available in iOS 10.0 and later, macOS 10.12 and later, tvOS 10.0 and later, and watchOS 3.0 and later, and supersedes ASL (Apple System Logger) and the Syslog APIs. Historically, log messages were written to specific locations on disk, such as /etc/system.log. The unified logging system stores messages in memory and in a data store, rather than writing to text-based log files.
Unfortunately, unless someone comes up with a pretty clever way to extract the log entries from memory or this mysterious "data store", I think we're SOL :/
There is some sshd log in
/var/log/system.log
for example
Apr 26 19:00:11 mac-de-mamie com.apple.xpc.launchd[1] (com.openssh.sshd.7AAF2A76-3475-4D2A-9EEC-B9624143F2C2[535]): Service exited with abnormal code: 1
Not very instructive. I doubt that more can be obtained. LogLevel VERBOSE and LogLevel DEBUG3 in sshd_config do not help.
According to man sshd_config :
"Logging with a DEBUG level violates the privacy of users and is not recommended."
By the way, I relaunched sshd not with launchctl but with System preference Sharing, ticking Remote login.
There, I noticed the option : Allow access for ...
I suspect these settings live OUTSIDE /etc/ssh/sshd_config
(easy to check, but I have no time).
Beware that macOS is not Unix: Apple developers can do many strange things behind the scenes without any care for us command-line users.

flock permission denied bash

I have written a little test script to prevent running my script simultaneously with flock:
#!/bin/bash
scriptname=$(basename $0)
lock="/var/run/${scriptname}"
umask 0002
exec 200>$lock
flock -n 200 || exit 1
## The code:
sleep 60
echo "Hello world"
When I run the script as my user and then try to run it as another user, I get the following error message about the lock file:
/var/run/test.lock: Permission denied
Any idea?
In a comment, you mention that
other user is in the same group. file permissions are -rw-r--r--
In other words, only the first user has write permissions on the lock file.
However, your script does:
exec 200>$lock
which attempts to open the lockfile for writing. Hence the "permission denied" error.
Opening the file for writing has the advantage that it won't fail if the file doesn't exist, but it also means that you can't easily predict who the owner of the file will be if your script is being run simultaneously by more than one user. [1]
In most Linux distributions, the umask will be set to 0022, which causes newly-created files to have permissions rw-r--r--, meaning that only the user who creates the file has write permission. That's a sane security policy, but it complicates using a lockfile shared between two or more users. If the users are in the same group, you could adjust your umask so that new files are created with group write permission, remembering to set it back afterwards. For example (untested):
OLD_UMASK=$(umask)
umask 002
exec 200>"$lock"
umask $OLD_UMASK
Alternatively, you could apply the lock with only read permissions [2], taking care to ensure that the file is created first:
touch "$lock" 2>/dev/null # Don't care if it fails.
exec 200<"$lock" # Note: < instead of >
Notes:
[1]: Another issue with exec 200>file is that it will truncate the file if it does exist, so it is only appropriate for empty files. In general, you should use >> unless you know for certain that the file contains no useful information.
[2]: flock doesn't care what mode the file is open in. See man 1 flock for more information.
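Putting notes [1] and [2] together, here is a runnable sketch of the read-only variant, using /tmp as a hypothetical stand-in for /var/run (which ordinary users usually cannot write to):

```shell
#!/bin/bash
# Shared-between-users locking without shared write permission:
# any user who can *read* the lock file can take the lock.
lock=/tmp/demo-flock.lock       # hypothetical stand-in for /var/run
touch "$lock" 2>/dev/null       # create if absent; ignore failure
exec 200<"$lock"                # open read-only on fd 200 (note: <)
if flock -n 200; then
    echo "lock acquired"
else
    echo "another instance holds the lock"
    exit 1
fi
# ... critical section here ...
```

Because the file is opened read-only, it is never truncated and its ownership doesn't matter, sidestepping both problems described above.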
I was trying to use flock on a file with shared group permissions with a system account. Access permissions changed in Ubuntu 19.10 due to an updated kernel. You must be logged in as the user who owns the file, not merely a user whose group matches the file's group permissions. Even sudo -u will show 'permission denied' or 'This account is currently not available'. It affects regular files such as the lock files used by the flock command.
The reason for the change is due to security vulnerabilities.
There is a workaround to get the older behaviour back in:
create /etc/sysctl.d/protect-links.conf with the contents:
fs.protected_regular = 0
Then restart procps:
sudo systemctl restart procps.service
Run the whole script with sudo /path/script.sh instead of just /path/script.sh.

How to monitor en0 network interface on a Mac, without having to use sudo?

I have crafted a script (Python + bash) which uses tcpdump to monitor and filter the TCP headers that flow through a network interface. It works smoothly for other interfaces, but when it comes to the en0 Ethernet interface, macOS requires tcpdump to be executed as the root user (sudo).
Is there any programmatic solution by which I can bypass the need to run it with sudo?
I find that tools like Wireshark are able to do this without asking the user for a sudo password.
Any solution without requiring sudo would be great.
Is there any programmatic solution by which I can bypass the need to run it with sudo?
What do you mean by "programmatic"?
The way Wireshark does this is that its installer
creates an access_bpf group and puts the user into it;
installs a StartupItem that changes the group owner of the current BPF devices to access_bpf and changes the permissions on them to rw-rw---- (as per the ls -l /dev/bpf* output in jonschipp's answer);
so that the user who installs Wireshark can run programs that use BPF (all programs using libpcap use BPF on OS X; tcpdump and Wireshark both use libpcap) without having to run them as root (at least as long as the program doesn't need a new BPF device; they're automatically created as needed, but they're created with permissions rw------- and owned by user and group root).
So if you've installed Wireshark, you can run not only Wireshark (and TShark, and the dumpcap program that both of them use to do packet capturing) as an ordinary user and capture traffic, you can also, for example, run tcpdump as an ordinary user and capture traffic.
I.e., it's not something in the Wireshark code that enables this, so it's not "programmatic" in that sense; it's something installed by the Wireshark installer that enables this, and it enables it for all programs.
If you do not need to be in promiscuous mode then you can use tcpdump as a normal user. Use the '-p' option to disable promiscuous mode.
tcpdump -nni en0 -p
If you need to set your interface in promiscuous mode then you could enable the root account and become root via su and then proceed to run your script.
su - root
python myscript.py
Or
su -
python myscript.py
With the default sudo configuration it can be done like this (presuming an admin account called Administrator):
su Administrator
sudo su
python myscript.py
If you're concerned about the password prompt, sudo can avoid it if you configure the /etc/sudoers file to use the NOPASSWD option. You can then run your script as a normal user without a password prompt.
You may also try giving the bpf device files read permission for other users.
Note: I haven't tested this.
$ ls -l /dev/bpf*
crw-rw---- 1 root access_bpf 23, 0 Aug 4 22:17 /dev/bpf0
crw-rw---- 1 root access_bpf 23, 1 Aug 4 22:16 /dev/bpf1
crw-rw---- 1 root access_bpf 23, 2 Aug 4 22:16 /dev/bpf2
crw-rw---- 1 root access_bpf 23, 3 Aug 4 22:16 /dev/bpf3
e.g.
chmod o+r /dev/bpf*
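For completeness, what Wireshark's startup item does can be approximated by hand. A sketch only (macOS; run as root; the access_bpf group name matches what the Wireshark installer creates, someuser is a hypothetical account, and the permissions on newly created /dev/bpf* nodes revert after reboot unless a startup item reapplies them):

```shell
# Create the group, add the capturing user to it, and open up the
# existing BPF devices for group read/write.
dseditgroup -o create access_bpf
dseditgroup -o edit -a someuser -t user access_bpf
chgrp access_bpf /dev/bpf*
chmod g+rw /dev/bpf*
```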
