How to detect if the current Go process is running in a headless (non-GUI) environment? - go

I have a Go program that installs a tray icon. If the process is headless, that is, unable to create a graphical user interface, the program still makes sense and should run, but obviously it should not install the tray icon.
What is the way in Go to detect whether the current Go process is headless?
Currently, I use the following code:
func isHeadless() bool {
    _, display := os.LookupEnv("DISPLAY")
    return !(runtime.GOOS == "windows" || display)
}
This code works just fine on a "normal" Windows, Linux, or Mac OS X, and I bet it will also run just fine on FreeBSD, NetBSD, Dragonfly and many others.
Still, that code obviously has a lot of problems:
It assumes that Windows is never headless (wrong: what if the process was started without a user logged in? Also, there's Windows 10 IoT Core, which can be configured to run headless: https://learn.microsoft.com/en-us/windows/iot-core/learn-about-hardware/headlessmode)
It doesn't support Android (which also has a headless variant for IoT).
It assumes that everything non-Windows has an X server and thus a DISPLAY environment variable (wrong; for example, Android).
So, what is the correct way in Go to detect whether the current process is headless / running in a headless environment?
I'm not looking for workarounds, like adding a --headless command line switch to my program. Because, I already have that anyway for users who have heads but want the program to behave as if it were headless.
In some other programming environments, such capabilities exist. For example, Java has java.awt.GraphicsEnvironment.isHeadless(), and I'm looking for a similar capability in Go.
Some people have suggested to simply try creating the UI and catch the error. This does not work, at least not with the library that I use. I use github.com/getlantern/systray. When systray.Run() cannot create the UI, the process dies. My code to set up the system tray looks like this:
func setupSystray() { // called from main()
    go func() {
        systray.Run(onReady, nil)
    }()
}

func onReady() {
    systray.SetTitle("foo")
    // ...
}
When I run this code on Linux with DISPLAY unset, the output is as follows:
$ ./myapp-linux-amd64
Unable to init server: Could not connect: Connection refused
(myapp-linux-amd64:5783): Gtk-WARNING **: 19:42:37.914: cannot open display:
$ echo $?
1
It could be argued that this is a flaw in the library (and I have filed a ticket against it: https://github.com/getlantern/systray/issues/71), but nonetheless other APIs and environments provide an isHeadless() function, and I'm looking for an equivalent in Go.

I think you might be attacking this problem from a wrong angle.
Detecting reliably that your program really sees a headless machine is, IMO, rather futile for a number of reasons.
Hence I think I'd adopt an approach commonly used when, say, working with filesystems:
Try to perform an operation.
If it fails, collect the error.
Analyze the error and act accordingly.
That is, just try to explicitly initialize the client (yours) side of whatever works with the GUI stack in your code, trap any possible error and analyze it. If it says it failed to initialize the subsystem, then just raise a relevant flag and proceed.

In the perceived absence of a library/solution for this, I've created one myself. https://github.com/christianhujer/isheadless
Example Usage:
package main

import (
    . "fmt"
    . "github.com/christianhujer/isheadless"
    . "os"
)

func main() {
    headless := IsHeadless()
    Fprintf(Stderr, "%s: info: headless: %v\n", Args[0], headless)
    Exit(map[bool]int{true: 0, false: 1}[headless])
}
Example runs:
$ ./isheadless ; echo $?
./isheadless: info: headless: false
1
$ DISPLAY= ./isheadless ; echo $?
./isheadless: info: headless: true
0

Well, the answer to the question precisely as it was stated
is to just look at what Java does in its isHeadless().
Here is what OpenJDK 10 does.
I cannot copy the code as it would supposedly breach its license,
but in essence, the breakdown is as follows:
Get system property "java.awt.headless"; use it, if found.
Get system property "javaplugin.version"; if it exists,
the session is not headless. Use this value.
Get system property "os.name". If it literally contains
the substring "OS X" and the system property "awt.toolkit"
equals the string "sun.awt.HToolkit", the session is not headless.
Use this value.
Check whether the system property "os.name"
equals one of "Linux", "SunOS", "FreeBSD", "NetBSD", "OpenBSD"
or "AIX", and if so, try to find an environment variable "DISPLAY";
if it's absent, the session is headless.
As you can see, in reality the check is pretty lame
and I fail to see any special treatment of Windows.
Still, this answers your question precisely.

Related

How to simulate a TTY while also piping stdio?

I'm looking for a cross-platform solution for simulating a TTY (PTY?) in Rust while also piping stdio.
The frontend is based on web technologies where an interactive terminal is shown. Users can run commands and all their inputs will be sent to the Rust backend, where the commands get executed. Std{in,out,err} are sent back and forth to allow for an interactive experience.
Here's a simplified example (piping only stdout):
let mut child = Command::new(command)
    .stdout(Stdio::piped())
    .spawn()
    .expect("Command failed to start");

// take the piped stdout and prepare a read buffer
let mut reader = child.stdout.take().expect("child had no stdout");
let mut chunk = [0u8; 1024];

loop {
    let read = reader.read(&mut chunk);
    if let Ok(len) = read {
        if len == 0 {
            break;
        }
        let chunk = &chunk[..len];
        send_chunk(chunk); // send chunk to frontend
    } else {
        eprintln!("Err: {}", read.unwrap_err());
    }
}
Currently, running the command tty prints not a tty, but ideally it should output a file name (e.g. /dev/ttys002). And programs such as atty should return true.
Running only the backend in a terminal, with stdio inherited works, but then I can't send the stdio back to the frontend.
Define "cross platform". As far as PTYs are concerned, they are pseudo devices supported by the kernel, complete with ioctls and everything. In fact, much of what your terminal emulator has to do is implement the receiving end of those ioctls.
As long as you're on a machine with the BSD API (which includes Linux), the best course of action would be to openpty and roll with that. If you want to be portable to non-BSD, PTY-capable systems, you'll have to hook the tty functions in the child process (by preloading a helper library).

Why does the golang.org/x/sys package encourage the use of the syscall package it's meant to replace?

I have read some Go code making use of syscall for low-level interaction with the underlying OS (e.g. Linux or Windows).
I wanted to make use of the same package for native Windows development, but its documentation says it's deprecated in favor of golang.org/x/sys:
$ go doc syscall
package syscall // import "syscall"
Package syscall contains an interface to the low-level operating system
primitives.
...
Deprecated: this package is locked down. Callers should use the
corresponding package in the golang.org/x/sys repository instead. That is
also where updates required by new systems or versions should be applied.
See https://golang.org/s/go1.4-syscall for more information.
Now, reading the documentation for golang.org/x/sys and inspecting its code, it relies heavily on and encourages the use of the syscall package:
https://github.com/golang/sys/blob/master/windows/svc/example/beep.go
package main

import (
    "syscall"
)

var (
    beepFunc = syscall.MustLoadDLL("user32.dll").MustFindProc("MessageBeep")
)

func beep() {
    beepFunc.Call(0xffffffff)
}
and
https://godoc.org/golang.org/x/sys/windows#example-LoadLibrary
...
r, _, _ := syscall.Syscall(uintptr(proc), 0, 0, 0, 0)
...
Why does golang/x/sys rely and encourage the use of the package it's meant to replace?
Disclaimer: I'm pretty new to Go specifically (though not to low-level OS programming). Still, the path here seems clear.
Go, as an ecosystem—not just the language itself, but all the various libraries as well—tries1 to be portable. But direct system calls are pretty much not portable at all. So there is some tension here automatically.
In order to do anything useful, the Go runtime needs various services from the operating system, such as creating OS-level threads, sending and receiving signals, opening files and network connections, and so on. Many of these operations can be, and have been, abstracted away from how it is done on operating systems A, B, and C to generic concepts supported by most or all OSes. These abstractions build on the actual mechanisms in the various OSes.
They may even do this in layers internally. A look at the Go source for the os package, for instance, shows file.go, file_plan9.go, file_posix.go, file_unix.go, and file_windows.go source files. The top of file_posix.go shows a +build directive:
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build aix darwin dragonfly freebsd js,wasm linux nacl netbsd openbsd solaris windows
Clearly this code itself is not completely portable, but the routines it implements for os, which are wrapped by the os.File abstraction, suffice for all POSIX-conformant systems. That reduces the amount of code that has to go into the Unix/Linux-specific file_unix.go file, for instance.
To the extent that OS-level operations can be wrapped into more-abstract, more-portable operations, then, the various built-in Go packages do this. You don't need to know whether there's a different system call for opening a device-file vs a text-file vs a binary-file, for instance, or a long pathname vs a short one: you just call os.Create or os.Open and it does any work necessary behind the scenes.
This whole idea just doesn't fly with system calls. A Linux system call to create a new UID namespace has no Windows equivalent.2 A Windows WaitForMultipleObjects system call has no real equivalent on Linux. The low level details of a stat/lstat call differ from one system to another, and so on.
In early versions of Go, there was some attempt to paper over this with the syscall package. But the link you quoted—https://golang.org/s/go1.4-syscall—describes this attempt as, if not failed, at least overstretched. The last word in the "problems" section is "issues".
The proposal at this same link says that the syscall package is to be frozen (or mostly frozen) as of Go 1.4: don't put new features into it. But the features that are in it are sufficient to implement the new, experimental golang.org/x/sys/* packages, or at least some of them. There's no harm in the experimental package borrowing the existing, formally-deprecated syscall package if that does what the experimental new package needs.
Things in golang.org/x/ are experimental: feel free to use them, but be aware that there are no compatibility promises across version updates, unlike things in the standard packages. So, to answer the last line of your question:
Why does golang/x/sys rely [on] and encourage the use of the package it's meant to replace?
It relies on syscall because that's fine. It doesn't "encourage the use of" syscall at all though. It just uses it when that's sufficient. Should that become insufficient, for whatever reason, it will stop relying on it.
Answering a question you didn't ask (but I did): suppose you want Unix-specific stat information about a file, such as its inode number. You have a choice:
info, err := os.Stat(path) // or os.Lstat(path), etc
if err != nil { ... handle error ... }
raw, ok := info.Sys().(*syscall.Stat_t)
if !ok { ... do whatever is appropriate ... }
inodeNumber := raw.Ino
or:
var info unix.Stat_t
err := unix.Stat(path, &info) // or unix.Lstat, etc
if err != nil { ... handle error ... }
inodeNumber := info.Ino
The advantage to the first block of code is that you get all the other (portable) information about the file—its mode and size and time-stamps, for instance. You maybe do, maybe don't get the inode number; the !ok case tells you whether you did. The primary disadvantage here is that it takes more code to do this.
The advantage to the second block of code is that it says just what you mean. You either get all the information from the stat call, or none of it. The disadvantages are obvious:
it only works on Unix-ish systems, and
it uses an experimental package, whose behavior might change.
So it's up to you which of these matters more to you.
1Either this is a metaphor, or I've just anthropomorphized this. There's an old rule: Don't anthropomorphize computers, they hate that!
2A Linux UID namespace maps from UIDs inside a container to UIDs outside the container. That is, inside the container, a file might be owned by UID 1234. If the file is in a file system that is also mounted outside the container, that file can be owned by a different owner, perhaps 5678. Changing the ownership on either "side" of the container makes the change in that side's namespace; the change shows up on the other side as the result of mapping, or reverse-mapping, the ID through the namespace mapping.
(This same trick also works for NFS UID mappings, for instance. The Docker container example above is just one use, but probably the most notable one these days.)

How to demonstrate memory visibility problems in Go?

I'm giving a presentation about the Go Memory Model. The memory model says that without a happens-before relationship between a write in one goroutine, and a read in another goroutine, there is no guarantee that the reader will observe the change.
To have a bigger impact on the audience, instead of just telling them that bad things can happen if you don't synchronize, I'd like to show them.
When I run the below code on my machine (2017 MacBook Pro with 3.5GHz dual-core Intel Core i7), it exits successfully.
Is there anything I can do to demonstrate the memory visibility issues?
For example are there any specific changes to the following values I could make to demonstrate the issue:
use different compiler settings
use an older version of Go
run on a different operating system
run on different hardware (such as ARM or a machine with multiple NUMA nodes).
For example in Java the flags -server and -client affect the optimizations the JVM takes and lead to visibility issues occurring.
I'm aware that the answer may be no, and that the spec may have been written to give future maintainers more flexibility with optimization. I'm aware I can make the code never exit by setting GOMAXPROCS=1 but that doesn't demonstrate visibility issues.
package main

var a string
var done bool

func setup() {
    a = "hello, world"
    done = true
}

func main() {
    go setup()
    for !done {
    }
    print(a)
}
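For contrast, the smallest change that makes the program above correct is to add the missing happens-before edge; a channel close/receive pair is the idiomatic way. This sketch shows the fix, not a reproduction of the failure (which, as the question notes, is hard to provoke on current hardware).

```go
package main

// setupAndWait restructures the racy example: everything written
// before close(ready) is guaranteed, by the memory model, to be
// visible after the corresponding receive.
func setupAndWait() string {
	var a string
	ready := make(chan struct{})
	go func() {
		a = "hello, world"
		close(ready) // happens-before the receive below
	}()
	<-ready // after this, the write to a is visible
	return a
}

func main() {
	print(setupAndWait())
}
```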

drmDropMaster requires root privileges?

Pardon for the long introduction, but I haven't seen any other questions for this on SO.
I'm playing with DRM (Direct Rendering Manager, a wrapper for Linux kernel mode setting) and I'm having difficulty understanding a part of its design.
Basically, I can open a graphics card device in my virtual terminal, set up frame buffers, and change the connector and its CRTC just fine. This results in me being able to render to the VT in a lightweight graphics mode without needing an X server (that's what KMS is about; in fact, the X server uses it underneath).
Then I wanted to implement graceful VT switching, so when I hit Ctrl+Alt+F3 etc., I can see my other consoles. It turns out this is easy to do by calling ioctl() with stuff from linux/vt.h and handling some user signals.
But then I tried to switch from my graphic program to a running X server. Bzzt! didn't work at all. X server didn't draw anything at all. After some digging I found that in Linux kernel, only one program can do kernel mode setting. So what happens is this:
I switch from X to a virtual terminal
I run my program
This program enters graphic mode with drmOpen, drmModeSetCRTC etc.
I switch back to X
X no longer has the privileges to restore its own mode.
Then I found this in wayland source code: drmDropMaster() and drmSetMaster(). These functions are supposed to release and regain privileges to set modes so that X server can continue to work, and after switching back to my program, it can take it from there.
Finally the real question.
These functions require root privileges. This is the part I don't understand. I can mess with kernel modes, but I can't say "okay X11, I'm done playing, I'm giving you the access now"? Why? Or should this work in theory, and I'm just doing something wrong in my code? (e.g. work with wrong file descriptors, or whatever.)
If I try to run my program as a normal user, I get "permission denied". If I run it as root, it works fine - I can switch from X to my program and vice versa.
Why?
Yes, drmSetMaster and drmDropMaster require root privileges because they allow you to do mode setting. Otherwise, any random application could display whatever it wanted to your screen. weston handles this through a setuid launcher program. The systemd people also added functionality to systemd-logind (which runs as root) to do the drm{Set,Drop}Master calls for you. This is what enables recent X servers to run without root privileges. You could look into this if you don't mind depending on systemd.
Your post seems to suggest that you can successfully call drmModeSetCRTC without root privileges. This doesn't make sense to me. Are you sure?
It is up to display servers like X, weston, and whatever you're working on to call drmDropMaster before it invokes the VT_RELDISP ioctl, so that the next session can call drmSetMaster successfully.
Before digging into why it doesn't work, I had to understand how it works.
So, calling drmModeSetCRTC and drmSetMaster in libdrm in reality just calls ioctl:
include/xf86drm.c
int drmSetMaster(int fd)
{
    return ioctl(fd, DRM_IOCTL_SET_MASTER, 0);
}
This is handled by the kernel. In my program, the most important functions that control the display are drmModeSetCRTC and drmModeAddFB; the rest is just diagnostics, really. So let's see how they're handled by the kernel. It turns out there is a big table that maps ioctl events to their handlers:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, drm_mode_addfb2, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
};
This is used by the drm_ioctl, out of which the most interesting part is drm_ioctl_permit.
drivers/gpu/drm/drm_ioctl.c
long drm_ioctl(struct file *filp,
               unsigned int cmd, unsigned long arg)
{
    ...
    retcode = drm_ioctl_permit(ioctl->flags, file_priv);
    if (unlikely(retcode))
        goto err_i1;
    ...
}

static int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
{
    /* ROOT_ONLY is only for CAP_SYS_ADMIN */
    if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))
        return -EACCES;

    /* AUTH is only for authenticated or render client */
    if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) &&
                 !file_priv->authenticated))
        return -EACCES;

    /* MASTER is only for master or control clients */
    if (unlikely((flags & DRM_MASTER) && !file_priv->is_master &&
                 !drm_is_control_client(file_priv)))
        return -EACCES;

    /* Control clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_CONTROL_ALLOW) &&
                 drm_is_control_client(file_priv)))
        return -EACCES;

    /* Render clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_RENDER_ALLOW) &&
                 drm_is_render_client(file_priv)))
        return -EACCES;

    return 0;
}
Everything makes sense so far. I can indeed call drmModeSetCrtc because I am the current DRM master. (I'm not sure why. This might have to do with X11 properly waiving its rights once I switch to another VT. Perhaps this alone allows me to become automatically the new DRM master once I start messing with ioctl?)
Anyway, let's take a look at the drmDropMaster and drmSetMaster definitions:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
    DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
    ...
};
What.
So my confusion was correct. I don't do anything wrong, things really are this way.
I'm under the impression that this is a serious kernel bug. Either I shouldn't be able to set the CRTC at all, or I should be able to drop/set master. In any case, revoking every non-root program's right to draw to the screen because
any random application could display whatever it wanted to your screen
is too aggressive. I, as the user, should have the freedom to control that without giving root access to the whole program, nor depending on systemd, for example via chmod 0777 /dev/dri/card0 (or group management). As it is now, it looks to me like a lazy man's answer to proper permission management.
Thanks for writing this up. This is indeed the expected outcome; you don't need to look for a subtle bug in your code.
It's definitely intended that you can become the master implicitly. A dev wrote example code as initial documentation for DRM, and it does not use SetMaster. And there is a comment in the source code (now drm_auth.c) "successfully became the device master (either through the SET_MASTER IOCTL, or implicitly through opening the primary device node when no one else is the current master that time)".
DRM_ROOT_ONLY is commented as
/**
* #DRM_ROOT_ONLY:
*
* Anything that could potentially wreak a master file descriptor needs
* to have this flag set. Current that's only for the SETMASTER and
* DROPMASTER ioctl, which e.g. logind can call to force a non-behaving
* master (display compositor) into compliance.
*
* This is equivalent to callers with the SYSADMIN capability.
*/
The above requires some clarification IMO. The way logind forces a non-behaving master is not simply by calling SETMASTER for a different master - that would actually fail. First, it must call DROPMASTER on the non-behaving master. So logind is relying on this permission check, to make sure the non-behaving master cannot then race logind and call SETMASTER first.
Equally logind is assuming the unprivileged user doesn't have permission to open the device node directly. I would suspect the ability to implicitly become master on open() is some form of backwards compatibility.
Notice, if you could drop your master, you couldn't use SETMASTER to get it back. This means the point of doing so is rather limited - you can't use it to implement the traditional switching back and forth between multiple graphics servers.
There is a way you can drop the master and get it back: close the fd, and re-open it when needed. It sounds to me like this would match how old-style X (pre-DRM?) worked - wasn't it possible to switch between multiple instances of the X server, and each of them would have to completely take over the hardware? So you always had to start from scratch after a VT switch. This is not as good as being able to switch masters though; logind says
/* On DRM devices we simply drop DRM-Master but keep it open.
* This allows the user to keep resources allocated. The
* CAP_SYS_ADMIN restriction to DRM-Master prevents users from
* circumventing this. */
As of Linux 5.8, drmDropMaster() no longer requires root privileges.
The relevant commit is 45bc3d26c ("drm: rework SET_MASTER and DROP_MASTER perm handling").
The source code comments provide a good summary of the old and new situation:
In the olden days the SET/DROP_MASTER ioctls used to return EACCES when
CAP_SYS_ADMIN was not set. This was used to prevent rogue applications
from becoming master and/or failing to release it.
At the same time, the first client (for a given VT) is always master.
Thus in order for the ioctls to succeed, one had to explicitly run the
application as root or flip the setuid bit.
If the CAP_SYS_ADMIN was missing, no other client could become master...
EVER :-( Leading to a) the graphics session dying badly or b) a completely
locked session.
...
Here we implement the next best thing:
ensure the logind style of fd passing works unchanged, and
allow a client to drop/set master, iff it is/was master at a given point
in time.
...

Track every system and external library call in an OS X app

I want to examine every system and external library call of a given application, together with the data structures that are passed around. (The application in question is some kind of packaged software based on OpenSSL and the OS X keychain, and I want to see if I can get hold of the private key, which is marked as non-extractable in Keychain Access.)
How could I do that on OS X?
I think DTrace comes to mind, but I couldn't find any sample tricks to do the above.
To examine every system call and external library call, the DTrace script is like this:
#!/usr/sbin/dtrace -s

syscall:::entry
/ pid == $1 /
{
}

pid$1:lib*::entry
{
}
Usage: ./check.d pid (the process ID). For the input parameters, you can use arg0...argN (of type uint64_t) to refer to them.
But I think you should first narrow down the relevant syscalls and library functions; otherwise the output will be very large and hard to work with.
Hope this can help!
