Xvfb on DISPLAY :0 and connect Xsession to DISPLAY :0 from chroot - x11

My situation:
The host computer runs Arch Linux; inside it, a chroot environment (also Arch Linux) is launched with the systemd-nspawn container technology.
I need to connect a NoMachine client to the chrooted system.
I have a simple script:
#!/bin/sh
COOKIE=`ps -ef | md5sum | cut -f 1 -d " "`
sudo xauth -f /var/run/Xvfb-0.auth add :0 MIT-MAGIC-COOKIE-1 $COOKIE
xauth add :0 MIT-MAGIC-COOKIE-1 $COOKIE
Xvfb :0 -auth /var/run/Xvfb-0.auth -screen 0 1680x1050x24 &
DISPLAY=:0 /etc/X11/Xsession startxfce4 &
This script is the correct way to activate a frame buffer on DISPLAY=:0 and connect to the X11 session with a recent release of the NoMachine client (4+).
But the script does not work in the chroot. Xvfb and startxfce4 start fine, but the NoMachine client tells me that no sessions were found on the remote server.
I tried to start Xvfb on the host system and connect to the host DISPLAY=:0 from the chroot container, without success: there is a problem with the MIT-MAGIC-COOKIE authentication.
In fact, I do not understand how my simple script works. Can anybody explain how this code works?
How do I activate an X11 session on DISPLAY=:0 from a chroot (systemd-nspawn, Arch Linux), and how do I connect to that session from a NoMachine client (nomachine.com, version >= 4)?

The problem with the invalid MIT-MAGIC-COOKIE was solved with
rm ~/.Xauthority && touch ~/.Xauthority
NoMachine says "No session on remote server".
You need to restart the NoMachine server after creating the Xvfb display:
/usr/NX/bin/nxserver --restart
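Putting the question's script and this answer together, a minimal sketch of the full order of operations might look like the following (paths, screen geometry and the Xsession invocation are taken from the question; /usr/NX/bin/nxserver assumes a default NoMachine installation):
#!/bin/sh
# reset the X authority file to avoid stale MIT-MAGIC-COOKIE entries
rm -f ~/.Xauthority && touch ~/.Xauthority
# generate a throwaway cookie and register it for display :0
COOKIE=$(ps -ef | md5sum | cut -f 1 -d " ")
sudo xauth -f /var/run/Xvfb-0.auth add :0 MIT-MAGIC-COOKIE-1 "$COOKIE"
xauth add :0 MIT-MAGIC-COOKIE-1 "$COOKIE"
# start the virtual framebuffer and an XFCE session on it
Xvfb :0 -auth /var/run/Xvfb-0.auth -screen 0 1680x1050x24 &
DISPLAY=:0 /etc/X11/Xsession startxfce4 &
# as noted above, restart the NoMachine server once the Xvfb display exists
sudo /usr/NX/bin/nxserver --restart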

Related

Change user using gotty cli

I created a user on my OS (Linux). I want to start gotty as that user, but I couldn't manage it.
gotty -w -p (port) "su - username" didn't work. (Actually it ran, but then closed the connection.)
I can only interact with gotty with this command: gotty -w -p (port) bash.
How can I change the user with gotty and keep a bash session in the browser using the gotty CLI?
This core Linux command runs commands as a specific user:
runuser -l $1 -c "gotty -w -once -p 3000"
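As a sketch, wrapped in a small helper script; the script name, the port and the trailing bash command (so that gotty has something to serve) are assumptions for illustration:
#!/bin/sh
# usage: ./gotty-as.sh <username>
# starts a one-shot gotty session on port 3000 running a shell for the given user;
# runuser itself must be invoked as root
runuser -l "$1" -c "gotty -w -once -p 3000 bash"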

needrestart behaves differently when run by ansible instead of a manual ssh connection

I am trying to run the needrestart tool via Ansible to check for processes with outdated libraries.
When I run needrestart with the command or shell modules from Ansible, it says that I need to restart my ssh daemon. When I run needrestart manually, it says that there are no processes with outdated libraries.
Restarting the ssh daemon does not make a difference, but after rebooting the remote server the ssh daemon is no longer listed as a service I should restart.
So I really do not understand what difference between the Ansible ssh connection and my manual ssh connection causes needrestart to behave differently.
Any help would be appreciated!
Thank you in advance and best regards
Max
My local machine
$ python -V
Python 2.7.13
$ ansible --version
ansible 2.2.0.0
$ cat ansible.cfg
[defaults]
inventory = hosts
ask_vault_pass = True
retry_files_enabled = False
I am using an ssh proxy to connect to the server:
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@jumphost.example.com"'
The remote server
$ cat /etc/debian_version
8.6
$ python -V
Python 2.7.9
Using ansible
$ ansible example.com -m command -a 'needrestart -b -l -r l'
Vault password:
example.com | SUCCESS | rc=0 >>
NEEDRESTART-VER: 1.2
NEEDRESTART-SVC: ssh.service
$ ansible example.com -m shell -a 'needrestart -b -l -r l'
Vault password:
example.com | SUCCESS | rc=0 >>
NEEDRESTART-VER: 1.2
NEEDRESTART-SVC: ssh.service
Using SSH
$ ssh example.com 'needrestart -b -l -r l'
NEEDRESTART-VER: 1.2
Killed by signal 1.
It looks like you have an active connection that is still served by an older ssh process. When ssh restarts, it does not terminate the existing processes that keep active connections alive. If it did, then sudo service ssh restart would kill your active connection and you would have a broken server.
So when you do systemctl restart sshd, you restart only the listening part, which accepts new connections. All existing connections are still served by the old ssh.
Why does Ansible keep the old ssh connection between runs? Because of the ControlMaster feature. It keeps an active ssh connection open between runs to speed up subsequent runs.
What to do? Close the active ssh connections on your machine. Try ps aux | grep ssh and you will see a process that serves as the ControlMaster. Kill it, and the outdated connection will be closed.
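As a sketch, you can also ask ssh itself to tear down the multiplexed connection instead of killing the process by hand (example.com stands for your host; the ControlPath below is only an assumption about where Ansible keeps its control sockets, so check your ansible.cfg):
# if the shared connection was set up via your own ~/.ssh/config:
ssh -O check example.com    # is a ControlMaster still alive for this host?
ssh -O exit example.com     # ask the master to exit and close the shared connection
# Ansible keeps its control sockets under its own path, so point ssh at that socket explicitly:
ssh -o ControlPath=~/.ansible/cp/ansible-ssh-%h-%p-%r -O exit example.com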

PostgreSQL - Scripting installation from Source

I'm trying to install PostgreSQL from source and script it for automatic installation.
Installing dependencies, downloading and compiling PostgreSQL works fine. But there are 3 commands that I need to run as the postgres user:
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
/usr/local/pgsql/bin/createdb test
I saw this link but it doesn't work in my script. Here is the output:
Success. You can now start the database server using:
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data/
or
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data/ -l logfile start
server starting
createdb: could not connect to database template1: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
admin@ip-172-31-27-106:~$ LOG: database system was shut down at 2015-03-27 10:09:54 UTC
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
And the script :
sudo su postgres <<-'EOF'
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data/ start
/usr/local/pgsql/bin/createdb pumgrana
EOF
After that, I need to press Enter and the server is running, but my database is not created. It seems like the script tries to create the database before the server is actually running, but I'm not sure. Can someone help me?
There are a few things wrong with that script:
pg_ctl should get a -w argument, making sure it waits until PostgreSQL has started before exiting.
You don't have any error checking, so it'll just keep going if something doesn't work. At minimum you should use set -e at the start.
I also suggest using sudo rather than su, which is kind of obsolete these days. You never need sudo su; that's what sudo -u is for. Using sudo also makes it easier to pass environment variables in. So I'd write something like (untested):
sudo -u postgres PATH="/usr/local/pgsql/bin:$PATH" bash <<-'EOF'
set -e
initdb -D /usr/local/pgsql/data/
pg_ctl -D /usr/local/pgsql/data/ -w start
createdb pumgrana
EOF
You might want to pass PGPORT or some other relevant env vars into the script too.
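For example, a sketch that passes a non-default port through the environment (5433 is just an illustrative value, and sudo's env_reset policy may need to permit the variable):
sudo -u postgres PATH="/usr/local/pgsql/bin:$PATH" PGPORT=5433 bash <<-'EOF'
set -e
initdb -D /usr/local/pgsql/data/
pg_ctl -D /usr/local/pgsql/data/ -w start
createdb pumgrana
EOF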
Completely separately to this ... why? Why do this? If you're automating an install from source, why not just build a .deb or .rpm automatically instead, then install that?

X11 connection rejected because of wrong authentication

I am getting an error while accessing Firefox using X11 forwarding.
[root@station2 ~]# firefox
KiTTY X11 proxy: wrong authorisation protocol attemptedKiTTY X11 proxy: wrong authorisation protocol attemptedError: cannot open display: localhost:10.0
I set up the following values in /etc/ssh/sshd_config:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
Installed the packages:
#yum install xorg-x11-xauth
#yum -y install xauth
[root@station2 .ssh]# echo $DISPLAY
localhost:10.0
#mkxauth -c
adding key for station2.example.com to /root/.Xauthority ... done
export XAUTHORITY=$HOME/.Xauthority
This fix worked for me
There is a scenario, hard if not impossible to find by search engine, that may cause that error message.
Preliminary note: The topic of this answer is not to discuss whether it is a safety
risk, or recommendable at all, to use a graphical desktop as root on a remote, display-less, webserver.
Scenario:
A remote internet-connected Linux server S has the domain
name example.com assigned to its public IPv4 address 192.0.2.1.
The /etc/hostname file on S contains the single line example.
The /etc/hosts
file on S contains the line 127.0.0.1 localhost example.com example.
The (remote) ssh access to S is forbidden for root by the line DenyUsers root
in the sshd configuration /etc/ssh/sshd_config on S, but
allowed for a dummy user user1. From a client computer C an ssh
connection, using the ssh parameter -X or -Y, is established to S
as user user1.
Then, in a remote terminal on S owned by user1,
if any X11-related command is executed as root, be it via
su and then trying to start the X11 desktop environment,
or, as in the concrete case, by executing a script containing
#!/bin/bash
su --preserve-environment -c "xfce4-session &" root
then the error message
X11 connection rejected because of wrong authentication.
is output and the start of any X11-related program fails.
The DISPLAY variable of root's environment then contains
example.com:10.0
One solution to the problem is, in this special case, to modify the line
127.0.0.1 localhost example.com example
in /etc/hosts to
127.0.0.1 localhost
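One way to see the mismatch is to compare the display name ssh put into DISPLAY with the entries actually stored in the X authority file; a minimal sketch (the output naturally depends on your own hostnames):
# which display name did sshd allocate for this session?
echo $DISPLAY
# which display names have cookies in the current X authority file?
xauth list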
Solution: run the application as the same user you are SSHing with.
I have also encountered such errors while using X11.
The source of my problem was that I used SSH with my own username (which was not root).
Then, once logged in, I tried running things with X11 after doing "su" or "sudo".
The problem with that is that the SSH session is configured for your own username, e.g. Raj, but then you switch to user root, which is not part of the X11 session.
So what you should do is simply run the application (firefox in your case) as the same user you started the X11 session with.
Hope this helps.
Talel.
I ran into this running gvim over ssh -t -Y, and the solution that worked for me was:
xauth add $(xauth -f ~<logon_user>/.Xauthority list | tail -1) ; export NO_AT_BRIDGE=1 # gvim X11 fix for remote GUI failure after su
I do not know where I stumbled on this answer, so I cannot give credit to the author.
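In other words, after switching to root you import the login user's cookie for the forwarded display into root's own .Xauthority. A minimal sketch, assuming the login user is called user1 (the name is only a placeholder):
# as root, after "su -" from user1's X11-forwarded ssh session:
# copy the most recent cookie from user1's .Xauthority into root's
xauth add $(xauth -f ~user1/.Xauthority list | tail -1)
# DISPLAY must still point at the forwarded display, e.g. localhost:10.0
echo $DISPLAY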

Why does my rpm installation hang when run remotely

I have an AIX 6.1 server where I want to uninstall an rpm.
The uninstallation can be done directly on the server:
[user@server]$ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
This uninstallation works.
I have a script launching this uninstallation:
Uninstall.sh
#!/usr/bin/bash
set -x
sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
I can run this script on the server without any problem:
[user@server]$ cd /where/is/the/script;./Uninstall.sh
+ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
_MyRPM-1.0.0 has been uninstalled successfully
But when I run this script remotely, the rpm command hangs:
[user@client]$ ssh user@server "cd /where/is/the/script;./Uninstall.sh"
+ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
The command hangs and I need to kill it in order to end the ssh session.
PS: I get exactly the same behavior for installation and uninstallation.
EDIT:
The problem seems to come from sudo. The hang also appears when I do anything with sudo.
For example with a new script:
test.sh
#!/usr/bin/bash
set -x
sudo env
Sudo normally requires that a user authenticate as themselves, and if I recall correctly it can act differently under remote execution due to the way the terminal is handled.
I don't have a system to test this on at the moment, but you could try ssh's -t or -T switches:
-T Disable pseudo-tty allocation.
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
I suspect you could get this to work by adding the script you're remotely executing to /etc/sudoers:
{user} ALL=NOPASSWD:/where/is/the/script/Uninstall.sh
Then try:
ssh -t user@server /where/is/the/script/Uninstall.sh
EDIT:
Found some details to help explain why sudo is behaving differently when executed remotely:
http://www.sudo.ws/sudoers.man.html
The sudoers security policy requires that
most users authenticate themselves before they can use sudo. A
password is not required if the invoking user is root, if the target
user is the same as the invoking user, or if the policy has disabled
authentication for the user or command.
Perhaps it's hanging because it's trying to authenticate, whereas locally it wouldn't need to do so.
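A minimal sketch of the two workarounds described above (the username, host and script path are the placeholders already used in the question):
# option 1: force a pseudo-tty so sudo can prompt for a password interactively
ssh -t user@server "cd /where/is/the/script; ./Uninstall.sh"
# option 2: allow this one script to run via sudo without a password
# (added on the server with visudo)
#   user ALL=NOPASSWD:/where/is/the/script/Uninstall.sh
ssh user@server "cd /where/is/the/script; ./Uninstall.sh"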
