How to shut down/reboot an Ubuntu system using Ruby

I have a problem using system 'reboot' in my Sinatra application.
I got a success response, but the reboot action doesn't happen.
I also tried exec 'reboot', but got the same response.
If I run this outside the Docker container, it works.

Use Semaphore Flags to Signal the Host
You can't directly reboot a host from within a Docker container. That would violate basic security principles. Pragmatically, though, you can mount host directories inside your Docker container. That opens up a number of possibilities.
In your case, if you're sure you want to do this and that it won't create disruption for other containers, services, or users, then one approach would be to:
Mount a subdirectory from the host’s /tmp or /var/tmp inside your Docker container.
Have your Sinatra application write an empty semaphore file like trigger_reboot into the mounted subdirectory when the route is triggered.
Have a cron job on the host that looks for the semaphore file each minute, and then executes a privileged shutdown script when it's present (a sketch of this host side follows below).
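A rough sketch of the host side, assuming a shared directory /var/tmp/reboot-flags (mounted into the container with something like -v /var/tmp/reboot-flags:/reboot-flags) and a privileged script /usr/local/sbin/reboot_if_flagged.sh; all of these names are placeholders, not part of the original setup:
#!/bin/bash
# /usr/local/sbin/reboot_if_flagged.sh -- runs as root from cron on the host.
FLAG=/var/tmp/reboot-flags/trigger_reboot
if [ -f "$FLAG" ]; then
    rm -f "$FLAG"          # remove the flag first so a reboot loop is impossible
    /sbin/shutdown -r now
fi
The matching entries in root's crontab on the host check the flag every minute and recreate the directory structure after a restart:
* * * * * /usr/local/sbin/reboot_if_flagged.sh
@reboot mkdir -p /var/tmp/reboot-flags && rm -f /var/tmp/reboot-flags/trigger_reboot
Inside the container, the Sinatra route only needs to create the flag file, for example with FileUtils.touch('/reboot-flags/trigger_reboot').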
Caveats and Considerations
This will work, but be aware of a few simple caveats:
Your privileged script should be separate from the semaphore file to avoid executing arbitrary commands on the host from within a container.
Your privileged script should remove the semaphore file before triggering the shutdown or reboot.
You should have an init script, @reboot cron job, or initialization routine in your web application to ensure that:
The necessary directory structures in your temporary directory are recreated when the computer or web application restarts.
The semaphore file is actually removed after a restart, so you don't get stuck in a reboot loop.
Your privileged reboot script will need to run SUID root, or at least be a pre-authorized command defined in your sudoers file (see the example below).
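If you go the sudoers route, a pre-authorization entry along these lines lets an unprivileged account run only that one script without a password (the account name and path are placeholders):
# /etc/sudoers.d/reboot-flag, edited with visudo -f
webappuser ALL=(root) NOPASSWD: /usr/local/sbin/reboot_if_flagged.sh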
It's possible to do what you want with semaphore files and a small set of cooperating processes on the host system, but you need to take security and robustness into consideration to do this safely and reliably.

Related

Crontab and haproxy

I have some strange behavior that I do not understand:
I'm running HAProxy as a load balancer and security shield in front of other web containers.
HAProxy is running fine and uses my configured Let's Encrypt certificate file.
So far, so good.
When the SSL certificate is about to expire, a new one, including all needed key files, is generated and replaces the old key files.
After that, HAProxy must reload its config.
Now: when I call
cd /etc/haproxy
service haproxy reload
or run the script itself from the command line, everything works absolutely fine.
As soon as I call it via cron, it doesn't work.
There is no error, and the reconfigure script runs through to the end.
/etc/haproxy/bin/request_letsencrypt_certificate.sh:
#!/bin/bash
cd /etc/haproxy
service haproxy reload
crontab -e as root:
# LetsEncrypt | recert
* * * * * /etc/haproxy/bin/request_letsencrypt_certificate.sh
(I changed it to run every minute for testing purposes)
When using echo test > run.txt, the file is created every minute, so the script is started successfully, but the service command does not seem to be executed.
What could be the problem? Why does it work on the command line but not from cron?
Both actions are taken with root permissions as root itself (and when dumping the user in the cron call via whoami in the script, "root" is confirmed at runtime).
Does it work if you change service to /sbin/service, so the script doesn't rely on cron's limited PATH? You also probably don't need the cd /etc/haproxy in the script.
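For illustration, a cron-safe variant of the script spells out its paths instead of relying on cron's minimal environment (binary locations assumed for a typical Debian/Ubuntu layout; adjust if service lives in /sbin on your system):
#!/bin/bash
# Cron runs with a minimal PATH, so set one explicitly or use absolute paths.
PATH=/usr/sbin:/usr/bin:/sbin:/bin
/usr/sbin/service haproxy reload
# On systemd-based hosts the equivalent would be: systemctl reload haproxy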

Running bash script on GCP VM instance programmatically

I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Golang SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not contain functionality that allows running a bash script on a specific instance (unlike the Azure Cloud SDK, for example).
Options I've found:
The Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
Add an instance-level public SSH key, establish an SSH connection, and run a script using a Go SSH client.
Problems:
Obviously, a startup script requires an instance reboot, and this is not possible in my use case.
SSH might also be problematic if the instance is not running an SSH daemon or the SSH port is not open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin might be false), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorized to change the configuration of those VMs (for example, change the SSH daemon config to permit root login); I can only use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instance to additional risks.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no such API for triggering a bash script inside the VM from outside. The best solution is to deploy a web server (a REST API) inside the VM that calls the script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM, launched by a startup script, that listens for a custom metadata key; say it checks the key every second.
When the metadata is updated, the daemon can perform actions. You could have the metadata contain the script to run along with its parameters. At the end of the run, the daemon cleans up the metadata.
So now, to run your bash script, call the setMetadata API. It's not out of the box, but you get something similar to what you expected.
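A minimal sketch of such a daemon, assuming a custom metadata key called run-script (the key name, the one-second polling interval, and the compute read/write scope needed on the VM's service account are all assumptions):
#!/bin/bash
# Poll the metadata server for a "run-script" key, execute its contents,
# then clear the key so the command only runs once.
MD=http://metadata.google.internal/computeMetadata/v1
INSTANCE=$(curl -s -H "Metadata-Flavor: Google" "$MD/instance/name")
ZONE=$(curl -s -H "Metadata-Flavor: Google" "$MD/instance/zone" | awk -F/ '{print $NF}')
while true; do
    cmd=$(curl -s -f -H "Metadata-Flavor: Google" "$MD/instance/attributes/run-script" || true)
    if [ -n "$cmd" ]; then
        bash -c "$cmd"
        gcloud compute instances remove-metadata "$INSTANCE" --zone "$ZONE" --keys run-script
    fi
    sleep 1
done
From outside (or from your Go application via the setMetadata call), triggering it would look something like:
gcloud compute instances add-metadata my-instance --zone us-central1-a --metadata run-script='touch /tmp/hello'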
Think of GCP as providing the virtual machine infrastructure such as compute, memory, disk, and networking. What runs when the machine boots is between you and the machine image. I am hearing you say that you want to run a bash script within the VM. That is outside the governance of GCP, which only affects the operation and existence of the environment. If what you want is to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine. I'm going to say that there is no such API.

send argument/command to already running PowerShell script

Until we can implement our new HEAT SM system, I need to create some workflows to ease our currently manual user administration processes.
I intend to use PowerShell to execute the actual tasks but need to use VBS to send an argument to PS from an app.
My main question on this project is: can an argument be sent to an already running PowerShell process?
Example:
We have a PS menu app that we will launch in the AM and leave running all day.
I would love for there to be a way to allow PS to listen for commands/args and take action on them as they come in.
The reason I want to do it this way is that one of the tasks needs to disable Exchange features, and the script will need to establish a connection to a remote PSSession, which, in our environment, can take between 10 and 45 seconds. If I were to invoke the command directly from HEAT (call-logging software), it would lock up, preventing the tech from moving on to another case until the script terminates.
I have searched all over for similar functionality, but I fear that this is not possible with PS.
Any suggestions?
I had already set up a script to follow this recommendation, but I was curious to see if there was a more seamless approach.
As suggested in one of the comments by Tony Hinkle:
I would have the PS script watch for a file, and then have the VBScript create a file with the arguments. You would either need to start the watcher on another thread (since the menu is waiting for user input), or just use a separate script that in turn starts another instance of the existing PS script with a parameter used to specify the needed action.

How to run an application inside docker safely

I want to run an arbitrary application inside a Docker container safely, like within a VM. To do so, I save the application (which I downloaded from the web and do not trust) inside a directory of the host system, create a volume that maps this directory to the home directory of the container, and then run the application inside the container. Are there any security issues with this approach? Are there better solutions to accomplish the same task?
Moreover, to install all the necessary dependencies, I execute an arbitrary script inside a bash terminal running inside the container: could this be dangerous?
To add to Dimitris' answer, there are other things you need to consider.
There are certain things containers do not contain. Docker uses namespaces to alter a process's view of the system, i.e. network, shared memory, etc. But keep in mind it is not like KVM: Docker containers talk to the kernel directly, unlike KVM VMs, for example through /proc/sys.
So if the arbitrary application tries to access kernel subsystems like cgroups, /proc/sys, /proc/bus, etc., you could be in trouble. I would say it's fine unless it's a multi-tenant system.
As long as you do not give the application sudo access, you should be good to try it out.
Dependencies are better off defined in the Dockerfile in a clear way for others to see. Opting to run a script instead will also do the job, but it's less convenient.
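As a rough illustration of the point about privileges (the image name, UID, and paths are placeholders, not a definitive hardening recipe):
#!/bin/bash
# Run the untrusted app as a non-root user, with no capabilities, no network,
# a read-only root filesystem, and the host directory mounted read-only.
docker run --rm \
    --user 1000:1000 \
    --cap-drop ALL \
    --security-opt no-new-privileges \
    --network none \
    --read-only --tmpfs /tmp \
    -v "$PWD/untrusted-app:/home/app:ro" \
    sandbox-image:latest /home/app/run.sh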

how to run a bash script at startup with a specific user on Ubuntu 12.04 (stable)

Being fairly new to the Linux environment, and not having local resources to inquire of, I would like to ask what the preferred method is for starting a process at startup as a specific user on an Ubuntu 12.04 system. The reasoning for such a setup is that these machines will be hosting an Input/Output Controller (IOC) in an industrial setting. If a machine fails or restarts, this process must boot automatically, every time.
My internet searches have turned up two areas for performing this task:
/etc/rc.local
/etc/init.d/
I ask for the specific advantages and disadvantages of each approach. I'll add that some of these machines are clients and some are servers, but all need to run an IOC, and preferably in the same manner.
Within whichever method above is deemed most appropriate, a bash shell script must be run as my specified user. It is my understanding that all startup processes are owned by root, so I question whether this is the best practice:
sudo -u <user> start_ioc.sh
If this is the case, then I believe it is required to create a file under:
/etc/sudoers.d/
Using:
sudo visudo -f <filename>
Where within this file you assign the appropriate rights and paths to the user. Most of my searches have shown this as the proper format:
<user or group> <host or IP>=(<user or group to run as>)NOPASSWD:<list of comma separated applications>
root ALL=(user)NOPASSWD:/usr/bin/start_ioc.sh
Finally, for additional information: the ultimate reason for this approach, which may also be flawed logic, is that the IOC process needs access to network-attached storage (NAS). Allowing root access to the NAS is, I believe, a no-no, whereas the user can have the appropriate permissions assigned.
This may not be the best answer, but it is how I decided to complete this task.
I did exactly as described in this post:
how to run script as another user without password
I used rc.local to initiate the process at startup. It seems to be working quite well.
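For reference, the relevant part of /etc/rc.local would look roughly like this (the account name iocuser and the script path are placeholders):
#!/bin/sh -e
# /etc/rc.local runs as root late in the boot sequence on Ubuntu 12.04,
# so it drops to the unprivileged account before starting the IOC.
sudo -u iocuser /usr/bin/start_ioc.sh &
exit 0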
