Running a BASH command on a file located on the Puppet server and not the client - bash

I'm trying to find a way to run a command on an SELinux .te file that is located on the Puppet server but not the client (I use the puppet-selinux module from Puppet Forge to compile the .te file into the .pp module file, so I don't need it on the client server). My basic idea is something like:
class security::selinux_module {
  exec { 'selinux_module_check':
    command => "grep module selinux_module_source.te | awk '{print $3}' | sed 's/;//' > /tmp/selinux_module_check.txt",
    source  => 'puppet:///modules/security/selinux_module_source.te',
  }
}
However, when trying to run it on the client server I get:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter source on Exec[selinux_module_check] at /etc/puppet/environments/master/security/manifests/selinux_module.pp:3 on node client.domain.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Any assistance on this would be greatly appreciated.

You can use Puppet's generate() function to run commands on the master during catalog compilation and capture their output, but this is rarely a good thing to do, especially if the commands in question are expensive. If you intend to transfer the resulting output to the client for some kind of use there, then you also need to pay careful attention to ensuring that it is appropriate for the client, which might not be the case if the client differs too much from the server.
I'm trying to find a way to run a command on an SELinux .te file that is located on the Puppet server but not the client (I use the puppet-selinux module from Puppet Forge to compile the .te file into the .pp module file, so I don't need it on the client server).
The simplest approach would be to run the needed command directly, once and for all, from an interactive shell, and to put the result in a file from which the agent can obtain it, via Puppet or otherwise. Only if the type enforcement file were dynamically generated would it make any sense to compile it every time you build a catalog.
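A minimal sketch of that one-off step, run on the master (the files/ directory under the module root is an assumption based on the conventional layout implied by the error message):
cd /etc/puppet/environments/master/security/files
# Extract the module name once and keep the result next to the .te source,
# so the agent can fetch it with an ordinary file resource (or any other means).
grep module selinux_module_source.te | awk '{print $3}' | sed 's/;//' > selinux_module_check.txt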
I suggest, however, that you build a package (RPM, DEB, whatever) containing the selinux policy file and any needed installation scripts. Put that package in your local repository, and manage it via a Package resource.

Related

How to source environment variables from a script on a remote host when using remote mode in CLion?

At my company we use a number of Linux servers dedicated to compiling our codebase. I would like to use CLion's remote working capabilities, but so far I am unable to find a way for CLion to source my ~/.bashrc file, which sources other files that set up the environment and toolchain.
Is there a way to make CLion source a file, .bashrc to be specific, after making the SSH connection to the remote server?
I did find a workaround:
I created a bash script that looks like this:
#!/bin/bash
# Put the Qt toolchain on PATH, then forward all arguments to the real cmake.
export PATH=/home/user/Qt/5.15.1/gcc_64/bin/:$PATH
/home/user/Documents/cmake-3.19.0-rc3-Linux-x86_64/bin/cmake "$@"
And I set my toolchain's CMake to that file. Now, whenever there is a configure or build operation, the script picks up the arguments, sets up the environment and then calls cmake with the passed parameters.
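A quick way to sanity-check such a wrapper from a shell before pointing CLion at it (the wrapper's file name here is just an example):
chmod +x /home/user/cmake-wrapper.sh
/home/user/cmake-wrapper.sh --version   # should print the version of the wrapped CMake, with the Qt tools on PATH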
Unfortunately, as of 13.11.2020 there is no way to source a script from the remote machine while working in CLion remotely via ssh.
There is a ticket for this functionality to be added so if you are reading this in the future, check the web, maybe the situation has changed.

RPM build: how to get the path of the RPM package that was installed with "rpm -ivh", from a shell script run by the '%post' macro

I'm a newbie at RPM building, and I did my best to describe this somewhat complicated question with my amateur English...
I have a script (.sh) with some code; what the script does is set up the code, and it needs some user input.
Sadly, I found out that scripts run by RPM cannot get user input.
I know that's not the right usage, so I'm not trying to get user input anymore.
My question is:
I'm now trying to get that input from a config file shipped along with the RPM package, but I don't know how to get the RPM package's path in the SPEC file macros, or in the script files run by the SPEC file macros.
RPM packages are not supposed to "adapt" themselves to user input. I would recommend making sure the installation of the package is always the same. Once the package is installed, you can tell users how to configure the program.
Take git for example: it provides /etc/gitconfig which contains the default packaged configuration. Users can then make their changes to the configuration and save those in ~/.gitconfig. Thus the user configuration is separated from the packaged configuration, so you can keep updating git without losing your configuration.
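To make the git example concrete (the user name below is only a placeholder):
# Packaged default, owned and updated by the RPM:
cat /etc/gitconfig
# Per-user overrides live outside the package and survive upgrades:
git config --global user.name "Jane Doe"   # writes to ~/.gitconfig
git config --get user.name                 # the global value takes precedence over /etc/gitconfig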

Google Cloud Functions and shared libraries

I'm trying to use wkhtmltopdf on GCF for PDF generation.
When my function tries to spawn the child process I get the following error:
Error: ./services/wkhtmltopdf: error while loading shared libraries: libXrender.so.1: cannot open shared object file: No such file or directory
The problem is clearly due to the fact that wkhtmltopdf binary depends on external shared libraries which are not installed in GCF environment.
Is there a way to solve this issue, or should I give up and use other solutions (AWS Lambda or GAE)?
Thank you in advance
Indeed, I've found a way to solve this issue by copying all the required libraries into the same folder (/bin for me) that contains the wkhtmltopdf binary. In order to let the binary use the uploaded libraries, I added the following lines to wkhtmltopdf.js:
wkhtmltopdf.command = 'LD_LIBRARY_PATH='+path.resolve(__dirname, 'bin')+' ./bin/wkhtmltopdf';
wkhtmltopdf.shell = '/bin/bash';
module.exports = wkhtmltopdf;
Everything worked fine for a while. Then, all of a sudden, I started receiving many connection errors or timeouts from GCF, but I think that is not related to my implementation but rather to Google.
I ended up setting up a dedicated server.
I have managed to get it working. There are two things that need to be done, as wkhtmltopdf won't work if:
libXrender.so.1 can't be loaded, or
you are using stdout to collect the resulting PDF; wkhtmltopdf has to write the result into a file.
First, you need to obtain the correct version of libXrender.
I found out which Docker image Cloud Functions uses as the base for Node.js functions. I ran it locally, installed libxrender and copied the library out into my function's directory.
docker run -it --rm=true -v /tmp/d:/tmp/d gcr.io/google-appengine/nodejs bash
Then, inside the running container:
apt update
apt install libxrender1
cp /usr/lib/x86_64-linux-gnu/libXrender.so.1 /tmp/d
I put this into my function's project directory, under a lib subdirectory. In my function's source file, I then set up LD_LIBRARY_PATH to include the /user_code/lib directory (/user_code is the directory where your function will eventually end up being placed by Google):
process.env['LD_LIBRARY_PATH'] = '/user_code/lib'
This is enough for wkhtmltopdf to be able to execute. It will still fail, though, as it won't be able to write to stdout, and the function will eventually time out and be killed (as Matteo experienced). I think this is because Google runs the containers without a tty (just speculation); I can run my code in their container if I run it with the docker run -it flags. To solve this, I am invoking wkhtmltopdf so that it writes the output into a file under /tmp (this is an in-memory tmpfs). I then read the file back and send it as my response body. Note that the tmpfs might be reused between function calls, so you need to use a unique file every time.
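Expressed as a plain shell command for illustration (the URL, the output file pattern and the binary path are placeholders based on the description above, not something prescribed by Google):
LD_LIBRARY_PATH=/user_code/lib ./bin/wkhtmltopdf "https://example.com" "/tmp/out-$$-$RANDOM.pdf"
# ...then read the freshly written file under /tmp back and send it as the response body.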
This seems to do the trick, and I am able to run wkhtmltopdf as a Google Cloud Function.

How to get stand-alone ohai to recognize custom plugin_path?

I have chef configured to add "/etc/chef/ohai_plugins" to Ohai::Config[:plugin_path]. However, the Chef documentation says:
"The Ohai executable ignores settings in the client.rb file when Ohai is run independently of the chef-client."
So, how can I get a stand-alone run of ohai to load and use the plugins in that custom path?
(Background: I have a custom plugin that reports some information that we keep track of for a fleet of servers, like whether a server has been patched for heartbleed or shellshock. I want to be able to run "ssh somehost ohai", parse the JSON that gets sent back, and extract the information I need.)
Thanks.
Outside of Chef, you can add an additional plugin path using the -d switch, e.g.:
$ ohai -d /etc/chef/ohai_plugins
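Combining that with the workflow from the question might look like this (the patch_status attribute name is made up for illustration, and jq is just one convenient way to pull a value out of the JSON):
ssh somehost "ohai -d /etc/chef/ohai_plugins" | jq '.patch_status'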
The relevant source code is at:
https://github.com/chef/ohai/blob/master/lib/ohai/application.rb#L25-L28
https://github.com/chef/ohai/blob/master/lib/ohai/application.rb#L78-L80
The option to specify a custom config file for Ohai was sadly removed last year with https://github.com/chef/ohai/commit/ebabd088673cf3e36d600bd96aeba004077842f1
Hope this answers your question.
This will be possible soon via the implementation of Chef RFC 53: https://github.com/chef/chef-rfc/blob/master/rfc053-ohai-config.md

How to setup Pydevd remote debugging with Heroku

According to this answer I am required to copy the pycharm-debug.egg file to my server; how do I accomplish this with a Heroku app so that I can remotely debug it using PyCharm?
Heroku doesn't expose to users the file system it uses for running web dynos, which means you can't copy the file to the server via SSH.
So, you can do this in one of the following two ways:
The best way is to add the egg file to your requirements, so that during deployment it gets installed into the environment and hence automatically added to the Python path. But this requires the package to be pip-indexed.
Or, commit the file into your code base, so that when you deploy, the file reaches the server.
Also, in the settings file of your project (if using Django), add the file to the Python path:
import sys
sys.path.append('relative/path/to/file')  # path to pycharm-debug.egg
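A rough sketch of the second option from a shell (the vendor/ directory and the location of the downloaded egg are assumptions):
# Vendor the egg in the repo so it ships with every Heroku deploy.
mkdir -p vendor
cp ~/Downloads/pycharm-debug.egg vendor/
git add vendor/pycharm-debug.egg
git commit -m "Vendor pycharm-debug.egg for remote debugging"
git push heroku master
The sys.path.append call above would then point at 'vendor/pycharm-debug.egg'.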
