How to source environment variables from a script on a remote host when using remote mode in CLion?

At my company we use a number of Linux servers dedicated to compiling our codebase. I would like to use CLion's remote development capabilities, but so far I have been unable to find a way for CLion to source my ~/.bashrc file, which in turn sources other files that set up the environment and toolchain.
Is there a way to make CLion source a file, .bashrc to be specific, after making the SSH connection to the remote server?

I did find a workaround:
I created a bash script that looks like this:
#!/bin/bash
# Put the desired Qt toolchain on PATH, then forward all arguments to the real cmake.
export PATH=/home/user/Qt/5.15.1/gcc_64/bin/:$PATH
exec /home/user/Documents/cmake-3.19.0-rc3-Linux-x86_64/bin/cmake "$@"
I then set my toolchain's CMake path to that file. Now whenever there is a configure or build operation, the script picks up the arguments, sets up the environment, and calls the real cmake with the passed parameters.

Unfortunately, as of 13.11.2020 there is no way to source a script on the remote machine when working remotely in CLion via SSH.
There is a ticket for this functionality to be added, so if you are reading this in the future, check the web; the situation may have changed.

Related

ROS environment in root

I have a ROS (Kinetic) environment set up on a Raspberry Pi 3 and am trying to get ROS to launch on startup via a simple bash script which calls roslaunch. ROS works in the user domain but fails when called as root.
Here is my ros_launch.sh script:
#!/bin/bash
source /home/pi/ros_catkin_ws/devel/setup.bash
export PYTHONPATH=/opt/ros/kinetic/lib/python2.7/dist-packages
roslaunch my_pkg pkg_launch.launch
When I run sudo /home/pi/Desktop/ros_launch.sh, roscore crashes with:
ERROR: cannot launch node of type [rosout/rosout]: can't locate node [rosout] in package [rosout]
failed to start core service [/rosout]
The traceback for the exception was written to the log file
But, if I comment out
source /home/pi/ros_catkin_ws/devel/setup.bash
and execute /home/pi/Desktop/ros_launch.sh, ROS works fine.
Also worth noting: if I leave the above source line uncommented and run in the user domain, I get the same error as I do as root. I think this might be pointing me to the solution, but I am still very new to ROS.
Has anyone come across this issue and found a solution?
To run a node as root after switching to a root shell with a command like sudo -i, you can source the same setup files that your normal user's .bashrc sources and use them inside the root shell.
Try the following code:
#!/bin/bash
# Source the base ROS install first, then the catkin workspace overlay on top of it.
source /opt/ros/kinetic/setup.bash
source /home/pi/ros_catkin_ws/devel/setup.bash
export PYTHONPATH=/opt/ros/kinetic/lib/python2.7/dist-packages
roslaunch my_pkg pkg_launch.launch
You need to source your workspace's devel/setup.bash to be able to find your own package, but you also need to source the base ROS setup.bash to be able to use roscd, roslaunch, and so on.
In the code above I added:
source /opt/ros/kinetic/setup.bash
to source ROS itself and make those tools available.
PS: If it's still not working, you could try a short delay before running roslaunch.
I was accidentally in a conda environment (base only) and it was messing up big-time. Try disabling any Python virtual environments.
This worked, and it prompted me to report that rosnode was not available as a program when I wanted to call ROS scripts from my own applications. Sourcing .bashrc directly in a terminal on the Raspberry Pi would refresh that terminal, but there was no way for my program to pick up the environment. The solution was to place the required ROS environment setup in a separate script, named e.g. init_env.sh, and call source init_env.sh before any other ROS scripts were executed.
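A minimal sketch of what such an init_env.sh could contain, assuming the same Kinetic install and workspace paths as in the answer above:
#!/bin/bash
# init_env.sh -- collect the ROS environment setup in one place so that
# other programs can source this file instead of relying on ~/.bashrc.
source /opt/ros/kinetic/setup.bash
source /home/pi/ros_catkin_ws/devel/setup.bash
export PYTHONPATH=/opt/ros/kinetic/lib/python2.7/dist-packages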

Run Jenkins' Cygwin script as user

I have Jenkins running on Windows, and I have a build that works fine under Cygwin bash from the Cygwin terminal, so I now want to automate it. However, using this script:
#!C:\cygwin\bin\bash.exe
whoami
make
the system reports me as nt authority\system, not ken, which is what I get when using an interactive shell. Is there an easy way to persuade Jenkins or Cygwin to run the build as me?
Most likely you are running Jenkins with the default installation. You have two options. The first is mentioned in the comment: change the service account to be the same as yours.
The second option follows best practices: run the Jenkins master on a system with backups etc., configure a slave node with your account credentials, and change the project configuration to build on that specific node.
(It is possible to run slave and master on the same machine with different credentials, in case you want to try things out.)
The real problem I was having was not that the shell script was running as the wrong user, but that the shell script was not executing the default /etc/profile. So, the solution was simply:
#!C:\cygwin\bin\bash.exe -l
# The -l flag makes bash act as a login shell, so /etc/profile is executed.
whoami
make
I was still nt authority\system, but now I had the correct environment set up and could run make successfully.
Note also that if I create a /home/system directory, I can add .bash_profile etc. to that directory to further customise the build environment.
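For example, a minimal /home/system/.bash_profile might look like this (both entries are placeholders, not from the original answer):
# /home/system/.bash_profile -- picked up by the login shell that the
# nt authority\system account gets via bash.exe -l.
export PATH="$HOME/bin:$PATH"   # placeholder: extra build tools
export MAKEFLAGS="-j4"          # placeholder: any per-build settings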

Jekyll private deployment?

I have created a Jekyll site. Regarding deployment, I don't want to host it on GitHub Pages. To host it on a private domain, I learned from the documentation to copy all the files from the _site folder. That all works.
Question:
Each time I add a new blog post, I run jekyll build and then copy the newly created HTML to the hosted domain. Is there an easy way to update the site without compiling each time?
The reason I am asking is that the site will be updated by a non-technical person.
Thanks for the help!!
If you don't want to use GitHub Pages, AFAIK there's no other way than to compile your site each time you make a change.
But of course you can script/automate as much as possible.
That's what I do with my own blog as well. I'm hosting it on my own webspace instead of GitHub Pages, so I need to do these steps for each update:
Compile on local machine
Upload via FTP
I can do this with a single click (okay, a single double-click).
Note: I'm on Windows, so the following solution is for Windows.
But if you're using Linux/MacOS/whatever, of course you can use the tools given there to build something similar.
I'm using a batch file (the Windows equivalent to a shell script) to compile my site and then call WinSCP, a free command-line FTP client.
WinSCP allows me to store session configurations, so I saved the connection to my server there once.
Because of this, I didn't want to commit WinSCP to my (public) repository, so my script expects WinSCP in the parent folder.
The batch file looks like this:
call jekyll build
echo If the build succeeded, press RETURN to upload!
pause
set uploadpath=%~dp0\_site
%~dp0\..\winscp.com /script=build-upload.txt /xmllog=build-upload.log
pause
The first parameter in the WinSCP call (/script=build-upload.txt) specifies the script file which contains the actual WinSCP commands.
This is in the script file:
option batch abort
option confirm off
open blog
synchronize remote -delete "%uploadpath%"
close
exit
Some explanations:
%~dp0 (in the batch file) is the folder where the current batch file is
The set uploadpath=... line (in the batch file) saves the complete path to the generated site into an environment variable
The open blog line (in the script file) opens a connection to the pre-saved session configuration (which I named blog)
The synchronize remote ... line (in the script file) uses the synchronize command to sync from the local folder (%uploadpath%, the environment variable set in the batch file) to the server.
IMO this solution is suitable for non-technical persons as well.
If the technical person in your case doesn't know how to use source control, you could even script committing & pushing, too.
There are a number of options available which are mentioned in the documentation: http://jekyllrb.com/docs/deployment-methods/
If you are using Git, I would recommend the Git Post-Receive Hook approach. It simply builds the site after the new code is received:
#!/bin/sh
GIT_REPO=$HOME/myrepo.git
TMP_GIT_CLONE=$HOME/tmp/myrepo
PUBLIC_WWW=/var/www/myrepo
# Clone the bare repo into a temporary directory, build the site straight
# into the web root, then remove the temporary clone.
git clone $GIT_REPO $TMP_GIT_CLONE
jekyll build -s $TMP_GIT_CLONE -d $PUBLIC_WWW
rm -Rf $TMP_GIT_CLONE
exit
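To use it, save the script as hooks/post-receive inside the bare repository on the server and make it executable, e.g.:
# Run on the server; $HOME/myrepo.git is the bare repo from the snippet above.
cp post-receive $HOME/myrepo.git/hooks/post-receive
chmod +x $HOME/myrepo.git/hooks/post-receive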
Since you mentioned that it will be updated by a non-technical person, you might try something like rack-jekyll to automatically rebuild when new files are FTP'd.

Set global environment variables inside Xcode build phase run script

I'm using Jenkins to do continuous integration builds. I have quite a few jobs that have much of the same configuration code. I'm in the midst of pulling this all out into a common script file that I'd like to run pre and post build.
I've been unable to figure out how to set some environment variables within that script, so that both the Xcode build command, and the Jenkins build can see them.
Does anyone know if this is possible?
It is not possible to do exactly what you ask. A process cannot change the environment variables of another process. The pre, post, and actual build steps run in different processes.
But you can create a script that sets the common environment variables and share that script between all your builds.
The build step would first have your shell execute the commands in the script and then call xcodebuild:
# Note the dot in the beginning of the next line. It is not a typo.
. set_environment.sh
xcodebuild -project myawesomeapp.xcodeproj
The script could look like this:
export VARIABLE1=value1
export VARIABLE2=value2
How exactly your jobs will share the script depends on your environment and use case. You can
place the script in some well-known location on the Jenkins host or
place the script in the version controlled source tree if all your jobs share the same repository or
place the script in a repository of its own and make a Jenkins build which archives the script as a build artifact. All the other jobs would then use the Copy Artifact plugin to get a copy of the script from the artifacts of the script job. A sketch of the resulting build step follows this list.
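Put together, a Jenkins "Execute shell" build step might look like this (the scripts/ location inside the workspace is an assumption):
# Jenkins "Execute shell" build step (sketch). $WORKSPACE is Jenkins'
# checkout directory; the scripts/ subfolder is an assumed location.
. "$WORKSPACE/scripts/set_environment.sh"
xcodebuild -project myawesomeapp.xcodeproj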
From Apple's Technical Q&A QA1067 it appears that if you create the file /Users/YOU/.MacOSX/environment.plist and populate it with your desired environment variables, then all processes launched by that user will pick up these variables. You may need to restart your computer (or just log out and back in) before newly launched processes pick them up.
The article also claims that Xcode will pass these variables to a build phase script. I have not tested it yet, but next time I restart my MacBook I will let you know if it worked.
From http://developer.apple.com/library/mac/#/legacy/mac/library/qa/qa1067/_index.html
Q: How do I set environment for all processes launched by a specific user?
A: It is actually a fairly simple process to set environment variables for processes launched by a specific user.
There is a special environment file which loginwindow searches for each time a user logs in. The environment file is ~/.MacOSX/environment.plist (be careful, it's case sensitive), where '~' is the home directory of the user we are interested in. You will have to create the .MacOSX directory yourself using terminal (by typing mkdir .MacOSX). You will also have to create the environment file yourself. The environment file is actually in XML/plist format (make sure to add the .plist extension to the end of the filename or this won't work).
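If you'd rather not write the plist XML by hand, one way to create the file from the terminal is Apple's defaults tool; a minimal sketch, using the placeholder variables from the earlier script example:
# Create ~/.MacOSX/environment.plist from the command line.
# VARIABLE1/value1 are placeholders, as in the set_environment.sh example.
mkdir -p ~/.MacOSX
defaults write ~/.MacOSX/environment VARIABLE1 -string "value1"
defaults write ~/.MacOSX/environment VARIABLE2 -string "value2"
# Log out and back in (or restart) for newly launched processes to see them.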

Alternative uses for makefiles

A makefile is typically used for source compilation; however, as a dependency mechanism, make can have many more uses.
For a minor example, I have a script that runs daily, and it might update or create some '*.csv.gz' files in a directory based on some web-scraping; all the gzipped files need to be consolidated into one file, and if there are new files, obviously the consolidation process needs to be run.
In my case, the following makefile does the job:
consolidation: datasummary.pcl

datasummary.pcl: *.csv.gz
	consolidate.py
The cron job runs the update process, and then make consolidation; if the datasummary.pcl file is older than any *.csv.gz file, consolidate.py runs.
I'm very interested in ideas about unusual (i.e. not about source compiling) uses of a makefile. What other interesting examples of makefile usage can you give?
Let's assume we're talking about GNU make; if otherwise, please specify the version.
I remember seeing something several years ago about booting Linux systems using Makefiles. Individual system components were set as targets and make would load up the dependencies first, like make does. I believe they got impressive boot speeds out of it. That's what led to the dependency-based boot in Debian/Ubuntu.
On a system I administer at work, we use a makefile and some scripts to generate config files for named, dhcpd, and PXE booting. The input file is along the lines of:
ipaddr name alias1 alias2 # model os printer
for example:
192.168.0.1 battledown nfs dns ldap # x3550 RHEL5u4 brother-color
We then have a makefile which runs that input file through various scripts to generate the appropriate configurations. It will then restart any daemons whose configs have changed.
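A rough sketch of what such a makefile might look like (hosts.txt and the generate-*.sh script names are hypothetical; the answer doesn't show the real ones):
# Regenerate daemon configs whenever the input file changes, then restart
# the affected daemon. make only reruns a rule when hosts.txt is newer.
all: named.conf dhcpd.conf

named.conf: hosts.txt
	./generate-named.sh < hosts.txt > named.conf
	service named restart

dhcpd.conf: hosts.txt
	./generate-dhcpd.sh < hosts.txt > dhcpd.conf
	service dhcpd restart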