I'm improving the continuous integration of a project, and we decided to take an extra step and start using CocoaPods. The whole RVM installation is legacy, and I indeed had a lot of trouble installing Ruby 2.2.0. The thing is, when I test my build script from a terminal it works fine, but when I try to run it without opening a terminal window (called from AppleScript, Jenkins, or another Ruby script), the command is not found.
I've already tried adding the path to .rvm/scripts to the PATH variable in both .bashrc and .bash_profile.
Have you tried reconnecting the slave after you installed CocoaPods? Sometimes it doesn't see the new variables until it is disconnected and reconnected.
Also make sure that the variables you see in the terminal are available to the Jenkins user. You can check that through the slave's "Script Console".
If it still doesn't work, try setting the path in the "Execute shell" step, just before you run pod install.
This is how it works for me:
echo "Running pod install"
cd ${WORKSPACE}
export LANG=en_US.UTF-8
pod install
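If the PATH itself is what's missing, a variant of the same step that prepends the CocoaPods location explicitly could look like this (a minimal sketch; it assumes pod lives in /usr/local/bin, which you can confirm with which pod):
echo "Running pod install"
cd ${WORKSPACE}
# prepend the directory that contains pod (assumed here to be /usr/local/bin)
export PATH=/usr/local/bin:$PATH
export LANG=en_US.UTF-8
pod install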
I wanted to reinstall Flutter, but I am getting this error. How can I resolve it?
Run the following command to see if there are any dependencies you need to install to complete the setup (for verbose output, add the -v flag):
flutter doctor
You can update your PATH variable for the current session at the command line, as shown in Get the Flutter SDK. You’ll probably want to update this variable permanently, so you can run flutter commands in any terminal session.
The steps for modifying this variable permanently for all terminal sessions are machine-specific. Typically you add a line to a file that is executed whenever you open a new window. For example:
Determine the directory where you placed the Flutter SDK; you will need it in a moment.
Open (or create) the rc file for your shell. Typing echo $SHELL in your Terminal tells you which shell you're using. If you're using Bash, edit $HOME/.bash_profile or $HOME/.bashrc. If you're using Z shell, edit $HOME/.zshrc. If you're using a different shell, the file path and filename will be different on your machine.
Add the following line, replacing [PATH_TO_FLUTTER_GIT_DIRECTORY] with the directory from the first step:
export PATH="$PATH:[PATH_TO_FLUTTER_GIT_DIRECTORY]/flutter/bin"
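For example, assuming the SDK was cloned to ~/development/flutter (a hypothetical location) and the shell is Bash, the line would look like this:
# in $HOME/.bash_profile or $HOME/.bashrc (assumes the SDK is in ~/development/flutter)
export PATH="$PATH:$HOME/development/flutter/bin"
Reload the file with source ~/.bash_profile (or open a new terminal window) and run flutter doctor again to confirm the command is now found.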
I'm currently working on a build pipeline that uses Jenkins and GitLab to trigger builds for the project. Basically, the build is triggered when someone pushes to the repository. Some Ruby scripts are also executed as part of the build process. These scripts run some checks on the project and perform some fixes, like synchronizing an Xcode project with files that were added to or deleted from the source directory when the two are out of sync.
I'm using several tools to configure the pipeline. The builds run on a physical machine that acts as the build slave, while Jenkins is deployed to an AWS machine. For this reason, I used pritunl to connect the two on a virtual network; I can use local IPs to communicate between the machines, and SSH works fine both ways.
When I push to the remote, the build starts correctly on the slave but fails to complete. However, if I access the machine manually via SSH from a terminal, the build runs fine. This is the output I get from Jenkins:
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- xcodeproj (LoadError)
from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /Users/jenkins/workspace/Core/platform/ios/scripts/pbxsync.rb:58:in `<main>'
As you can see, it fails to require Xcodeproj, causing the build to fail. Still, this only happens if the build is triggered by Jenkins, not manually.
This makes me think that Jenkins is using some different installation of Ruby, or at least a different environment. Basically what I need is to install gems for the same Ruby environment that Jenkins is using, but I don't know which one that is. Any ideas?
Jenkins has a console that runs Groovy scripts on the remote slave. I've been playing with it a bit, but not many conclusions so far. Maybe that helps.
This may be important; this is the shebang I'm using for the Ruby scripts: #!/usr/bin/env ruby
On the terminal, I'm using the same user as Jenkins is to access the slave machine. It's called "jenkins".
One thing I forgot to mention is that the output is telling me the right version: /Users/jenkins/.rvm/rubies/ruby-2.4.0. At least that's the path it's indicating it's trying to load the gem from. So I tried the following:
/Users/jenkins/.rvm/rubies/ruby-2.4.0/bin/ruby
require 'xcodeproj'
Then I press ctrl+D and get no output - that installation of ruby is finding the gem properly.
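One way to see which Ruby environment Jenkins actually uses is to add a throwaway "Execute shell" step to the job and compare its output with the same commands run over an interactive SSH session (a diagnostic sketch):
# print the Ruby and gem environment as seen by the Jenkins build step
which ruby
ruby -v
gem env home
gem list xcodeproj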
If you are using the Jenkins slave plugin to communicate between the Jenkins master and the Jenkins slave, every command that you specify will be run in a non-interactive shell. That means that, in your case, Jenkins will only have access to the system Ruby.
So if you need a gem to be available to the build, you have to install it into the system Ruby. You are using RVM, so run rvm use system and then you can install the gem into the system Ruby.
If you want to use different Ruby version than system ruby you need to add RVM to $PATH for non-interactive shell. Here is basic setup that should help: https://rvm.io/rvm/basics
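In concrete terms, that would look something like this on the slave (a sketch; it assumes the missing gem is xcodeproj and that installing into the system Ruby may require sudo):
# switch RVM to the system Ruby and install the missing gem there
rvm use system
sudo gem install xcodeproj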
I finally managed it. As @Cosaquee indicated in another response, it's important to distinguish between interactive and non-interactive shells. The main reason for this is because, depending on how you call SSH, it makes a difference. As the man page indicates:
If command is specified, it is executed on the remote host instead of
a login shell.
This is meaningful, because the Launch Command for the node I have set for Jenkins is this one:
ssh jenkins@x.x.x.x java -jar ~/bin/slave.jar
In the meanwhile, I was logging in with a standard ssh jenkins@x.x.x.x from the terminal, which starts a login shell. It makes sense that I was getting different results, because the two shells load different initialization scripts. Basically, if you use ssh jenkins@x.x.x.x to log into the machine, ~/.bash_profile is loaded, while if you specify a command, such as ssh jenkins@x.x.x.x whatever, then ~/.bashrc is loaded instead. As such, I added this line to ~/.bashrc:
[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"
Without it I got:
RVM is not a function, selecting rubies with 'rvm use ...' will not
work.
The advantage was that I could now use RVM from the same environment Jenkins was using. The rest is easy:
ssh jenkins@x.x.x.x rvm --default use 2.3
And:
ssh jenkins@x.x.x.x
rvm --default use 2.3
And both are now using the same version of ruby.
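A quick way to confirm that both paths now resolve to the same interpreter (a verification sketch; the IP is the same placeholder used above) is to compare the Ruby reported by a non-login command shell with the one reported inside a login session:
# non-login shell, the same way Jenkins launches the slave
ssh jenkins@x.x.x.x 'which ruby && ruby -v'
# login shell: log in first, then check
ssh jenkins@x.x.x.x
which ruby && ruby -v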
I'm pretty sure I've tracked the issue down to Node.js not seeing Sass, but I have no clue why...
If I push from my laptop using git push lamp somebranch:master, the server checks it out fine remotely, runs npm install without error, and starts processing the Gruntfile, but then aborts with "remote: Warning: spawn ENOENT Use --force to continue."
However (after I push from my laptop as above), I can SSH in, cd into my hooks directory, and run ./post-receive, and it finishes with "Done, without errors." I also tried running grunt in the website's root, and it completed without error too.
Any ideas as to what might be going on? I'm completely stumped. Should I set paths to the Sass gem in the hook? I stripped my Gruntfile down to use the same target locally as well as on the server to rule out the Gruntfile. It compiles fine locally, compiles fine on the server, but fails only when using git push lamp somebranch:master.
Some may wonder why I didn't just compile locally and dump the CSS into the web root from the devel box; perhaps I should have. This time, though, I really wanted push-to-deploy all the way through, compiles and all. For anyone attempting the same thing and running into the same problem, this should help.
First off, it probably wouldn't hurt to scrub the system of any versions of Ruby and Sass that were installed via the distro's package manager. Then I scrubbed any remnants of previous tinkering with rvm implode and removed traces from .bashrc, etc. Next I ran \curl -sSL https://get.rvm.io | bash -s stable --ruby --auto-dotfiles and pressed Ctrl+C to fix any errors first. Once the install script was happy, I let it download and install as normal. I did not have to use rvm install n.n.n, rvm use n.n.n, or rvm use n.n.n --default, as 2.2.1 was pulled in like I wanted anyway and seemed fine. After RVM had set up Ruby, I ran gem install sass.
Now, the end-all-be-all: using PermitUserEnvironment, as mentioned here: How to use sshd-config permituserenvironment option, was the way to go. I saw that there were security concerns with that method, but it was the only thing that worked, and I won't be trying to run limited shells. It is normal behavior for SSH not to pass the user's env vars along when not using a login shell. I assumed, however, that the git hooks had full access to the user's normal vars (with the Ruby paths, etc.), and that assumption was incorrect. Add PermitUserEnvironment yes to the server's /etc/ssh/sshd_config or the like and restart the SSH daemon. As the user on the server, I ran env, copied the output into .ssh/environment, and cleaned up what wasn't needed. After that, I did my git push from the devel box and it found and ran the Sass compiler just fine.
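Put together, the server-side changes amount to something like this (a sketch based on the description above; the user name and RVM paths are hypothetical, so use whatever env prints for your own deploy user):
# in /etc/ssh/sshd_config (restart the SSH daemon afterwards)
PermitUserEnvironment yes
# ~/.ssh/environment for the git user -- a trimmed copy of `env` output;
# hypothetical RVM paths shown, keep only what the hook actually needs
PATH=/home/deploy/.rvm/gems/ruby-2.2.1/bin:/home/deploy/.rvm/bin:/usr/local/bin:/usr/bin:/bin
GEM_HOME=/home/deploy/.rvm/gems/ruby-2.2.1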
My Jenkins slave machine is a Mac running OS X 10.8.
Jenkins runs a job on my slave machine that executes the CocoaPods shell command below:
pod install
and I get this error in the console output of the Jenkins job:
pod: command not found
I tried running "pod install" in a local terminal on this slave machine and it succeeds.
Could you kindly help me figure out how to fix this problem?
Thanks.
In my case, my Ruby is installed by RVM, and I need to load RVM in order to find the pod command.
What I did is add the line #!/bin/bash -l at the beginning of the Jenkins job's shell step.
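In other words, the "Execute shell" step starts with a login-shell shebang so that ~/.bash_profile (and therefore RVM) is loaded before pod runs. A sketch of what that step might contain:
#!/bin/bash -l
# -l makes this a login shell, so the profile scripts that set up RVM are sourced
cd "${WORKSPACE}"
pod install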
Ran into the same problem today, but neither of the solutions worked. What did work was changing the install command.
/usr/local/bin/pod install
It seems the user with which Jenkins is running is not getting the path to the pod command that you are able to execute successfully from the node's command shell. All you need to do is explicitly add the path to the PATH variable on your node's configuration page. To do this, go to Jenkins > Manage Jenkins > Manage Nodes > select the node where your job is running > click Configure > enable Node Properties and add an environment variable.
Just add the path to your pod command to the PATH variable. For example, if the pod command is present in /usr/bin, then in the Name field enter PATH and in the Value field enter /usr/bin/:$PATH.
I have not worked on a Mac, but hopefully the above works there too. If it doesn't, you can put the following line in the $HOME/.bash_profile file of the user with which Jenkins is running: PATH=path_to_pod:$PATH
You can find the path to the pod command by typing which pod on the command line.
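For example, if which pod reports /usr/local/bin/pod (the location used in the answer above), the .bash_profile line would be something like this (a sketch; substitute whatever directory which pod prints on your machine):
# in $HOME/.bash_profile of the user running Jenkins
export PATH=/usr/local/bin:$PATH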
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
/usr/local/bin/pod install
This worked for me
I'm trying to set up an automated "build" server for my Rails projects using Hudson CI. So far it's able to run specs and do metrics on the code, but I have two different projects dependent on two different versions of Ruby, so I'm trying to use RVM to run multiple copies of Ruby and then switch back and forth in a pre-build step.
I found a couple of posts like this one that try to explain how to make this work, but I'm not running a startup script for Hudson; it starts on boot, which is how it worked out of the box when I installed it via the Debian instructions.
The problem seems to be that even though Hudson runs under the "hudson" account, and that account has RVM installed (and working), when it tries to run a shell-based pre-build step to call rvm switch 1.8.7, it fails with the error "rvm: command not found".
Not sure what I'm doing wrong. Hudson is using sh as its shell, but I also tried using Bash. No luck.
Has anyone gotten this working before in this setup?
Edit /etc/init.d/hudson (!) and change the line:
SU=/bin/su
... change to:
SU="/bin/su -"
... and add the RVM setup to /home/hudson/.profile.
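The RVM setup in /home/hudson/.profile is typically the standard loader line (a sketch; it assumes a per-user RVM install for the hudson account):
# load RVM into the login shell that `su -` starts for the hudson user
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"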
I had the same symptoms as you.
After a couple of hours of headbanging, the fix turned out to be this: check your $HOME environment variable for Hudson (viewable at http://yourserver/hudson/systemInfo).
Under Ubuntu, the Tomcat 6 start script doesn't set $HOME. Somehow, Hudson inherited my $HOME instead!
I added HOME=$CATALINA_HOME to the /etc/init.d/tomcat6 script just under the rest of the ENV declarations, and now it all works. Very annoying issue, to be sure.
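Concretely, the added line sits alongside the other environment declarations in the init script (a sketch of the relevant fragment of /etc/init.d/tomcat6; the exact surrounding variables may differ between Ubuntu releases):
# /etc/init.d/tomcat6 -- existing declaration, shown for context
CATALINA_HOME=/usr/share/tomcat6
# added so Hudson gets a sane home directory instead of inheriting one
HOME=$CATALINA_HOME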