How to install ruby locally via rbenv/ruby-build? - ruby

I need to install Ruby without internet access. As the ruby-build docs suggest, I can change the mirror URL by setting the environment variable RUBY_BUILD_MIRROR_URL. I did this, and although it looks at my local repo for Ruby, it still attempts to connect to the online repo to install yaml.
env RUBY_BUILD_MIRROR_URL=http://10.10.161.39/platforms/common/ruby-2.0.0-p247.tar.gz#3e71042872c77726409460e8647a2f304083a15ae0defe90d8000a69917e20d3 /opt/rbenv/bin/rbenv install 2.0.0-p247
Downloading yaml-0.1.6.tar.gz...
-> http://10.152.161.39/platforms/proteus/common/ruby-2.0.0-p247.tar.gz#3e71042872c77726409460e8647a2f304083a15ae0defe90d8000a69917e20d3/7da6971b4bd08a986dd2a61353bc422362bd0edcc67d7ebaac68c95f74182749
-> http://pyyaml.org/download/libyaml/yaml-0.1.6.tar.gz
error: failed to download yaml-0.1.6.tar.gz
BUILD FAILED (RedHatEnterpriseServer 5.10 using ruby-build 20150928)
I tried placing the yaml-0.1.6.tar.gz file in my local repo; however, that makes no difference, and it would fail anyway since the SHA-256 checksum provided in the URL is for the ruby-2.0.0-p247.tar.gz file.
How can I install Ruby offline with rbenv?
Update 1
I discovered that you can modify the build definition file to point to a local mirror instead, i.e. /opt/rbenv/plugins/ruby-build/share/ruby-build/2.0.0-p247:
install_package "yaml-0.1.6" "http://10.10.161.39/platforms/common/yaml-0.1.6.tar.gz#7da6971b4bd08a986dd2a61353bc422362bd0edcc67d7ebaac68c95f74182749" --if needs_yaml
install_package "openssl-1.0.1p" "http://10.10.161.39/platforms/common/openssl-1.0.1p.tar.gz#bd5ee6803165c0fb60bbecbacacf244f1f90d2aa0d71353af610c29121e9b2f1" mac_openssl --if has_broken_mac_openssl
install_package "ruby-2.0.0-p247" "http://10.10.161.39/platforms/common/ruby-2.0.0-p247.tar.gz#3e71042872c77726409460e8647a2f304083a15ae0defe90d8000a69917e20d3"
Is there a better way, or is this the best way forward?

So here's how I got it to work:
Update the contents of the build definition file in /opt/rbenv/plugins/ruby-build/share/ruby-build/<ruby-version> so that the URLs point to your local repo.
You will also notice that each file has a long hash value after the '#' symbol in the URL. For example:
install_package "yaml-0.1.6" "http://10.10.161.39/platforms/common/yaml-0.1.6.tar.gz#7da6971b4bd08a986dd2a61353bc422362bd0edcc67d7ebaac68c95f74182749" --if needs_yaml
This hash value is the SHA-256 checksum of the file, which ruby-build uses to verify that it downloaded the expected file.
So you will need to generate the value for each mirrored file by running sha256sum <filename> and appending the result after the '#' in the corresponding URL.
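If sha256sum isn't handy, the same value can be computed from Ruby's standard library (a minimal sketch, using the yaml tarball from the example above):
require 'digest'
puts Digest::SHA256.file('yaml-0.1.6.tar.gz').hexdigest  # value to append after '#'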
Complete example below:
install_package "yaml-0.1.6" "http://10.10.161.39/platforms/common/yaml-0.1.6.tar.gz#7da6971b4bd08a986dd2a61353bc422362bd0edcc67d7ebaac68c95f74182749" --if needs_yaml
install_package "openssl-1.0.1p" "http://10.10.161.39/platforms/common/openssl-1.0.1p.tar.gz#bd5ee6803165c0fb60bbecbacacf244f1f90d2aa0d71353af610c29121e9b2f1" mac_openssl --if has_broken_mac_openssl
install_package "ruby-2.0.0-p247" "http://10.10.161.39/platforms/common/ruby-2.0.0-p247.tar.gz#3e71042872c77726409460e8647a2f304083a15ae0defe90d8000a69917e20d3"
In the example above we have a dedicated repository server at http://10.10.161.39/platforms/common. If your packages are only available locally, point the URLs at the local path instead and verify that it works.
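If the packages just sit in a local directory and you would rather keep serving them over HTTP, one quick option (my addition, not something ruby-build requires) is Ruby's bundled WEBrick-based one-liner, then point the URLs at that host and port:
ruby -run -e httpd /srv/local-mirror -p 8080   # /srv/local-mirror is only an example path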

Related

Poetry `FileNotFoundError` when using cached venv in CI build

I'm trying to optimize our Poetry build times by storing a tar.gz of the venv at the end of the build in an Azure storage blob (an S3 equivalent), with a key generated from an md5sum of the poetry.lock and the pyproject.toml. (This part works fine.)
So, at the beginning of another build, I hash those files and check whether that blob exists in storage. If it does, I download it and extract its contents into the project's .venv/ dir.
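The keying step is roughly this (a simplified sketch, not my exact script):
cat poetry.lock pyproject.toml | md5sum | cut -d' ' -f1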
This worked fine at first, but after a while builds started throwing this error when running any command through poetry:
+ poetry run reorder-python-imports --diff-only <SOME_FILES>
FileNotFoundError
[Errno 2] No such file or directory
at /usr/local/lib/python3.8/os.py:591 in _execvpe
587│ argrest = (args,)
588│ env = environ
589│
590│ if path.dirname(file):
→ 591│ exec_func(file, *argrest)
592│ return
593│ saved_exc = None
594│ path_list = get_exec_path(env)
595│ if name != 'nt':
I have confirmed that there is no pre-existing .venv folder in the project dir and that the extracted contents are exactly the same as they would be after a fresh install from the network.
If I just run poetry install on top of the cached venv, it says it has no more dependencies to install, but it throws the error above.
If I delete the venv and install everything again, commands work fine.
I have no more ideas on how to debug and solve this issue. Help would be very appreciated! :)
OK, I think I've figured it out: virtualenvs have some hardcoded paths that make them hard to move around.
According to this post, Can I move a virtualenv?, moving venvs is not good practice.
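For example, the entry-point scripts in .venv/bin hard-code the original interpreter path in their shebang line, and .venv/pyvenv.cfg records an absolute home path (the path below is illustrative):
head -1 .venv/bin/reorder-python-imports
#!/path/where/the/venv/was/built/.venv/bin/python
So if the venv is restored under a different absolute path, exec-ing those scripts fails with 'No such file or directory', which would produce exactly the FileNotFoundError in the traceback above.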
And there goes my way of speeding up builds...

private repo - go 1.13 - `go mod ..` failed: ping "sum.golang.org/lookup" .. verifying package .. 410 gone

I am using Go 1.13.
I have a project that depends on a private GitLab project.
I have the SSH keys set up for it.
When I try to retrieve the dependencies for a newly created module, I am getting the following error:
$ go version
go version go1.13 linux/amd64
$ go mod why
go: downloading gitlab.com/mycompany/myproject v0.0.145
verifying gitlab.com/mycompany/myproject@v0.0.145: gitlab.com/mycompany/myproject@v0.0.145: reading https://sum.golang.org/lookup/gitlab.com/mycompany/myproject@v0.0.145: 410 Gone
I have no idea why it is trying to ping sum.golang.org/lookup since it is a private gitlab project.
My ~/.gitconfig contains the following (based on Google searches for similar errors):
# Enforce SSH
[url "ssh://git@github.com/"]
    insteadOf = https://github.com/
[url "ssh://git@gitlab.com/"]
    insteadOf = https://gitlab.com/
[url "ssh://git@bitbucket.org/"]
    insteadOf = https://bitbucket.org/
[url "git@gitlab.com:"]
    insteadOf = https://gitlab.com/
The error still persists.
I would expect the package to be downloaded from my private gitlab project repository to the current project.
Is there anything I need to do in my private GitLab project repository to make it ready for 'go get'?
The private GitLab project repository already contains the go.sum and go.mod for the project as well.
Anything that I am missing?
Edit: 1) The private repo name and the company name contain no asterisks or other special characters, only letters (not even digits).
Answering my own question after some more digging:
Setting the GOPRIVATE variable seems to help.
GOPRIVATE=gitlab.com/mycompany/* go mod why
"The new GOPRIVATE environment variable indicates module paths that are not publicly available. It serves as the default value for the lower-level GONOPROXY and GONOSUMDB variables, which provide finer-grained control over which modules are fetched via proxy and verified using the checksum database." (from https://golang.org/doc/go1.13)
Alternatively:
Setting the env variable GONOSUMDB also seems to work.
Specifically, invoking the following command seems to help.
GONOSUMDB=gitlab.com/mycompany/* go mod why
The above env variable prevents the ping to sum.golang.org/lookup for a checksum match. It also prevents leaking the names of private repos to a public checksum db. [ Source - https://docs.gomods.io/configuration/sumdb/ ]
Also, from the checksum database design doc:
* GONOSUMDB=prefix1,prefix2,prefix3 sets a list of module path prefixes, again possibly containing globs, that should not be looked up using the database.
source: https://go.googlesource.com/proposal/+/master/design/25530-sumdb.md
Related Issues:
https://github.com/golang/go/issues/32291
https://github.com/golang/go/issues/33985
["Go 1.13 has been released, and this issue was filed well after the freeze window. The proposed changes will not happen in 1.13, but don't assume they will necessarily happen in 1.14 either." from issue 33985 above. ]
Basically, it failed to verify the private repository. I don't like turning off checksum verification entirely, but you can easily set GOSUMDB to off before trying to get the module, something like this:
GOSUMDB=off go get github.com/mycompany/myproject
ref: https://github.com/golang/go/issues/35164#issuecomment-546503518
A second and better solution is to set the GOPRIVATE environment variable, which controls which modules the go command considers private (not available publicly) and therefore should NOT be fetched via the proxy or verified against the checksum database. The variable is a comma-separated list of glob patterns (using the syntax of Go's path.Match) of module path prefixes. For example,
export GOPRIVATE=*.corp.example.com,rsc.io/private
Or
go env -w GOPRIVATE=github.com/mycompany/*
The last solution you can try is to turn off such checks for all the private repositories that you don't want made public or verified through sum.golang.org/lookup/github.com/mycompany/...:
GONOSUMDB=gitlab.com/mycompany/* go mod why
Note: if you have issues fetching modules or repos over HTTPS, you may want to add the following to your ~/.gitconfig so that go get fetches repositories over SSH instead of HTTPS:
[url "ssh://git@github.com/"]
    insteadOf = https://github.com/
Change the following Go environment settings and then upgrade your dependency package:
$ export GO111MODULE=on
$ export GOPROXY=direct
$ export GOSUMDB=off
$ go get -u <your dependency package>
I ran into this scenario too, and this works for me.
Edit your .git/config and add two lines to it. (I have this in a global .gitconfig in my home dir.)
[url "ssh://yourprivate.com"]
    insteadOf = https://yourprivate.com
export GOSUMDB=off
Then everything will be OK.

Puppet: generate statement fails when trying to retrieve default path of an executable

I have built a stanza to remove a Ruby gem package from our servers. The problem is that the gem executable is installed at different paths on different servers: on one server it could be in /opt/ruby/bin/gem, while on others it's in /usr/local/rvm/rubies/ruby-2.0.0-p353/bin/gem.
My stanza uses Puppet's generate function to find the default gem installation, as follows:
$ruby_gem_location = generate('which', 'gem')
exec { "remove-remote_syslog":
  command => "gem uninstall remote_syslog",
  path    => "$ruby_gem_location:/opt/ruby/bin:/usr/bin:/usr/sbin",
  onlyif  => "$ruby_gem_location list|grep remote_syslog",
}
When I run puppet agent I get the following error:
Generators must be fully qualified at ****redacted*
I have also tried to provide a default path for the which command as follows:
$ruby_gem_location = generate('/usr/bin/which', 'gem')
and now the error says: Could not evaluate: Could not find command '/usr/bin/gem'
I checked the target server and the gem command is in
/usr/local/rvm/rubies/ruby-2.0.0-p353/bin/gem
What am I doing wrong?
How can I pull out the default ruby gem location on our servers?
Thank you in advance
Your code
$ruby_gem_location = generate('/usr/bin/which', 'gem')
will generate a full path to your gem command (if it succeeds). From the result you describe, I think it is generating '/usr/bin/gem', which is perhaps a symlink to the real gem command. You are putting that into your command path instead of just the directory part, and that will not be helpful. It is not, however, the source of the error message you report.
The real problem here is that generate(), like all DSL functions, runs during catalog building. I infer from your results that you are using a master/agent setup, so generate() is giving you a full path to gem -- evidently /usr/bin/gem -- on the master. Since the whole point is that different servers have gem installed in different places, this is unhelpful. The actual error message arises from an attempt to execute your onlyif command with the wrong path to gem.
Your best way forward is probably to create a custom fact with which each node can report the appropriate location of the gem binary. You can then use that fact's value in your Exec, maybe:
exec { "remove-remote_syslog":
  command => "$::ruby_gem_path uninstall remote_syslog",
  onlyif  => "$::ruby_gem_path list | grep remote_syslog",
}
Note that you don't need a path attribute if you give a complete path to the executable in the first place.
Details on creating the $::ruby_gem_path custom fact depend on a number of factors, and in their full generality they are rather too broad for SO, but Puppet Labs provides good documentation.
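As a starting point only, a minimal sketch (assuming a custom fact shipped from your module's lib/facter directory via pluginsync, and the classic Facter::Util API) could look like this:
# <your_module>/lib/facter/ruby_gem_path.rb -- hypothetical module path
Facter.add(:ruby_gem_path) do
  setcode do
    Facter::Util::Resolution.which('gem')
  end
end
Because facts are evaluated on the agent, $::ruby_gem_path reflects each node's own PATH, which is exactly what generate() cannot give you from the master.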

J language's "load" command

I'm working through the J primer, and getting stuck when it comes to the load command.
In particular, there are times when the next step in a tutorial is load 'foo' and I'll get an error like the following:
load 'plot'
not found: /users/username/j64-801/addons/graphics/plot/plot.ijs
|file name error: script
| 0!:0 y[4!:55<'y'
When I do ls /users/username/j64/addons/ I only have config and ide in there, so it's sensible that graphics is not found.
My question:
if given an example that says load 'foo', how do I go about finding and installing foo?
I'd recommend simply installing all the JAL packages ("Addons"). There aren't too many, so the download won't take long, and you'll have access to everything you need to run the Labs, Wiki examples, and any code posted by the community (e.g. on the J Forums).
To install all available Addons, type the following into Jconsole (you could theoretically type it into JHS or JQT instead, but since those are distributed as Addons, you might not be able to upgrade them while they're running):
load'pacman' NB. J PACkage MANager
install'all'
The package manager will start running, and you'll see output like:
Updating server catalog...
Installing 52 packages
Downloading base library...
Installing base library...
Downloading api/gl3...
Installing api/gl3...
Downloading api/ncurses...
Installing api/ncurses...
Then stop and restart Jconsole, and run:
load 'pacman'
'update' jpkg 'all'
This makes sure all recursive dependencies are satisfied and all packages are up to date (in particular, the base library). Ultimately, you want to see something like:
Updating server catalog...
Local JAL information was last updated: <datetime>
All available packages are installed and up to date.
Then stop & restart J one last time. When that's done, you should have everything you need to run the Labs.
To answer your final question, if you see a line like:
load'foo'
The first thing you should do is run getscripts_j_ 'foo'. In your example:
getscripts_j_ 'plot'
+---------------------------------------------------+
|c:/users/user/j64-801/addons/graphics/plot/plot.ijs|
+---------------------------------------------------+
Here, you can see the fully-qualified path of where J expects the package to live.
In particular, you can see where it lives relative to the addons directory, which will always be of the form addons/category/module/foo.ijs. The category and module names tell you which addon you need to install, so all you have to do is pick the corresponding entry from the catalog visible in the package manager.

Update Local Workspace to SVN Repository 'Head' With Ruby SVN Bindings

I am writing a program in Ruby that needs to download the most current version of my team's software from SVN at startup.
The checkout function (from the Ruby SVN bindings) is what I believe I want to use, because an update would not ADD any files that do not exist in my machine's local "trunk" workspace. A checkout would both update files that do not match HEAD and download ones that don't exist at all. Effectively, after running a fully recursive checkout, I would hope to have an exact copy of the most recent SVN repository.
According to this API, a checkout statement basically takes the following:
an exact SVN URL
a local root project directory
a revision (I would be using the string 'HEAD')
recursive (integer 1 or 0)
a pool object (I cannot determine what this is for exactly, but I don't think it affects me)
Here's what I wrote, inside a block that iterates for each file in the SVN repository:
if status != NORMAL # any file that changed or is 'missing'
  ctx.checkout(status.entry.url, ROOTDIR, 'HEAD', 0, nil) # update abnormal file to HEAD
end
As a test, I erased a directory from my local workspace, and attempted to restore it with this command. It runs through until it reaches one of the missing files, at which point it raises an error:
`svn_client_checkout3': subversion/libsvn_fs_fs/tree.c:663: Svn::Error::FsNotFound: File not found: revision 0, path '/trunk/project-gadfly/SocketServer/DiscoveryServer.cpp' (Svn::Error::FsNotFound)
I do not understand why this error would be raised, because I thought that a checkout statement would see that the directory (i.e. file) does not exist locally and then create it. Perhaps I am doing something wrong?
Looking back on what I've written, I think all of this was a long-winded way of asking the following simple question: How do I get the most current version of SVN repository onto my local hard drive with an SVN Ruby command?
Thanks in advance,
Elwood Hopkins
I don't know about the Ruby-specific part of the question, but it's clear that you asked the SVN API to check out "status.entry.url" at revision 0, which of course doesn't exist here.
It's also strange that you looked at the Perl documentation while writing in Ruby. I would recommend looking at the Subversion sources instead.
Here's Ruby method declaration:
http://svn.apache.org/repos/asf/subversion/branches/1.7.x/subversion/bindings/swig/ruby/svn/client.rb
def checkout(url, path, revision=nil, peg_rev=nil,
             depth=nil, ignore_externals=false,
             allow_unver_obstruction=false)
  revision ||= "HEAD"
  Client.checkout3(url, path, peg_rev, revision, depth,
                   ignore_externals, allow_unver_obstruction,
                   self)
end
So, as you can see, you've specified 0 as the peg revision, but you should specify HEAD (or simply leave it out) instead.
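In terms of your original snippet, the fix would be something along these lines (letting peg_rev and depth take their defaults rather than passing 0 and nil positionally):
if status != NORMAL
  ctx.checkout(status.entry.url, ROOTDIR, 'HEAD') # peg_rev defaults to nil here, not 0
end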
As for pools: they are part of SVN's memory management. Here's the explanation: http://subversion.apache.org/docs/community-guide/conventions.html#apr-pools
