Singularity 3.6.2 Installation - go

I am having problems installing Singularity 3.6.2 on Linux Mint. I followed the instructions at https://sylabs.io/guides/3.0/user-guide/installation.html.
I installed the dependencies and Go.
Then I ran the commands to install the latest version:
export VERSION=3.6.2 && # adjust this as necessary \
mkdir -p $GOPATH/src/github.com/sylabs && \
cd $GOPATH/src/github.com/sylabs && \
wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz && \
cd ./singularity && \
./mconfig
The error is:
Configuring for project `singularity' with languages: C, Golang
=> running pre-basechecks project specific checks ...
=> running base system checks ...
checking: host C compiler... cc
checking: host C++ compiler... c++
checking: host Go compiler (at least version 1.13)... not found!
mconfig: could not complete configuration
I have Go installed (go version):
go version go1.15.2 linux/amd64
I don't know what happened!
Thanks so much!

I was struggling with the same error. All the suggestions say that you probably have an older version of Go and that's why it fails. But it turned out to be even more important to place Go and Singularity in the right locations.
I found these docs https://github.com/hpcng/singularity/blob/release-3.5/INSTALL.md to be the most useful and accurate about where to put what in terms of directories.
The key is to clone Singularity into a directory under your GOPATH.
You won't have this directory by default, so create it first:
$ mkdir -p ${GOPATH}/src/github.com/sylabs && \
cd ${GOPATH}/src/github.com/sylabs && \
git clone https://github.com/sylabs/singularity.git && \
cd singularity
Make sure your Singularity checkout ends up here: ${GOPATH}/src/github.com/sylabs/singularity
To summarize:
Go itself is located at /usr/local/go.
GOPATH would be something like /home/your_username/go, and Singularity will be located inside it, e.g. /home/your_username/go/src/github.com/sylabs/singularity.
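A quick way to double-check that everything is in the expected place before re-running ./mconfig (the paths shown are typical defaults and may differ on your machine):
which go                 # expect /usr/local/go/bin/go
go version               # must report at least 1.13
go env GOPATH            # defaults to ~/go if GOPATH is unset
ls ${GOPATH:-$HOME/go}/src/github.com/sylabs/singularity/mconfig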

The issue was reported in #5099.
#5320 also mentions:
I deleted the PPO python 3.6 and this worked fine!
Make sure nothing is executed as root, since root has a $PATH different from your current user's.
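A quick way to check whether a root/user PATH mismatch is the culprit (sudo typically resets PATH via secure_path, so Go can disappear from it):
echo $PATH && which go
sudo sh -c 'echo $PATH; which go || echo "go not on root PATH"'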

If someone faces this issue, follow this installation guide:
# Install the system dependencies
sudo apt-get update && \
sudo apt-get install -y build-essential \
libseccomp-dev pkg-config squashfs-tools cryptsetup

# Remove any existing Go installation and install the required Go version
sudo rm -r /usr/local/go
export VERSION=1.13.15 OS=linux ARCH=amd64 # change this as you need
wget -O /tmp/go${VERSION}.${OS}-${ARCH}.tar.gz https://dl.google.com/go/go${VERSION}.${OS}-${ARCH}.tar.gz && \
sudo tar -C /usr/local -xzf /tmp/go${VERSION}.${OS}-${ARCH}.tar.gz

# Set up GOPATH and put Go on your PATH
echo 'export GOPATH=${HOME}/go' >> ~/.bashrc && \
echo 'export PATH=/usr/local/go/bin:${PATH}:${GOPATH}/bin' >> ~/.bashrc && \
source ~/.bashrc

# Install golangci-lint into ${GOPATH}/bin
curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | \
sh -s -- -b $(go env GOPATH)/bin v1.21.0

# Clone Singularity under GOPATH and check out a release
mkdir -p ${GOPATH}/src/github.com/sylabs && \
cd ${GOPATH}/src/github.com/sylabs && \
git clone https://github.com/sylabs/singularity.git && \
cd singularity
git checkout v3.6.3

# Configure, build, and install
cd ${GOPATH}/src/github.com/sylabs/singularity && \
./mconfig && \
cd ./builddir && \
make && \
sudo make install

# Verify
singularity version
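As an optional smoke test after singularity version succeeds (this pulls a small image from Docker Hub, so it needs network access):
singularity exec docker://alpine cat /etc/os-release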

Related

Docker Build Image -- can't cd into directory and run commands

Docker Version: 17.09.1-ce
I am a beginner in Docker and I am trying to build a Docker image on CentOS. Below is a snippet of the Dockerfile I have:
FROM centos
RUN yum -y install samba-common && \
yum -y install gcc perl mingw-binutils-generic mingw-filesystem-base mingw32-binutils mingw32-cpp mingw32-crt mingw32-filesystem mingw32-gcc mingw32-headers mingw64-binutils mingw64-cpp mingw64-crt mingw64-filesystem mingw64-gcc mingw64-headers libcom_err-devel popt-devel zlib-devel zlib-static glibc-devel glibc-static python-devel && \
yum -y install git gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel && \
yum -y remove libbsd-devel && \
WORKDIR /usr/src && \
git clone git://xxxxxxxx/p/winexe/winexe-waf winexe-winexe-wafgit && \
WORKDIR /usr/src/samba && \
WORKDIR /usr/src/winexe-winexe-wafgit/source && \
head -n -3 wscript_build > tmp.txt && cp -f tmp.txt wscript_build && \
echo -e '\t'"stlib='smb_static bsd z resolv rt'", >> wscript_build && \
echo -e '\t'"lib='dl gnutls'", >> wscript_build && \
echo -e '\t'")" >> wscript_build && \
rm -rf tmp.txt && \
./waf --samba-dir=../../samba configure build
I tried a normal cd, which did not work. WORKDIR does not work either. How can I set the working directory in a Dockerfile?
I am getting an error like the one below when using the above Dockerfile:
/bin/sh: WORKDIR: command not found
The command '/bin/sh -c yum -y install samba-common && yum -y install gcc perl mingw-binutils-generic mingw-filesystem-base mingw32-binutils mingw32-cpp mingw32-crt mingw32-filesystem mingw32-gcc mingw32-headers mingw64-binutils mingw64-cpp mingw64-crt mingw64-filesystem mingw64-gcc mingw64-headers libcom_err-devel popt-devel zlib-devel zlib-static glibc-devel glibc-static python-devel && yum -y install git gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel && yum -y remove libbsd-devel && WORKDIR /usr/src && git clone git://xxxxxxxx/p/winexe/winexe-waf winexe-winexe-wafgit && WORKDIR /usr/src/samba && git reset --hard a6bda1f2bc85779feb9680bc74821da5ccd401c5 && WORKDIR /usr/src/winexe-winexe-wafgit/source && head -n -3 wscript_build > tmp.txt && cp -f tmp.txt wscript_build && echo -e '\t'"stlib='smb_static bsd z resolv rt'", >> wscript_build && echo -e '\t'"lib='dl gnutls'", >> wscript_build && echo -e '\t'")" >> wscript_build && rm -rf tmp.txt && ./waf --samba-dir=../../samba configure build' returned a non-zero code: 127
When I tried a normal cd instead of WORKDIR, I got the error below:
/bin/sh: line 0: cd: /usr/src/samba: No such file or directory
But with sudo I can go into it. Then I tried to include sudo cd directory in the Dockerfile, and it said no sudo was found.
UPDATE 1:
This is how I started the build:
sudo docker build -t abwinexeimage -f ./abwinexeimage .
The build completed successfully, but unfortunately when I list images I don't see any image with the tag name abwinexeimage.
I don't understand what that first entry with tag name none is. What does it represent? It shows a size of 1.23 GB. Do I really need this image, or can I safely delete it?
When I started the build, the first line showed Sending build context to Docker daemon 303.9MB. Does that mean that, in the image list, the repository named centos with tag latest is the right image which I built? I'm assuming so, as the size says 202 MB.
Then I issued docker ps, but no container was running, so I issued docker ps -a to see stopped containers.
Then I tried to run the image as a container.
Now I tried to issue docker ps to check whether the container is running.
Now I can tell you why I am so concerned about multiple containers being present. I actually wanted to manually cd into /usr/src/samba inside the Docker container to verify whether the changes made via the Dockerfile were applied correctly. Since I have multiple containers, I'm really not sure which container I need to look into. In that attempt, I tried to start all the containers and then manually issue
docker exec -it CONTAINER_NAME [bash | sh] to verify whether I could find that filesystem there. This is why I asked whether I can have a single container, so that I can easily find the filesystem. My understanding is that since multiple RUN statements created different layers, it is difficult for me to find which container my filesystem resides in so that I can cd into it. Sorry for the long explanation; I am trying to understand the concepts better. Your comments, please.
You need to use WORKDIR as a Dockerfile instruction, instead of using it inside a RUN instruction.
RUN has 2 forms:
RUN <command> (shell form; the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows)
RUN ["executable", "param1", "param2"] (exec form)
WORKDIR /path/to/workdir
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
FROM centos
RUN yum -y install samba-common && \
yum -y install gcc perl mingw-binutils-generic mingw-filesystem-base mingw32-binutils mingw32-cpp mingw32-crt mingw32-filesystem mingw32-gcc mingw32-headers mingw64-binutils mingw64-cpp mingw64-crt mingw64-filesystem mingw64-gcc mingw64-headers libcom_err-devel popt-devel zlib-devel zlib-static glibc-devel glibc-static python-devel && \
yum -y install git gnutls-devel libacl1-dev libacl-devel libldap2-dev openldap-devel && \
yum -y remove libbsd-devel
WORKDIR /usr/src
#use git clone with RUN, not with WORKDIR
RUN git clone git://xxxxxxxx/p/winexe/winexe-waf winexe-winexe-wafgit
#so start it on a new line
WORKDIR /usr/src/samba
WORKDIR /usr/src/winexe-winexe-wafgit/source
#start RUN on a new line
RUN head -n -3 wscript_build > tmp.txt && cp -f tmp.txt wscript_build && \
echo -e '\t'"stlib='smb_static bsd z resolv rt'", >> wscript_build && \
echo -e '\t'"lib='dl gnutls'", >> wscript_build && \
echo -e '\t'")" >> wscript_build && \
rm -rf tmp.txt && \
./waf --samba-dir=../../samba configure build
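To check that the final working directory is what you expect once the image builds (the tag name here is just an example, and the placeholder git URL in the Dockerfile must point at a real repo for the build to succeed):
docker build -t abwinexeimage .
# pwd runs in the last WORKDIR set in the Dockerfile
docker run --rm abwinexeimage pwd
# expected output: /usr/src/winexe-winexe-wafgit/source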

Failed to Call Access Method Exception when Creating a MedicationOrder in FHIR

I am using the http://fhirtest.uhn.ca/baseDstu2 test FHIR server and it has worked okay so far.
Now I am getting an HTTP 500 - Failed to Call Access Method exception.
Does anyone have any idea what has gone wrong?
This happens frequently, probably because someone tested weird queries or similar that put the server into an unstable state.
I suggest posting a comment in https://chat.fhir.org/#narrow/stream/hapi to get the server restarted,
or installing http://hapifhir.io/doc_cli.html, which does basically the same thing but gives you full control.
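If you just want the CLI locally without Docker, a minimal sketch using the same v2.0 release archive as in the Dockerfile below (adjust the version as needed):
wget https://github.com/jamesagnew/hapi-fhir/releases/download/v2.0/hapi-fhir-2.0-cli.tar.bz2
mkdir hapi_fhir_cli && tar -xjf hapi-fhir-2.0-cli.tar.bz2 -C hapi_fhir_cli
cd hapi_fhir_cli
# run your own DSTU2 test server (listens on port 8080 by default)
./hapi-fhir-cli run-server --allow-external-refs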
I built a Dockerfile:
FROM debian:sid
MAINTAINER Günter Zöchbauer <guenter#yyy.com>
ENV DEBIAN_FRONTEND noninteractive
RUN \
apt-get -q update && \
DEBIAN_FRONTEND=noninteractive && \
apt-get install --no-install-recommends -y -q \
apt-transport-https \
apt-utils \
wget \
bzip2 \
default-jdk
# net-tools sudo procps telnet
RUN \
apt-get update && \
rm -rf /var/lib/apt/lists/*
# Download https://github.com/jamesagnew/hapi-fhir/releases/download/v2.0/hapi-fhir-2.0-cli.tar.bz2
# into the build context (next to the Dockerfile) before building; ADD auto-extracts the local archive
ADD hapi-* /hapi_fhir_cli/
RUN ls -la
RUN ls -la /hapi_fhir_cli
ADD prepare_server.sh /hapi_fhir_cli/
RUN \
cd /hapi_fhir_cli && \
bash -c /hapi_fhir_cli/prepare_server.sh
ADD start.sh /hapi_fhir_cli/
WORKDIR /hapi_fhir_cli
EXPOSE 5555
ENTRYPOINT ["/hapi_fhir_cli/start.sh"]
It requires the following files in the same directory as the Dockerfile:
prepare_server.sh
#!/usr/bin/env bash
ls -la
./hapi-fhir-cli run-server --allow-external-refs &
while ! timeout 1 bash -c "echo > /dev/tcp/localhost/8080"; do sleep 10; done
./hapi-fhir-cli upload-definitions -t http://localhost:8080/baseDstu2
./hapi-fhir-cli upload-examples -c -t http://localhost:8080/baseDstu2
start.sh
#!/usr/bin/env bash
cd /hapi_fhir_cli
./hapi-fhir-cli run-server --allow-external-refs -p 5555
Build
docker build -t myname/hapi_fhir_cli_dstu2 . #--no-cache
Run
docker run -d -p 5555:5555 [image id from docker build]
Hope this helps.
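Once the container is running you can point your client at it instead of the public test server; fetching the conformance statement is a quick sanity check (the base path mirrors the scripts above):
# 5555 is the port published by the docker run command above
curl http://localhost:5555/baseDstu2/metadata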

Regarding "make install"

I was installing OpenGV and it is said there that
At least under Linux and OSX, the installation on the host OS (including the headers) can be activated by simply setting INSTALL_OPENGV to ON.
Is this meant for make install? At least that is how I understand it.
If that is the case, why does the Dockerfile of OpenSfM (a library that depends on OpenGV) look like this?
# Install opengv from source
RUN \
mkdir -p /source && cd /source && \
git clone https://github.com/paulinus/opengv.git && \
cd /source/opengv && \
mkdir -p build && cd build && \
cmake .. -DBUILD_TESTS=OFF -DBUILD_PYTHON=ON && \
make install && \
cd / && \
rm -rf /source/opengv
The INSTALL_OPENGV flag is not set to ON, and yet it is fine to run make install. Looking at the CMakeLists.txt file of OpenGV, INSTALL_OPENGV defaults to OFF.
Judging from CMakeLists.txt, when INSTALL_OPENGV is OFF, only headers are installed.
When the flag is ON, it also installs binaries produced by the opengv target.
CMake's install target is a default target that gets generated even if there are no install() calls in CMakeLists.txt. In that case make install would simply do nothing.
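If you do want the library binaries installed on the host as well, the flag can simply be turned on at configure time, for example:
cmake .. -DBUILD_TESTS=OFF -DBUILD_PYTHON=ON -DINSTALL_OPENGV=ON
make install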

How to change the version of Ruby in a Docker image (replace 2.2.0 with 2.0.0)

The Heroku Docker image heroku/ruby installs ruby 2.2.3.
How do I use that image but with Ruby 2.0.0 instead (I'm trying to Dockerize a Rails 3.2 app)?
I know that the location of the Heroku buildpack for 2.0.0 is
https://heroku-buildpack-ruby.s3.amazonaws.com/cedar-14/ruby-2.0.0.tgz
but cannot see how to modify my Dockerfile so that it will use that version of Ruby instead.
I tried:
# Dockerfile
FROM heroku/ruby
# Install Ruby
ONBUILD RUN curl -s --retry 3 -L https://heroku-buildpack-ruby.s3.amazonaws.com/cedar-14/ruby-2.0.0.tgz | tar xz -C /app/heroku/ruby/ruby-2.2.0
which I'd hoped might overwrite 2.2.0 with 2.0.0 (keeping the path etc. the same), but that command gets ignored when I run docker-compose build.
This is what I ended up doing (Ruby and Node in the same Dockerfile, reproducing the Heroku environment):
FROM heroku/heroku:16
# Ruby dependencies
RUN apt-get update -qq && \
apt-get install -y -q --no-install-recommends \
build-essential\
libpq-dev\
libxml2-dev\
libxslt1-dev\
nodejs\
npm \
qt5-default\
libqt5webkit5-dev\
gstreamer1.0-plugins-base\
gstreamer1.0-tools\
gstreamer1.0-x\
xvfb \
&& rm -rf /var/lib/apt/lists/* \
&& truncate -s 0 /var/log/*log
# Ruby heroku
RUN apt remove -y --purge ruby && curl -s --retry 3 -L https://heroku-buildpack-ruby.s3.amazonaws.com/heroku-16/ruby-2.3.4.tgz | tar -xz
# Node heroku
RUN export NODE_VERSION=6.11.0 && \
curl -s --retry 3 -L https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz -o /tmp/node-v$NODE_VERSION-linux-x64.tar.gz && \
tar -xzf /tmp/node-v$NODE_VERSION-linux-x64.tar.gz -C /tmp && \
rsync -a /tmp/node-v$NODE_VERSION-linux-x64/ / && \
rm -rf /tmp/node-v$NODE_VERSION-linux-x64*
WORKDIR /var/app
You need to build an image yourself with the right versions. Change this Dockerfile as necessary - https://github.com/heroku/docker-ruby/blob/master/Dockerfile
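If you specifically need Ruby 2.0.0 from the buildpack URL in the question, here is a rough, untested sketch along the same lines (the heroku/heroku:16 base is taken from the answer above, though a cedar-14-based image may match that archive better; the archive is assumed to contain bin/, lib/, etc. at its top level):
FROM heroku/heroku:16
RUN mkdir -p /usr/local/ruby-2.0.0 && \
    curl -s --retry 3 -L https://heroku-buildpack-ruby.s3.amazonaws.com/cedar-14/ruby-2.0.0.tgz \
    | tar -xz -C /usr/local/ruby-2.0.0
# Put the unpacked Ruby first on the PATH
ENV PATH /usr/local/ruby-2.0.0/bin:$PATH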

Installing rbenv on docker ubuntu/debian

I want to install rbenv in Docker, which seems to work, but I can't reload the shell.
FROM node:0.10.32-slim
RUN \
apt-get update \
&& apt-get install -y sudo
RUN \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& groupadd r \
&& useradd r -m -g r -g sudo
USER r
RUN \
git clone https://github.com/sstephenson/rbenv.git ~/.rbenv \
&& echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc \
&& echo 'eval "$(rbenv init -)"' >> ~/.bashrc
RUN rbenv # check if it works...
When I run this I get:
docker build .
..
Step 5 : RUN rbenv
/bin/sh: 1: rbenv: not found
From what I understand, I need to reload the current shell so I can install ruby versions. Not sure if I am on the right track.
Also see:
Using rbenv with Docker
The RUN command executes everything under /bin/sh, so your .bashrc is never evaluated.
Use this:
&& export PATH="$HOME/.rbenv/bin:$PATH" \
which puts rbenv on the PATH for the rest of that RUN step (each RUN starts a fresh shell, so the export does not carry over to later RUN instructions).
Full Dockerfile
FROM node:0.10.32-slim
RUN \
apt-get update \
&& apt-get install -y sudo
RUN \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& groupadd r \
&& useradd r -m -g r -g sudo
USER r
RUN \
git clone https://github.com/sstephenson/rbenv.git ~/.rbenv \
&& echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc \
&& echo 'eval "$(rbenv init -)"' >> ~/.bashrc \
&& export PATH="$HOME/.rbenv/bin:$PATH" \
&& rbenv --version # check if it works; the export only lasts for this RUN step
I'm not sure how Docker works, but it seems like maybe you're missing a step where you source ~/.bashrc, which is preventing you from having the rbenv executable in your PATH. Try adding that right before your first attempt to run rbenv and see if it helps.
You can always solve PATH issues by using the absolute path, too. Instead of just rbenv, try running $HOME/.rbenv/bin/rbenv.
If that works, it indicates that rbenv has installed successfully, and that your PATH is not correctly set to include its bin directory.
It looks from reading the other question you posted that docker allows you to set your PATH via an ENV PATH command, like this, for example:
ENV PATH $HOME/.rbenv/bin:/usr/bin:/bin
but you should make sure that you include all of the various paths you will need.
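Putting the two ideas together for this particular Dockerfile, a sketch of the ENV approach (note that ENV does not expand $HOME at build time, so the home directory of the r user, /home/r from the useradd above, is written out explicitly):
FROM node:0.10.32-slim
RUN \
apt-get update \
&& apt-get install -y sudo
RUN \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& groupadd r \
&& useradd r -m -g r -g sudo
USER r
RUN \
git clone https://github.com/sstephenson/rbenv.git ~/.rbenv \
&& echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc \
&& echo 'eval "$(rbenv init -)"' >> ~/.bashrc
# Unlike export inside a RUN, ENV persists for all later build steps and at runtime
ENV PATH /home/r/.rbenv/bin:/usr/local/bin:/usr/bin:/bin
RUN rbenv --version # check if it works...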
