I'm having an Ansible problem where I cannot change directories outside of /tmp, and I have a folder in /tmp where I need to run configure, make, and make install just as if I were inside that folder.
Some questions:
When I run configure outside the folder, the Makefile appears in my CWD. How do I get around this?
Is make -C ./directory/ the way to go for making?
Any ideas on how to accomplish these commands just like I was in the folder itself?
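For reference, one common pattern is to point the command or shell module at the folder with its chdir argument, so each step runs as if you had cd'd into it. A minimal sketch, assuming the source lives in /tmp/myapp (a placeholder path):

- name: configure, build and install from inside the source folder
  ansible.builtin.shell: ./configure && make && make install
  args:
    chdir: /tmp/myapp

make -C /tmp/myapp also works for the make steps on their own, but configure writes its Makefile into the current working directory, which is why chdir (or a cd inside a shell command) keeps everything in the right place.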
I'm trying to add sqlpackage to a Laravel Sail Docker. While this is normally not really difficult, Sail makes it kinda hard.
I have the following section in my Dockerfile
RUN curl -L https://go.microsoft.com/fwlink/?linkid=2157202 -o /usr/local/bin/sqlpackage.zip
RUN mkdir /usr/local/bin/sqlpackage
RUN unzip /usr/local/bin/sqlpackage.zip -d /usr/local/bin/sqlpackage
RUN rm /usr/local/bin/sqlpackage.zip
RUN echo "export PATH=\"\$PATH:$HOME/usr/local/bin/sqlpackage\"" >> /home/sail/.bashrc
RUN chmod a+x /usr/local/bin/sqlpackage/sqlpackage
First off, I'm not happy with the install path I've chosen (/usr/local/bin). But it's the best I could think of. Any suggestions are welcome.
My second, and more important, issue is that I can't run the echo that appends to PATH during the install. The install script can't reach the home path. I would really like a solution for this, but I get this error:
cannot create /home/sail/.bashrc: Directory nonexistent
However, it does exist, so it looks like a permissions issue for the installing user. Any suggestions are welcome.
cannot create /home/sail/.bashrc: Directory nonexistent
It looks like the user sail doesn't exist, so before this line
RUN echo "export PATH=\"\$PATH:$HOME/usr/local/bin/sqlpackage\"" >> /home/sail/.bashrc
create the user, like below:
# add user
RUN useradd sail
# and then
RUN echo "export PATH=\"\$PATH:/usr/local/bin/sqlpackage\"" >> /home/sail/.bashrc
Also, $HOME is not needed, because at build time it expands to
export PATH="$PATH:/root/usr/local/bin/sqlpackage"
Try the following in a terminal (here, for example, I am the root user):
$ echo "export PATH=\"\$PATH:$HOME/usr/local/bin/sqlpackage\""
export PATH="$PATH:/root/usr/local/bin/sqlpackage"
In the same way, the build process uses the current user, which is root, so $HOME expands to /root.
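For completeness, a hedged sketch of how that part of the Dockerfile could look, assuming the file should end up owned by the sail user (useradd -m also creates /home/sail; the -m flag and the chown are assumptions, not part of the original answer):

# create the user together with its home directory (-m), then append to its .bashrc
RUN useradd -m sail \
 && echo 'export PATH="$PATH:/usr/local/bin/sqlpackage"' >> /home/sail/.bashrc \
 && chown sail:sail /home/sail/.bashrc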
First off, I'm not happy with the install path I've chosen (/usr/local/bin). But it's the best I could think of. Any suggestions are welcome.
/usr/local/bin is the location for all add-on executables that you add to the system to be used as common system files by all users. Locally installed software must be placed within /usr/local.
# Used for non-system libraries and executables
/usr/local/bin
usr stands for User System Resources. This is the location that system programs and libraries are stored.
local represents resources that were not shipped with the standard distribution and, usually, compiled and maintained on a per site basis.
bin represents binary compiled executables.
So keep these three in mind:
/usr/bin: User commands.
/usr/sbin: System administration commands.
/usr/local/bin: Locally customized software.
/opt is a directory for installing unbundled packages, that is, packages that are not part of the operating system distribution but are provided by an independent source. I usually put all third-party packages in /opt.
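Applied to the Dockerfile from the question, a hedged sketch of that convention (the archive path and link are placeholders, not taken from the answer):

# install the unbundled package under /opt and expose only the executable on PATH
RUN mkdir -p /opt/sqlpackage \
 && unzip /tmp/sqlpackage.zip -d /opt/sqlpackage \
 && chmod a+x /opt/sqlpackage/sqlpackage \
 && ln -s /opt/sqlpackage/sqlpackage /usr/local/bin/sqlpackage

With a symlink in /usr/local/bin there is no need to touch .bashrc at all.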
Let's suppose I have a Dockerfile like this:
FROM debian:stretch
RUN apt update
RUN apt install -y wget
RUN wget https://stackoverflow.com/
# I know the wget is useless. It's just an example :)
CMD ["echo", "hello-world"]
I want to put a new RUN statement above the wget statement. After this change, when I rebuild, Docker re-runs all the commands from my modification downwards, so the wget is executed again. The problem is that the wget command takes a very long time to finish, because in my real Dockerfile the downloaded file is very big.
The question is: can Docker be "tweaked" somewhere so that the build avoids executing the wget layer again? If I have already built that layer, can it be reused even after changing a statement above it?
Thank you.
AFAIK this is not possible, as Docker only reuses the layers up until your change and builds everything again from there on out.
This is because the new layers are built on top of the previously built layers (so your RUN wget layer is built and tested on the layers from FROM up to RUN apt install -y wget). If you inserted another RUN instruction above the RUN wget instruction, the environment for RUN wget would change, so it would need to be executed again.
I don't think there's a way to fiddle with it manually so that it reuses a layer built on a "different" environment, and I wouldn't recommend it either.
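To make the caching behaviour concrete: as long as new instructions are added below the expensive step, its layer stays cached. A sketch based on the example Dockerfile (the extra RUN line is just a placeholder):

FROM debian:stretch
RUN apt update
RUN apt install -y wget
# the expensive download stays above any lines you expect to edit,
# so its cached layer is not invalidated
RUN wget https://stackoverflow.com/
# new or frequently changing instructions go below it
RUN echo "new step"
CMD ["echo", "hello-world"]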
Using docker-compose, or the -v flag with docker run, you can mount a volume that persists between runs. Change your wget to a script that only downloads the file if it is absent.
That won't cache the layer, but it will make that step faster.
You may need to modify the folder where you store that file depending on the rest of your script and how your environment is set up.
I’m using compose for volume mounting here: https://github.com/jaydorsey/ghgvcR/blob/master/docker-compose.yml
Look at the bin/download-files.sh file in that repo for a bash example.
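In the same spirit, a minimal sketch of such a conditional download script (the target path and URL are placeholders):

#!/bin/sh
# download the big file only if it is not already on the mounted volume
if [ ! -f /data/bigfile.tar.gz ]; then
  wget -O /data/bigfile.tar.gz https://example.com/bigfile.tar.gz
fi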
I'm trying to find a way to integrate Kirby CMS with Dropbox running on Openshift using these tutorials:
http://getkirby.com/blog/kirby-meets-dropbox
http://getkirby.com/forum/how-to/topic:561
I already get stuck installing Dropbox, since I assume I don't really have permission while SSHing:
http://www.dropbox.com/install?os=lnx
So my question: Is there even any way of achieving all that greatness? If no, not even if we get reaaaally creative? If NO, why not? If yes, how?
Thanks a bunch!
I have no experience with Kirby, but here's how to get Dropbox working on Openshift.
The following is a combination of doing a Dropbox install on a server and doing it in a non-standard location. Everything gets done in $OPENSHIFT_DATA_DIR because that's where you have write privileges.
First, make sure you're in $OPENSHIFT_DATA_DIR
cd $OPENSHIFT_DATA_DIR
Next, download the appropriate version of Dropbox:
wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | tar xzf -
This should give you the .dropbox-dist folder in $OPENSHIFT_DATA_DIR.
Next, tell Dropbox to start the installation process, but tell it that your home directory is actually the $OPENSHIFT_DATA_DIR:
HOME=$OPENSHIFT_DATA_DIR ./.dropbox-dist/dropboxd start -i
Follow the instructions to link your Dropbox account to the Openshift server. After it's linked, it should start syncing everything in your Dropbox account to $OPENSHIFT_DATA_DIR/Dropbox. This might be a problem if you have a lot of data in your Dropbox account; if so, you should exclude folders.
You can do that with the CLI script that Dropbox provides. Still in $OPENSHIFT_DATA_DIR, download it:
wget -O dropbox.py "https://www.dropbox.com/download?dl=packages/dropbox.py"
Make sure it's executable:
chmod +x dropbox.py
You need to run it the same way you would Dropbox:
HOME=$OPENSHIFT_DATA_DIR $OPENSHIFT_DATA_DIR/dropbox.py -h
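For example, to stop a folder from syncing, something like this should work (the folder name is a placeholder):

HOME=$OPENSHIFT_DATA_DIR $OPENSHIFT_DATA_DIR/dropbox.py exclude add "$OPENSHIFT_DATA_DIR/Dropbox/SomeBigFolder"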
Hope that helps.
You should be able to download/compile/install things into your OPENSHIFT_DATA_DIR (app-root/data) on your gear by using something like ./configure --prefix=~/app-root/data/dropbox. I tried that, but I ran into a missing nautilus package, which I assume you could download and install in the same fashion; I did not try past that point. As long as whatever you are running can be installed into app-root/data and does not require root permissions to run, you should be able to do it. If you get it going, you could also create a downloadable cartridge to install it more easily.
I am trying to install Stardog on Mac OS X 10.8.5 using the instructions provided at http://docs.stardog.com/quick-start/.
The particular directory for the exported path has been created, and I echoed it to make sure the environment variable is set up. The provided license key is also in the correct directory. When I try to run "$ ./stardog-admin server start", the command is not recognized. So I tried to add Stardog's bin directory to my PATH with export, which did not work either.
I have also tried manually adding the path in the following:
- ~/.bash_profile
- ~/.profile
Still no luck, any ideas?
Using zsh I had a similar problem. For some reason the docs suggest you can run the command from the Stardog directory itself, but it didn't work for me until I cd'd into the bin directory. Once there, ./stardog-admin server start should run correctly.
It sounds like you simply have something incorrect in your .bash_profile or .profile. If you run either of the Stardog scripts from its bin directory, it will work. If you're getting a command-not-recognized error, that sounds like bash cannot find the stardog-admin script.
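For reference, a hedged sketch of the ~/.bash_profile entries, assuming Stardog was unpacked to $HOME/stardog and its home/license directory is $HOME/stardog-home (both placeholder paths):

# make the stardog scripts available from any directory
export PATH="$PATH:$HOME/stardog/bin"
# the quick-start also points STARDOG_HOME at the directory holding the license key
export STARDOG_HOME="$HOME/stardog-home"

Then reload the profile and the command should resolve from anywhere:

source ~/.bash_profile
stardog-admin server start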
I currently have a problem with creating a folder inside the /tmp directory at startup and executing a command for a server restart. Which file do I have to modify to do this? I have heard about the bash profile and a few other files for this, but I do not know what to do or whether changing those files suits my need. Please help me get rid of this problem.
As far as I can tell, Ubuntu favors Upstart over rc files. Documentation for Ubuntu Upstart can be found here. It looks like 12.04 has a file /etc/init/mounted-tmp.conf which contains some code that executes after /tmp is mounted.
You can add the command that creates the directory to /etc/rc.local, so it gets executed upon every reboot.
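A hedged sketch of what that could look like (the folder name and restart command are placeholders):

#!/bin/sh -e
# /etc/rc.local runs at the end of every boot; it must stay executable and end with exit 0
mkdir -p /tmp/myfolder
service myserver restart
exit 0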