Add to PATH from Laravel Sail Dockerfile

I'm trying to add sqlpackage to a Laravel Sail Docker image. While this is normally not very difficult, Sail makes it kind of hard.
I have the following section in my Dockerfile:
RUN curl -L https://go.microsoft.com/fwlink/?linkid=2157202 -o /usr/local/bin/sqlpackage.zip
RUN mkdir /usr/local/bin/sqlpackage
RUN unzip /usr/local/bin/sqlpackage.zip -d /usr/local/bin/sqlpackage
RUN rm /usr/local/bin/sqlpackage.zip
RUN echo "export PATH=\"\$PATH:$HOME/usr/local/bin/sqlpackage\"" >> /home/sail/.bashrc
RUN chmod a+x /usr/local/bin/sqlpackage/sqlpackage
First off, I'm not happy with the install path I've chosen (/usr/local/bin). But it's the best I could think of. Any suggestions are welcome.
My second, and more important, issue is that I can't run the echo command that adds to the PATH during the install. The install script can't reach the home path. I would really like a solution for this, but I get this error:
cannot create /home/sail/.bashrc: Directory nonexistent
However, it does exist, so it's a permissions issue for the installing user. Any suggestions are welcome.

cannot create /home/sail/.bashrc: Directory nonexistent
It looks like the user sail doesn't exist. So, before
RUN echo "export PATH=\"\$PATH:$HOME/usr/local/bin/sqlpackage\"" >> /home/sail/.bashrc
create the user like below:
# add the user (with a home directory, so /home/sail exists)
RUN useradd -m sail
# and then
RUN echo "export PATH=\"\$PATH:/usr/local/bin/sqlpackage\"" >> /home/sail/.bashrc
Also, $HOME is not required, because at build time it becomes:
export PATH="$PATH:/root/usr/local/bin/sqlpackage"
Try the following in a terminal (for example, I am running as the root user):
$ echo "export PATH=\"\$PATH:$HOME/usr/local/bin/sqlpackage\""
export PATH="$PATH:/root/usr/local/bin/sqlpackage"
In the same way, the build process will use the current user, which is root.
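Putting the fixes together, a minimal corrected sequence for the Dockerfile might look like this (a sketch based on the commands above; the chown is an extra precaution so sail owns its own .bashrc):
# create the user with a home directory so /home/sail exists
RUN useradd -m sail
# append a literal path; $HOME would expand to root's home at build time
RUN echo "export PATH=\"\$PATH:/usr/local/bin/sqlpackage\"" >> /home/sail/.bashrc
RUN chown sail:sail /home/sail/.bashrc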
First off, I'm not happy with the install path I've chosen (/usr/local/bin). But it's the best I could think of. Any suggestions are welcome.
/usr/local/bin is the location for all add-on executables that you add to the system to be used as common system files by all users. Locally installed software must be placed within /usr/local.
# Used for non-system libraries and executables
/usr/local/bin
usr is often said to stand for User System Resources. This is the location where system programs and libraries are stored.
local represents resources that were not shipped with the standard distribution and are usually compiled and maintained on a per-site basis.
bin represents binary compiled executables.
So keep in mind these three:
/usr/bin: User commands.
/usr/sbin: System administration commands.
/usr/local/bin: Locally customized software.
/opt is a directory for installing unbundled packages, that is, packages not part of the operating system distribution but provided by an independent source. I usually put all third-party packages in /opt.
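Following that advice, the install from the question could target /opt instead; a hedged sketch of the same steps (paths are illustrative):
RUN curl -L https://go.microsoft.com/fwlink/?linkid=2157202 -o /tmp/sqlpackage.zip
RUN mkdir -p /opt/sqlpackage && unzip /tmp/sqlpackage.zip -d /opt/sqlpackage && rm /tmp/sqlpackage.zip
RUN chmod a+x /opt/sqlpackage/sqlpackage
RUN echo "export PATH=\"\$PATH:/opt/sqlpackage\"" >> /home/sail/.bashrc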

Related

Is gvfs-trash installed? in Atom

When I tried to remove a file on my local machine to check that files are in sync with the Vagrant development server, it popped up an error:
The following file couldn't be moved to the trash.
Is gvfs-trash installed?
To solve it, I created a trash directory that can be accessed outside the user's home directory:
# Create a Trash directory (with some subdirectories) in root
sudo mkdir -p /.Trash-1000/{expunged,files,info}
# Give ownership of this to your user:
sudo chown -R $USER /.Trash-1000
Still, I can't remove the file from my local machine. If I delete a file on the Vagrant development server it automatically gets deleted on the local machine, but the opposite is not happening, and it ends up with the error "Is gvfs-trash installed?"
Like YuriAFGomes said, everything seemed to work fine in my system: the trash folder had the right permissions and gvfs-trash worked flawlessly from the command line, yet Atom 1.45 said it couldn't delete any file. I tried to start Atom with sudo and it didn't fix anything. I tried creating the .Trash-1000 directories in several places, and nothing: the same error related to gvfs-trash. I'm pretty sure this used to work fine in my Atom setup and suddenly it stopped doing so, and I have no idea why. I went to their releases list and tried downgrading to several of them until I settled on version 1.30, which doesn't seem to have this issue and is compatible with my local packages. If you have this problem and have tried everything said around the web, I suggest you try downgrading to different versions until the problem goes away.
There is an issue on GitHub reporting this problem. According to the report, a missing .Trash-1000 folder can cause it, so you can create one as follows.
mnt=/; id=$(id -u); sudo mkdir -p "$mnt/.Trash-$id"/{expunged,files,info} \
&& sudo chown -R $USER:$USER "$mnt/.Trash-$id"/ \
&& sudo chmod -R o-rwx "$mnt/.Trash-$id"/
Set mnt to the mount point where gvfs-trash expects the trash folder.
To find it, simply cd to the directory which will be opened in Atom and execute df . there.
This will give something like this:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 960380628 463122460 448403708 51% /mnt/vol
In this example, the mount point and the value of mnt would be /mnt/vol.
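With that mount point, the snippet above becomes:
mnt=/mnt/vol; id=$(id -u); sudo mkdir -p "$mnt/.Trash-$id"/{expunged,files,info} \
&& sudo chown -R $USER:$USER "$mnt/.Trash-$id"/ \
&& sudo chmod -R o-rwx "$mnt/.Trash-$id"/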
What solved this issue for me was uninstalling Atom via dpkg and installing it via apt from the following PPA: https://launchpad.net/~webupd8team/+archive/ubuntu/atom . I have no clue why this works, though. I have noticed that the PPA installs Atom 1.26, while the version where the issue arose, installed via dpkg, is 1.45.
Before doing that, I had tried creating the .Trash-1000 directories in root, in home, and in the project folder, with the proper permissions. gvfs-trash was installed, updated, and working as expected all the time, but the problem persisted. Really odd.
The real problem is that Atom/Electron are/were using gvfs-trash, which has been deprecated for almost 5 years. Electron, the platform on which Atom is built, has fixed this in the development branch but hasn't backported it to the 2.0 branch on which Atom is based.
Solutions/workarounds as of now:
Use the environment variable $ELECTRON_TRASH and set it to gio or one of the alternatives (see the sketch after this list)
See if you are missing the .Trash-1000 folder (assuming your uid is 1000)
Install an alternate gvfs-trash script to take over the missing functionality
Delete the file/folder outside of atom
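For the first option, a minimal sketch, assuming gio is installed and Atom is started from a shell:
# tell Electron/Atom to use gio instead of the deprecated gvfs-trash
ELECTRON_TRASH=gio atom .
To make it stick across sessions, export it from your shell startup file, e.g. echo 'export ELECTRON_TRASH=gio' >> ~/.bashrc.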
I had a similar problem on Windows using Atom, where I couldn't delete the files. So I resorted to deleting them manually from the directory (outside of Atom).
Turns out Atom cannot "move to trash" if you have checked this option in the Recycle Bin settings:
"Don't move files to the Recycle Bin. Remove files immediately when deleted."
Just set the other option (to move files to the actual Recycle Bin) and it should work.

Installing dropbox (and use Kirby CMS) on openshift

I'm trying to find a way to integrate Kirby CMS with Dropbox running on Openshift using these tutorials:
http://getkirby.com/blog/kirby-meets-dropbox
http://getkirby.com/forum/how-to/topic:561
I already get stuck installing Dropbox, since I assume I don't really have permission while SSHing:
http://www.dropbox.com/install?os=lnx
So my question: Is there even any way of achieving all that greatness? If no, not even if we get reaaaally creative? If NO, why not? If yes, how?
Thanks a bunch!
I have no experience with Kirby, but here's how to get Dropbox working on Openshift.
The following is a combination of doing a Dropbox install on a server and doing it in a non-standard location. Everything gets done in $OPENSHIFT_DATA_DIR because that's where you have write privileges.
First, make sure you're in $OPENSHIFT_DATA_DIR
cd $OPENSHIFT_DATA_DIR
Next, download the appropriate version of Dropbox:
wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | tar xzf -
This should give you the .dropbox-dist folder in $OPENSHIFT_DATA_DIR.
Next, tell Dropbox to start the installation process, but tell it that your home directory is actually the $OPENSHIFT_DATA_DIR:
HOME=$OPENSHIFT_DATA_DIR ./.dropbox-dist/dropboxd start -i
Follow the instructions to link your Dropbox account to the OpenShift server. After it's linked, it should start syncing everything in your Dropbox account to $OPENSHIFT_DATA_DIR/Dropbox. This might be a problem if you have too much data in your Dropbox account; if so, you should exclude folders.
You can do that with the CLI script that Dropbox provides. Still in $OPENSHIFT_DATA_DIR, download it:
wget -O dropbox.py "https://www.dropbox.com/download?dl=packages/dropbox.py"
Make sure it's executable:
chmod +x dropbox.py
You need to run it the same way you would Dropbox:
HOME=$OPENSHIFT_DATA_DIR $OPENSHIFT_DATA_DIR/dropbox.py -h
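For example, to stop syncing a large folder, something like this should work (the folder name is hypothetical):
# exclude a directory from syncing, relative to the Dropbox folder
cd $OPENSHIFT_DATA_DIR/Dropbox
HOME=$OPENSHIFT_DATA_DIR $OPENSHIFT_DATA_DIR/dropbox.py exclude add big-archive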
Hope that helps.
You should be able to download/compile/install things into your OPENSHIFT_DATA_DIR (app-root/data) on your gear by using something like ./configure --prefix=~/app-root/data/dropbox. I tried that but ran into a missing nautilus-whatever package, which I assume you could download and install in the same fashion, but I did not try past that point. As long as whatever you are running can be installed into app-root/data and does not require root permissions to run, you should be able to do it. If you get it going, you could also create a downloadable cartridge to install it more easily.

Why does npm need sudo for EVERYTHING?

I don't know how I've managed it, but npm seems to need sudo for absolutely every command; even npm help does not work without sudo. If I use a command without sudo, I do not see an EACCES error; instead, my terminal session hangs and then just closes that tab (I use iTerm on Mac).
I have tried changing the ownership of my local .npm folder, as outlined here, and have also done the same on my /usr/local/bin folder where node is installed, but none of these allow me to just run npm without sudo, even when installing local packages...! It seems to me that something got screwed up along the way. Can anyone help?
Many thanks
I encountered the same error after a fresh install of 0.12.4 today; this solved the problem for me:
sudo chown -R $(whoami):admin /usr/local/lib/node_modules
In my particular case, I noticed that this folder was owned by '{some-large-integer-account}:wheel'...YMMV
If that doesn't solve it, take a look at the ownership of the folders that are being blocked, as mentioned in the EACCES error trace. If you're not sure what the ownership should be, you can usually infer it from the sibling dirs' ownership.
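To check the current ownership before changing it, something like this works (the path is the one from the command above):
ls -ld /usr/local/lib/node_modules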
I had this as well on my machine. What I did to fix it (there are probably much less extreme ways) was to completely remove npm and then do a fresh installation of node.js (with which npm is included) from http://nodejs.org/, making sure I didn't install as root. That then allowed me to use npm without root (except for global installs).
Take an Ember project for example. I gave my own user ownership of the whole project directory:
neil@neil-System-Product-Name:~/Projects/ember-quickstart$ sudo chown -R $(whoami) /home/neil/Projects/ember-quickstart/
neil@neil-System-Product-Name:~/Projects/ember-quickstart$ ember s
Could not start watchman
Visit https://ember-cli.com/user-guide/#watchman for more info.
Livereload server on http://localhost:7020
Build successful (10679ms) – Serving on http://localhost:4200/
Slowest Nodes (totalTime => 5% ) | Total (avg)
----------------------------------------------+---------------------
Babel (18) | 7561ms (420 ms)
Concat (8) | 1872ms (234 ms)
Rollup (1) | 629ms
Use the option below.
Open the terminal, cd to your home directory, and run the following command.
mkdir "${HOME}/.npm-packages"
Then run this command:
npm config set prefix "${HOME}/.npm-packages"
Next, open your .zshrc file using the open -t .zshrc command and add the following to it.
NPM_PACKAGES="${HOME}/.npm-packages"
export PATH="$PATH:$NPM_PACKAGES/bin"
# Preserve MANPATH if you already defined it somewhere in your config.
# Otherwise, fall back to `manpath` so we can inherit from `/etc/manpath`.
export MANPATH="${MANPATH-$(manpath)}:$NPM_PACKAGES/share/man"
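After reloading the shell configuration, global installs should land under the new prefix without sudo. A quick check (the package is just an example):
source ~/.zshrc
npm install -g typescript
which tsc # should resolve to ~/.npm-packages/bin/tsc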

Sudo Won't Work after change/mistake in Path env

I'm a Mac newbie and just upgraded to Node.js 0.67. After running the Node installer, it says "Make sure that /usr/local/bin is in your $PATH."
I try to run node but, as expected, it doesn't run without the path change.
So, not really knowing what I'm doing (yes!), after some research I do this:
export "PATH=/usr/local/bin"
And node runs. But sudo doesn't, which I think means I screwed up the environment variables.
sudo: command not found
Then in another Terminal window (that was open when I messed this up), sudo does respond; both windows have the same path. But in that window, npm is no longer available.
Can anyone help get me back to sudo stability?
sudo on a Macintosh lives in /usr/bin.
Make sure /usr/bin is in your $PATH environment and you should be okay.
And to do that, in the context of your question above, do something like:
export "PATH=$PATH:/usr/local/bin"
The idea here being that you are appending a new search path to the already existing list in your PATH environment variable.
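If the current session is too broken to use, you can also reset PATH to the usual macOS defaults directly; a sketch (the exact default list can vary by OS version):
export PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
Since the bad export only affected that one shell session, new Terminal windows should already have a sane PATH (unless the export was also added to a startup file like ~/.bash_profile).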

How can I use the /home directory on Mac OS X

I've got a Mac that I can run either the Leopard (10.5) or Snow Leopard (10.6) version of OS X on. I'm using it to do web development/testing before publishing files to my production host.
On the production host my site's doc root is under the home directory (e.g. /home/stimulatingpixels/public_html) and I'd like to duplicate that location on the Mac. Unfortunately, there is a hidden, locked placeholder on the Mac that looks like a mounted drive with nothing in it, sitting at the /home location.
I know from experience that it's unwise to move this and drop in your own /home directory, because upgrades can cause it to be erased (and it doesn't get stored in the Time Machine backup, by the way).
So, the question: is there any way to safely use /home on a Mac, on either Leopard or Snow Leopard?
(Note: I realize this is very Mac specific and will be asking it in an Apple forum as well. Just wanted to ask here in addition to cover all the bases.)
Update: To help describe why I want to do this: in addition to the front-end web site, I've got a series of scripts that I'd like to run as well. One of the main goals of being able to use the /home directory (and, more specifically, the same path from the server's root) is so that the same output paths can be used on the development Mac as on the production server. I know there are ways to work around this, but I'd rather not have to deal with them. The real goal is to have all the files on the development Mac have the same file path from the / root of the directory tree as on the production server.
Another update: The other reason, which I forgot to mention earlier, is setting up .htaccess paths when using basic authentication. Since those paths are from the file system root instead of the website docroot, they end up going through /home when that's part of the tree.
NOTE: As of 2015, I no longer use or recommend this method. Instead I use Vagrant to setup virtual machines for dev and testing. It's free, relatively easy, and allows better matching of the production environment. It completely separates the development environment and you can make as many as you need. Highly recommended. I'm leaving the original answer below for posterity's sake.
I found an answer here on the Apple forums.
In order to reclaim the /home directory, edit the /etc/auto_master file and comment out (or remove) the line with /home in it. You'll need to reboot after this for the change to take effect (or, per nilbus' comment, try running sudo automount -vc). This works with Mac OS X 10.5 (Leopard). Your mileage may vary for different versions, but it should be similar.
As noted on that forum post, you should also be aware that Time Machine automatically excludes the /home directory and does not back it up.
One note of warning: make sure to back up your /home directory manually before doing a system update. I believe one of the updates I did (from 10.6 to 10.7, for example) wiped out what I had stored in /home without warning. I'm not 100% sure that's what happened, but it's something to be on the lookout for.
Putting it all together from the tips and hints above:
edit /etc/auto_master # comment out the line with /home in it.
remount:
sudo automount -vc
make a softlink to the mac-ified dir:
sudo ln -s $HOME /home/$USER
At that point, your paths should match up with your production paths. Env vars will still point to /Users/xxxx, but anything you hard-code in a path in your .bashrc (or, say, in ~/.pip/pip.conf) should be essentially equivalent. Worked for me.
re: "The real goal is to have all the files on the development Mac have the same filepath from the / root of the directory tree as the production server."
On production, my deploy work might happen in /opt/projects/projname, so I'll just make sure my account can write into /opt/projects and go from there. I'd start by doing something like this:
sudo mkdir /opt/projects
sudo chown $USER /opt/projects
mkdir /opt/projects/projname
cd /opt/projects/projname
With LVM, I'll set up a separate partition for /opt and write app data there instead of to $HOME. Then I can grow the /opt file system in cases where I need more disk space for a project (LVM is your friend).
I tried it on Yosemite (OS X 10.10.1); the sudo automount -vc didn't work, and I had to use sudo umount /home instead.
Therefore my workflow would be:
# comment out line starting with /home
sudo vi "+g/^\/home/s/\//#\//" "+x" /etc/auto_master
sudo umount /home
# link actual home directory (/Users/<user>) to new 'home' (/home/<user>)
ln -s $HOME /home/$USER
I adapted the previous solutions to Big Sur (macOS 11.2), which is a bit more complicated due to the APFS file system changes. I managed to change /home by following these steps:
As recommended by Alan W. Smith, comment out the /home entry in /etc/auto_master.
As suggested by Marco Torchiano, run
sudo umount /home
Since /home is currently a read-only link to /System/Volumes/Data/home, you have to change the latter. I did it with the following commands:
cd /System/Volumes/Data/
sudo rmdir home
sudo ln -s <some other directory> home
Why don't you just run MAMP and use the Sites directory? You can develop off localhost and just have a bunch of aliases for your sites. I'm not sure why you specifically need to use the home directory.
EDIT:
Ok, I think you are going about solving your problem the wrong way.
If it's HTML paths you are worried about, then begin everything with a slash "/", which will resolve paths from the site's document root.
If it's the references in your PHP, then you need to create a global (or similar) and set it as the root of your site. Then you can reference everything from the global and when you move the site from dev to production all you need to change is the global.
Trying in a round-about way to develop from /home because it looks more like the production server is a bad idea.
Install MAMP, create the global somewhere high in the hierarchy and start re-referencing. It'll be less pain in the long run.
