I am trying to make Elasticsearch start automatically when I restart the server by following the steps here.
The problem is that when I restart the server, /tmp is mounted with the noexec option, so I have to run mount -o remount,exec /tmp and start Elasticsearch again manually.
Someone told me that I need to remove noexec from /etc/fstab but noexec is not there.
Edit:
I think that the noexec option might be added by /scripts/securetmp
When I run mount I see:
/usr/tmpDSK on /tmp type ext3 (rw,relatime,data=ordered)
/usr/tmpDSK on /var/tmp type ext3 (rw,nosuid,noexec,relatime,data=ordered)
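To list every mount that currently carries the noexec flag, you can filter the mount table (a quick check; findmnt ships with util-linux):
# Show all mounts whose options include noexec
findmnt -rn -o TARGET,OPTIONS | grep noexec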
Solved by deactivating /scripts/securetmp. For more information, look at this post.
I extracted the steps just in case the post disappears in the future.
Run
# /scripts/securetmp
You will see this prompt:
Would you like to secure /tmp & /var/tmp at boot time? (y/n)
Type n. The script then prints:
securetmp will not be added to system startup at this time.
Would you like to disable securetmp from the system startup? (y/n)
Type y. The next prompt is:
Would you like to secure /tmp & /var/tmp now? (y/n)
Type n, and the script confirms:
/tmp & /var/tmp will not be secured at this time.
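After rebooting, you can verify that the option really is gone (a quick sanity check):
# /tmp should no longer list noexec among its options
findmnt -no OPTIONS /tmp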
Related
Running bash on Windows 10, the simple syntax below works when I SSH to my webserver, but not when I exit out and am on my local machine. It doesn't give me an error, but I can see the permissions are unchanged. I have checked that I am set up as an administrator on my computer. Is this an error, or is it just a consequence of the local operating system being Windows? If the latter, it makes me question the value of using bash on Windows if common operations such as this won't work.
$ chmod 644 filename
To enable changing file owners and permissions, you need to edit /etc/wsl.conf and insert the config options below:
[automount]
options = "metadata"
Do this inside the WSL shell; you may need sudo to edit or create the file.
This may require restarting WSL (for example with wsl --shutdown, which is a Windows command, not one run inside WSL) or the host machine to take effect. This has been possible since 2018:
You can now set the owner and group of files using chmod/chown and modify read/write/execute permissions in WSL. You can also create special files like fifos, unix sockets, and device files. We’re introducing new mounting options with DrvFs for projecting permissions onto files alongside providing new Linux metadata on files and folders.
(Microsoft Dev Blog)
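Once WSL has been restarted, a quick way to confirm that metadata is active is to flip a file mode on the Windows drive and read it back (the path below is just an example):
# Create a test file on the Windows drive and change its permissions
touch /mnt/c/Users/you/test.txt
chmod 600 /mnt/c/Users/you/test.txt
# With metadata enabled this prints 600; without it the mode stays unchanged
stat -c '%a' /mnt/c/Users/you/test.txt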
You can also temporarily re-mount a drive with the following commands:
sudo umount /mnt/c
sudo mount -t drvfs C: /mnt/c -o metadata
...but please note that the command only takes effect in session scope. If you exit the current bash session, you'll lose your settings (credit: answerer Amade).
Reference:
Automatically Configuring WSL
There was an update to WSL recently (source), which lets you change permissions on files (Insider Build 17063).
All you have to do is to run:
sudo umount /mnt/c
sudo mount -t drvfs C: /mnt/c -o metadata
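You can confirm the remount picked up the option by checking the mount table; the options field should now include metadata:
mount | grep /mnt/c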
Both Amade's and Chaos's answers are correct, but it only works for local drives, not for mapped network drives. Z: is one of my network drives; the same operation on /mnt/c/Users/xxx/ works fine.
$ sudo mount -t drvfs Z: /mnt/z -o metadata
$ touch test
$ chmod +w test
chmod: changing permissions of 'test': Operation not permitted
This is a known issue, see drvfs: metadata (chmod\chown) possible for mounted SMB drives?
This question already has answers here:
What's the best way to share files from Windows to Boot2docker VM?
Basically, when you open the boot2docker app, inside it you can cd /c/Users, right? Now I want to be able to cd /d to access my D:\ directory.
I don't know squat about VMs, so please explain like you would to a 5-year-old.
This is in a way related to this other question on how to move docker images to another drive. The whole idea is to free up the system disk since docker stuff takes so much space over time.
Answer
In Windows CMD (only once):
VBoxManage sharedfolder add "boot2docker-vm" --name "d-share" --hostpath "D:\"
In the Boot2Docker VM terminal (every time you boot):
mount -t vboxsf -o uid=1000,gid=50 d-share /d
If you always want to mount your D:\ at /d, you can instead add the following entry to /etc/fstab (if you can edit fstab in boot2docker; not sure on this):
d-share /d vboxsf uid=1000,gid=50 0 0
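If fstab turns out not to be editable there, a per-boot alternative is to drive the mount from the Windows side through the boot2docker CLI (a sketch; d-share is the share created above):
boot2docker ssh "sudo mkdir -p /d && sudo mount -t vboxsf -o uid=1000,gid=50 d-share /d"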
How I came about this answer, as it may change in the future:
From the Boot2Docker README.md in their git repo
Alternatively, Boot2Docker includes the VirtualBox Guest Additions
built in for the express purpose of using VirtualBox folder sharing.
The first of the following share names that exists (if any) will be
automatically mounted at the location specified:
Users share at /Users
/Users share at /Users
c/Users share at /c/Users
/c/Users share at /c/Users
c:/Users share at /c/Users
If some other
path or share is desired, it can be mounted at run time by doing
something like:
$ mount -t vboxsf -o uid=1000,gid=50 your-other-share-name /some/mount/location
There's your command structure.
From VirtualBox Guest Additions Docs on Shared Folders
From the command line, you can create shared folders using VBoxManage,
as follows:
VBoxManage sharedfolder add "VM name" --name "sharename" --hostpath "C:\test"
and
To mount a shared folder during boot, add the following entry to
/etc/fstab:
sharename mountpoint vboxsf defaults 0 0
The default boot2docker VM name is boot2docker-vm (imaginative), and you want to mount the D directory, D:\. Let's call our share d-share.
Possible Dupe:
Can be found here, with a slightly differently explained answer to almost the same question.
I'm running boot2docker 1.3 on Win7.
I want to connect a shared folder.
In the VirtualBox Manager, under the image properties -> shared folders, I've added the folder I want and named it "c/shared". The "auto-mount" and "make permanent" boxes are checked.
When boot2docker boots, it isn't mounted, though. I have to do an additional:
sudo mount -t vboxsf c/shared /c/shared
for it to show up.
Since I need that every time I use docker, I'd like it to just run on boot, or just already be there. So I thought there might be some startup script I could add it to, but I can't seem to find where that would be.
Thanks
EDIT: It's yelling at me about this being a duplicate of Boot2Docker on Mac - Accessing Local Files, which is a different question. I wanted to mount a folder that wasn't one of the defaults, such as /User on OSX or /c/Users on Windows. And I'm specifically asking about startup scripts.
/var/lib/boot2docker/bootlocal.sh probably fits your need; it will be run by the initial script /opt/bootscripts.sh.
And bootscripts.sh will also put the output into /var/log/bootlocal.log; see the segment below (boot2docker 1.3.1):
# Allow local HD customisation
if [ -e /var/lib/boot2docker/bootlocal.sh ]; then
/var/lib/boot2docker/bootlocal.sh > /var/log/bootlocal.log 2>&1 &
fi
One use case for me:
I usually put the shared directory at /c/Users/larry/shared, then I add this script:
#!/bin/bash
ln -s /c/Users/larry/shared /home/docker/shared
So each time I can access ~/shared in boot2docker, just as on the host.
see FAQ.md (provided by @KCD)
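Putting the pieces together, a complete /var/lib/boot2docker/bootlocal.sh that mounts the share from the question and adds the convenience symlink might look like this (names are illustrative; remember to make the file executable):
#!/bin/sh
# Runs at boot via /opt/bootscripts.sh; output lands in /var/log/bootlocal.log
mkdir -p /c/shared
mount -t vboxsf c/shared /c/shared
# -f so the link survives repeated boots
ln -sf /c/shared /home/docker/shared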
If using boot2docker (Windows) you should do the following:
First create shared folder for boot2docker VM:
"C:/Program Files/Oracle/VirtualBox/VBoxManage" sharedfolder add default -name some_shared_folder -hostpath /c/some/path/on/your/windows/box
#Then make this folder automount
docker-machine ssh
vi /var/lib/boot2docker/profile
Add the following at the end of the profile file:
sudo mkdir /windows_share
sudo mount -t vboxsf some_shared_folder /windows_share
Restart docker-machine
docker-machine restart
Verify that the folder content is visible in boot2docker:
docker-machine ssh
ls -al /windows_share
Now you can mount the folder using either docker run or docker-compose. E.g.:
docker run -it --rm --volume /windows_share:/windows_share ubuntu /bin/bash
ls -al /windows_share
If changes in the profile file are lost after a VM or Windows restart, do the following:
1) Edit the file C:\Program Files\Docker Toolbox\start.sh and comment out the following line:
# line number 44 (or somewhere around that)
yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
# change the line above to:
# yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
Thanks for your help with this. A few additional flags I needed to add in order for the new mount to be accessible by the boot2docker "docker" user:
sudo mount -t vboxsf -o umask=0022,gid=50,uid=1000 Ext-HD /Volumes/Ext-HD
With docker 1.3 you no longer need to mount manually. Volumes should work properly as long as the source on the host VM is in your user directory.
https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
I can't make it work following Larry Cai's instructions. I figured I could make changes to "c:\Program Files\Boot2Docker for Windows\start.sh" instead, adding my mount command below the line
eval "$(./boot2docker.exe shellinit 2>/dev/null | sed 's,\\,\\\\,g')"
like so:
eval "$(./boot2docker ssh 'sudo mount -t vboxsf c/shared /c/shared')"
I also added the command to start my container there:
eval "$(docker start KDP)"
I am new to Debian (on a Raspberry Pi), and it comes with mistakes...
Trying to give chmod permissions over the /usr/ files to my login (pi), I made a mistake, confusing "-" with "+". I executed the command line:
$ sudo chmod -rwx /usr pi
which gets me into a bad situation:
I cannot execute anything anymore because bash won't load.
After rebooting and logging in as pi, the same issue appears, with these errors:
ERROR: ld.so: object '/usr/lib/arm-linux-gnuabihf/libcofi_rpi.so' from /etc/ld.so.preload cannot be preloaded: ignored
- bash: id: command not found
- bash: [: : integer expression expected
- bash: /usr/share/bash-completion/bash_completion: Permission denied
pi#raspberrypi:~$
and from there, my attempts to give chmod permissions to /usr/ are useless, because I don't have permissions at all...
Most commands don't work (startx, or else), as I get an error:
- bash: startx: command not found
How can I get out of this situation without restarting from scratch?
Thanks a lot for your help!
Update
I actually found a list with many username/password combinations for different distributions often used on the Raspberry Pi. So check first whether your distribution is in there (I guess either Debian or Raspbian) and try the passwords there at the login prompt. If they do not work over SSH, try them directly (root login via SSH could be disabled).
Old entry
The Debian distribution for the Raspberry Pi does not seem to have a root password set by default, so you cannot log in as root. I guess that, due to the access changes, you cannot execute sudo either?
So the whole problem has to be solved from another operating system: insert the SD card into another PC. If you do not have Linux, you can boot from a live CD like Ubuntu or Knoppix.
From there you can mount the SD card:
mount /dev/sdX? /mnt
sudo chmod 0755 /mnt/usr
Here X is variable and you have to find it out. It is best to insert the SD card after the whole system has booted; then the SD card should have the highest letter (e.g. d, if you have three other hard disks in your PC). The question mark ? has to be replaced with a number (probably 1).
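A convenient way to find the right letter is lsblk, which lists every block device with its size (run it after inserting the card; the SD card is usually the disk whose size matches the card):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT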
You will have to log in as root, so that you can ignore the permissions you have set, and then run:
chmod 0755 /usr
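Afterwards, confirm the mode was restored (the first column should read drwxr-xr-x):
# Use /mnt/usr while the card is mounted on another PC, /usr on the booted Pi
ls -ld /usr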
I created an EC2 instance (Ubuntu, 64-bit) and attached a volume from a publicly available snapshot to the instance. I successfully mounted the volume. I am supposed to be able to run a script from this attached volume using the following steps, as explained in the tutorial:
Log in to your virtual machine.
mkdir /space
mount /dev/sdf1 /space
cd /space
./setup-script
The problem is that when I try ./setup-script I get the following message:
-bash: ./setup-script: No such file or directory
What is the problem? How can I search for the setup-script across the whole machine? I'm not very familiar with Linux systems. Please help.
For more details about the issue, look at my previous post:
Error when mounting drive
# Is it a script or an executable?
file /space/setup-script
# Show us it is readable and marked executable
ls -l /space/setup-script
# Mark it executable
chmod a+x /space/setup-script
# Then try running it again? If you know it is shell script you can:
bash /space/setup-script
If it still doesn't work, then we get into why it won't execute.
grep space /proc/mounts
Do the options include noexec?
Try mount -o remount,exec /space, then try your instructions again.
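As for searching the whole machine for the script by name (assuming it really is called setup-script), find can do it:
# Search the entire filesystem, discarding permission-denied noise
sudo find / -name setup-script 2>/dev/null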
NOTE: All commands presume you are 'root' user or you can 'sudo' each command.
It is possible that you have mounted the wrong device. I've just recalled a trick you can use to find the device name of an EBS volume in Linux, since it is often different from the device name reported in the AWS console. First unmount the device in Linux, then detach it from the instance using the AWS console, so we go back to the original state. Now run this command in Linux:
cat /proc/partitions
The command will show the volumes currently attached. The next step is to attach the volume to the instance using the AWS console, and then to run that same command again in Linux. You should see an additional line appear. This line will tell you the name of the device to mount. For example, I get this output in my Ubuntu instance:
major minor #blocks name
202 1 8388608 xvda1
202 80 8388608 xvdf
The first line was already there before I attached the volume, so I know this is my root volume. The second line is the one that appeared, so in this case, the device to mount would be /dev/xvdf.
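With the device identified, the original steps can be repeated against the right name. Note that if the volume carries a partition table, you mount the partition (e.g. /dev/xvdf1) rather than the bare disk:
sudo mkdir -p /space
sudo mount /dev/xvdf /space
cd /space
./setup-script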