I'm trying to make my system more robust to intrusions and one thing I'd like to do is log all the commands that are run inside a docker image. So this is different than just mounting the .bash_history file, say. It's about sending all shell commands to a log outside the image.
Is there a trick for this?
Once they're on the main system, I'll back them up remotely on a regular basis.
I am trying to build a Dockerfile for my ROS project.
In ROS you are required to source a setup.bash file in every terminal before starting to work.
(You can replace this by putting the source command in your bashrc file)
So, what I do is source the file in the Dockerfile so that it gets run when the container is built. It works fine in that terminal.
However, when I open another terminal, predictably the file is not sourced and I have to do it manually.
Is there any way I can avoid this?
As I said, in a non-Docker setup you put this into a file that gets called every time a terminal is opened, but how do you do this with Docker?
(In other words, how do you make sure a shell script is executed every time I execute (or attach to) a Docker container?)
In your Dockerfile, copy your script to the Docker WORKDIR:
COPY ./setup.bash .
Then set the entry point to run that script at container launch:
ENTRYPOINT ["/bin/bash", "-c", "./setup.bash"]
Note that with this approach, you won't be able to start your container in an interactive terminal with docker run -it; you'll need to do a few more things if that's what you want. Also, this will overwrite your original image's ENTRYPOINT (which you can find with docker image history), so make sure it is not essential. Otherwise, sourcing the script may be the better option for both cases:
RUN source ./setup.bash
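Keep in mind that RUN source ./setup.bash only affects that build step, not containers started later. If you want both interactive shells and one-off commands to see the environment, a common pattern is a small entrypoint wrapper that sources the file and then runs whatever command was passed to the container. A sketch (the names entrypoint.sh and /setup.bash are placeholders for your own layout):
entrypoint.sh:
#!/bin/bash
set -e
source /setup.bash   # make the environment available in this shell
exec "$@"            # hand off to whatever command the container was started with
Dockerfile:
COPY setup.bash /setup.bash
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]
With this, docker run -it myimage still drops you into an interactive bash with the file already sourced.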
Just add the script to startup configuration files in bash...
COPY ./setup.bash /etc/
RUN echo "source /etc/setup.bash" >> /etc/bash.bashrc
ENTRYPOINT /bin/bash
The file /etc/bash.bashrc might be named /etc/bashrc, or you might want to use /etc/profile.d directory, depending if you want the file to be sourced in interactive shells or not. Read the relevant documentation about startup files in bash.
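If you prefer the /etc/profile.d route, a minimal sketch (assuming the base image's /etc/profile sources /etc/profile.d/*.sh, which most distributions do; the file name ros-setup.sh is a placeholder):
COPY ./setup.bash /etc/profile.d/ros-setup.sh
ENTRYPOINT ["/bin/bash", "-l"]
The -l flag makes bash behave as a login shell, so it reads /etc/profile and everything under /etc/profile.d; the same applies when you attach with docker exec -it <container> bash -l.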
I have 60 EC2 instances which share the same folder structure and are similar to one another, but not completely identical. The incorrect file was uploaded to all 60 instances, and I was wondering what would be the best way to replace that file with the correct one? The file has the same name and is placed in the same location on all the instances. I am new to using AWS in general, so any help would be much appreciated.
Assuming you don't want to use something like Ansible, have SSH access to the servers, and want to use just bash, you could do something like this:
Put the IP addresses of all your servers into a file, one per line, like so:
IpAddresses.txt
10.20.15.1
10.20.15.44
10.20.15.65
Then create a script:
myscript.sh
#!/bin/bash
# run the clean-up commands on every host listed in IpAddresses.txt
while read -r line; do
    ssh -i path_to_key.pem ec2-user@"$line" 'sudo rm -rf /path_to_directory && command2 && command3'
done < IpAddresses.txt
Maybe you could do something like the above to first remove the directories you don't want and then do an scp to copy the correct file in.
Depends on the commands you need to correct the problem, but this is an option.
Note, I haven't tested this command exactly - so you may need to correct/test a bit.
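The scp step mentioned above could follow the same loop pattern (again untested; the local path, remote path and key file are placeholders):
#!/bin/bash
while read -r line; do
    scp -i path_to_key.pem /local/path/correct_file ec2-user@"$line":/path_to_directory/
done < IpAddresses.txt
If the destination directory needs root to write to, you may have to scp the file to the user's home directory first and then ssh in and sudo mv it into place.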
Refs:
https://www.shellhacks.com/ssh-execute-remote-command-script-linux/
If your EC2 instances have the correct IAM permissions, you could use the Simple Systems Manager (SSM) console, using the Run Command service. Click 'Run a command', then select AWS-RunShellScript from the list of command documents. In the text box you can specify a shell command to run, and below that you can choose the set of instances you want to run the command on.
This is the recommended way to update and administer a large fleet of instances like yours.
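The same Run Command can also be driven from the AWS CLI instead of the console. A sketch (assuming the corrected file has been uploaded to an S3 bucket the instances can read; the bucket name, tag filter and paths are placeholders):
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=tag:Environment,Values=Prod" \
    --parameters 'commands=["aws s3 cp s3://my-bucket/correct_file /path/to/file"]' \
    --comment "replace the incorrect file"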
I have a remote script on a machine (B) which works perfectly when I run it from machine (B). I wanted to run the script via ssh from machine (A) using:
ssh usersm@${RHOST} './product/2018/requests/inbound/delDup.sh'
However, machine (A) complains about the contents of the remote script (2018req*.txt is a variable defined at the beginning of the script):
ls: cannot access 2018req*.txt: No such file or directory
From the information provided, it's hard to do more than guess. So here's a guess: when you run the script directly on machine B, do you run it from your home directory with ./product/2018/requests/inbound/delDup.sh, or do you cd into the product/2018/requests/inbound directory and run it with ./delDup.sh? A relative glob like 2018req*.txt is resolved against the directory you were in when you ran the script, so those two ways of running it look in different places. If you cd into the inbound directory locally, it will look there; but running the script remotely over ssh doesn't change to that directory, so 2018req*.txt will look for files in your home directory.
If that's the problem, I'd rewrite the script to cd to the appropriate directory, either by hard-coding the absolute path directly in the script, or by detecting what directory the script is in (see https://stackoverflow.com/questions/59895/getting-the-source-directory-of-a-bash-script-from-within and BashFAQ #28: "How do I determine the location of my script? I want to read some config files from the same place").
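For example, the "detect the script's own directory" variant usually boils down to a line like this near the top of the script (a sketch; it assumes the data files live next to the script):
cd "$(dirname "$0")" || exit 1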
BTW, anytime you use cd in a script, you should test the exit status of the cd command to make sure it succeeded, because if it didn't the rest of the script will execute in the wrong place and may do unexpected and unpleasant things. You can use || to run an error handler if it fails, like this:
cd somedir || {
    echo "Cannot cd to somedir" >&2
    exit 1
}
If that's not the problem, please supply more info about the script and the situation it's running in (i.e. location of files). The best thing to do would be to create a Minimal, Complete, and Verifiable example that shows the problem. Basically, make a copy of the script, remove everything that isn't relevant to the problem, make sure it still exhibits the problem (otherwise you removed something that was relevant), and add that (and file locations) to the question.
First of all, when you use SSH, instead of sending the output (stdout and stderr) directly to the monitor, the remote machine/SSH server sends the data back to the machine from which you started the SSH connection. The SSH client running on your local machine just displays it (unless you redirect it, of course).
Now, from the information you have provided, it looks like the files are not present on server (B) or are not accessible (last but not least, are you sure your ls targets the proper directory?). You could display the current directory in your script before running the ls command, for debugging purposes.
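For example, a couple of lines like these near the top of the script (a sketch) would show where the glob is actually being evaluated:
echo "Running in: $PWD" >&2
ls 2018req*.txt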
Currently I am running Docker with more than 15 containers running various apps. I am at the point where I am getting sick and tired of looking up, in my docs, the command I used to create each container. While trying to create scripts and alias commands to make this procedure easier, I encountered this problem:
Is there a way to get the container's name from the host's shared folder?
For example, I have a directory "MyApp" and inside this I start a container with a shared folder "shared". It would be perfect if:
a. I had a global script somewhere and an alias command set for it, and
b. I could just run something like "startit"/"stopit"/"rmit" from any of my "OneOfMyApps" directories and their subdirectories. I would like to skip the docker ps -> copy the name -> etc. routine every time, and just have the script figure out the container's name. Any ideas?
Well, one solution would be to use an environment variable to pass the name into the container and a pre-determined file in the volume to store the name. So, you would create the container with the -e flag:
docker create --name myapp -e NAME=myapp myappimage
And inside the image entry point script you would have something like
cd /shared/volume
echo $NAME >> .containers
And in your shell script you would do something like
function stopit() {
    for name in $(cat .containers); do
        docker stop "$name"
    done
}
But this is a bit fragile. If you are going to script the commands anyway, I would suggest using docker ps to get a list of containers and then using docker inspect to find which ones use this particular shared volume. You can do all of it inside the script, so it adds little extra work.
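A sketch of that docker ps / docker inspect approach, run from the app directory (the mount path "$PWD/shared" is an assumption about how the volume was bound):
for id in $(docker ps -q); do
    # print the host-side source path of every mount of this container
    if docker inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' "$id" | grep -qx "$PWD/shared"; then
        docker stop "$id"
    fi
done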
I have a requirement to archive files to a remote location. That is, I need to write a shell script that will connect to a remote path, copy (or move) the files from that path, and then place them in another location on the same system (the target system could be either a Unix system or a Windows system).
This script will be scheduled to run once a day without manual intervention.
Unison should fit the bill. rsync and scp would work as well, but they can be a bit cryptic to set up.
There are implementations of the Secure Shell (SSH) for both target systems. The Secure Shell comes with a secure copy program named scp, which allows you to run commands like:
scp localfile user@remotehost:directory/remotefilename
As lynxlynxlynx pointed out, another option is the rsync suite. Both SSH and rsync will require some configuration (rsync less so). See the respective home pages.
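Since the job has to run once a day without manual intervention, a sketch of how the pieces could be combined (paths, host name and schedule are placeholders; assumes key-based SSH authentication is already set up so no password prompt appears):
#!/bin/bash
# archive.sh: copy files to the remote location, then delete the local copies
rsync -av --remove-source-files /data/outbound/ user@remotehost:/archive/inbound/
And a crontab entry to run it every day at 02:00:
0 2 * * * /usr/local/bin/archive.sh >> /var/log/archive.log 2>&1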