Total newbie to Docker. My question is: is there a way of searching a Docker image to see if any JAR files are present?
If I understand correctly, Docker images are built in layers, and each layer does not necessarily belong to that image alone. My initial approach was to "docker save image > image.tar" to bring all the layers together and then recursively search the resulting tar for the pattern "*.jar". With a standard tar, if I for example "less filename.tar", I can see right into all of the sub-directories of the tar. This does not seem to be the case with an image that has been tarballed.
Is there a way of doing this or am I misunderstanding fundamentally how an image is built and what happens to it when it is tarballed?
Edit: I actually want to do this specifically to the image rather than launching a docker container of the image and then searching that. Is this even possible?
Edit: OK I'll try and make this clearer. I have tried to take this approach:
1) Create a tarball from a Docker image: docker save image > image.tar
2) Then I've tried to search the tar for jar files with this simple script:
# Will list and extract all jars from a tar file
arc="$1.tar"; pattern='*.jar'
# grep for .jar entries, then extract them (--wildcards lets GNU tar match the pattern)
tar -tvf "$arc" | grep '\.jar$' && tar -xvf "$arc" --wildcards "$pattern"
3) This works for a standard tarball but not for a Docker image tarball.
Again, I am totally new to Docker and I just want to know if there is another way of doing this.
Do the following:
Run the image as docker run -it yourimage /bin/bash
Now you are in interactive mode.
Then use find / -name '*.jar' (if the find command is not available, install it).
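If you do not need an interactive shell, the same idea works as a one-liner (a sketch, assuming find exists in the image and its entrypoint does not get in the way):
docker run --rm yourimage find / -name '*.jar'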
The Docker client has an export command, which operates on a container (you can create one from your image without running it). From a shell that has the environment configured to use the Docker client, run
docker export <your container> | tar tf - | grep etc/issue
which yields the following:
etc/issue
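To search for jars in the image itself without starting anything, the same idea can be adapted as follows (a sketch; "myimage" is a placeholder, and docker create only creates the container, it does not run it):
# create a stopped container, list its whole filesystem, grep for jars, clean up
# (if the image defines no default command, append a dummy one, e.g. true)
cid=$(docker create myimage)
docker export "$cid" | tar tf - | grep '\.jar$'
docker rm "$cid"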
How to create a Dockerfile from scratch for a Windows EXE file which does not have any dependencies?
I just did it on Linux, but cannot find how to do it on Windows (Docker).
On Linux I did:
FROM scratch
ADD hello /
ENTRYPOINT ["/hello"]
g++ -o hello -static hello.cc
and it worked.
But how do I make it work on Windows?
Why is it impossible?
How does Microsoft create their base images?
There is no scratch base image for Windows docker containers.
Microsoft publishes base images on hub.docker.com that you can use as an alternative.
The first line changes the escape character from \ to ` so that Windows-style paths do not have to be escaped.
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
ADD hello.exe C:\hello.exe
ENTRYPOINT ["C:\\hello.exe"]
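A hypothetical build-and-run of the above on a Windows Docker host (the hello-win tag is just an example name):
docker build -t hello-win .
docker run --rm hello-win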
I think I found an answer to how Microsoft creates base images:
docker import creates an image from a tarball that is not itself an image, just a filesystem you want to import as an image.
Like this:
tar --numeric-owner --exclude=/proc --exclude=/sys -cvf centos6-base.tar /
cat centos6-base.tar | docker import - centos6-base
docker run -i -t centos6-base cat /etc/redhat-release
I am getting this error:
pandoc: sh: openBinaryFile: does not exist (No such file or directory)
when trying to build some assets with Pandoc in a GitLab CI bash script.
I have a repo, Finnito/Science, that serves a GitLab Pages site using Hugo. I am trying to set up a GitLab CI pipeline to build my HTML slides and PDF docs from my Markdown source when I commit to the repo, so that I don't have to build them locally.
I have been trying out different Docker images of pandoc, but decided pandoc/latex is my best bet because it's "official" and built on Alpine, which is nice and lightweight. But I can't seem to make heads or tails of this error.
I have tried various different incantations for pandoc but they don't seem to work.
My GitLab CI job looks like this:
assets:
  image: pandoc/latex
  script:
    - chmod +x ci-build.sh
    - sh ci-build.sh
and my ci-build.sh script looks like this:
#!/bin/sh
modulesToBuild=(
    "/builds/Finnito/science/content/10sci/5-fire-and-fuels"
    "/builds/Finnito/science/content/10scie/6-geology"
    "/builds/Finnito/science/content/11sci/4-mechanics"
    "/builds/Finnito/science/content/11sci/5-genetics"
    "/builds/Finnito/science/content/12phy/2-mechanics"
    "/builds/Finnito/science/content/12phy/3-electricity"
)

for i in "${modulesToBuild[@]}"; do
    # Navigate to the directory.
    cd $i

    # Build the HTML slides and
    # PDFs for all markdown docs.
    for filename in markdown/*.md; do
        file=${filename##*/}
        name=${file%%.*}
        pandoc/latex pandoc -s --mathjax -i -t revealjs "markdown/$name.md" -o "$name.html"
        pandoc/latex pandoc "markdown/$name.md" -o "$name.pdf" --pdf-engine=pdflatex
    done
done
Honestly, I'm just pretty lost with how to successfully call pandoc within the Docker container. I am very new to this and it all makes very little sense!
Any help would be most appreciated!
The image has /usr/bin/pandoc set as its entry point. This means that one does not have to specify the pandoc command when running the container; if you do provide a command, pandoc will try to read an input file with the name of that command, which causes the error you are seeing.
For a while, the images used a custom entrypoint script which tried to detect whether a different binary should be executed, but this was reverted as it proved unreliable and confusing.
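A quick local illustration of that behaviour (a sketch run outside CI, assuming Docker is installed; it just uses the pandoc/latex image from the question):
docker run --rm pandoc/latex --version          # OK: the arguments go straight to the pandoc entrypoint
docker run --rm pandoc/latex pandoc --version   # fails: pandoc treats "pandoc" as an input file name
In GitLab CI the usual fix is to clear the image's entrypoint (the image: keyword accepts an entrypoint option, e.g. entrypoint: [""]) so the job script runs under a plain shell and ci-build.sh can call pandoc directly.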
Currently I am running Docker with more than 15 containers with various apps. I am at the point where I am getting sick and tired of looking in my docs, every time, for the command I used to create each container. While trying to create scripts and alias commands to make this procedure easier, I ran into this problem:
Is there a way to get the container's name from the host's shared folder?
For example, I have a directory "MyApp" and inside this I start a container with a shared folder "shared". It would be perfect if:
a. I had a global script somewhere with an alias command set for it, and
b. I could just run something like "startit"/"stopit"/"rmit" from any of my "OneOfMyApps" directories and their subdirectories. I would like to skip the docker ps -> copy the name -> etc. routine every time, and just get the container's name from the script. Any ideas?
Well, one solution would be to use environment variables to pass the name into the container and use some pre-determined file in the volume to store the name. So, you would create the container with the -e flag:
docker create --name myapp -e NAME=myapp myappimage
And inside the image entry point script you would have something like
cd /shared/volume
echo $NAME >> .containers
And in your shell script you would do something like
function stopit() {
    for name in `cat .containers`; do
        docker stop $name;
    done;
}
But this is a bit fragile. If you are going to script the commands anyway, I would suggest using docker ps to get the list of containers and then docker inspect to find which ones use this particular shared volume. You can do all of it inside the script, so that should not be a problem.
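A rough sketch of that docker ps / docker inspect combination (the "shared" folder name and running it from the app directory come from the question; adjust to taste):
# stop every running container that bind-mounts ./shared from the current directory
dir="$(pwd)/shared"
for id in $(docker ps -q); do
    if docker inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' "$id" | grep -qx "$dir"; then
        docker stop "$id"
    fi
done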
I would like to have a synchronized copy of one folder with all its subtree.
It should work automatically in this way: whenever I create, modify, or delete stuff from the original folder those changes should be automatically applied to the sync-folder.
Which is the best approach to this task?
BTW: I'm on Ubuntu 12.04
The final goal is to have a separate, real-time backup copy, without the use of symlinks or mounts.
I used Ubuntu One to synchronize data between my computers, and after a while something went wrong and all my data was lost during a synchronization.
So I thought to add a step further to keep a backup copy of my data:
I keep my data stored on a "folder A"
I need the answer to my current question to create a one-way sync of "folder A" to "folder B" (a cron job running an rsync script, maybe?). It needs to be one-way only, from A to B; any changes to B must not be applied to A.
Then I simply keep "folder B" synchronized with Ubuntu One.
In this manner, any change in A will be applied to B, which will be detected by U1 and synchronized to the cloud. If anything goes wrong and U1 deletes my data on B, I still have it on A.
Inspired by lanzz's comments, another idea could be to run rsync at startup to back up the content of a folder under Ubuntu One, and start Ubuntu One only after rsync has completed.
What do you think about that?
How to know when rsync ends?
You can use inotifywait (with the modify,create,delete,move flags enabled) and rsync.
while inotifywait -r -e modify,create,delete,move /directory; do
rsync -avz /directory /target
done
If you don't have inotifywait on your system, run sudo apt-get install inotify-tools
You need something like this:
https://github.com/axkibe/lsyncd
It is a tool which combines rsync and inotify: the former mirrors, with the correct options set, a directory down to the last bit; the latter tells the kernel to notify a program of changes to a directory or file.
It says:
It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes.
But - according to Digital Ocean at https://www.digitalocean.com/community/tutorials/how-to-mirror-local-and-remote-directories-on-a-vps-with-lsyncd - it ought to be in the Ubuntu repository!
I have similar requirements, and this tool, which I have yet to try, seems suitable for the task.
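A minimal sketch of using it, assuming lsyncd really is available in the Ubuntu repository as that article suggests (the paths are the same placeholders as above; a config file gives more control than the command-line shorthand):
sudo apt-get install lsyncd
lsyncd -rsync /directory /target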
Just a simple modification of @silgon's answer:
while true; do
inotifywait -r -e modify,create,delete /directory
rsync -avz /directory /target
done
(@silgon's version sometimes crashes on Ubuntu 16 if you run it from cron)
Using the cross-platform fswatch and rsync:
fswatch -o /src | xargs -n1 -I{} rsync -a /src /dest
You can take advantage of fschange. It is a Linux filesystem change notification mechanism. The source code is downloadable from the above link, and you can compile it yourself. fschange can be used to keep track of file changes by reading data from a proc file (/proc/fschange). When data is written to a file, fschange reports the exact interval that has been modified, instead of just saying that the file has been changed.
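A purely illustrative sketch of that idea, assuming the patch is compiled in and that reads from /proc/fschange block until an event is reported (check the fschange documentation for the real event format):
# re-run rsync whenever fschange reports a change
while read -r event; do
    rsync -a /directory /target
done < /proc/fschange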
If you are looking for a more advanced solution, I would suggest checking out Resilio Connect.
It is cross-platform and provides extended options for use and monitoring. Since it is BitTorrent-based, it is faster than other existing sync tools. (Disclosure: this was written on their behalf.)
I use this free program to synchronize local files and directories: https://github.com/Fitus/Zaloha.sh. The repository contains a simple demo as well.
The good point: it is a bash shell script (one file only), not a black box like other programs. Documentation is there as well. Also, with some technical talent, you can "bend" and "integrate" it to create the final solution you like.
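For example, a one-way backup run might look something like this (the flag names are from my memory of the project's README, so treat them as assumptions and check the repository):
# mirror folder A into folder B, Zaloha-style
bash Zaloha.sh --sourceDir="/path/to/folderA" --backupDir="/path/to/folderB"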
I have a series of files named filename.part0.tar, filename.part1.tar, … filename.part8.tar.
I guess tar can create multiple volumes when archiving, but I can't seem to find a way to unarchive them on Windows. I've tried to untar them using 7zip (GUI & commandline), WinRAR, tar114 (which doesn't run on 64-bit Windows), WinZip, and ZenTar (a little utility I found).
All programs run through the part0 file, extracting 3 rar files, then quit reporting an error. None of the other part files are recognized as .tar, .rar, .zip, or .gz.
I've tried concatenating them using the DOS copy command, but that doesn't work, possibly because part0 through part6 and part8 are each 100 MB, while part7 is 53 MB and therefore likely the last part. I've tried several different logical orders for the files in concatenation, but no joy.
Other than installing Linux, finding a live distro, or tracking down the guy who left these files for me, how can I untar these files?
Install 7-zip. Right click on the first tar. In the context menu, go to "7zip -> Extract Here".
Works like a charm, no command-line kung-fu needed:)
EDIT:
I only now noticed that you mention already having tried 7zip. It might have balked if you tried to "open" the tar by going "open with" -> 7zip; their command line for opening files is a little unorthodox, so you have to associate via 7zip instead of via the file-association system built into Windows. If you try right click -> "7-Zip" -> "Extract Here", though, that should work. I tested the solution myself (albeit on a 32-bit Windows box; I don't have a 64-bit one available).
1) Download gzip for Windows from http://www.gzip.org/ and unpack it
2) gzip -c filename.part0.tar > foo.gz
gzip -c filename.part1.tar >> foo.gz
...
gzip -c filename.part8.tar >> foo.gz
3) Unpack foo.gz
Worked for me.
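Spelling out step 3 (a sketch; gzip transparently handles the concatenated members and decompresses them back into one continuous tar):
gzip -d foo.gz    # produces "foo", the reassembled tar
tar -xf foo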
As above, I had the same issue and ran into this old thread. For me it was a severe case of RTFM when installing a Siebel VM. These instructions were straight from the manual:
cat \
OVM_EL5U3_X86_ORACLE11G_SIEBEL811ENU_SIA21111_PVM.tgz.1of3 \
OVM_EL5U3_X86_ORACLE11G_SIEBEL811ENU_SIA21111_PVM.tgz.2of3 \
OVM_EL5U3_X86_ORACLE11G_SIEBEL811ENU_SIA21111_PVM.tgz.3of3 \
| tar xzf -
Worked for me!
The tar -M switch should do it for you on Windows (I'm using tar.exe).
tar --help says:
-M, --multi-volume create/list/extract multi-volume archive
I found this thread because I had the same problem with these files. Yes, the same exact files you have. Here's the correct order: 042358617 (i.e. start with part0, then part4, etc.)
Concatenate in that order and you'll get a tarball you can unarchive. (I'm not on Windows, so I can't advise on what app to use.) Note that of the 19 items contained therein, 3 are zip files that some unarchive utilities will report as being corrupted. Other apps will allow you to extract 99% of their contents. Again, I'm not on Windows, so you'll have to experiment for yourself.
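On a Unix-like system, concatenating in that order would look something like this (a sketch; brace expansion keeps the listed order):
cat filename.part{0,4,2,3,5,8,6,1,7}.tar > filename.full.tar
tar -xf filename.full.tar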
Enjoy! ;)
This works well for me with multi-volume tar archives (numbered .tar.1, .tar.2 and so on) and even allows me to --list or --get specific folders or files in them:
#!/bin/bash
TAR=/usr/bin/tar
ARCHIVE=bkup-01Jun
RPATH=home/user
RDEST=restore/
EXCLUDE=.*
mkdir -p $RDEST
$TAR vf $ARCHIVE.tar.1 -F 'echo '$ARCHIVE'.tar.${TAR_VOLUME} >&${TAR_FD}' -C $RDEST --get $RPATH --exclude "$EXCLUDE"
Copy to a script file, then just change the parameters:
TAR=location of tar binary
ARCHIVE=Archive base name (without .tar.multivolumenumber)
RPATH=path to restore (leave empty for full restore)
RDEST=restore destination folder (relative or absolute path)
EXCLUDE=files to exclude (with pattern matching)
An interesting thing for me is that you really DON'T use the -M option here, as that would only prompt you with questions (insert next volume, etc.).
Hello, perhaps this will help.
I had the same problem: a backup of my web site, made automatically on CentOS at 4 am, creates multiple files in multi-volume tar format (saveblabla.tar, saveblabla.tar1.tar, saveblabla.tar2.tar, etc.).
After downloading these files to my PC (Windows), I couldn't extract them with either the Windows cmd or 7zip (unknown error).
I first did a binary copy of the files to reassemble the tar (as described above in this thread):
copy /b file1+file2+file3 destination
After that, 7zip worked! Thanks for your help.