I've seen quite a few posts relating to Docker security, especially with supply chain attacks in the news recently, about limiting your Docker base images to images that you trust.
However, I'm finding it difficult to find information on how to actually do this other than maybe some kind of Dockerfile parsing. Perhaps we could inspect an image and find that one of the layers contains the sha256 of a base image we trust.
What about in multistage builds? Whatever image we used to build our package should be trusted as well.
Does anyone have any suggestions, experiences, or tools to help ensure that only images that have been approved can be used as the base image for a final image, and in multi-stage builds as well? Basically, any "FROM" should reference an image that we can approve.
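For illustration, the kind of Dockerfile parsing I had in mind would be roughly this (approved-images.txt is a hypothetical allowlist; this sketch ignores --platform flags and FROM lines that reference earlier build stages):
# collect every FROM image across all stages and compare against the allowlist
grep -ih '^FROM' Dockerfile | awk '{print $2}' | sort -u | while read -r img; do
  grep -qxF "$img" approved-images.txt || echo "unapproved base image: $img"
done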
You can enable content trust for docker build, which should verify base image integrity.
https://docs.docker.com/engine/security/trust/trust_automation/#build-with-content-trust
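A minimal sketch of what that looks like (the image name is hypothetical); with content trust enabled, the build fails if the image in FROM is not signed:
export DOCKER_CONTENT_TRUST=1
docker build -t myorg/myapp:latest .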
Related
I'm using Gatsby.js and gatsby-image to build a website that currently has about 300 images on it. I'm encountering 2 problems:
1. gatsby develop and gatsby build take many minutes to run because gatsby-image generates multiple resolutions and SVG placeholder images for every image on the site. This makes for a great user experience once that pre-optimization is done, but a very slow development experience if I ever need to rebuild. My current workaround is to remove all but a few images during development.
2. Deploying to GitHub Pages takes too long with so many images (300 base images * 3 resolutions + 1 SVG representation); the deploy times out. I'm going to attempt deploying to Netlify instead, but I anticipate the same problem. I also don't want to have to re-upload the images every time I make a change to the website.
I don't feel like my <1000 images should qualify as "image heavy", but given poor to mediocre upload speeds, I need a way to upload them incrementally and not re-upload images that have not changed.
Is there a way to upload images separately from the rest of a build for a Gatsby website?
I think maybe I could get something working with AWS S3, manually choosing which files from my build folder I upload when creating a new deploy.
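For example, aws s3 sync compares the bucket against the local build output and only uploads files that have changed (the bucket name is hypothetical):
# public/ is Gatsby's build output directory
aws s3 sync public/ s3://my-gatsby-site --delete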
Anyone else dealt with handling an image-heavy Gatsby site? Any recommendations for speeding up my build and deploy process?
Disclaimer: I work for Netlify
Our general recommendation is to do the image optimization locally and check those files into GitHub, since that work is repetitive and can take longer than our CI allows (15 minutes).
There is also an npm module, https://www.npmjs.com/package/cache-me-outside, that lets you cache the things you've built alongside your dependencies; it may do that for you without abusing GitHub (instead abusing Netlify's cache :)).
See another answer from Netlify: use smaller source images (as mentioned by @fool) or offload to a service like Cloudinary or Imgix.
Currently I'm doing a POC with Docker on Windows Server 2016. I just want to know how to build our own image.
Currently we are using
docker pull microsoft/windowsservercore
to pull the base image, but for security reasons we should not download images from a public repository, so we need to build our own Windows images.
Is it possible to build our own image without downloading one? If yes, how can we build our own Windows Server images?
There are plenty of ways to build a base image; you can use a tar archive or start FROM scratch.
Below is an example using scratch:
FROM scratch
ADD helloworld.sh /usr/local/bin/helloworld.sh
CMD ["/usr/local/bin/helloworld.sh"]
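For the tar approach, a minimal sketch (the rootfs directory and image name are hypothetical):
# package an existing root filesystem and import it as a fresh base image
tar -C rootfs -c . | docker import - mycompany/mybase:1.0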
See the Docker documentation on creating a base image for more information.
I think you'll have to set up your own company Docker registry. After setting it up, you can "import" the windowsservercore image into the private registry. See this link for further explanation.
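A rough sketch of that workflow (the registry hostname is hypothetical):
docker pull microsoft/windowsservercore
docker tag microsoft/windowsservercore registry.mycompany.local/windowsservercore
docker push registry.mycompany.local/windowsservercore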
You can use one downloaded image as the base image and then customize that image as per your needs. You should refer to the Dockerfile reference for information on configuring your own image.
Is there a way to obtain the docker parent image tree for a given image? I know
docker history IMG_NAME will provide an image ID for the current image you're working with, but everything else is missing. I've read this was taken out in v1.10 for security concerns, but it seems like a larger concern to not be able to verify the tree of images that a final image was created from.
The other thing I've found is docker save IMG_NAME -o TAR_OUTPUT.tar, which will let you view all of the files in each layer, but that seems pretty tedious.
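For example (the exact layout inside the tar depends on the Docker version):
docker save IMG_NAME -o TAR_OUTPUT.tar
tar -tf TAR_OUTPUT.tar                                   # lists manifest.json plus one tarball per layer
tar -xOf TAR_OUTPUT.tar <layer>/layer.tar | tar -tf -    # list the files inside a single layer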
How can I be assured that the only things modified in a given image for a piece of software are the installation and configuration of the software itself? It seems that being able to see the changes in the Dockerfiles used to generate each successive image would be an easy way to verify this.
Apart from what has been said by chintan thakar, you will have to iterate, maybe several times.
An example should clarify this
Suppose you want to dig into an image, and the Dockerfile to create this image starts with
FROM wordpress
so you go to
https://hub.docker.com/_/wordpress/
have a look at the Dockerfile, and you notice that
https://github.com/docker-library/wordpress/blob/0a5405cca8daf0338cf32dc7be26f4df5405cfb6/php5.6/apache/Dockerfile
starts with
FROM php:5.6-apache
so you go to the PHP 5.6 reference at
https://github.com/docker-library/php/blob/eadc27f12cfec58e270f8e37cd1b4ae9abcbb4eb/5.6/apache/Dockerfile
and you find the Dockerfile starts with
FROM debian:jessie
so you go to the Dockerfile of Debian jessie at
https://github.com/debuerreotype/docker-debian-artifacts/blob/af5a0043a929e0c87f7610da93bfe599ac40f29b/jessie/Dockerfile
and notice that this image is built like this
FROM scratch
ADD rootfs.tar.xz /
CMD ["bash"]
So you will need to follow the chain like this if you want to see where all the files come from.
If a security issue is announced, you will also need to do this in order to know whether you are affected.
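Each hop can be checked quickly by fetching the raw Dockerfile and reading its FROM line, e.g. for the wordpress Dockerfile linked above:
curl -s https://raw.githubusercontent.com/docker-library/wordpress/0a5405cca8daf0338cf32dc7be26f4df5405cfb6/php5.6/apache/Dockerfile | grep -i '^FROM'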
Say I use a Dockerfile to build an image from ubuntu:14.04, then install some things, add some code, then push to a repo where the image will be deployed for testing and eventually production.
My image works out to be > 2 GB. Most of that is the underlying and unchanging ubuntu:14.04 base layer.
Instead of shipping around my bloated image containing the ubuntu:14.04 base layer, theoretically I should be able to ensure my target systems already have this image, and ship around just my higher-level changes, which would be applied on top.
(Of course, if the underlying image changed, I'd have to ensure the latest version was available on the target systems also.)
Can we do this?
There is a tool called 'bub' that can split images into chunks and ship only the differences. Check it out.
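Note also that Docker registries already deduplicate at the layer level: pushing an image whose base layers are already on the registry uploads only the new layers, and docker pull on the target downloads only the layers it is missing. A quick sketch (the registry hostname is hypothetical):
docker tag myapp:latest registry.mycompany.local/myapp:latest
docker push registry.mycompany.local/myapp:latest   # prints "Layer already exists" for the ubuntu:14.04 layers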
Just wanted to know if the Docker image named dockstore-tool-samtools-index, which is available at https://quay.io/repository/cancercollaboratory/dockstore-tool-samtools-index
and given as an input to the Google Genomics API (pipelines.create), contains genome tools such as GATK/BWA or Cromwell.
Any help regarding this will be appreciated.
Thanks.
It does not appear to contain those additional tools: https://github.com/CancerCollaboratory/dockstore-tool-samtools-index/blob/master/Dockerfile
Here's how to check:
1. Find the Docker container on https://dockstore.org/search-containers.
2. Click on the "GitHub" link in the row with the container of interest.
3. Read the contents of the Dockerfile to see what the image will contain.
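You can also check from the image itself; a quick sketch (the --entrypoint override is only there in case the image defines one):
docker pull quay.io/cancercollaboratory/dockstore-tool-samtools-index
docker run --rm --entrypoint which quay.io/cancercollaboratory/dockstore-tool-samtools-index samtools bwa gatk
# prints a path for each tool that is installed; missing tools produce no output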
One aspect of Docker images is that they usually try to have only one specific tool installed on them. This way the images are kept as small as possible with the idea that they can be used like modules.
There are images listed in the Dockstore search link provided by Nicole above which have BWA installed. Cromwell usually launches Docker containers rather than being installed on a Docker image, since it is more of a workflow management system. You are always welcome to create your own custom image with the preferred software packages installed to fit what you need.
Hope it helps,
Paul