Suppose I have a monorepo. The apps (app1, app2) use Gradle as their build system and share some build logic via includeBuild("../shared-build-logic"), which lives outside the root of each app.
├── shared-build-logic
│   └── src/...
├── app1
│   ├── Earthfile
│   ├── build.gradle
│   └── src/...
└── app2
    ├── Earthfile
    ├── build.gradle
    └── src/...
Is it possible for an Earthfile to access files from outside of its root folder, or does Earthly have the same restrictions as a Dockerfile?
I get the following error when I attempt to COPY ../shared-build-logic ./:
============================ ❌ FAILURE [2. Build 🔧] ============================
Repeating the output of the command that caused the failure
+compile *failed* | --> COPY ../shared-build-logic ./
+compile *failed* | [no output]
+compile *failed* | ERROR Earthfile line 22:4
+compile *failed* | The command
+compile *failed* | COPY ../shared-build-logic ./
+compile *failed* | failed: "/shared-build-logic": not found
I would also like to perform integration testing with the docker-compose.yaml file located one level above the Earthfile root, but I'm facing a similar problem:
integration-tests:
    FROM earthly/dind:alpine
    COPY ../docker-compose.yaml ./   # <------- does not work
    WITH DOCKER --compose docker-compose.yaml --load build-setup=+compile --allow-privileged
        RUN docker run -e SPRING_DATA_MONGODB_URI=mongodb://mongodb:27017/test-db --network=default_dev-local build-setup ./gradlew test
    END
Is my only solution to move the Earthfile itself one level up?
While you can't directly access files outside of your Earthfile's directory, you can reference targets in other Earthfiles. This lets you write a target in an Earthfile under shared-build-logic that saves the files you need as an artifact, and then copy that artifact from each app's Earthfile.
shared-build-logic/Earthfile
files:
    WORKDIR files
    # Copy all of the files you want to share into the build environment;
    # SAVE ARTIFACT saves from the build environment, so a COPY is needed first
    COPY . .
    SAVE ARTIFACT ./*
app/Earthfile
use-files:
    COPY ../shared-build-logic+files/* .
    # do stuff
You should be able to do something similar with your integration-tests target.
Earthfile (next to docker-compose.yaml):
files:
    WORKDIR files
    # The compose file must be in the build environment before it can be saved
    COPY docker-compose.yaml .
    SAVE ARTIFACT docker-compose.yaml
folder-with-integration-tests/Earthfile
integration-tests:
    FROM earthly/dind:alpine
    COPY ../+files/docker-compose.yaml ./
    WITH DOCKER --compose docker-compose.yaml --load build-setup=+compile --allow-privileged
        RUN docker run -e SPRING_DATA_MONGODB_URI=mongodb://mongodb:27017/test-db --network=default_dev-local build-setup ./gradlew test
    END
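You would then invoke Earthly from the repo root with something like the following (a sketch; -P allows the privileged WITH DOCKER block, and the folder name matches the hypothetical layout above):
earthly -P ./folder-with-integration-tests+integration-tests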
Related
I'm trying to get to grips with working with workspaces in go 1.18, and how to make it work well in a monorepo.
As a minimum example, I have the following project structure:
.
├── docker-compose.yml
├── go.work
├── makefile
└── project
    ├── go.mod
    ├── module1
    │   ├── Dockerfile
    │   ├── go.mod
    │   └── main.go
    ├── module2
    │   ├── Dockerfile
    │   ├── go.mod
    │   └── main.go
    └── shared-module
        ├── go.mod
        └── shared.go
module1 and module2 can both be built into binaries with their respective main.go files. However, module1 uses shared-module, but module2 does not.
When building the binaries without Docker, I can cd into module1 or module2 and run go build and everything works fine. However, problems occur when I want to use Docker for the builds.
Here is the Dockerfile for module1:
# FIRST STAGE
FROM golang:1.18 AS builder
WORKDIR /app/
# copy workfiles
COPY go.* ./
WORKDIR /app/project/
# dependency management files
COPY project/go.* ./
COPY project/module1/go.* ./module1/
COPY project/shared-module/go.* ./shared-module/
WORKDIR /app/project/module1/
RUN go mod download
WORKDIR /app/project/
# copy shared module
COPY project/shared-module/ ./shared-module/
# copy module to compile
COPY project/module1/ ./module1/
WORKDIR /app/project/module1/
RUN CGO_ENABLED=0 GOOS=linux go build -o bin/module1
# SECOND STAGE
FROM alpine:3.14.2
WORKDIR /app
COPY --from=builder /app/project/module1/bin/module1 /app
ENTRYPOINT [ "./module1" ]
With this build I'm trying to maximise caching by copying files which change infrequently first. I'm also excluding any files which I don't need for the module (like module2).
When I run docker-compose build module1 to build the image using that Dockerfile, the build fails with this error:
go: open /app/project/module2/go.mod: no such file or directory
This initially surprised me, as module2 is not a dependency of either module1 or shared-module, but after a bit of consideration I realised it was because of the go.work file, which specifies ./project/module2. Removing the line in go.work that specifies that module allows the image to be built successfully.
My question is therefore: if I want to have streamlined image builds, do I have to create a separate go.work file for each of the modules I want to build in Docker? For example, I would need another go.work file for module2 which omits module1 and shared-module.
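If per-module workspace files do turn out to be the way to go, a minimal sketch could look like this (the file name go.work.module1 is an assumption, not a Go toolchain convention):
// go.work.module1 — trimmed workspace listing only what module1 needs
go 1.18

use (
	./project/module1
	./project/shared-module
)
In module1's Dockerfile you would then replace COPY go.* ./ with COPY go.work.module1 ./go.work, so the build only ever sees the modules it needs.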
I'm running into the problem that my Gradle wrapper will only find subprojects if I execute it from the project directory itself. For example:
Let's say the project structure is as follows:
.
├── app
│   ├── build.gradle
│   └── ...
├── build.gradle
├── gradlew
├── settings.gradle
└── ...
It makes a difference whether I run gradlew from its own directory or from a different one. If I run:
$ ./gradlew projects
> Task :projects
------------------------------------------------------------
Root project
------------------------------------------------------------
Root project 'com.name'
+--- Project ':app'
it has no problem finding :app. However, if I execute gradlew from one folder up, it cannot find it:
$ cd ..
$ ./android/gradlew projects
> Task :projects
------------------------------------------------------------
Root project
------------------------------------------------------------
Root project 'com'
No sub-projects
It can't find the projects. This is problematic for me since I need to run a task in :app from a pipeline with a different working directory, e.g. ./xx/yy/gradlew app:publishTask. Doing it this way, Gradle can't find the task because it can't find the project. Is there a way to run these commands from any location?
Yes, there is. You have to:
store your current location in a temporary variable
change location to the project directory
run ./gradlew
restore the directory from the variable
For example:
TMP_DIR=$(pwd)
cd /path/to/project
./gradlew projects
cd "$TMP_DIR"
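A subshell achieves the same thing without having to save and restore the directory yourself (a minor variation, not from the original answer):
(cd /path/to/project && ./gradlew projects)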
In my Maven application I have multiple projects:
Core
Application 1
Application 2
Application 1 and Application 2 are two projects that use the core (for example, they are built for two different customers).
In order to dockerize all of them, the simplest way would be to create a multi-module project, but the downside is that I'd have everything inside a single project (core + Application 1 + Application 2).
I would like to have the core separated from them.
The main problem with this configuration is that the core project needs to be built before the other two, since App 1 and App 2 use it as a Maven dependency:
App 1
<dependency>
    <groupId>it.myorg</groupId>
    <artifactId>core-project</artifactId>
    <version>1.12.0-SNAPSHOT</version>
</dependency>
If I try to dockerize App 1, it fails when I package it, because inside the Docker container core-project 1.12.0-SNAPSHOT does not exist.
I was thinking of setting up a "local Maven repo", pushing the core there so App 1 would "pull" the jar from the repo rather than from the .m2 folder, but I don't like this solution.
I can provide more information; sorry I don't have examples, but I'm drawing a blank right now :(
Folder structure
+- Core
   --- pom.xml
   --- src
+- Application1
   --- pom.xml
   --- src
The solution I'm trying now is to create a Dockerfile for the core project (FROM maven:latest), build that image with a tag, and then use the image in App 1's Dockerfile (effectively a multi-stage build, but in two separate moments).
The best would be
FROM maven:latest as core-builder
## build the core
FROM maven:latest
## Copy jar from builder
Because the projects are in separate folders, I can't build the core this way. I would need to build the core BEFORE (running docker build -t) and copy from it later.
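Concretely, the two separate moments would look roughly like this (the image name and paths are illustrative):
docker build -t core-builder ./Core
and then App 1's Dockerfile could pull the pre-built image in as a stage:
FROM core-builder as core
## core is already built and installed into /root/.m2 here
FROM maven:latest
COPY --from=core /root/.m2 /root/.m2
## build App 1 as usual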
UPDATE
After the correct answer from @mihai, I'm asking whether a structure like this is possible:
-- myapp-docker
   - Dockerfile
   - docker-compose.yml
-- core-app
-- application_1
Having the Dockerfile at the same level as core-app and application_1 is totally fine and 100% working. The only "problem" is that I'd have to put all the projects in the same repo.
This is the proposed solution with multi-stage builds.
To replicate your setup I created this structure:
.
├── Dockerfile-app1
├── application1
│   ├── pom.xml
│   └── src
│       └── main
│           ├── resources
│           └── webapp
│               ├── WEB-INF
│               │   └── web.xml
│               └── index.jsp
└── core
    ├── pom.xml
    └── src
        ├── main
        │   └── java
        │       └── com
        │           └── test
        │               └── App.java
        └── test
            └── java
                └── com
                    └── test
                        └── AppTest.java
In Application 1's pom.xml I added the dependency on core:
<dependency>
    <groupId>com.test</groupId>
    <artifactId>core</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
I named the Dockerfile Dockerfile-app1 so that you can have more than one of them.
This is the Dockerfile-app1:
FROM maven:3.6.0-jdk-8 as build
WORKDIR /apps
COPY ./core .
RUN mvn clean install
FROM maven:3.6.0-jdk-8
# If you comment this out then the build fails because it cannot find the dependency to 'core'
COPY --from=build /root/.m2 /root/.m2
COPY ./application1 ./
RUN mvn clean install
You should probably add an entrypoint at the end to run your project, or even better add a third stage that only copies the generated artefacts and runs your project (this way the final image will not have your sources in it); a sketch of such a stage follows at the end of this answer.
The first stage only builds the core submodule.
The second stage uses the results of the first stage, copies only the source for application1, and builds it.
You can easily replicate this for application2 by creating a similar file Dockerfile-app2.
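A minimal sketch of that third stage, assuming you name the second stage by making its first line FROM maven:3.6.0-jdk-8 as app-build; the Tomcat base image and the artifact path are assumptions, so adjust them to your actual packaging:
FROM tomcat:9-jre8
# Copy only the packaged artifact; sources and the local .m2 repo stay behind
COPY --from=app-build /target/application1.war /usr/local/tomcat/webapps/application1.war
Each app image is then built by pointing docker build at the right file:
docker build -f Dockerfile-app1 -t application1 .
docker build -f Dockerfile-app2 -t application2 .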
Since you're using Maven, try dockerfile-maven to build the image. You don't want any of your build information inside of your image (like what the dependencies are); you should just add the jar at the end. I usually use it together with spring-boot-maven-plugin and repackage, to get a fully self-contained jar.
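For reference, wiring in dockerfile-maven usually means adding something like this to the app's pom.xml (the repository name and version are illustrative; check the plugin's README for current coordinates):
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.13</version>
    <configuration>
        <repository>myorg/application1</repository>
        <tag>${project.version}</tag>
    </configuration>
</plugin>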
I am using Yocto 2.3 to build my device image.
My image includes packagegroup-core-boot that, in turn, includes busybox.
IMAGE_INSTALL = "\
....
packagegroup-core-boot \
Busybox is configured to include syslogd:
CONFIG_SYSLOGD=y
CONFIG_FEATURE_ROTATE_LOGFILE=y
CONFIG_FEATURE_REMOTE_LOG=y
CONFIG_FEATURE_SYSLOGD_DUP=y
CONFIG_FEATURE_SYSLOGD_CFG=y
CONFIG_FEATURE_SYSLOGD_READ_BUFFER_SIZE=256
CONFIG_FEATURE_IPC_SYSLOG=y
CONFIG_FEATURE_IPC_SYSLOG_BUFFER_SIZE=64
CONFIG_LOGREAD=y
CONFIG_FEATURE_LOGREAD_REDUCED_LOCKING=y
CONFIG_FEATURE_KMSG_SYSLOG=y
CONFIG_KLOGD=y
It is built and installed correctly.
The relevant syslog files do appear in the busybox image directory:
tmp/work/armv5e-poky-linux-gnueabi/busybox/1.24.1-r0/image$ tree etc/
etc/
├── default
├── init.d
│   └── syslog.busybox
├── syslog.conf.busybox
├── syslog-startup.conf.busybox
These files don't appear in my main image rootfs, though; only the syslogd command is included. See the output on the target device:
# ls -l $( which syslogd )
lrwxrwxrwx 1 root root 19 Jan 10 12:31 /sbin/syslogd -> /bin/busybox.nosuid
What can be happening that makes these files not be included in the final image?
An additional question:
As shown in the tree output, the init script for syslog is included in busybox, but no link to /etc/rc?.d/ is created.
I understand that it should be created by a do_install() hook, shouldn't it?
Thanks in advance.
EDIT
The contents of packages-split, as @Anders says, seem OK:
poky/build-idprint/tmp/work/armv5e-poky-linux-gnueabi/busybox/1.24.1-r0$ tree packages-split/busybox-syslog/
packages-split/busybox-syslog/
└── etc
    ├── init.d
    │   ├── syslog
    │   └── syslog.busybox
    ├── syslog.conf
    ├── syslog.conf.busybox
    ├── syslog-startup.conf
    └── syslog-startup.conf.busybox
I just can't figure out what is stripping these files out of the final image.
Check tmp/work/armv5e-poky-linux-gnueabi/busybox/1.24.1-r0/packages-split. This is where all files are split into the packages that will be generated. If you search that directory, you'll find e.g. syslog.conf in the busybox-syslog package.
Thus, in order to get those files into your image, you'll need to add busybox-syslog to your image. I.e. IMAGE_INSTALL += "busybox-syslog".
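To double-check which package a file was split into without digging through the work directory, something like this should work from the build environment (oe-pkgdata-util ships with openembedded-core):
$ oe-pkgdata-util find-path /etc/syslog.conf
# expected to report busybox-syslog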
I want to include a specific version of Gradle in the project folder so that when I use the Gradle wrapper it doesn't download it from the remote repository.
I downloaded the version of Gradle I need (gradle-4.0-bin.zip) and put that zip file inside the gradle/wrapper/ folder of the project (created with the gradle wrapper command).
Then I edited the gradle-wrapper.properties file in this way:
distributionUrl=file:///Users/path/to/the/project/gradle/wrapper/gradle-4.0-bin.zip
But when I run the first command, such as gradle task, it returns:
What went wrong: A problem occurred configuring root project '03-gradle-wrapper-local'.
java.io.FileNotFoundException: /Users/myself/.gradle/wrapper/dists/gradle-4.0-bin/3p92xsbhik5vmig8i90n16yxc/gradle-4.0/lib/plugins/gradle-diagnostics-4.0.jar
(No such file or directory)
How do I tell Gradle to get the zip file from the current project folder, with a relative path, instead of downloading it, and to use that zip file to create a wrapper to be used in my builds?
Leaving aside whether storing the Gradle wrapper locally makes sense or not, it is possible. I assume the gradle-4.0-rc-3-bin distribution is used.
Here is the project structure:
.
├── gradle
│   └── wrapper
│       ├── gradle-4.0-rc-3-bin.zip
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
└── gradlew.bat
And here the content of gradle-wrapper.properties:
distributionBase=PROJECT
distributionPath=gradle
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=gradle-4.0-rc-3-bin.zip
Since the wrapper distribution will be unpacked into the project directory, adding gradle/gradle-4.0-rc-3-bin to your SCM ignore file is recommended.
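With that in place, running the wrapper script (not a system-wide gradle) should pick the distribution up locally, e.g.:
./gradlew projects
Note this relies on the relative distributionUrl being resolved against the location of gradle-wrapper.properties, which, as far as I know, is what the wrapper does.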