How to install an extra software package in a buildpack?

I'm currently developing a Spring Native application; it builds with the Paketo buildpack and generates a Docker image.
I was wondering if it's possible to customize the generated Docker image by adding third-party tools (like a Datadog agent, for example).
Also, for now the generated container image is installed locally; is it possible to push it directly to another Docker registry?

I'm currently developing a Spring Native application; it builds with the Paketo buildpack and generates a Docker image. I was wondering if it's possible to customize the generated Docker image by adding third-party tools (like a Datadog agent, for example).
This applies to Spring Boot apps, but really to any app you can build with buildpacks.
There are a couple of options:
You can customize the base image that you use (called a stack).
You can add additional buildpacks which will perform more customizations during the build.
#2 is obviously easier if there is a buildpack that provides the functionality you require. Regarding Datadog specifically, Paketo now has a Datadog buildpack you can use with Java and Node.js apps.
It's more work, but you can also create a buildpack yourself if you are looking to add specific functionality. I wouldn't recommend this if you have just one application that needs the functionality, but if you have lots of applications it can be worth the effort.
A colleague of mine put this basic sample buildpack together, which installs and configures a fictitious APM agent. It is a pretty concise example of this scenario.
#1 is also possible. You can create your own base image and stack. The process isn't that hard, especially if you base it on a well-known and trusted image that is getting regular updates. The Paketo team also has the jam create-stack command which you can use to streamline the process.
What's more difficult with both options is that you need to keep them up to date. That requires some CI to watch for software updates and publish new versions of your buildpack or stack. If you cannot commit to this, then both are a bad idea, because your customization will get out of date and potentially cause security problems down the road.
UPDATE
You can bundle dependencies with your application. This option works well if you have static binaries you need to include, perhaps a CLI you call from your application.
In this case, you'd just create a folder in your project called binaries/ (or whatever you want) and place the static binaries in there (make sure to download versions compatible with the container image you're using; Paketo is Ubuntu Bionic at the time I write this). Then, when you call the CLI commands from your application, simply use the full path to them. That would be /workspace/binaries or /workspace/<path to binaries in your project>.
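To make that concrete, here's a rough sketch of calling a bundled binary from JVM code (the binaries/ folder and the mycli name are placeholders I picked for illustration):

import java.io.File

// Sketch: invoke a static binary bundled with the application source.
// Buildpacks copy the app into /workspace, so a file committed at
// binaries/mycli in the project ends up at /workspace/binaries/mycli.
fun runBundledCli(vararg args: String): String {
    val cli = File("/workspace/binaries/mycli")
    require(cli.canExecute()) { "expected bundled binary at ${cli.absolutePath}" }

    val process = ProcessBuilder(listOf(cli.absolutePath) + args)
        .redirectErrorStream(true) // merge stderr into stdout for simplicity
        .start()
    val output = process.inputStream.bufferedReader().readText()
    process.waitFor()
    return output
}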
You can use the apt buildpack to install packages with apt. This is a generic buildpack to which you provide a list of apt packages, and it will install them.
This can work in some cases, but the main drawback is that buildpacks don't run as root, so this buildpack cannot install packages into their standard locations. It attempts to work around this by setting env variables like PATH, LD_LIBRARY_PATH, etc. to help other applications find the packages that have been installed.
This works OK most of the time, but you may encounter situations where an application is not able to locate something you installed with the apt buildpack. Worth noting if you see problems when trying this approach.
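If you run into that, one workaround is to resolve executables through the PATH the buildpack exports instead of hard-coding a location. A minimal sketch (ffmpeg is just an example binary, not something the apt buildpack installs by default):

import java.io.File

// Sketch: find an executable by walking the PATH entries the apt buildpack
// sets, rather than assuming a standard location like /usr/bin.
fun findOnPath(executable: String): File? =
    System.getenv("PATH")
        ?.split(File.pathSeparator)
        ?.map { dir -> File(dir, executable) }
        ?.firstOrNull { it.canExecute() }

fun main() {
    val ffmpeg = findOnPath("ffmpeg")
    println(ffmpeg?.absolutePath ?: "ffmpeg not found on PATH")
}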
END OF UPDATE
For what it's worth, this is a common scenario that is a bit painful to work through. Fortunately, there is an RFC that should make the process easier in the future.
Also, for now the generated container image is installed locally; is it possible to push it directly to another Docker registry?
You can docker push it or you can add the --publish flag to pack build and it will send the image to whatever registry you tell it to use.
See https://stackoverflow.com/a/28349540/1585136 for pushing with docker push.
The --publish flag works the same way; you need to name your image [REGISTRYHOST/][USERNAME/]NAME[:TAG].

For me, what worked was adding this to my build.gradle.kts file (I'm using the Kotlin DSL):
bootBuildImage {
    val ecrRepository: String? by project
    // First the default Java buildpack, then the Datadog buildpack
    buildpacks = listOf("urn:cnb:builder:paketo-buildpacks/java", "urn:cnb:builder:paketo-buildpacks/datadog")
    imageName = "$ecrRepository:${project.version}"
    environment = mapOf("BP_JVM_VERSION" to "17.*", "BP_DATADOG_ENABLED" to "true")
    isPublish = true
    docker {
        val ecrPassword: String? by project
        publishRegistry {
            // Credentials for the target registry (ECR in this case)
            url = ecrRepository
            username = "AWS"
            password = ecrPassword
        }
    }
}
Notice the buildpacks part, where I added first the default Java buildpack and then the Datadog one. I also set BP_DATADOG_ENABLED to true in the environment so that the agent gets added.

Related

Java buildpack for sourceless project?

We want to set up Elasticsearch with Appbase.io's abc tool (pointed at one of our existing PostgreSQL dynos). We don't want to compile either of these tools, but Heroku's buildpack documentation strongly suggests that the Java buildpack sets flags that we want/need. The simplest solution would probably be to set up an empty pom file, but I don't know if that would actually work. (I assume it's not that hard to do; no Java knowledge here.)
I plan to use Amiel's fix of the apt buildpack to actually install Elasticsearch, and a custom buildpack to grab and install abc.
Is this a reasonable approach?
What are the gotchas we need to watch out for?

Deployment strategy for golang app, how to run golang app on production

I have a golang web app, and I need to deploy it. I'm trying to figure out the best practice for running a golang app in production. The way I'm doing it now is very simple:
I just upload the built binary to production, without having
the source code on prod at all.
However, I've found an issue: my code reads a config/<local/prod>.yml config file from the source tree. If I just upload the binary without the source code, the app can't run because the config is missing. So I wonder what the best practice is here.
I thought about a couple of solutions:
Upload the source code and the binary, or build from source.
Only upload the binary and the config file.
Move the YAML config to environment variables; but I think with this solution the code will be less structured, because if you have lots of config options, env variables become hard to manage.
Thanks in advance.
Good practice for deployment is to have a reproducible build process that runs in a clean room (e.g. a Docker image) and produces artifacts (binaries, configs, assets) to deploy; ideally it also runs some tests that prove nothing was broken since the last time.
It is a good idea to package the service, meaning the binary and all the files it needs (configs, auxiliary files such as systemd, nginx, or logrotate configs, etc.), into some sort of package, be it a package native to your target environment's Linux distribution (DPKG, RPM), a virtual machine image, a Docker image, etc. That way you (or someone else tasked with deployment) won't forget any files. Once you have a package, you can easily verify and deploy it using the native tools for that packaging format (apt, yum, docker, ...) in the production environment.
For configuration and other files, I recommend making the software read them from well-known locations, or at least having an option to pass paths as command-line arguments. If you deploy to Linux, I recommend following the FHS (tl;dr: configuration in /etc/yourapp/, binaries in /usr/bin/).
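For illustration, the well-known-location-plus-override pattern could look roughly like this (sketched in Kotlin; the same few lines translate directly to Go's flag package, and /etc/myapp/config.yml is a made-up path):

import java.io.File
import kotlin.system.exitProcess

// Sketch of "well-known location, overridable from the command line".
// /etc/myapp/config.yml is a hypothetical FHS-style default.
fun main(args: Array<String>) {
    val flagIndex = args.indexOf("--config")
    val configPath =
        if (flagIndex >= 0 && flagIndex + 1 < args.size) args[flagIndex + 1] // explicit override
        else "/etc/myapp/config.yml" // FHS default when no flag is given

    val config = File(configPath)
    if (!config.isFile) {
        System.err.println("config file not found: $configPath")
        exitProcess(1)
    }
    println("loading configuration from ${config.absolutePath}")
}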
It is not recommended to build the software from source in the production environment, as building requires tools that are normally unnecessary there (e.g. go, git, dependencies, etc.). Installing and running these requires more maintenance and can introduce security and performance risks. Generally you want to keep your production environment as minimal as is required to run the application.
I think the most common deployment strategy for an app is to try to comply with the 12-factor-app methodology.
So, in this case, if your YAML file is the configuration file, it would be better to put the configuration in environment variables (ENV vars). That way, when you deploy your app in a container, it is easier to configure the running instance from ENV vars than to copy a config file into the container.
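As a tiny sketch of what that looks like in code (DATABASE_URL and PORT are example variable names, not a fixed convention):

// Sketch: 12-factor-style configuration read from environment variables.
data class AppConfig(val databaseUrl: String, val listenPort: Int)

fun configFromEnv(): AppConfig = AppConfig(
    // fail fast if a required setting is missing
    databaseUrl = System.getenv("DATABASE_URL") ?: error("DATABASE_URL must be set"),
    // optional setting with a sensible default
    listenPort = System.getenv("PORT")?.toIntOrNull() ?: 8080
)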
However, when writing system software, it is better to comply with the filesystem hierarchy defined by the OS you are using. If you are using a Unix-like system, you can read about the hierarchy by typing man hier in the terminal. Usually I install the compiled binary into /usr/local/bin and put the configuration inside /usr/local/etc.
For deployment to production, I created a simple Makefile that does the building and installation. If the deployment environment is a bare-metal server or a VM, I commonly use Ansible to do the deployment with ansible-playbook. The playbook fetches the source code from the repository, then builds, compiles, and installs the software by running make.
If the app will be deployed in containers, I suggest creating an image with multi-stage builds, so that the source code and the tools needed to build the binary are not in the production image and the image size is smaller. But, as mentioned before, it is common practice to read the app configuration from ENV vars instead of a config file. If the app has a lot of things to configure, the file could be copied into the image while building it.
While we wait for the proposal cmd/go: support embedding static assets (files) in binaries to be implemented (see the current proposal), you can use one of the static-asset embedding tools listed in that proposal.
The idea is to include your static file in your executable.
That way, you can distribute your program without depending on your sources.
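For comparison, the JVM world has long had this built in: anything placed under src/main/resources is packaged inside the jar and read from the classpath at runtime, which is the same ship-the-asset-inside-the-artifact idea the Go proposal describes. A minimal sketch (config.yml is a made-up resource name):

// Sketch: read a file that was embedded in the jar at build time.
fun readEmbeddedConfig(): String =
    object {}.javaClass.getResourceAsStream("/config.yml")
        ?.bufferedReader()
        ?.readText()
        ?: error("embedded resource /config.yml not found")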

Why Use Spring Boot with Docker?

I'm quite new to Docker, and I'm wondering: with Spring Boot we can easily build, ship, and deploy the application with the Maven or Gradle plugin, and we can easily add load balancing. So what is the main reason to use Docker in this case? Is containerizing really needed everywhere? Thanks for the reply!
Containers help you get software to run reliably when moved from one computing environment to another. A Docker image contains an entire runtime environment: your application and all of its dependencies, libraries, and other binaries, plus the configuration files needed to run it.
It also simplifies your deployment process, reducing a whole lot of mess to just one file.
Once you are done with your code, you can simply build and push the image to Docker Hub. All you need to do on other systems is pull the image and run a container. It will take care of all the dependencies and everything.

Can I extend WebSphere Liberty buildpack?

I am looking to extend the WebSphere Liberty buildpack included in Bluemix with some third-party libraries from our application architecture, so that the size of the EAR file decreases a lot and the cf push command becomes faster and more agile. Is it possible?
I know there is a WebSphere Liberty buildpack open-sourced at cloudfoundry.org, but as far as I know it is not as powerful as the one included in Bluemix, and we would lose some interesting features.
Thanks!
The Liberty buildpack you find on GitHub should be pretty similar to the one Bluemix uses, especially in terms of features. Beta features are included and auto-configuration should work, so in theory, if you do cf push appName -b https://github.com/cloudfoundry/ibm-websphere-liberty-buildpack.git -p myapp.war, it should work the same way as if you did cf push appName -p myapp.war.
If you want to modify the buildpack, you can fork it and add the jars you want, although I am unsure of the process for adding the jars. Maybe someone else can add an answer that points you in the right direction.
You can find an alternative buildpack for WebSphere Liberty here. As you will notice, it is actually a Cloud Foundry buildpack; normally all of them will work in Bluemix. To use it, add a buildpack: line to your manifest.yml or a -b option to the cf command-line tool.
There is not much detail in your question as to which libraries or modifications you need, but this buildpack is on GitHub and you can freely fork it and modify it. The buildpack prepares the runtime, and you should be able to add your own downloads in some hooks and replicate the directory structure you must be using locally. It is also very well documented, so you should find your way through the configuration if you are doing anything non-standard.
You are welcome to share your own new buildpack too!
If you want to be more efficient during the development cycle, where you constantly push changes, you can try the 'development mode' support in the Bluemix Liberty buildpack. This allows you to push incremental file changes without even restarting the Liberty server (not to mention the whole application container, i.e. no cf push). See the doc here: https://www.ng.bluemix.net/docs/manageapps/eclipsetools/eclipsetools.html#incrementalpublish. You can also do remote debugging with development mode.
To customize a Liberty server in Bluemix, you can also use the server package command of a local Liberty server, and then cf push the generated zip.
See https://www.ibm.com/developerworks/community/blogs/msardana/entry/developing_with_bluemix_customizing_the_liberty_build_pack_to_add_your_configurations for details.
That, of course, defeats your goal of faster deployments, but I wanted to mention the possibility for completeness and to prevent someone from forking the buildpack without needing to.

Can docker application use dependent images?

We have an application that uses a JVM, a Python runtime, and some other libraries.
In our view, the JVM, the Python runtime, and these libraries are our application's dependent components.
We use Docker as our development environment, but the current Docker release (1.5) seems to only support a single-base-image building style, where each image extends exactly one base and we must specify all our dependent libraries in our Dockerfile.
Is it possible to specify these libraries when we execute the docker run command?
In our application's case (suppose we built some Docker images previously: jvm, python, lib1, lib2, ...),
we would want to execute the app with: $ docker run --dep_img=jvm --dep_img=python --dep_img=lib1 --dep_img=lib2 our_app_image
Is this possible?
Short answer: no. The idea behind a Docker image is a static, containerized app with all of its dependencies inside. There are some workarounds for the issue, though:
Have different images/tags for each dependency (or group of dependencies). Personally, I think this is the approach you should try to follow: organize the images and dependencies you have and create one image for each dependency group you may need.
Have an init script in your Docker image that installs the dependencies each time you create a container. The script can receive the dependencies through arguments or environment variables. Use this approach if you have a very large number of possible dependency scenarios.
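A minimal sketch of that second idea (written in Kotlin to match the other snippets here; in practice this would usually be a shell entrypoint, and EXTRA_PACKAGES is a made-up variable name):

// Sketch: an entrypoint that installs extra packages listed in an env var,
// e.g. EXTRA_PACKAGES="libfoo libbar", before starting the real app.
fun main() {
    val packages = System.getenv("EXTRA_PACKAGES")
        ?.split(" ")
        ?.filter { it.isNotBlank() }
        .orEmpty()
    if (packages.isNotEmpty()) {
        val exit = ProcessBuilder(listOf("apt-get", "install", "-y") + packages)
            .inheritIO() // surface apt output in the container logs
            .start()
            .waitFor()
        check(exit == 0) { "dependency installation failed" }
    }
    // ...then hand off to the real application entry point
}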
