"config should have required property 'media_folder'" - when it does - netlify-cms

It's been a while since I've played with Netlify CMS, and I feel I've had this problem before but can't find the answer. My config.yml has media_folder in it, but I'm getting an error that it can't find a config setting for one. Anyone have any ideas? This is my config (full file):
backend:
  name: github
  repo: acecentre/nhs-service-finder
  branch: master

collections:
  - name: "nhs-service"
    label: "Service"
    folder: "content/ccg"
    media_folder: "static/images/uploads"
    media_library:
      name: uploads
    create: false
But on loading the page (here) I get
Config Errors:
config should have required property 'media_folder'
config should have required property 'media_library'
config should match some schema in anyOf
Check your config.yml file.
What am I doing wrong?

Well, it took me a while. media_folder and media_library should be root-level settings, not nested inside collections:
backend:
  name: github
  repo: acecentre/nhs-service-finder
  branch: master

media_folder: "static/images/uploads"
media_library:
  name: uploads

collections:
  - name: "nhs-service"
    label: "Service"
    folder: "content/ccg"
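As a side note, uploads saved under static/images/uploads usually also want a public_folder entry so the editor knows the URL path those files are served from. A minimal sketch, assuming the site serves that folder at /images/uploads (the exact path depends on your build setup):
media_folder: "static/images/uploads"
public_folder: "/images/uploads"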

Related

How to add Elastic APM integration from API/CMD/configuration file

I've created a docker-compose file with some configuration that deploys Elasticsearch, Kibana, and Elastic Agent, all version 8.7.0.
In the Kibana configuration file I define the policies I need under xpack.fleet.agentPolicies; with a single command my whole environment comes up and all components connect successfully.
The only issue is one manual step: I have to go to Kibana -> Observability -> APM -> Add Elastic APM and then fill in the server configuration.
I want to automate this and manage it from the API/CMD/configuration file; I don't want to do it from the UI.
What is the way to do this? In which component? What path should the configuration be at?
I tried to look for APIs or commands to do that, but with no luck. I'm hoping for help with automating this remaining step.
Update 1
I've tried to add it as below, but I still can't see the integration added.
package_policies:
  - name: fleet_server-apm
    id: default-fleet-server
    package:
      name: fleet_server
    inputs:
      - type: apm
        enabled: true
        vars:
          - name: host
            value: "0.0.0.0:8200"
          - name: url
            value: "http://0.0.0.0:8200"
          - name: enable_rum
            value: true
            frozen: true
TL;DR
Yes, I believe there is a way to do it, but I am pretty sure it is poorly documented.
You can find some ideas in the apm-server repository.
Solution
In the kibana.yml file you can add some settings related to Fleet.
The section below is taken from the repository above and helped me set up APM automatically.
But if you have specific settings you would like to see enabled, I am unsure where you provide them.
xpack.fleet.packages:
  - name: fleet_server
    version: latest

xpack.fleet.agentPolicies:
  - name: Fleet Server (APM)
    id: fleet-server-apm
    is_default_fleet_server: true
    is_managed: false
    namespace: default
    package_policies:
      - name: fleet_server-apm
        id: default-fleet-server
        package:
          name: fleet_server
It is true that the Kibana Fleet API is very poorly documented at the moment. I think your problem is that you are trying to add the variables to the fleet_server package instead of the apm package. Your YAML should look like this:
package_policies:
  - name: fleet_server-apm
    id: default-fleet-server
    package:
      name: fleet_server
  - name: apm-1
    package:
      name: apm
    inputs:
      - type: apm
        keep_enabled: true
        vars:
          - name: host
            value: 0.0.0.0:8200
            frozen: true
          - name: url
            value: "http://0.0.0.0:8200"
            frozen: true
          - name: enable_rum
            value: true
            frozen: true
Source
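Putting the two snippets together, the relevant Fleet section of kibana.yml would presumably look something like the sketch below. This is just the two fragments above assembled in one place; I have not verified it against a specific Kibana version, and listing the apm package under xpack.fleet.packages is my assumption rather than something stated above.
xpack.fleet.packages:
  - name: fleet_server
    version: latest
  - name: apm            # assumption: the apm package is also preinstalled here
    version: latest

xpack.fleet.agentPolicies:
  - name: Fleet Server (APM)
    id: fleet-server-apm
    is_default_fleet_server: true
    is_managed: false
    namespace: default
    package_policies:
      - name: fleet_server-apm
        id: default-fleet-server
        package:
          name: fleet_server
      - name: apm-1
        package:
          name: apm
        inputs:
          - type: apm
            keep_enabled: true
            vars:
              - name: host
                value: 0.0.0.0:8200
                frozen: true
              - name: url
                value: "http://0.0.0.0:8200"
                frozen: true
              - name: enable_rum
                value: true
                frozen: true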

Google CloudBuild artifacts YAML

I've followed the docs for Google CloudBuild here: https://cloud.google.com/cloud-build/docs/configuring-builds/store-images-artifacts
So here's my cloudbuild.yaml configuration:
steps:
  - name: gcr.io/cloud-builders/git
    id: git-checkout
    args: ['fetch', '--tags', '--unshallow']
  - name: openjdk
    id: gradle-build
    args: [
      './gradlew',
      '--build-cache',
      '-Si',
      '-Panalytics.buildId=$BUILD_ID',
      '-PgithubToken=$_GITHUB_TOKEN',
      '-g', '$_GRADLE_CACHE',
      'build'
    ]
artifacts:
  objects:
    location: ['gs://my-bucket/artifacts/']
    paths: ["build/libs/*.jar"]
If I comment out the following, the build runs successfully:
artifacts:
  objects:
    location: ['gs://my-bucket/artifacts/']
    paths: ["build/libs/*.jar"]
With that block left in, I get the following error from the Cloud Build console:
failed unmarshalling build config cloudbuild.yaml: json: cannot unmarshal array into Go value of type string
And under the Logs section, it simply says Logs unavailable.
You may need to indent the objects: line:
artifacts:
  objects:
    location: ['gs://my-bucket/artifacts/']
    paths: ["build/libs/*.jar"]
The objects.location element should not be an array. The following should work:
artifacts:
  objects:
    location: 'gs://my-bucket/artifacts/'
    paths: ["build/libs/*.jar"]
I've also run into this error, with a section of my cloudbuild.yaml file looking like:
- name: 'gcr.io/cloud-builders/git'
  args:
    - clone
    - --depth
    - 1
    - --single-branch
    - -b
    - development
    - git@bitbucket.org:aoaoeuoaeuoeaueu/oaeueoaueoauoaeuo.git
  volumes:
    - name: 'ssh'
      path: /root/.ssh
The issue seems to be with the 1, which YAML parses as a number rather than a string. Adding quotes around it fixed it (- "1"), as in the sketch below.
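For reference, here is what the corrected step would presumably look like; only the quoting of "1" changes, the rest is unchanged from the snippet above:
- name: 'gcr.io/cloud-builders/git'
  args:
    - clone
    - --depth
    - "1"
    - --single-branch
    - -b
    - development
    - git@bitbucket.org:aoaoeuoaeuoeaueu/oaeueoaueoauoaeuo.git
  volumes:
    - name: 'ssh'
      path: /root/.ssh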

How to commit from concourse pipeline?

What is the best way to commit from a pipeline? The job pulls from one repo, makes some changes and builds, then pushes the new files to a different repo. Is this possible?
You should use the git-resource.
The basic steps of what you are going to want to do are to:
1. Pull from the repo into a container.
2. Do some stuff with the code.
3. Move the new code into a different container.
4. Push the contents of that new container to a different git repository.
Your pipeline configuration should look something like this:
jobs:
  - name: pull-code
    plan:
      - get: git-resource-pull
      - get: git-resource-push
      - task: do-something
        config:
          platform: linux
          # image_resource omitted for brevity
          inputs:
            - name: git-resource-pull
          run:
            path: /bin/bash
            args:
              - -c
              - |
                pushd git-resource-pull
                # do something
                popd
                # move the code from git-resource-pull to git-resource-push
      - put: git-resource-push
        params: {repository: git-resource-push}

resources:
  - name: git-resource-pull
    type: git
    source:
      uri: https://github.com/team/repository-1.git
      branch: master
  - name: git-resource-push
    type: git
    source:
      uri: https://github.com/team/repository-2.git
      branch: master
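One detail the snippet above glosses over: the git resource's put step pushes whatever commits exist in the directory you hand it, so the do-something task has to actually create a commit. A hedged sketch of what that task body might contain, assuming git-resource-push is declared as both an input and an output of the task so the modified checkout reaches the put step; the build command, file paths, and committer identity are illustrative assumptions, not part of the original answer:
      - task: do-something
        config:
          platform: linux
          # image_resource omitted for brevity
          inputs:
            - name: git-resource-pull
            - name: git-resource-push
          outputs:
            - name: git-resource-push   # hand the modified checkout on to the put step
          run:
            path: /bin/bash
            args:
              - -c
              - |
                # build or transform whatever lives in the source checkout
                pushd git-resource-pull
                ./build.sh                                            # hypothetical build command
                popd
                # copy the results into the destination checkout and commit them
                cp -r git-resource-pull/output/. git-resource-push/   # hypothetical paths
                cd git-resource-push
                git config user.email "ci@example.com"                # placeholder identity
                git config user.name "concourse-ci"
                git add .
                git commit -m "Update generated files"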

Pull from multiple SCM then mv file in Concourse CI to workdir

I've been banging my head on this one for quite some time and I cannot figure it out (I know it must be a simple thing to do, though).
What I'm trying to do is pull from two repositories (which naturally creates two separate directories), then move files from one directory to the other so the Dockerfile can be built successfully.
Here's what my pipeline.yml file looks like:
---
jobs:
  - name: build-nexus-docker-image
    public: false
    plan:
      - get: git-nexus-docker-images
        trigger: true
      - get: git-nexus-license
        trigger: true
      - task: mv-nexus-license
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: ubuntu, tag: "trusty"}
          inputs:
            - name: git-nexus-license
            - name: git-nexus-docker-images
          run:
            path: /bin/sh
            args:
              - -c
              - mv -v git-nexus-license/nexus.lic git-nexus-docker-images/nexus.lic; ls -la git-nexus-docker-images
      - put: nexus-docker-image
        params:
          build: git-nexus-docker-images/

resources:
  - name: git-nexus-docker-images
    type: git
    source:
      uri: git@git.company.com:dev/nexus-pro-dockerfile.git
      branch: test
      paths: [Dockerfile]
      private_key: {{git_ci_key}}
  - name: git-nexus-license
    type: git
    source:
      uri: git@git.company.com:secrets/nexus-information.git
      branch: master
      paths: [nexus.lic]
      private_key: {{git_ci_key}}
  - name: nexus-docker-image
    type: docker-image
    source:
      username: {{aws-token-username}}
      password: {{aws-token-password}}
      repository: {{ecr-nexus-repo}}
I've posted the pipeline that can actually be deployed to Concourse. I've tried a lot of things, but I can't figure this out. I'm stuck on moving the license file from the git-nexus-license directory to the git-nexus-docker-images directory. What I've done doesn't seem to mv the nexus.lic file, because building the Docker image fails when it cannot find that file in the directory.
EDIT: I've successfully been able to mv nexus.lic using the code above; however, the build is still failing because it cannot find the file! I'm not sure what I'm doing wrong. The build works properly if I do it manually, but with Concourse it's failing.
Okay, so I figured out what I was doing wrong, and as usual it was something small. I forgot to add the outputs section to the YAML file, which tells Concourse that this is the new workdir. Here's what it looks like now (which works for me):
---
jobs:
  - name: build-nexus-docker-image
    public: false
    plan:
      - get: git-nexus-docker-images
        trigger: true
      - get: git-nexus-license
        trigger: true
      - task: mv-nexus-license
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: ubuntu, tag: "trusty"}
          inputs:
            - name: git-nexus-license
            - name: git-nexus-docker-images
          outputs:
            - name: build-nexus-dir
          run:
            path: /bin/sh
            args:
              - -c
              - mv -v git-nexus-license/nexus.lic build-nexus-dir/nexus.lic; mv -v git-nexus-docker-images/* build-nexus-dir; ls -la build-nexus-dir;
      - put: nexus-docker-image
        params:
          build: build-nexus-dir/

resources:
  - name: git-nexus-docker-images
    type: git
    source:
      uri: git@git.company.com:dev/nexus-pro-dockerfile.git
      branch: test
      paths: [Dockerfile]
      private_key: {{git_ci_key}}
  - name: git-nexus-license
    type: git
    source:
      uri: git@git.company.com:secrets/nexus-information.git
      branch: master
      paths: [nexus.lic]
      private_key: {{git_ci_key}}
  - name: nexus-docker-image
    type: docker-image
    source:
      username: {{aws-token-username}}
      password: {{aws-token-password}}
      repository: {{ecr-nexus-repo}}
I hope this helps whoever gets stuck on this. :)

How do I tag a local docker image with ansible docker_image module?

I'm building a local docker image and I'd like to tag it, but I have no idea how the repository field should be filled for a docker image I just built locally.
Is tagging local images even possible with the docker_image module?
It seems there is a better solution with docker_image:
tasks:
  - name: build_image
    docker_image:
      name: test_img:latest  # Name of the image, may include a repo path.
      path: .
      state: present
    register: image_build

  - name: tag_version
    docker_image:
      name: test_img:latest    # Equal to the name in the build task.
      repository: test_img:1.2 # The new tag goes here.
      pull: no
      state: present
    when: image_build.changed
Effect:
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
test_img     1.2      ab14e8cce7ef   9 seconds ago   142MB
test_img     latest   ab14e8cce7ef   9 seconds ago   142MB
This also works for pushing to a repository (you need to change the name to the full repository path), as in the sketch below.
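For example, a hedged sketch of what tagging and pushing to a registry might look like; the registry path here is an assumption, and push is the docker_image module's parameter for pushing after tagging:
- name: tag_and_push
  docker_image:
    name: test_img:latest
    repository: registry.example.com/myteam/test_img:1.2  # hypothetical full repo path
    push: yes
    state: present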
You can build and tag in one task. If you want the image to stay local, you can't include a repository in the name (no /). The tag simply takes the place of the default "latest". So it looks something like:
- name: 'Build an image with a tag'
  docker_image:
    path: .
    name: ansible-module
    tag: v1
    state: present
And the result will look like:
$ docker images
REPOSITORY       TAG   IMAGE ID       CREATED         SIZE
ansible-module   v1    39be0dcc8dfa   2 minutes ago   1.093 MB
If you want to include your registry URL or repository name (Docker Hub login) and don't want to automatically push after building, I don't believe you can use this Ansible module.
Update: for an additional tag, you can do:
- name: 'Build an image'
  docker_image:
    path: .
    name: ansible-module
    tag: v1
    state: present
  register: docker_build

- name: 'Retag image'
  shell: docker tag ansible-module:v1 ansible-module:dev
  when: docker_build.changed
I found out that the repository field of the docker_image module is just the name of the image when it's built locally.
This is how you first build the image with the latest tag and then add another tag to it:
- name: Build docker image
  become: yes
  docker_image:
    path: /tmp/foo
    name: foo
    state: present

- name: Tag docker image
  become: yes
  docker_image:
    name: foo
    repository: foo
    tag: "{{ version.stdout }}"
    state: present
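The "{{ version.stdout }}" variable implies an earlier task that registers the version string. A minimal sketch of what that might look like; reading the version from a VERSION file is purely an assumption here:
- name: Get version
  command: cat /tmp/foo/VERSION   # hypothetical source of the version string
  register: version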
