What are the possible reasons a code climate gap badge would show up as a question mark/unknown?
The other badges are working however, I can see number of issues,
and % LoC Covered badges.
Here's my .codeclimate.yml file
engines:
  rubocop:
    enabled: true
  eslint:
    enabled: true
  csslint:
    enabled: true
  duplication:
    enabled: true
    config:
      languages:
        - ruby:
        - javascript:
exclude_paths:
  - "test/"
  - "coverage/"
  - "doc/"
  - "bin/"
I think you are missing this from your .codeclimate.yml file:
ratings:
  paths:
    - Gemfile.lock
    - "**.css"
    - "**.js"
    - "**.jsx"
    - "**.rb"
You can read about it more here: https://docs.codeclimate.com/v1.0/docs/ratings
You also need to make sure your files have UTF-8 encoding.
I am trying to represent the following in helmfile.yaml, but I am getting an error. Can anyone help me set it up?
values.yaml
extraVolumes:
  - name: google-cloud-key
    secret:
      secretName: gcloud-auth
I tried the following in helmfile.yaml
repositories:
  - name: loki
    url: https://grafana.github.io/loki/charts

releases:
  - name: loki
    namespace: monitoring
    chart: loki/loki
    set:
      - name: extraVolumes.name
        value: google-cloud-key
      - name: extraVolumes.secret.secretName
        value: gcloud-auth
The error I am getting is
coalesce.go:160: warning: skipped value for extraVolumes: Not a table.
I also tried with the following in helmfile.yaml
- name: extraVolumes.name[]
  value: google-cloud-key
This gave me the following error
Error: failed parsing --set data: key map "extraVolumes" has no value
Any idea?
Helmfile has two ways to provide values to the charts it installs. You're using set:, which mimics the finicky helm install --set option. However, Helmfile also supports values:, which generally maps to helm install -f. Helmfile values: supports two extensions: if a filename in the list ends in *.gotmpl then the values file itself is processed as a template file before being given to Helm; or you can put inline YAML-syntax values directly in helmfile.yaml.
This last option is probably easiest. Instead of using set:, use values:, and drop that block of YAML directly into helmfile.yaml.
releases:
  - name: loki
    namespace: monitoring
    chart: loki/loki
    values: # not `set:`
      - extraVolumes: # inline YAML content as a single list item
          - name: google-cloud-key
            secret:
              secretName: gcloud-auth
values: takes a list of either filenames or inline mappings. If you're not deeply familiar with YAML syntax, this means you need to put a - list-item indicator before the inline YAML block. If you already have a list of values: files, you can add this additional item into the list wherever appropriate.
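For example, a release that already loads a shared values file can carry the inline block as one more list item (the values/common.yaml filename here is a hypothetical placeholder, not something from your setup):

```yaml
releases:
  - name: loki
    namespace: monitoring
    chart: loki/loki
    values:
      # an existing values file (hypothetical name)
      - values/common.yaml
      # the inline mapping, added as another list item
      - extraVolumes:
          - name: google-cloud-key
            secret:
              secretName: gcloud-auth
```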
I have an input configuration like this (proof of concept), and I will add more prospectors later on.
Can I avoid repeating the multiline properties?
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      - /data/server/logs/inode-stage/inode-stage.log
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after
    fields:
      env: 'stage'
      app: 'inode'
  - type: log
    enabled: true
    paths:
      - /data/server/logs/inode-dev/inode-dev.log
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after
    fields:
      env: 'dev'
      app: 'inode'
I don't think that is possible right now. Not sure how many variations you will have in your inputs, but based on your current example I would extract the env with dissect. If you need something more powerful, you could even go for the script processor.
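As a sketch of that idea: collapse the two prospectors into one glob and derive env from the file path with the dissect processor. The glob, field names, and target_prefix here are assumptions, and this assumes a Beats version that ships the dissect processor:

```yaml
filebeat.prospectors:
  - type: log
    enabled: true
    paths:
      # one glob covering inode-stage, inode-dev, ...
      - /data/server/logs/inode-*/inode-*.log
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after
    fields:
      app: 'inode'

processors:
  # Pulls "stage"/"dev" out of the directory name; in Filebeat 6.x
  # the file path lives in the source field.
  - dissect:
      tokenizer: "/data/server/logs/inode-%{env}/%{filename}"
      field: "source"
      target_prefix: "fields"
```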
I have configured Filebeat 6.6 on a Windows instance. The weird thing is, it is sending logs for IIS but not for the files I have specified, even though Filebeat can detect them.
Filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - C:\ELK-Logger\filebeat-6.6.1-windows-x86_64\LowError.txt
  - type: log
    enabled: true
    paths:
      - C:\inetpub\logs\LogFiles\*\*
      - C:\Hosting\stagingb2c\PaymentGatewayLogs\*\*
    recursive_glob: enabled
  - type: log
    enabled: true
    paths:
      - C:\Hosting\stagingb2c\ErrorLogs\*

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

output.logstash:
  hosts: ["13.234.83.186:5044"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

logging:
  to_files: true
  files:
    path: C:\ELK-Logger\filebeat-6.6.1-windows-x86_64\filebeat-6.6.1-windows-x86_64\LOG
  level: info
I can see logs from C:\inetpub\logs\LogFiles folder but not from C:\Hosting\stagingb2c\PaymentGatewayLogs.
I cannot see any errors or warnings in filebeat.log when I started it with:
PS C:\ELK-Logger\filebeat-6.6.1-windows-x86_64\filebeat-6.6.1-windows-x86_64> .\filebeat.exe -e -d "*"
2019-03-04T21:15:51.602+0300 INFO log/harvester.go:255 Harvester started for file: C:\Hosting\stagingb2c\PaymentGatewayLogs\CredimaxPaymentGateway_OrderId_12f1050220190810\CredimaxPayment_TransactionDetails_OrderId_12f1050220190810
2019-03-04T21:15:51.761+0300 INFO log/harvester.go:255 Harvester started for file: C:\Hosting\stagingb2c\PaymentGatewayLogs\CredimaxPaymentGateway_OrderId_Sw2m\CredimaxPayment_PROCESS_ACS_RESULT_Response_20190213124610_OrderId_Sw2m.txt
2019-03-04T21:15:51.920+0300 INFO log/harvester.go:255 Harvester started for file: C:\Hosting\stagingb2c\PaymentGatewayLogs\CredimaxPaymentGateway_OrderId__SoLx\CredimaxPayment_PAY_Request_20190205085701_OrderId__SoLx.txt
I am not able to see these logs in logstash though I can surely see other files coming in Logstash.
Change your input section to this and check,
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - C:\ELK-Logger\filebeat-6.6.1-windows-x86_64\LowError.txt
      - C:\inetpub\logs\LogFiles\*\*
      - C:\Hosting\stagingb2c\PaymentGatewayLogs\*\*
      - C:\Hosting\stagingb2c\ErrorLogs\*
    recursive_glob.enabled: true
Error parsing config file: yaml: line 22: did not find expected key
Cannot find a job named build to run in the jobs: section of your configuration file.
I got those errors, but I'm really new to YAML, so I can't figure out why it's not working. Any ideas? Some say it might have extra spaces or something, but I can't find them.
YAML file:
defaults: &defaults:
  - checkout
  - restore_cache:
      keys:
        - v1-dependencies-{{ checksum "package.json" }}
        - v1-dependencies-
  - run: npm install
  - save_cache:
      paths:
        - node_modules
      key: v1-dependencies-{{ checksum "package.json" }}
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:10.3.0
    working_directory: ~/repo
    steps:
      <<: *defaults // << here
      - run: npm run test
      - run: npm run build
  deploy:
    docker:
      - image: circleci/node:10.3.0
    working_directory: ~/repo
    steps:
      <<: *defaults
      - run:
          name: Deploy app scripts to AWS S3
          command: npm run update-app
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
What you are trying to do is merge two sequences, i.e. merge all elements of defaults into steps. That is not supported by the YAML spec; only maps can be merged, and sequences combined this way simply end up nested.
This is invalid:
steps:
  <<: *defaults
  - run:
as <<: is for merging map elements, not sequences
If you do this:
step_values: &step_values
  - run ...

steps:
  - *defaults
  - *step_values
You will end up with nested sequences, which is not what you intend.
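For contrast, merging a map with <<: is valid. If the shared settings were a map rather than a list of steps, something like this would work (the keys here are illustrative, not from the question):

```yaml
job_defaults: &job_defaults
  working_directory: ~/repo
  docker:
    - image: circleci/node:10.3.0

jobs:
  build:
    <<: *job_defaults   # map keys are merged into `build`
    steps:
      - run: npm run test
```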
It's not possible for now. Unfortunately, the only solution is to repeat the whole list. Many users have requested the same feature.
It looks like your YAML is not written properly. You can always validate the structure of YAML with an open-source website such as http://www.yamllint.com/.
On checking the YAML file, line 22 is where it goes wrong. As Srikanth explained, what you are trying to do is merge two sequences, i.e. all elements of defaults into steps, which is not supported in YAML at the moment.
Only maps can be merged.
If you do this:
step_values: &step_values
  - run ...

steps:
  - *defaults
  - *step_values
You will end up with nested sequences, which is not what you intend.
This question already has an answer here:
Is it possible to do string substitution in YAML?
(1 answer)
Closed 5 years ago.
I have a task that I want to re-use in multiple jobs, but I don't want to have to repeat the task configuration for every job. What's the best way for me to do that?
Example:
jobs:
  - name: build
    plan:
      - get: git-branch
        trigger: true
      - task: get-build-tag # <---- duplicate of below
        config: {} # truncated for brevity
      - task: build-image
        file: some-stuff-to-do-with-get-build-tag
  - name: test
    plan:
      - get: git-branch
        trigger: true
      - task: get-build-tag # <---- duplicate of above
        config: {} # truncated for brevity
      - task: run-tests
        file: also-to-do-with-get-build-tag
Note to those who have flagged this question as a duplicate: I was instructed by the Concourse team to post this question on here specifically about Concourse configuration. Should configuration ever change from YAML to something else, this post could still act as a reference despite having nothing to do with YAML.
What you're looking for are YAML anchors.
Here's what that would look like in your pipeline:
# This is not part of concourse semantics but we allow
# extra keys to support anchoring
# https://github.com/concourse/concourse/issues/116
additional_tasks:
  - &get-build-tag
    task: get-build-tag
    config: {}
jobs:
  - name: build
    plan:
      - get: git-branch
        trigger: true
      - *get-build-tag
      - task: build-image
        file: some-stuff-to-do-with-get-build-tag
  - name: test
    plan:
      - get: git-branch
        trigger: true
      - *get-build-tag
      - task: run-tests
        file: also-to-do-with-get-build-tag
If you want an example of how we do that in one of the pipelines we use for our testing on the concourse team, you can check it out here.