How to read YAML into a list of Strings - spring-boot

I have a YAML file for my Spring app. It looks like this:
storages:
  file:
    local:
      dir: file_storage
  http:
    local_host:
      url: http:*************
      client_id: *****
      secret_id: ******t
  ftp:
    ftp_server:
      path: ***********e
catalogues:
  database:
    local_database:
      host: jdbc:postgresql://l*****st:5432/ea******
      user: ***
      password: ***
      driver: org.postgresql.Driver
      maximum_pool_size: 3
  http:
    local_rest:
      url: http://localhost:1234/data
aliases:
  default_file_storage: local
  default_catalog: local_database
  a9c8662e-f4bc-4e8e-882d-310bf8e86198: local_database
  admin_catalog: local_database
  admin_storage: local
And I need this data as flat string keys, as in application.properties. I download my properties from external storage on Amazon S3, and these files can be in either application.properties or .yml format:
storages.file.local.dir=file_storage
storages.http.local_host.url=http:*************
storages.http.local_host.client_id=*****
storages.http.local_host.secret_id=******t
and so on.
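If the goal is to end up with exactly those flattened keys at runtime, Spring's YamlPropertiesFactoryBean can do the conversion: it turns a YAML resource into a java.util.Properties object with dotted keys. Below is a minimal sketch, assuming the YAML above has already been downloaded from S3 and is available as storages.yml on the classpath (the file name and class name are placeholders, not part of the original setup):

import java.util.Properties;

import org.springframework.beans.factory.config.YamlPropertiesFactoryBean;
import org.springframework.core.io.ClassPathResource;

public class YamlToProperties {

    public static void main(String[] args) {
        // Flatten the nested YAML into dotted property keys,
        // e.g. storages.file.local.dir=file_storage
        YamlPropertiesFactoryBean factory = new YamlPropertiesFactoryBean();
        factory.setResources(new ClassPathResource("storages.yml")); // hypothetical file name
        Properties properties = factory.getObject();

        properties.forEach((key, value) -> System.out.println(key + "=" + value));
    }
}

If the file fetched from S3 is already in .properties format, a plain java.util.Properties.load(...) covers that case, so a simple check on the file extension can decide which loader to use.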

Related

FSCrawler Error while crawling E:\TestFilesToBeIndexed\subfolder: java.net.ConnectException: Connection timed out: connect

Error while crawling path\to\file_folder: java.net.ConnectException: Connection timed out: connect
I am trying to ingest remote server files into an existing Elasticsearch index (which is on my local machine) using FSCrawler, but I am getting the above exception.
Below is the _settings.yml file of FSCrawler:
---
name: "index_in_es_onefsc"
server:
  hostname: "machinename.abc.com"
  port: 22
  username: "username"
  password: "password#20"
  protocol: "ssh"
fs:
  url: "E:\\TestFilesToBeIndexed"
  update_rate: "15m"
  excludes:
  - "*/~*"
  json_support: false
  filename_as_id: false
  add_filesize: true
  remove_deleted: true
  add_as_inner_object: false
  store_source: false
  index_content: true
  attributes_support: false
  raw_metadata: false
  xml_support: false
  index_folders: true
  lang_detect: false
  continue_on_error: false
  ocr:
    language: "eng"
    enabled: true
    pdf_strategy: "ocr_and_text"
  follow_symlinks: false
elasticsearch:
  nodes:
  - url: "http://127.0.0.1:9200"
  bulk_size: 100
  flush_interval: "5s"
  byte_size: "10mb"
The documentation says that on Windows, when doing SSH from and to a Windows machine, you must use the following form:
name: "index_in_es_onefsc"
fs:
url: "/E:/TestFilesToBeIndexed"
server:
hostname: "machinename.abc.com"
port: 22
username: "username"
password: "password#20"
protocol: "ssh"
Note that there is a known issue when running FSCrawler from a Windows machine. This has been fixed, but if you are using a SNAPSHOT version older than the one published on June 26th, you will most likely need to upgrade.

YML with multiple files: Unhandled rejection YAMLException: duplicated mapping key

I tried to separate my resources field into multiple files in the project. Most of them worked fine, but this one file, where I declare CognitoUserPool + CognitoUserPoolClient, threw this exception:
Unhandled rejection YAMLException: duplicated mapping key in "/home/uriel/Desktop/Foo/Backend/MultipleFile/backend-foo/resources/cognitoUserPoolFoo.yml" at line 20, column 3:
Type: AWS::Cognito::UserPoolClient
^
at generateError (/home/uriel/.nvm/versions/node/v10.16.0/lib/node_modules/serverless/node_modules/js-yaml/lib/js-yaml/loader.js:167:10)
I've already checked for logical and indentation problems. These same lines worked in a single file; they only throw this error when I move them to another YML file and import it.
Main YML file, importing the other one:
plugins:
  - serverless-webpack
  - serverless-python-requirements
  - serverless-offline
resources:
  - ${file(resources/cognitoUserPoolFoo.yml)}
The imported file, the one that throws the error:
Resources:
  CognitoUserPoolFoo:
    Type: AWS::Cognito::UserPool
    Properties:
      MfaConfiguration: OFF
      UserPoolName: foo-${self:provider.stage}
      EmailConfiguration:
        ReplyToEmailAddress: foo@test.com
        SourceArn: "arn:aws:ses:us-east-1:123456789012:identity/foo@test.com"
      AutoVerifiedAttributes:
        - email
      Policies:
        PasswordPolicy:
          MinimumLength: 6
          RequireLowercase: True
          RequireNumbers: True
          RequireSymbols: False
          RequireUppercase: True
  FooCognitoUserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: FooWebApp-${self:provider.stage}
      GenerateSecret: false
      UserPoolId:
        Ref: "CognitoUserPoolFoo"

How to use an existing security group from Horizon in a Heat template

I'm a newbie with Heat YAML templates loaded by OpenStack.
I've got this command, which works fine:
openstack server create --image RHEL-7.4 --flavor std.cpu1ram1 --nic net-id=network-name.admin-network --security-group security-name.group-sec-default value instance-name
I tried to write this Heat file based on the command above:
heat_template_version: 2014-10-16
description: Simple template to deploy a single compute instance with an attached volume
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      image: RHEL-7.4
      flavor: std.cpu1ram1
      networks:
        - network: network-name.admin-network
      security_group:
        - security_group: security-name.group-sec-default
  security-group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules: security-name.group-sec-default
  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10
  my_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: my_instance }
      volume_id: { get_resource: my_volume }
      mountpoint: /dev/vdb
The stack creation failed with the following error message:
openstack stack create -t my_first.yaml First_stack
openstack stack show First_stack
.../...
| stack_status_reason | Resource CREATE failed: BadRequest: resources.my_instance: Unable to find security_group with name or id 'sec_group1' (HTTP 400) (Request-ID: req-1c5d041c-2254-4e43-8785-c421319060d0)
.../...
Thanks for helping,
According to the template guide, the rules property expects a list. So change the security-group resource in the template as below:
security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules: [security-name.group-sec-default]
OR
security-group:
  type: OS::Neutron::SecurityGroup
  properties:
    rules:
      - security-name.group-sec-default
After digging, I finally found what was wrong in my Heat file. I had to declare my instance like this:
my_instance:
  type: OS::Nova::Server
  properties:
    name: instance-name
    image: RHEL-7.4
    flavor: std.cpu1ram1
    networks:
      - network: network-name.admin-network
    security_groups: [security-name.group-sec-default]
Thanks for your support

spring.profiles is not working as expected in spring boot

Spring Boot config *.yml file:
server.port: 2222
spring:
  application:
    name: x-service
  data:
    mongodb:
      host: db.x
      database: x
      # userName: ${db.userName}
      # password: ${db.password}
  rabbitmq:
    # port: ${queue.port}
    host: queue.x
    username: ${queue.userName}
    password: ${queue.password}
    listener:
      max-concurrency: 1
      prefetch: 1
      acknowledge-mode: auto
      auto-startup: true
      dynamic: true
###########DEV##############
spring.profiles: dev
#queue.virtual.host: xuser
queue.userName: guest
queue.password: guest
queue.port: 5672
#db.userName:
#db.password:
falconUrl: http://x.y.com
##########DEFAULT###########
spring.profiles: qa
queue.virtual.host: xuser
queue.userName: xuser
queue.password: xpassword
queue.port: 3456
db.userName: xuser
db.password: xpassword
falconUrl: http://x.z.com
It gives me org.yaml.snakeyaml.parser.ParserException: while parsing MappingNode
in 'reader', line 1, column 1:
server.port: 2222
^
Duplicate key: spring.profiles
in 'reader', line 47, column 1:
error. If I comment out the properties of one of the profiles, it works fine.
Can anyone please suggest what is wrong here?
The error message is actually quite specific and accurate: in the top-level mapping of your YAML file (the one starting with the key-value pair server.port and 2222) you have two identical keys (the scalar spring.profiles). Duplicate keys are not allowed in YAML, as keys are required to be unique according to the specification.
The underlying problem is that if you want to change the configuration depending on the environment, you'll have to follow the documented specification, which states that:
A YAML file is actually a sequence of documents separated by --- lines, and each document is parsed separately to a flattened map.
If a YAML document contains a spring.profiles key, then the profiles value (comma-separated list of profiles) is fed into the Spring Environment.acceptsProfiles() and if any of those profiles is active that document is included in the final merge (otherwise not)
Your YAML file is a single implicit YAML document, because it lacks the indicator --- that occurs at the beginning of an explicit YAML document. (The end-of-document indicator ... might not be properly supported by SnakeYAML; at least it is not mentioned in the examples.)
Your code should look like:
server.port: 2222
spring:
  application:
    name: x-service
  data:
    mongodb:
      host: db.x
      database: x
      # userName: ${db.userName}
      # password: ${db.password}
  rabbitmq:
    # port: ${queue.port}
    host: queue.x
    username: ${queue.userName}
    password: ${queue.password}
    listener:
      max-concurrency: 1
      prefetch: 1
      acknowledge-mode: auto
      auto-startup: true
      dynamic: true
###########DEV##############
---
spring.profiles: dev
#queue.virtual.host: xuser
queue.userName: guest
queue.password: guest
queue.port: 5672
#db.userName:
#db.password:
falconUrl: http://x.y.com
##########DEFAULT###########
---
spring.profiles: qa
queue.virtual.host: xuser
queue.userName: xuser
queue.password: xpassword
queue.port: 3456
db.userName: xuser
db.password: xpassword
falconUrl: http://x.z.com
The statement in the documentation that "each document is parsed separately to a flattened map" is of course only true if each of the documents has a mapping at the top level. That is what Spring Boot expects, but you can just as easily have a scalar or sequence at the top level of a document, and such documents are certainly not parsed by SnakeYAML to a flattened map.
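As a side note, once the file is split into profile-specific documents like this, the dev or qa block is only merged in when that profile is active. Below is a minimal sketch of activating a profile programmatically at startup (the class name is just illustrative; setting the spring.profiles.active property or the SPRING_PROFILES_ACTIVE environment variable works as well):

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class XServiceApplication {

    public static void main(String[] args) {
        // Activate the "dev" profile so the YAML document guarded by
        // "spring.profiles: dev" is included in the final merge.
        new SpringApplicationBuilder(XServiceApplication.class)
                .profiles("dev")
                .run(args);
    }
}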

Mongoid: (Replication) configuration file yml not loaded

development:
  hosts: [[database_1.mongolab.com, 12345], [database_2.mongolab.com, 12345]]
  database: database_name
  username: database_user
  password: database_pass
  persist_in_safe_mode: true
  raise_not_found_error: false
This configuration file (config/mongoid.yml) is loaded using:
Mongoid.load!("config/mongoid.yml")
But I get this error :
Mongo::ConnectionFailure at /
Failed to connect to a master node at localhost:27017
You can create your mongoid.yml and place it anywhere you like. But be sure that during application initialization (e.g. in config/initializers) you do the following:
Mongoid.load!("path/to/your/mongoid.yml")
Update
To use Mongoid master in your project, set this in your Gemfile:
gem "mongoid", :git => "git@github.com:durran/mongoid.git"
You are using the Sinatra configuration scheme while using Mongoid with Rails.
Try this:
development:
  hosts:
    - - database_1.mongolab.com
      - 12345
    - - database_2.mongolab.com
      - 12345
  database: database_name
  username: database_user
  password: database_pass
  persist_in_safe_mode: true
  raise_not_found_error: false
