What's wrong with this YAML file?

I have a plugin.yml file for a Bukkit plugin:
name: SlayCraft
version: 1.0.0
main: src.john01dav.slaycraft.SlayCraft
commands:
  scsetspawn:
    permission: slaycraft.setspawn
    description: Sets the SlayCraft spawn point to where you are standing
    usage: /scsetsapwn <arena/lobby>
  scjoin:
    permission: slaycraft.join
    description: Joins the SlayCraft game
    usage: /scjoin
  scfirework:
    permission: slaycraft.firework
    description: Launches a firework at the player's location
    usage: /scfirework
  scexplosion:
    permission: slaycraft.explosion:
    description: Launches an explosion at the player's location
    usage: /scexplosion
permissions:
  slaycraft.setspawn:
    default: op
  slaycraft.join:
    default: true
  slaycraft.firework:
    default: op
  slaycraft.explosion:
    default: op
This YAML looks perfectly fine to me, and yet there are errors. Any ideas? I have searched for people having similar errors, but none of them seem applicable.

The error is pretty specific:
ERROR:
mapping values are not allowed here
in "<unicode string>", line 19, column 36:
permission: slaycraft.explosion:
^
You have an extra colon on this line:
    permission: slaycraft.firework
    description: Launches a firework at the player's location
    usage: /scfirework
  scexplosion:
    permission: slaycraft.explosion: #<-- This colon is not needed.
    description: Launches an explosion at the player's location
    usage: /scexplosion
Remove it.
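With the stray colon removed, the scexplosion entry parses cleanly:

  scexplosion:
    permission: slaycraft.explosion
    description: Launches an explosion at the player's location
    usage: /scexplosion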

Saltstack user.present does not set uid while creating the user

My goal is to have a user with a given uid. I am trying to create a simple user with this very basic state:
Add Student:
  user.present:
    - name: Student
    - uid: 333123123123
    - allow_uid_change: True
333123123123 is just some dummy value. I'd like something more meaningful later, but this is what I use for testing.
This creates the user perfectly fine, but with generated uid:
ID: Add Student
Function: user.present
Name: Student
Result: True
Comment: New user Student created
Started: 19:47:33.543457
Duration: 203.157 ms
Changes:
    ----------
    account_disabled:
        False
    account_locked:
        False
    active:
        True
    comment:
    description:
    disallow_change_password:
        False
    expiration_date:
        2106-02-07 07:28:15
    expired:
        True
    failed_logon_attempts:
        0
    fullname:
        Student
    gid:
    groups:
    home:
    homedrive:
    last_logon:
        Never
    logonscript:
    name:
        Student
    passwd:
        None
    password_changed:
        2022-02-21 19:47:33
    password_never_expires:
        False
    profile:
        None
    successful_logon_attempts:
        0
    uid:
        S-1-5-21-3207633127-2685365797-3805984769-1043
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
Total run time: 203.157 ms
Now, if I try running state.apply again, I get the following message:
ID: Add Student
Function: user.present
Name: Student
Result: False
Comment: Encountered error checking for needed changes. Additional info follows:
- Changing uid (S-1-5-21-3207633127-2685365797-3805984769-1043 -> 333123123123) not permitted, set allow_uid_change to True to force this change. Note that this will not change file ownership.
Started: 19:47:45.503643
Duration: 7000.025 ms
Changes:
Summary
------------
Succeeded: 0
Failed: 1
------------
Total states run: 1
Total run time: 7.000 s
So the uid IS being considered, checked, and verified, but it is not applied while the user is being created. The syntax is evidently accepted. Why is it not getting applied upon creating the user?
On Windows, the uid Salt reports is the account's SID, which Windows generates itself. It is possible to change a user's SID, but it requires unsupported registry hacking, and creating a new user with a specific SID would be even harder. Salt won't do that.
If you need to know the SID of a Windows user, you have to create it first and then query it. If you need it in a following state in the same run, then you can use slots.
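For example, a minimal sketch (the second state and the target path are made up; user.info is the execution module call that reads the account back, and the slot is resolved on the minion at run time, after the user exists):

Add Student:
  user.present:
    - name: Student

Record Student SID:
  file.managed:
    - name: C:\salt\student_info.txt
    # user.info(Student) returns the account's details, including its
    # uid (the SID). Check the Slots documentation for your Salt
    # version for how to extract a single key from the result.
    - contents: __slot__:salt:user.info(Student)
    - require:
      - user: Add Student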

How do I make sure my Hasura actions are ready to be used by my CI/CD tests?

I have started building up a backend with Hasura. That backend is validated on my CI/CD service with API tests, among other things.
Within my Hasura backend, I have implemented OpenFaaS functions. I am deploying everything on a Kubernetes cluster. Before running the tests, I wait until all jobs and all deployments are done. I am deploying with devspace, which deploys everything through Helm charts. So, at the end of the deployment, I am certain the deployments are all done (I have even checked directly on the k8s cluster). Even the OpenFaaS functions are deployed and ready to use.
Yet, when I run my acceptance tests, I run into issues. If I don't wait long enough, my actions do not work properly. They return strange errors, e.g. that the response returned invalid JSON:
Error: GraphQL error: not a valid json response from webhook
or that the mutation is not in the mutation root:
Error: GraphQL error: field "login" not found in type: 'mutation_root'
However, the OpenFaaS functions themselves log only success. There is no error there: they are called, and they apparently throw no errors.
Waiting 3-5 minutes after the Hasura deployment, or calling the actions until they return something relevant, does work. My current workaround is to wait an additional 5 minutes after my deployments are done and only then run my API tests.
Is that normal? Is there a more efficient way to get feedback on when Hasura is really ready to accept calls to its actions? I am currently working with version 1.2.1.
EDIT
After re-verification, waiting "long enough" does not help. What does help is calling some actions until they return a successful answer. Currently, what I am doing is:
#!/bin/sh
if [ "$#" -lt "3" ]; then
    echo "Usage: $0 <hasura-endpoint> <profile> <auth-app-id> [<timeout-in-sec> <deltat-in-sec>]"
    exit 1
fi

ENDPOINT=$1
PROFILE=$2
AUTH_APP_ID=$3
TIMEOUT=${4:-300}
DELTA_T=${5:-5}

FIXTURES_FILE=./shared/fixtures/${PROFILE}/database/Users/auth.json
username=$(jq -r '.[1].email' "$FIXTURES_FILE")
password=$(jq -r '.[1].password' "$FIXTURES_FILE")
user_id=$(jq -r '.[1].id' "$FIXTURES_FILE")

echo "Trying to login with $username / $password / $AUTH_APP_ID"
for iteration in $(seq 1 "$TIMEOUT"); do
    result=$(gq "$ENDPOINT" -q 'mutation($username: String!, $password: String!, $appId: uuid!) { login(username: $username, password: $password, appId: $appId) { userId }}' -v "username=$username" -v "password=$password" -v "appId=$AUTH_APP_ID" | jq -r '.data.login.userId')
    # POSIX test(1) compares strings with a single '='
    if [ "$result" = "$user_id" ]; then
        exit 0
    fi
    sleep "$DELTA_T"
done
echo "Hasura actions availability timed out" && exit 1
That performs logins with valid credentials until the action returns the right user id instead of an error. The log of this script on my CI/CD looks something like this:
$ ./scripts/login_until_it_works.sh ${API_ENDPOINT}/v1/graphql $PROFILE $AUTH_ADMIN_APP_ID
Trying to login with nathalie.droz@test-vtxnet.ch / yl2YOuSrz_ / [MASKED]
Executing query... error
Error: ApolloError: GraphQL error: not a valid json response from webhook
    at new ApolloError (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:92:26)
    at Object.next (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1297:31)
    at notifySubscription (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:135:18)
    at onNotify (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:179:3)
    at SubscriptionObserver.next (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:235:7)
    at /usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1102:36
    at Set.forEach (<anonymous>)
    at Object.next (/usr/local/lib/node_modules/graphqurl/node_modules/apollo-client/bundle.umd.js:1101:21)
    at notifySubscription (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:135:18)
    at onNotify (/usr/local/lib/node_modules/graphqurl/node_modules/zen-observable/lib/Observable.js:179:3) {
  graphQLErrors: [
    {
      extensions: [Object],
      message: 'not a valid json response from webhook'
    }
  ],
  networkError: null,
  message: 'GraphQL error: not a valid json response from webhook',
  extraInfo: undefined
}
Executing query... done
Notice that the second query, 5 seconds after the first, is successful. My action is defined as follows:
- args:
    enums: []
    input_objects: []
    objects:
      - description: null
        fields:
          - description: null
            name: token
            type: String!
          - description: null
            name: refreshToken
            type: String!
          - description: null
            name: userId
            type: uuid!
        name: LoginResponse
    scalars: []
  type: set_custom_types
- args:
    comment: null
    definition:
      arguments:
        - description: null
          name: username
          type: String!
        - description: null
          name: password
          type: String!
        - description: null
          name: appId
          type: uuid!
      forward_client_headers: false
      handler: http://gateway.openfaas:8080/function/login.{{FUNCTION_NAMESPACE}}
      headers: []
      kind: synchronous
      output_type: LoginResponse
      type: mutation
    name: login
  type: create_action
- args:
    action: login
    definition:
      select:
        filter: {}
    role: incognito
  type: create_action_permission
When you deploy via Helm, it creates the Deployments and everything else you've defined and tells you it's done. That doesn't mean that whatever you deployed is ready to serve requests: each service can have its own boot time, especially services that advertise high availability.
Kubernetes addresses this with liveness/readiness probes. Basically, in your YAML/Helm files you tell K8s what to check before it reports a pod as ready; this could be, for example, a 200 HTTP status code from a /live endpoint in your app.
Check this out: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
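For illustration, a readiness probe on the deployment that serves the login function might look roughly like this. This is a sketch only: the container name, image, port, and path are assumptions, so point the probe at whatever health endpoint your handler actually exposes.

containers:
  - name: login-function            # hypothetical name
    image: registry.example/login   # hypothetical image
    readinessProbe:
      httpGet:
        path: /_/health
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 12

Until the probe succeeds, the pod is not marked Ready, so your test job can block on readiness (for example with kubectl wait --for=condition=ready pod -l app=login) instead of sleeping for a fixed time.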

Puppet 6 and module puppetlabs/accounts does not create user account in Hiera YAML format

When I run puppet agent --test I get no error output, but the user is not created.
My Puppet hiera.yaml configuration is:
---
version: 5
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
users.yaml is:
accounts::user:
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
I am using the puppetlabs/accounts module.
Nothing in Hiera data itself causes anything to be applied to target nodes. Some kind of declaration is required in a manifest somewhere or in the output of an external node classifier script. Moreover, the puppetlabs/accounts module provides only defined types, not classes. You can store defined-type data in Hiera and read it back, but automated parameter binding via Hiera applies only to classes, not defined types.
In short, then, no user is created (and no error is reported) because no relevant resources are declared into the target node's catalog. You haven't given Puppet anything to do.
If you want the user data stored in Hiera to be applied to your nodes, you would want something along these lines:
$user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})
$user_data.each |$user, $props| {
  accounts::user { $user: * => $props }
}
That would go into the node block matched to your target node, or, better, into a class that is declared by that node block or an equivalent. It's fairly complicated for so few lines, but in brief:
- the lookup function looks up key 'accounts::user' in your Hiera data,
    - performing a hash merge of results appearing at different levels of the hierarchy,
    - expecting the result to be a hash with string keys and hash values,
    - and defaulting to an empty hash if no results are found;
- the mappings in the result hash are iterated, and for each one, an instance of the accounts::user defined type is declared,
    - using the (outer) hash key as the user name,
    - and the value associated with that key as a mapping from parameter names to parameter values.
There are a few problems here.
You are missing a line in your hiera.yaml, namely the defaults key. It should be:
---
version: 5
defaults:  ## add this line
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
I detected that using the puppet-syntax gem (included if you use PDK, which is recommended):
▶ bundle exec rake validate
Syntax OK
---> syntax:manifests
---> syntax:templates
---> syntax:hiera:yaml
ERROR: Failed to parse hiera.yaml: (hiera.yaml): mapping values are not allowed in this context at line 3 column 10
Also, in addition to what John mentioned, the simplest class to read in your data would be this:
class test (Hash[String,Hash] $users) {
  create_resources(accounts::user, $users)
}
Or if you want to avoid using create_resources*:
class test (Hash[String,Hash] $users) {
  $users.each |$user, $props| {
    accounts::user { $user: * => $props }
  }
}
Note that I have relied on the Automatic Parameter Lookup feature for that. See the link below.
Then, in your Hiera data, you would have a key named test::users to correspond (class name "test", key name "users"):
---
test::users:  ## Note that this line changed.
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
Use of automatic parameter lookup is generally the more idiomatic way of writing Puppet code compared to calling the lookup function explicitly.
For more info:
PDK
Automatic Parameter Lookup
create_resources
(*Note that create_resources is "controversial". Many in the Puppet community prefer not to use it.)

Cloudify File Plugin "Operation not permitted" error

I'm attempting to copy a file to a VM using the cloudify.nodes.File type, but am running into a permission error that I'm having trouble figuring out.
According to the documentation, I should be able to copy a file by using:
docker_yum_repo:
  type: cloudify.nodes.File
  properties:
    resource_config:
      resource_path: resources/docker.repo
      file_path: /etc/yum.repos.d/docker.repo
      owner: root:root
      mode: 644
The relevant portions of my blueprint are:
vm_0:
  type: cloudify.nodes.aws.ec2.Instances
  properties:
    client_config: *client_config
    agent_config:
      install_method: none
      user: ubuntu
    resource_config:
      kwargs:
        ImageId: { get_attribute: [ ami, aws_resource_id ] }
        InstanceType: t2.micro
        UserData: { get_input: install_script }
        KeyName: automation
  relationships:
    - type: cloudify.relationships.depends_on
      target: ami
    - type: cloudify.relationships.depends_on
      target: nic_0
...
file_0:
  type: cloudify.nodes.File
  properties:
    resource_config:
      resource_path: resources/config/file.conf
      file_path: /home/ubuntu/file.conf
      owner: root:root
      mode: 644
  relationships:
    - type: cloudify.relationships.contained_in
      target: vm_0
But, I keep receiving the error:
2019-02-20 15:36:59.128 CFY <sbin> 'install' workflow execution failed: RuntimeError: Workflow failed: Task failed 'cloudify_files.tasks.create' -> [Errno 1] Operation not permitted: './file.conf'
Execution of workflow install for deployment sbin failed. [error=Traceback (most recent call last):
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/dispatch.py", line 571, in _remote_workflow_child_thread
workflow_result = self._execute_workflow_function()
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/dispatch.py", line 600, in _execute_workflow_function
result = self.func(*self.args, **self.kwargs)
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/workflows.py", line 30, in install
node_instances=set(ctx.node_instances))
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 29, in install_node_instances
processor.install()
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 102, in install
graph.execute()
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/workflows/tasks_graph.py", line 237, in execute
raise self._error
RuntimeError: Workflow failed: Task failed 'cloudify_files.tasks.create' -> [Errno 1] Operation not permitted: './file.conf'
I've tried a few different values for file_path: "/home/ubuntu/file.conf", "/tmp/file.conf", and "./file.conf" (shown in the error output above), but I receive the same permission error each time. I've also tried the cloudify.relationships.depends_on relationship, again without success.
I'm using Cloudify Manager 4.5.5 via their Docker image.
Has anyone seen this issue? Am I using the plugin incorrectly? And is this best practice, or should I instead build a VM image that already has all of the necessary files and spin that up on AWS?
Thanks in advance!
Update
I forgot to mention that if I try to set the owner of the file to ubuntu:ubuntu, I get an error about the user not being found:
2019-02-20 16:19:21.743 CFY <sbin> 'install' workflow execution failed: RuntimeError: Workflow failed: Task failed 'cloudify_files.tasks.create' -> 'getpwnam(): name not found: ubuntu'
Execution of workflow install for deployment sbin failed. [error=Traceback (most recent call last):
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/dispatch.py", line 571, in _remote_workflow_child_thread
workflow_result = self._execute_workflow_function()
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/dispatch.py", line 600, in _execute_workflow_function
result = self.func(*self.args, **self.kwargs)
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/workflows.py", line 30, in install
node_instances=set(ctx.node_instances))
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 29, in install_node_instances
processor.install()
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 102, in install
graph.execute()
File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/workflows/tasks_graph.py", line 237, in execute
raise self._error
RuntimeError: Workflow failed: Task failed 'cloudify_files.tasks.create' -> 'getpwnam(): name not found: ubuntu'
It looks like the VM isn't yet ready to receive the file (since it's failing in the install lifecycle).
Try "use_sudo: true" in the "resource_config" block. Also add an interfaces block like this:
interfaces:
  cloudify.interfaces.lifecycle:
    create:
      executor: host_agent
    delete:
      executor: host_agent
If you don't override the executor, the operation runs on the manager, which is probably why the "ubuntu" user is not found: that user exists on your VM, not on the manager.
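Putting both changes together, the file_0 node from the question would look roughly like this (same values as before, with use_sudo and the executor overrides added):

file_0:
  type: cloudify.nodes.File
  properties:
    resource_config:
      resource_path: resources/config/file.conf
      file_path: /home/ubuntu/file.conf
      owner: root:root   # with host_agent, ubuntu:ubuntu should resolve too
      mode: 644
      use_sudo: true
  interfaces:
    cloudify.interfaces.lifecycle:
      create:
        executor: host_agent
      delete:
        executor: host_agent
  relationships:
    - type: cloudify.relationships.contained_in
      target: vm_0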

Behat [ERROR 1871] This element is not expected

When I am running behat in my project, I get the following output:
[mattias:~/projects/resecond]$ behat (staging✱)
Feature: User
  In order to personalize the app
  I want to give users the ability to create an account

  Scenario: Finding a specific user # app/tests/acceptance/User.feature:6
    Given I request "users/dd40ee60-98d3-11e4-a625-07e5e99a99e6"
    Then I get a "200" response
[Symfony\Component\Translation\Exception\InvalidResourceException]
[ERROR 1871] Element '{urn:oasis:names:tc:xliff:document:1.2}target': This element is not expected. Expected is one of ( {urn:oasis:names:tc:xliff:document:1.2}context-group, {urn:oasis:names:tc:xliff:document:1.2}count-group, {urn:oasis:names:tc:xliff:document:1.2}note, {urn:oasis:names:tc:xliff:document:1.2}alt-trans, ##other{urn:oasis:names:tc:xliff:document:1.2}* ). (in /Users/mattiassiofjellvang/Projects/resecond/ - line 40, column 0)
This is my behat.yml file in the root of my Laravel project:
default:
  paths:
    features: app/tests/acceptance
  extensions:
    Behat\MinkExtension\Extension:
      goutte: ~
      base_url: http://my.app/api
This is my composer.json:
"behat/behat": "2.5.*",
"behat/mink": "1.5.*",
"behat/mink-extension": "*",
"behat/mink-goutte-driver": "*"
What's wrong?
