I'm attempting to copy a file to a VM using the cloudify.nodes.File type, but am running into a permission error that I'm having trouble figuring out.
According to the documentation, I should be able to copy a file by using:
docker_yum_repo:
  type: cloudify.nodes.File
  properties:
    resource_config:
      resource_path: resources/docker.repo
      file_path: /etc/yum.repos.d/docker.repo
      owner: root:root
      mode: 644
The relevant portions of my blueprint are:
vm_0:
  type: cloudify.nodes.aws.ec2.Instances
  properties:
    client_config: *client_config
    agent_config:
      install_method: none
      user: ubuntu
    resource_config:
      kwargs:
        ImageId: { get_attribute: [ ami, aws_resource_id ] }
        InstanceType: t2.micro
        UserData: { get_input: install_script }
        KeyName: automation
  relationships:
    - type: cloudify.relationships.depends_on
      target: ami
    - type: cloudify.relationships.depends_on
      target: nic_0
...
file_0:
  type: cloudify.nodes.File
  properties:
    resource_config:
      resource_path: resources/config/file.conf
      file_path: /home/ubuntu/file.conf
      owner: root:root
      mode: 644
  relationships:
    - type: cloudify.relationships.contained_in
      target: vm_0
But, I keep receiving the error:
2019-02-20 15:36:59.128 CFY <sbin> 'install' workflow execution failed: RuntimeError: Workflow failed: Task failed 'cloudify_files.tasks.create' -> [Errno 1] Operation not permitted: './file.conf'
Execution of workflow install for deployment sbin failed. [error=Traceback (most recent call last):
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/dispatch.py", line 571, in _remote_workflow_child_thread
    workflow_result = self._execute_workflow_function()
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/dispatch.py", line 600, in _execute_workflow_function
    result = self.func(*self.args, **self.kwargs)
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/workflows.py", line 30, in install
    node_instances=set(ctx.node_instances))
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 29, in install_node_instances
    processor.install()
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 102, in install
    graph.execute()
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/workflows/tasks_graph.py", line 237, in execute
    raise self._error
RuntimeError: Workflow failed: Task failed 'cloudify_files.tasks.create' -> [Errno 1] Operation not permitted: './file.conf'
I've tried a few different values for file_path: "/home/ubuntu/file.conf", "/tmp/file.conf", and "./file.conf" (shown in the error output above), but I receive the same permission error each time. I've also tried the cloudify.relationships.depends_on relationship, without success.
I'm using Cloudify Manager 4.5.5 via their Docker image.
Has anyone seen this issue? Am I using the plugin incorrectly? And is this best practice, or should I instead build a VM image that already contains all of the necessary files and spin that up on AWS?
Thanks in advance!
Update
I forgot to mention that if I try to set the owner of the file to ubuntu:ubuntu, I get an error about the user not being found:
2019-02-20 16:19:21.743 CFY <sbin> 'install' workflow execution failed: RuntimeError: Workflow failed: Task failed 'cloudify_files.tasks.create' -> 'getpwnam(): name not found: ubuntu'
Execution of workflow install for deployment sbin failed. [error=Traceback (most recent call last):
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/dispatch.py", line 571, in _remote_workflow_child_thread
    workflow_result = self._execute_workflow_function()
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/dispatch.py", line 600, in _execute_workflow_function
    result = self.func(*self.args, **self.kwargs)
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/workflows.py", line 30, in install
    node_instances=set(ctx.node_instances))
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 29, in install_node_instances
    processor.install()
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 102, in install
    graph.execute()
  File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/workflows/tasks_graph.py", line 237, in execute
    raise self._error
RuntimeError: Workflow failed: Task failed 'cloudify_files.tasks.create' -> 'getpwnam(): name not found: ubuntu'
It looks like the VM isn't yet ready to receive the file (since it's failing in the install lifecycle).
Try "use_sudo: true" in the "resource_config" block. Also add an interfaces block like this:
interfaces:
  cloudify.interfaces.lifecycle:
    create:
      executor: host_agent
    delete:
      executor: host_agent
If you don't override the executor, the operation runs on the Cloudify Manager itself, which is probably why the "ubuntu" user is not found: it doesn't exist on the manager.
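Putting both suggestions together, the file_0 node from the question would look something like this (an untested sketch; the use_sudo flag and the host_agent executors are the only changes from the original blueprint):

```yaml
file_0:
  type: cloudify.nodes.File
  properties:
    resource_config:
      resource_path: resources/config/file.conf
      file_path: /home/ubuntu/file.conf
      owner: root:root
      mode: 644
      use_sudo: true            # needed to chown to root:root on the host
  relationships:
    - type: cloudify.relationships.contained_in
      target: vm_0
  interfaces:
    cloudify.interfaces.lifecycle:
      create:
        executor: host_agent    # run on the VM, not on the manager
      delete:
        executor: host_agent
```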
Related
I'm running into this error in AWS Lambda. It appears that the devtools websocket is not up. Not sure how to fix it. Any ideas? Thanks for your time.
Exception originated from get_ws_endpoint() due to websocket response timeout https://github.com/pyppeteer/pyppeteer/blob/ad3a0a7da221a04425cbf0cc92e50e93883b077b/pyppeteer/launcher.py#L225
Lambda code:
import os
import json
import asyncio
import logging
import boto3
import pyppeteer
from pyppeteer import launch

logger = logging.getLogger()
logger.setLevel(logging.INFO)

pyppeteer.DEBUG = True  # print suppressed errors as error log


def lambda_handler(event, context):
    asyncio.get_event_loop().run_until_complete(main())
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }


async def main():
    browser = await launch({
        'headless': True,
        'args': [
            '--no-sandbox'
        ]
    })
    page = await browser.newPage()
    await page.goto('http://example.com')
    await page.screenshot({'path': '/tmp/example.png'})
    await browser.close()
Exception:
Response:
{
  "errorMessage": "Browser closed unexpectedly:\n",
  "errorType": "BrowserError",
  "stackTrace": [
    "  File \"/var/task/lambda_handler.py\", line 23, in lambda_handler\n    asyncio.get_event_loop().run_until_complete(main())\n",
    "  File \"/var/lang/lib/python3.8/asyncio/base_events.py\", line 616, in run_until_complete\n    return future.result()\n",
    "  File \"/var/task/lambda_handler.py\", line 72, in main\n    browser = await launch({\n",
    "  File \"/opt/python/pyppeteer/launcher.py\", line 307, in launch\n    return await Launcher(options, **kwargs).launch()\n",
    "  File \"/opt/python/pyppeteer/launcher.py\", line 168, in launch\n    self.browserWSEndpoint = get_ws_endpoint(self.url)\n",
    "  File \"/opt/python/pyppeteer/launcher.py\", line 227, in get_ws_endpoint\n    raise BrowserError('Browser closed unexpectedly:\\n')\n"
  ]
}
Request ID:
"06be0620-8b5c-4600-a76e-bc785210244e"
Function Logs:
START RequestId: 06be0620-8b5c-4600-a76e-bc785210244e Version: $LATEST
---- files in /tmp ----
[W:pyppeteer.chromium_downloader] start chromium download.
Download may take a few minutes.
0%| | 0/108773488 [00:00<?, ?it/s]
11%|█▏ | 12267520/108773488 [00:00<00:00, 122665958.31it/s]
27%|██▋ | 29470720/108773488 [00:00<00:00, 134220418.14it/s]
42%|████▏ | 46172160/108773488 [00:00<00:00, 142570388.86it/s]
58%|█████▊ | 62607360/108773488 [00:00<00:00, 148471487.93it/s]
73%|███████▎ | 79626240/108773488 [00:00<00:00, 154371569.93it/s]
88%|████████▊ | 95754240/108773488 [00:00<00:00, 156353972.12it/s]
100%|██████████| 108773488/108773488 [00:00<00:00, 161750092.47it/s]
[W:pyppeteer.chromium_downloader]
chromium download done.
[W:pyppeteer.chromium_downloader] chromium extracted to: /tmp/local-chromium/588429
-----
/tmp/local-chromium/588429/chrome-linux/chrome
[ERROR] BrowserError: Browser closed unexpectedly:
Traceback (most recent call last):
  File "/var/task/lambda_handler.py", line 23, in lambda_handler
    asyncio.get_event_loop().run_until_complete(main())
  File "/var/lang/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/var/task/lambda_handler.py", line 72, in main
    browser = await launch({
  File "/opt/python/pyppeteer/launcher.py", line 307, in launch
    return await Launcher(options, **kwargs).launch()
  File "/opt/python/pyppeteer/launcher.py", line 168, in launch
    self.browserWSEndpoint = get_ws_endpoint(self.url)
  File "/opt/python/pyppeteer/launcher.py", line 227, in get_ws_endpoint
    raise BrowserError('Browser closed unexpectedly:\n')
END RequestId: 06be0620-8b5c-4600-a76e-bc785210244e
REPORT RequestId: 06be0620-8b5c-4600-a76e-bc785210244e Duration: 33370.61 ms Billed Duration: 33400 ms Memory Size: 3008 MB Max Memory Used: 481 MB Init Duration: 445.58 ms
I think BrowserError: Browser closed unexpectedly is just the error you get when Chrome crashes for whatever reason. It would be nice if pyppeteer printed out the error, but it doesn't.
To track things down, it's helpful to pull up the exact command that pyppeteer runs. You can do that this way:
>>> from pyppeteer.launcher import Launcher
>>> ' '.join(Launcher().cmd)
/root/.local/share/pyppeteer/local-chromium/588429/chrome-linux/chrome --disable-background-networking --disable-background-timer-throttling --disable-breakpad --disable-browser-side-navigation --disable-client-side-phishing-detection --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=site-per-process --disable-hang-monitor --disable-popup-blocking --disable-prompt-on-repost --disable-sync --disable-translate --metrics-recording-only --no-first-run --safebrowsing-disable-auto-update --enable-automation --password-store=basic --use-mock-keychain --headless --hide-scrollbars --mute-audio about:blank --no-sandbox --remote-debugging-port=33423 --user-data-dir=/root/.local/share/pyppeteer/.dev_profile/tmp5cj60q6q
When I ran that command in my Docker image, I got the following error:
$ /root/.local/share/pyppeteer/local-chromium/588429/chrome-linux/chrome # ...
/root/.local/share/pyppeteer/local-chromium/588429/chrome-linux/chrome:
error while loading shared libraries:
libnss3.so: cannot open shared object file: No such file or directory
So I installed libnss3:
apt-get install -y libnss3
Then I ran the command again and got a different error:
$ /root/.local/share/pyppeteer/local-chromium/588429/chrome-linux/chrome # ...
[0609/190651.188666:ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.
So I needed to change my launch command to something like:
browser = await launch(headless=True, args=['--no-sandbox'])
and now it works!
Answering my own question.
Finally, I was able to run Pyppeteer (v0.2.2) with Python 3.6 and 3.7 (not 3.8) after I bundled a Chromium binary in a Lambda layer.
In summary, it appears to work only when it's configured with a user-provided Chromium executable path, and not with the automatically downloaded Chrome. Probably a race condition or something similar.
Got Chromium from https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-41/stable-headless-chromium-amazonlinux-2017-03.zip
browser = await launch(
headless=True,
executablePath='/opt/python/headless-chromium',
args=[
'--no-sandbox',
'--single-process',
'--disable-dev-shm-usage',
'--disable-gpu',
'--no-zygote'
])
Issue posted on repo https://github.com/pyppeteer/pyppeteer/issues/108
I have been trying to run pyppeteer in a Docker container and ran into the same issue.
Finally managed to fix it thanks to this comment: https://github.com/miyakogi/pyppeteer/issues/14#issuecomment-348825238
I installed Chrome manually through apt
curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add -
echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list
apt update -y && apt install -y google-chrome-stable
and then specified the path when launching the browser.
You also have to run it headless and with the "--no-sandbox" argument:
browser = await launch(executablePath='/usr/bin/google-chrome-stable', headless=True, args=['--no-sandbox'])
Hope this will help!
If anybody is running on Heroku and facing the same error:
Add the buildpack. The URL for the buildpack is below:
https://github.com/jontewks/puppeteer-heroku-buildpack
Ensure that you're using --no-sandbox mode
launch({ args: ['--no-sandbox'] })
Make sure all the necessary dependencies are installed. You can run ldd /path/to/your/chrome | grep not on a Linux machine to check which dependencies are missing.
In my case, I got this:
libatk-bridge-2.0.so.0 => not found
libgtk-3.so.0 => not found
and then installed the missing dependencies:
sudo apt-get install at-spi2-atk gtk3
and now it works!
I am getting an "Error: Undefined message" in Hue every time I try to upload a file.
I am able to create a subdirectory/folder in HDFS but file uploads are not working.
I tried copying a file to HDFS from the Linux CLI as the hadoop user, and it works.
Hue user is hadoop
HDFS directory owner is hadoop:hadoop
Edit: Adding the error
ERROR Internal Server Error: /filebrowser/upload/file
Traceback (most recent call last):
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/core/handlers/exception.py", line 41, in inner
    response = get_response(request)
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/core/handlers/base.py", line 249, in _legacy_get_response
    response = self._get_response(request)
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/core/handlers/base.py", line 178, in _get_response
    response = middleware_method(request, callback, callback_args, callback_kwargs)
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/middleware/csrf.py", line 300, in process_view
    request_csrf_token = request.POST.get('csrfmiddlewaretoken', '')
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/core/handlers/wsgi.py", line 126, in _get_post
    self._load_post_and_files()
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/http/request.py", line 299, in _load_post_and_files
    self._post, self._files = self.parse_file_upload(self.META, data)
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/http/request.py", line 258, in parse_file_upload
    return parser.parse()
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/http/multipartparser.py", line 269, in parse
    self._close_files()
  File "/usr/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11.20-py2.7.egg/django/http/multipartparser.py", line 316, in _close_files
    handler.file.close()
AttributeError: 'NoneType' object has no attribute 'close'
[12/Apr/2020 22:48:51 -0700] upload DEBUG HDFSfileUploadHandler receive_data_chunk
[12/Apr/2020 22:48:51 -0700] upload ERROR Not using HDFS upload handler:
[12/Apr/2020 22:48:51 -0700] resource ERROR All 1 clients failed: {'http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1': u'500 Server Error: Internal Server Error for url: http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1/user/hadoop/Test-Data?op=CHECKACCESS&fsaction=rw-&user.name=hue&doas=hadoop\n{"RemoteException":{"message":"java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}\n'}
[12/Apr/2020 22:48:51 -0700] resource ERROR Caught exception from http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1: 500 Server Error: Internal Server Error for url: http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1/user/hadoop/Test-Data?op=CHECKACCESS&fsaction=rw-&user.name=hue&doas=hadoop
{"RemoteException":{"message":"java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
(error 500)
As you can see from the error message, no match is found for the query parameter passed when Hue tries to perform the CHECKACCESS operation.
http://IRedactedMyinstanceIdentHere.ap-southeast-1.compute.internal:14000/webhdfs/v1/user/hadoop/Test-Data?op=CHECKACCESS&fsaction=rw-&user.name=hue&doas=hadoop
{"RemoteException":{"message":"java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.CHECKACCESS","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
This operation seems to be missing in some Hadoop versions and is a known bug: "HTTPFS - CHECKACCESS operation missing".
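Conceptually, the server-side failure is just an enum lookup miss: HttpFS maps the op= query parameter onto an Operation enum, and older builds have no CHECKACCESS member. A rough Python analogue (the members listed here are illustrative, not the real HttpFS set):

```python
from enum import Enum

# Simplified stand-in for the server-side HttpFSFileSystem.Operation enum.
# Older HttpFS builds have no CHECKACCESS member, so op=CHECKACCESS cannot
# be mapped to an enum constant and the request fails with error 500.
class Operation(Enum):
    OPEN = "OPEN"
    GETFILESTATUS = "GETFILESTATUS"
    LISTSTATUS = "LISTSTATUS"
    MKDIRS = "MKDIRS"

def parse_op(op_param: str) -> Operation:
    return Operation[op_param]  # raises KeyError for unknown operations

print(parse_op("MKDIRS"))  # directory creation works: the op exists
try:
    parse_op("CHECKACCESS")
except KeyError:
    print("No enum constant Operation.CHECKACCESS")  # uploads fail here
```

This is only an analogy for why creating a subdirectory succeeds while the upload's permission pre-check fails; the real fix is on the Hadoop side.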
I'm running into an issue and I'm not sure of the cause; here is the setup:
Following the Ansible win_service doc
Debian GNU/Linux buster/sid
ansible 2.7.5
config file = /home/ansible/ansibleGalaxy/ansible.cfg
configured module search path = [u'/home/ansible/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 28 2018, 16:27:22) [GCC 8.2.0]
Playbook:
- name: Update Dalet Installer service
  win_service:
    name: DaletInstallerService
    username: .\Administrator
    password: toto
    start_mode: auto
    state: started
Basically, I want to update the credentials of the service on the target machine.
Whatever settings I try, I get this error message:
fatal: [192.168.56.103]: FAILED! => {
    "can_pause_and_continue": false,
    "changed": false,
    "depended_by": [],
    "dependencies": [
        "Afd",
        "Tcpip"
    ],
    "description": "DaletInstallerService",
    "desktop_interact": false,
    "display_name": "DaletInstallerService",
    "exists": true,
    "msg": "Service 'DaletInstallerService (DaletInstallerService)' cannot be started due to the following error: Cannot start service DaletInstallerService on computer '.'.",
    "name": "DaletInstallerService",
    "path": "'C:\\Program Files (x86)\\DALET\\DaletInstaller\\DaletInstallerService.prunsrv.exe' //RS//DaletInstallerService",
    "start_mode": "auto",
    "state": "stopped",
    "username": ".\\Administrator"
}
ERROR! Unexpected Exception, this is probably a bug: 'ascii' codec can't encode character u'\xa0' in position 29: ordinal not in range(128)
the full traceback was:
Traceback (most recent call last):
  File "/usr/bin/ansible-playbook", line 118, in <module>
    exit_code = cli.run()
  File "/usr/lib/python2.7/dist-packages/ansible/cli/playbook.py", line 122, in run
    results = pbex.run()
  File "/usr/lib/python2.7/dist-packages/ansible/executor/playbook_executor.py", line 156, in run
    result = self._tqm.run(play=play)
  File "/usr/lib/python2.7/dist-packages/ansible/executor/task_queue_manager.py", line 291, in run
    play_return = strategy.run(iterator, play_context)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/strategy/linear.py", line 325, in run
    results += self._wait_on_pending_results(iterator)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py", line 712, in _wait_on_pending_results
    results = self._process_pending_results(iterator)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py", line 135, in inner
    dbg.cmdloop()
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py", line 1166, in cmdloop
    cmd.Cmd.cmdloop(self)
  File "/usr/lib/python2.7/cmd.py", line 130, in cmdloop
    line = raw_input(self.prompt)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 29: ordinal not in range(128)
Still looking for a workaround or a fix, but any input to help understand the root cause would be welcome.
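As an aside, the final ERROR! is an ordinary Python 2 Unicode failure: u'\xa0' is a non-breaking space, which the ascii codec cannot encode (it only covers 0x00-0x7f). A non-breaking space has probably crept into the playbook or into input typed at the debugger prompt. A minimal reproduction of the exception (Python 3 syntax; the sample string is made up):

```python
# '\xa0' is NO-BREAK SPACE; it lies outside the ASCII range (0x00-0x7f),
# so encoding it with the ascii codec raises UnicodeEncodeError.
text = "DaletInstaller\xa0Service"

try:
    text.encode("ascii")
    ok = True
except UnicodeEncodeError as exc:
    ok = False
    reason = exc.reason

print(ok)      # False
print(reason)  # ordinal not in range(128)
```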
Matth
I am new to Celery and am following the tutorial on their site, but I got this error.
from celery import Celery

app = Celery('tasks', broker='pyamqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y
and cmd shows an error like this:
 -------------- celery@DESKTOP-O90R45G v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Windows-10-10.0.14393 2016-12-16 20:05:48
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x4591950
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

[tasks]
  . tasks.add
[2016-12-16 20:05:49,029: CRITICAL/MainProcess] Unrecoverable error: TypeError('argument 1 must be an integer, not _subprocess_handle',)
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\celery\worker\worker.py", line 203, in start
    self.blueprint.start(self)
  File "c:\python27\lib\site-packages\celery\bootsteps.py", line 119, in start
    step.start(parent)
  File "c:\python27\lib\site-packages\celery\bootsteps.py", line 370, in start
    return self.obj.start()
  File "c:\python27\lib\site-packages\celery\concurrency\base.py", line 131, in start
    self.on_start()
  File "c:\python27\lib\site-packages\celery\concurrency\prefork.py", line 112, in on_start
    **self.options)
  File "c:\python27\lib\site-packages\billiard\pool.py", line 1008, in __init__
    self._create_worker_process(i)
  File "c:\python27\lib\site-packages\billiard\pool.py", line 1117, in _create_worker_process
    w.start()
  File "c:\python27\lib\site-packages\billiard\process.py", line 122, in start
    self._popen = self._Popen(self)
  File "c:\python27\lib\site-packages\billiard\context.py", line 383, in _Popen
    return Popen(process_obj)
  File "c:\python27\lib\site-packages\billiard\popen_spawn_win32.py", line 64, in __init__
    _winapi.CloseHandle(ht)
TypeError: argument 1 must be an integer, not _subprocess_handle

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\python27\lib\site-packages\billiard\spawn.py", line 159, in spawn_main
    new_handle = steal_handle(parent_pid, pipe_handle)
  File "c:\python27\lib\site-packages\billiard\reduction.py", line 126, in steal_handle
    _winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
WindowsError: [Error 6] The handle is invalid
Celery no longer supports Windows as of version 4.0, as stated in their README:
Celery is a project with minimal funding, so we don't support Microsoft Windows. Please don't open any issues related to that platform.
Unfortunately, this bug seems to be one of the side effects (support for process handles was removed).
Your best bet is to downgrade Celery: uninstall it first, then run:
pip install celery==3.1.18
It looks like your RabbitMQ server is not running. Did you install and start the RabbitMQ server?
Check out the RabbitMQ docs to install it. Start the RabbitMQ server, then start your Celery worker with your app:
celery worker -l info -A tasks
I have a plugin.yml file for a bukkit plugin:
name: SlayCraft
version: 1.0.0
main: src.john01dav.slaycraft.SlayCraft
commands:
  scsetspawn:
    permission: slaycraft.setspawn
    description: Sets the SlayCraft spawn point to where you are standing
    usage: /scsetsapwn <arena/lobby>
  scjoin:
    permission: slaycraft.join
    description: Joins the SlayCraft game
    usage: /scjoin
  scfirework:
    permission: slaycraft.firework
    description: Launches a firework at the player's location
    usage: /scfirework
  scexplosion:
    permission: slaycraft.explosion:
    description: Launches an explosion at the player's location
    usage: /scexplosion
permissions:
  slaycraft.setspawn:
    default: op
  slaycraft.join:
    default: true
  slaycraft.firework:
    default: op
  slaycraft.explosion:
    default: op
This YAML looks perfectly fine to me, and yet there are errors. Any ideas? I have searched for people with similar errors, but none of their fixes seem applicable.
The error is pretty specific:
ERROR:
mapping values are not allowed here
in "<unicode string>", line 19, column 36:
permission: slaycraft.explosion:
^
You have an extra colon on this line:
    permission: slaycraft.firework
    description: Launches a firework at the player's location
    usage: /scfirework
  scexplosion:
    permission: slaycraft.explosion: # <-- This colon is not needed.
    description: Launches an explosion at the player's location
    usage: /scexplosion
Remove it.
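With the stray colon removed, that section of plugin.yml parses cleanly:

```yaml
scexplosion:
  permission: slaycraft.explosion
  description: Launches an explosion at the player's location
  usage: /scexplosion
```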