Parse Dashboard configuration error "Your config file contains invalid JSON. Exiting."

I am trying to install Parse Dashboard on AWS. The public directory works but the /apps directory is blank.
When looking at the logs, I see:
> parse-dashboard#1.0.14 start /var/app/current
> node ./Parse-Dashboard/index.js
Your config file contains invalid JSON. Exiting.
I am deploying parse-dashboard from GitHub, and I have entered values in parse-dashboard-config.json that match the keys on parse.com.
This is the JSON that I am using:
{
  "apps": [
    {
      "serverURL": "xxxxxxxxxxxxxxxxxxxxxxx/parse",
      "appId": "xxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "masterKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "appName": "xxxxxxxxxxxxxx"
    }
  ],
  "iconsFolder": "icons"
}
In index.js the message is generated around line 129 by:
if (error instanceof SyntaxError) {
  console.log('Your config file contains invalid JSON. Exiting.');
  process.exit(1);
}
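One quick way to check whether the file itself parses (the path below assumes the default repository layout; adjust it to your setup):
$ node -e "JSON.parse(require('fs').readFileSync('./Parse-Dashboard/parse-dashboard-config.json', 'utf8')); console.log('config JSON is valid')"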

For anyone who's looking for an answer, the parse-dashboard-config.json file should be in your current folder, i.e. the one from which you're running
parse-dashboard --config parse-dashboard-config.json
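A minimal sketch, assuming the config file sits in the folder you launch from (the checkout path is hypothetical):
$ cd ~/parse-dashboard                                    # folder containing parse-dashboard-config.json
$ parse-dashboard --config parse-dashboard-config.json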

Related

pa11y-ci: How to specify filename for default html report?

I have Pa11y working well with the JSON output, but I wanted to provide an HTML report for clients. We do manual WCAG checks right now and are migrating to a more automated approach. I've installed the Pa11y HTML reporter, and it seems to dump the output into the CI log. It's a better output, but how do I specify the filename for the HTML file, such as myreport.html?
Here is my config file:
{
  "defaults": {
    "reporters": [
      "pa11y-reporter-html"
    ],
    "timeout": 10000,
    "standard": "WCAG2A",
    "viewport": {
      "width": 800,
      "height": 600
    }
  }
}
Here is how I'm calling it from the command line:
pa11y-ci --sitemap https://oursite.co.uk/sitemaps/sitemap.xml
Also, how do I add the sitemap to the config file? When I add it in like this it seems to ignore it:
"sitemap:":"https://oursite.co.uk/sitemaps/sitemap.xml",
The solution was to use:
pa11y-ci-reporter-html -s results.json
The JSON is the separate .json output generated by pa11y-ci.
The reporter is installed globally using:
> npm install -g pa11y-ci-reporter-html
So you have to keep pa11y-ci set to JSON output for this to work.
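A minimal sketch of the two-step flow, assuming pa11y-ci's --json flag is used to capture the results (results.json is just an example name):
$ pa11y-ci --sitemap https://oursite.co.uk/sitemaps/sitemap.xml --json > results.json
$ pa11y-ci-reporter-html -s results.json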

Terraform custom provider GPG issue

Hope you're doing well.
I'm writing an API in Go that works similarly to the Terraform provider protocol.
I already have two endpoints working on my local machine over HTTPS:
https://myapi:9000/v1/provider/:namespace/:type/versions
https://myapi:9000/v1/provider/:namespace/:type/:version/download/:os/:arch
For example, the full endpoints would be:
https://myapi:9000/v1/provider/myprovider/custom/versions
https://myapi:9000/v1/provider/myprovider/custom/0.1.0/download/linux/amd64
So I have the following .tf.json file:
{
  "module": {
    "linux": {
      "source": "myapi:9000/v1/module/mymodule/custom",
      "version": "0.1.2"
    }
  }
}
That uses these two files:
provider.tf.json
provider "mycustomprovider" {
username = "abc"
password = "def"
host = "yjk"
}
versions.tf.json
terraform {
  required_providers {
    mycustomprovider = {
      source  = "myapi:9000/v1/myprovider/custom"
      version = "0.1.0"
    }
  }
  required_version = ">=1.0.2"
}
Then I simply run terraform init to get my assets.
Fetching my custom module works fine.
Output (module download):
Initializing modules...
Downloading myapi:9000/mymodule/custom/gnu 0.1.2 for linux...
- linux in .terraform/modules/linux
Initializing the backend...
But when fetching my provider I get this error:
Initializing provider plugins...
- Finding myapi:9000:9000/myprovider/custom versions matching "0.1.0"...
- Installing myapi:9000:9000/myprovider/custom v0.1.0...
╷
│ Error: Failed to install provider
│
│ Error while installing myapi:9000/myprovider/custom v0.1.0: error checking signature:
│ openpgp: invalid data: tag byte does not have MSB set
So my provider versions endpoint is working; that's why Terraform is able to recognize the version of my provider.
The problem should be in my download endpoint.
Before talking about this endpoint I want to add some context: I'm running an S3 client using LocalStack and exposing it through ngrok. Both of these work, and I am able to upload and download files without problems.
Terraform custom providers should have three files (as I understand it):
the provider in zip format (like in their example)
a provider_SHA256SUMS file with the shasums of each provider zip file (in this case I have only one)
a provider_SHA256SUMS.sig that is used to verify the integrity of the provider_SHA256SUMS file.
So to get these files I'm running these commands:
provider_SHA256SUMS
$ sha256sum 0.1.0.zip > 0.1.0_SHA256SUMS
provider_SHA256SUMS.sig
$ gpg --gen-key # generating a new key
$ gpg --armor --output 0.1.0_SHA256SUMS.sig --detach-sig 0.1.0_SHA256SUMS
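Optionally, a quick local sanity check (a sketch using standard gpg and coreutils commands) to confirm the checksum file and signature line up before serving them:
$ sha256sum -c 0.1.0_SHA256SUMS                           # verifies 0.1.0.zip against the checksum file
$ gpg --verify 0.1.0_SHA256SUMS.sig 0.1.0_SHA256SUMS      # verifies the detached signature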
The response from my endpoint is something like this (the GPG info below is just sample data, so nothing sensitive is exposed here):
{
"protocols": [
"5.0"
],
"os": "linux",
"arch": "amd64",
"filename": "0.1.0.zip",
"download_url": "https://d4f6-186-84-89-138.ngrok.io/terraform/v1/providers/myprovider/custom/0.1.0.zip",
"shasums_url": "https://d4f6-186-84-89-138.ngrok.io/terraform/v1/providers/myprovider/custom/0.1.0_SHA256SUMS",
"shasums_signature_url": "https://d4f6-186-84-89-138.ngrok.io/terraform/v1/providers/myprovider/custom/0.1.0_SHA256SUMS.sig",
"shasum": "1dd61b508aad0d65b32c71159775e409fd618adc5ba945cc2eebb42f29e085d3",
"signing_keys": {
"gpg_public_keys": [
{
"key_id": "9F21EA3C1C9F793C",
"ascii_armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQGNBGEuvh8BDADyT3JLsSqnSLk0I2MrFPJgCvCYpPFsFJgfDx2EwL7TGGDeslaN\ndoOq05X+9vKyM6qQ1jQmpfS5dzsQIsHtUlsU0nphS21ZvYm10aZUt3dXxlMwu2Is\nSO+q8O4WSMXclIsBUyhzP6TMQ7nISXHundVx7b/S/bEYucOIMeYmqg1PKId55U4G\n7y/8W6mmzX6NvF97fRyN37fBqvx8q2SxT5iB3C2Sbfd7i/sMvjC0tQOBv1EKh3RN\ncElP5NlJbv58Ysz+UTU21EPkkvPH4pLuUcB9/0uwzi5y/268EWTy3+UlWnoh12ds\nESZFgijzUsdvOmCOZkdd5X1Radzr+6VKVXHHIprKgO5AlvjFoLQK0NzzMiXjhyUF\nk9plo7kET4dy9ztySJYutx5eNMJInF5mYKNdH3H36ThXAIPptAu8WJjCtYok78C5\nilpv7cTiM9F5g7SlxnKU+xFmbzhSnYxth9DEhrO9ufliT1Df76iuqpc8B79sUtUH\nWvf7QIgkL6HtL5MAEQEAAbQYbXlhcGkgPG15YXBpQGN1c3RvbS5jb20+iQHUBBMB\nCgA+FiEEbk3GwdLc9Ypn5Wg9nyHqPByfeTwFAmEuvh8CGwMFCQPCZwAFCwkIBwIG\nFQoJCAsCBBYCAwECHgECF4AACgkQnyHqPByfeTzfQgwAyqcGJFbU2zN45F/2ECBs\nE6vYbfk9qRXpvU6PWodE0t5sqcxY2Oz0r29OGaW5mDyZRE+zRGir4yQki3RqI6vY\nh66uTWMybUV6qipv3qXHIqbSn3H/ss4Tuf9C2//Pz/LpMKiMiJilpXyCy8F8l504\nEsm+PU3CtNioNZCkoeH6kJWkjXDGQWQK58R4SFRfHcJMa03+gyPgv5Ba593/zGqh\nl2GmmwbAJHcnSH1EBAulcd48nQCMOYuvIqa40CDOhcz+rIlqivvP6KVX+qmRVmaV\nY1u391a40wfaRomuk46JCKFQVeElAZ4tac8UaOv2x0GOBzIw7/1CwulN2VvojiEJ\nVj0Q6sZ/K7+dU4H7NLQ1aIN+Vv3t7VIISu3wzraCT5c1aduH4YLio83W9rS4EcRj\nHmej5JG4B16HOMMrM1caq+cVPyymCzblEShplCdmQ7qcYOqvRYW8ewVPNqWJSTiR\nC/Kpq1N8OOKdG0Th8ja4jfRkfexloCdUOlSOKktK8uU+uQGNBGEuvh8BDADOMUX3\nxatbgt4sArBKNlnZWrZZCRFHxzeGaZsY9EsNY6D722iGoU40iYs6ky08bOQT/g8O\nFSooA6DKNhxVCM/r99rsiYrNIzT9s/ywKmUb6JipgAiGpd5W9lBAB/u6pQ039ni2\nQI+5cYZ+8i6v2b6oOGdnym8p2K14O+keAh7Z6aOnpb8YIq3B7khtcO+oHvp820sB\nOa0hlMs39qQHkG70ybAe0HdcZAhXVVSmrN6EdDZ8SZmRSAVbiv84a9t5b8swNhxZ\nT3csxqdAWbz3GOCIUmmaJUYOdGYLwAc2BnsRyxzNq66H962uK9hygrDrjJSpNnxU\n/VdXcRxYZcLqUHGPMds/gqwr/30JmXlkPqbG/3v4D+wy5OFsr+uquK4helqhJdQ9\nfOWrMyUxShZhZ476YURn1VbaF4a5x5zi2OBSxYK9VjSfAedisMtvsRIxOMgU0eXT\nNkRaTBoQTX2ZiVjy0fwVeHgNIuPsszQRokRZ2zFttC+tU5x/ffayBU3qZhUAEQEA\nAYkBvAQYAQoAJhYhBG5NxsHS3PWKZ+VoPZ8h6jwcn3k8BQJhLr4fAhsMBQkDwmcA\nAAoJEJ8h6jwcn3k8QU4MAJetwC4o/F9m0tJKO6DYqX5bsnGlp1u3oyG0ATvSvT9E\nBTxbQlpcIOrJ16Be/92SmfVaGbbqWywqjkNgK7s08Zbbk7WONZyAg8NR5/b5Cgi9\ncJrR73dbDnijvhjDkAAn414+M57DG65tPt1vlXDqa8LSQobDdszn1i/ugvqxqj1y\n6NmFvVPxor67n9r67Iq4PzWF3WK7tosPUaTbFczbS2xS4sINPCEddb2Ima5cixL0\nh2pni/jonYo4RCWmUvpMx48CevgXFCzWOGdaOSI75MklcaH4IBe2EFaCbN3IUMlA\nHI2TOuR0KXsX0R3jzmDzVJkXaXWMqPjcFlxvXuMTE4ooI6DiBN7+2xAqfYOURmy1\nwjfWwCVR3OaPY2cGvxWPnIz2mtKjhRIaYwfzDVdR5vlSU/YwkJUv11P6Y8YPX7jw\nRYVFtTkd7qghjvWMBMpABTxYWxvd74EgUUnYOfoei97nKOnb3loj+XdoZeGCmyL7\nCZWzoNTMkeFkob1UkxIe+Q==\n=wZDI\n-----END PGP PUBLIC KEY BLOCK-----"
}
]
}
}
The shasum prop in the response should be the shasum corresponding to the provider zip file; it can also be found inside the provider_SHA256SUMS file.
To get the key_id and ascii_armor props I'm running these commands:
$ gpg --list-secret-keys --keyid-format=long ## key_id
$ gpg --armor --export <MY_KEYID> > public.gpg ## export the public key, ASCII-armored
$ cat public.gpg | sed -E ':a;N;$!ba;s/\r{0,1}\n/\\n/g' ## one-lined ascii_armor
Short Question
What am I doing wrong with the GPG key to get this error? Am I missing steps, or doing everything wrong?
openpgp: invalid data: tag byte does not have MSB set
----Update----
This is how I'm building the response in Go.
I'm not including the full code because I believe the problem is with the KeyID and ASCIIArmor attributes.
As you can see, Shasum, KeyID and ASCIIArmor are hardcoded.
response := ProviderDownloadResponse{
Protocols: protocols,
Os: os,
Arch: arch,
Filename: filename,
DownloadURL: downloadURL,
ShasumsURL: SHASUMsURL,
ShasumsSignatureURL: SHASUMSSignatureURL,
Shasum: "1dd61b508aad0d65b32c71159775e409fd618adc5ba945cc2eebb42f29e085d3",
SigningKeys: SigningKeys{
GpgPublicKeys: []GPGPublicKey{
{
KeyID: "9F21EA3C1C9F793C",
ASCIIArmor: "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQGNBGEuvh8BDADyT3JLsSqnSLk0I2MrFPJgCvCYpPFsFJgfDx2EwL7TGGDeslaN\ndoOq05X+9vKyM6qQ1jQmpfS5dzsQIsHtUlsU0nphS21ZvYm10aZUt3dXxlMwu2Is\nSO+q8O4WSMXclIsBUyhzP6TMQ7nISXHundVx7b/S/bEYucOIMeYmqg1PKId55U4G\n7y/8W6mmzX6NvF97fRyN37fBqvx8q2SxT5iB3C2Sbfd7i/sMvjC0tQOBv1EKh3RN\ncElP5NlJbv58Ysz+UTU21EPkkvPH4pLuUcB9/0uwzi5y/268EWTy3+UlWnoh12ds\nESZFgijzUsdvOmCOZkdd5X1Radzr+6VKVXHHIprKgO5AlvjFoLQK0NzzMiXjhyUF\nk9plo7kET4dy9ztySJYutx5eNMJInF5mYKNdH3H36ThXAIPptAu8WJjCtYok78C5\nilpv7cTiM9F5g7SlxnKU+xFmbzhSnYxth9DEhrO9ufliT1Df76iuqpc8B79sUtUH\nWvf7QIgkL6HtL5MAEQEAAbQYbXlhcGkgPG15YXBpQGN1c3RvbS5jb20+iQHUBBMB\nCgA+FiEEbk3GwdLc9Ypn5Wg9nyHqPByfeTwFAmEuvh8CGwMFCQPCZwAFCwkIBwIG\nFQoJCAsCBBYCAwECHgECF4AACgkQnyHqPByfeTzfQgwAyqcGJFbU2zN45F/2ECBs\nE6vYbfk9qRXpvU6PWodE0t5sqcxY2Oz0r29OGaW5mDyZRE+zRGir4yQki3RqI6vY\nh66uTWMybUV6qipv3qXHIqbSn3H/ss4Tuf9C2//Pz/LpMKiMiJilpXyCy8F8l504\nEsm+PU3CtNioNZCkoeH6kJWkjXDGQWQK58R4SFRfHcJMa03+gyPgv5Ba593/zGqh\nl2GmmwbAJHcnSH1EBAulcd48nQCMOYuvIqa40CDOhcz+rIlqivvP6KVX+qmRVmaV\nY1u391a40wfaRomuk46JCKFQVeElAZ4tac8UaOv2x0GOBzIw7/1CwulN2VvojiEJ\nVj0Q6sZ/K7+dU4H7NLQ1aIN+Vv3t7VIISu3wzraCT5c1aduH4YLio83W9rS4EcRj\nHmej5JG4B16HOMMrM1caq+cVPyymCzblEShplCdmQ7qcYOqvRYW8ewVPNqWJSTiR\nC/Kpq1N8OOKdG0Th8ja4jfRkfexloCdUOlSOKktK8uU+uQGNBGEuvh8BDADOMUX3\nxatbgt4sArBKNlnZWrZZCRFHxzeGaZsY9EsNY6D722iGoU40iYs6ky08bOQT/g8O\nFSooA6DKNhxVCM/r99rsiYrNIzT9s/ywKmUb6JipgAiGpd5W9lBAB/u6pQ039ni2\nQI+5cYZ+8i6v2b6oOGdnym8p2K14O+keAh7Z6aOnpb8YIq3B7khtcO+oHvp820sB\nOa0hlMs39qQHkG70ybAe0HdcZAhXVVSmrN6EdDZ8SZmRSAVbiv84a9t5b8swNhxZ\nT3csxqdAWbz3GOCIUmmaJUYOdGYLwAc2BnsRyxzNq66H962uK9hygrDrjJSpNnxU\n/VdXcRxYZcLqUHGPMds/gqwr/30JmXlkPqbG/3v4D+wy5OFsr+uquK4helqhJdQ9\nfOWrMyUxShZhZ476YURn1VbaF4a5x5zi2OBSxYK9VjSfAedisMtvsRIxOMgU0eXT\nNkRaTBoQTX2ZiVjy0fwVeHgNIuPsszQRokRZ2zFttC+tU5x/ffayBU3qZhUAEQEA\nAYkBvAQYAQoAJhYhBG5NxsHS3PWKZ+VoPZ8h6jwcn3k8BQJhLr4fAhsMBQkDwmcA\nAAoJEJ8h6jwcn3k8QU4MAJetwC4o/F9m0tJKO6DYqX5bsnGlp1u3oyG0ATvSvT9E\nBTxbQlpcIOrJ16Be/92SmfVaGbbqWywqjkNgK7s08Zbbk7WONZyAg8NR5/b5Cgi9\ncJrR73dbDnijvhjDkAAn414+M57DG65tPt1vlXDqa8LSQobDdszn1i/ugvqxqj1y\n6NmFvVPxor67n9r67Iq4PzWF3WK7tosPUaTbFczbS2xS4sINPCEddb2Ima5cixL0\nh2pni/jonYo4RCWmUvpMx48CevgXFCzWOGdaOSI75MklcaH4IBe2EFaCbN3IUMlA\nHI2TOuR0KXsX0R3jzmDzVJkXaXWMqPjcFlxvXuMTE4ooI6DiBN7+2xAqfYOURmy1\nwjfWwCVR3OaPY2cGvxWPnIz2mtKjhRIaYwfzDVdR5vlSU/YwkJUv11P6Y8YPX7jw\nRYVFtTkd7qghjvWMBMpABTxYWxvd74EgUUnYOfoei97nKOnb3loj+XdoZeGCmyL7\nCZWzoNTMkeFkob1UkxIe+Q==\n=wZDI\n-----END PGP PUBLIC KEY BLOCK-----",
},
},
},
}
x/crypto/openpgp, which is used by Terraform, does not support reading armored signatures (see the linked issue); this is where the error comes from.
The shasums_signature_url field documentation mentions:
binary, detached GPG signature
Also, see the Manually preparing a release doc, which requires
a valid GPG binary (not ASCII armored) signature
So you should try signing without the --armor flag.
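For example, re-running the signing step from above without --armor should produce a binary detached signature that x/crypto/openpgp can read (same filenames as in the question):
$ gpg --output 0.1.0_SHA256SUMS.sig --detach-sig 0.1.0_SHA256SUMS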

Issues in JSON to XML and Upload to FTP in Ballerina Integrator

I am trying the samples given in the Ballerina Integrator tutorials. While running the JSON to XML and upload to FTP sample, I'm facing this issue:
error org.wso2.ei.b7a.ftp.core.util.BallerinaFTPException
I know the reason for this issue but don't know where I have to put that setting. Please help me sort out the issue.
The reason for the issue is: the FTP credentials are given in the conf file. I put the conf file under the root directory, but it is not picked up. I need to give
b7a.config.file=src/upload_to_ftp/resources/ballerina.conf
but I don't know where I have to give this.
Thanks in advance.
You can add -b7a.config.file when running the generated jar file.
Official documentation:
https://ei.docs.wso2.com/en/latest/ballerina-integrator/develop/running-on-jvm/
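For example (a sketch, assuming the build produced target/bin/upload_to_ftp.jar and using the double-dash form from the Ballerina docs; adjust the paths to your project):
$ java -jar target/bin/upload_to_ftp.jar --b7a.config.file=src/upload_to_ftp/resources/ballerina.conf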
However, keeping the ballerina.conf file in the root directory should work; Ballerina looks for the conf file there automatically when running. Make sure the conf file is outside the src directory.
For the error that you have mentioned, could you add logs to see if the JSON has been converted to XML properly? Since the code is structured in a way that checks whether the conversion has occurred, it should print an XML value:
if (employee is xml) {
    var ftpResult = ftp->put(remoteLocation, employee);
    if (ftpResult is error) {
        log:printError("Error", ftpResult);
        response.setJsonPayload({Message: "Error occurred uploading file to FTP.", Reason: ftpResult.reason()});
    } else {
        response.setJsonPayload({Message: "Employee records uploaded successfully."});
    }
} else {
    response.setJsonPayload({Message: "Error occurred transforming json to xml.", Reason: employee.reason()});
}
The if (employee is xml) part checks whether the conversion is successful.
The same applies after the file is sent to the server: if the file hasn't been sent, then ftpResult will be an error. Basically, if you got the message { Message : "Employee records uploaded successfully" } then all the checks have passed.
I passed the credentials directly to ftpConfig and then it works fine. The conversion happened and the converted file was uploaded to the FTP location successfully:
ftp:ClientEndpointConfig ftpConfig = {
    protocol: ftp:SFTP,
    host: "corpsftp.dfaDFDA.com",
    port: 22,
    secureSocket: {
        basicAuth: {
            username: "DDFDS",
            password: "FADFHYFGJ"
        }
    }
};
Output
{
  "Message": "Employee records uploaded successfully."
}

new composer-wallet - jszip error

I am making a new composer-wallet with Composer 0.19.0.
All tests pass fine (the tests are based on composer-wallet-filesystem).
I can successfully import business network cards to the new wallet and use them for transactions.
I have only one issue:
$ composer card list
Error: Can't find end of central directory : is this a zip file ? If it is, see http://stuk.github.io/jszip/documentation/howto/read_zip.html
Command failed
I tried updating jszip to the latest version in composer-cli, but the problem remains.
Here is the environment variable to configure the connection
export NODE_CONFIG='{
  "composer": {
    "wallet": {
      "type": "composer-wallet-mongodb",
      "desc": "Uses a local mongodb instance",
      "options": {
        "uri": "mongodb://localhost:27017/yourCollection",
        "collectionName": "myWallet",
        "options": {
        }
      }
    }
  }
}'
Any help is welcome.

How to use a secure Docker registry (by CA) for a Mesos container?

In DC/OS, I want to deploy a Mesos container with a self-defined image that is stored in a local secure Docker registry, which is secured by a CA (not a username and password!).
The JSON is:
{
  "id": "/gpu-tflinker",
  "cmd": "while [ true ] ; do nvidia-smi; sleep 5; done",
  "cpus": 0.1,
  "mem": 1024,
  "gpus": 1,
  "instances": 1,
  "constraints": [
    [
      "hostname",
      "CLUSTER",
      "10.140.0.22"
    ]
  ],
  "container": {
    "type": "MESOS",
    "docker": {
      "image": "tflinker:test-gpu",
      "credential": null
    }
  }
}
The above JSON fails to run on Marathon, and there is no content in Mesos's stderr and stdout files. In the mesos-agent log, the error message is:
E0721 05:01:57.726367 22498 slave.cpp:3976] Container 'e2c68720-0fb7-41bc-9d3b-a2b5e4793816' for executor 'gpu-tflinker.b6f96725-6dd1-11e7-ba5d-0242b2c758c0' of framework 1079aaea-6dde-4dc1-8990-d926a895de78-0000 failed to start: Unexpected HTTP response '401 Unauthorized' when trying to get the manifest
W0721 05:01:57.726478 22497 composing.cpp:541] Container 'e2c68720-0fb7-41bc-9d3b-a2b5e4793816' is already destroyed
I0721 05:01:57.726583 22497 slave.cpp:4082] Executor 'gpu-tflinker.b6f96725-6dd1-11e7-ba5d-0242b2c758c0' of framework 1079aaea-6dde-4dc1-8990-d926a895de78-0000 has terminated with unknown status
I0721 05:01:57.726603 22497 slave.cpp:4193] Cleaning up executor 'gpu-tflinker.b6f96725-6dd1-11e7-ba5d-0242b2c758c0' of framework 1079aaea-6dde-4dc1-8990-d926a895de78-0000
I0721 05:01:57.726794 22497 slave.cpp:4281] Cleaning up framework 1079aaea-6dde-4dc1-8990-d926a895de78-0000
So it seems Mesos failed to fetch the Docker image. I've configured the CA files for dockerd (moved the CA files to /etc/docker/certs.d/), so I can docker pull the image to the local machine, but I am not sure how to configure the CA files for Mesos.
In the mesos-agent configuration there is an option --docker_config=VALUE, but it seems this option can only be used for username/password-secured registries; I don't know how to configure it for a CA-secured registry.
Can anybody help me out? Thanks!
I think the CA file is just for encryption; I think you will still need a username and password even with the CA approach.
In my approach, I put an auth file into the container to authorize against my private registry.
I wrote a web service for downloading the auth file
xxx.tar.gz (format: .docker/config.json inside the tar.gz)
In the config.json, put {"auths": {"test.com:6999": {"auth": "(username:password) [base64 encoded]"}}}, like {"auths": {"test.com:6999": {"auth": "Y2NjOjEyMw=="}}}
Use Mesos uris to download the prepared auth file into the container; then it will be authorized:
"uris": [
"http:your download url"
]
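For reference, the auth archive can be prepared with standard tools (a sketch; test.com:6999 and ccc:123 are the sample registry and credentials from above):
$ echo -n 'ccc:123' | base64                      # -> Y2NjOjEyMw==
$ mkdir -p .docker
$ echo '{"auths": {"test.com:6999": {"auth": "Y2NjOjEyMw=="}}}' > .docker/config.json
$ tar -czf xxx.tar.gz .docker/config.json
The Mesos fetcher extracts the archive into the task sandbox, so .docker/config.json ends up alongside the task.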
