I am trying to connect to a Kafka topic that requires SSL. Can anyone suggest an approach to achieve this?
I have a ca.crt, a ca.p12, and a password provided by the Kafka cluster administrator.
I can connect just fine using a test utility that leverages keytool and a keystore, but those don't work for Go.
The code to connect to the cluster is as follows:
func (p *KafkaPublisher) Initialise() {
    configFile := "./config/kafka.properties"
    fmt.Printf("Reading config file from: %s\n", configFile)
    conf := ReadConfig(configFile)

    var err error
    kafkaProducer, err = kafka.NewProducer(&conf)
    // rest of code commented out.
}
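(ReadConfig isn't shown here; in the confluent-kafka-go getting-started examples it is a small helper that parses a Java-style properties file into a kafka.ConfigMap. A minimal sketch under that assumption:)

import (
    "bufio"
    "log"
    "os"
    "strings"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

// ReadConfig parses a Java-style .properties file into a kafka.ConfigMap,
// skipping blank lines and comments. A rough sketch of the helper used in
// the confluent-kafka-go getting-started examples.
func ReadConfig(configFile string) kafka.ConfigMap {
    m := make(kafka.ConfigMap)

    file, err := os.Open(configFile)
    if err != nil {
        log.Fatalf("failed to open %s: %v", configFile, err)
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        line := strings.TrimSpace(scanner.Text())
        if line == "" || strings.HasPrefix(line, "#") {
            continue
        }
        if key, value, ok := strings.Cut(line, "="); ok {
            m[strings.TrimSpace(key)] = strings.TrimSpace(value)
        }
    }
    return m
}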
And this works just fine for SASL using the following config file:
bootstrap.servers=markets-cb--vog--pu-taejvaba.bf2.kafka.rhcloud.com:443
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
# Service account user name
sasl.username=700e9c83-b4be-4f23-8697-b6cfa5921354
sasl.password=ab955a0d-e78d-4d96-bd8e-35de3b6b83e5
# Best practice for Kafka producer to prevent data loss
acks=all
I reviewed the following documentation to try and find the right properties:
https://docs.confluent.io/platform/current/clients/librdkafka/html/md_CONFIGURATION.html#autotoc_md91
but have been unsuccessful. Here is the config I tried:
bootstrap.servers=my-cluster-kafka-listener1-bootstrap-climate.violet-cluster-new-2761a99850dd8c23002367ac6ce7f9ad-0000.au-syd.containers.appdomain.cloud:443
security.protocol=SSL
ssl.key.password=ebuSuzkDbfFK
ssl.certificate.location="ca.pem"
# Best practice for Kafka producer to prevent data loss
acks=all
I have also tried putting the PEM file inline as a single quote-delimited line using the property ssl.certificate.pem. This produced the most encouraging result, with an error:
Failed to create producer: ssl.certificate.pem failed: not in PEM format?: error:0909006C:PEM routines:get_name:no start line: Expecting: CERTIF
But it is in PEM format...
Here is the certificate. (Don't worry, there is no security risk in me pasting it here: the cert has since been removed from the server, and I changed the URL and password above.)
-----BEGIN CERTIFICATE-----
MIIFLTCCAxWgAwIBAgIUBT4au51IElFmVL4RenvrRsFpwiwwDQYJKoZIhvcNAQEN
BQAwLTETMBEGA1UECgwKaW8uc3RyaW16aTEWMBQGA1UEAwwNY2x1c3Rlci1jYSB2
MDAeFw0yMjA3MTkwNTE4NDBaFw0yMzA3MTkwNTE4NDBaMC0xEzARBgNVBAoMCmlv
LnN0cmltemkxFjAUBgNVBAMMDWNsdXN0ZXItY2EgdjAwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQDdghbY97oYE5A1GcGccMMp01RW3DSsl3tZ3U/Q2YKY
IDkqwkITevXu0WjUHh7v/659xwryNMtUjlz8JF4MZQnZwq1xEX6ldA+/+JeG2pJE
eEnFXPvP9meDfi2N5bQC/At5N5ZAca4jfrKVognzgHMj1JMwXtTLu4Jw73Za+dRg
y30I51x33zPTYgq+5QKTssOvvAQp+OGsf2ts1s3P5weOLJ3tfGUrVdhoblMB+RUl
TF7KIuknFY0+sNRUJSeuw29qUdH9KFJ3bYBMEF2afSybS4DlFSrs7Od0ZsPWnONn
ZRV/SdjpvjP13k32XqzX8O1h0oOnHOPExE7OQgjRVcluJh3aJbSEAb9DnAUCRX88
/ZA2tlDWGxJIv94CruKDaF6vMaupUUXmQG0Irk8cfGKHgwjvG2HL2U/oaJpPEtdo
Zu2TgeHxF/k9YBfzKC6ZlZrwQLN1iL+mAL45ql1OYyGVWPS6NWUSpJA9FS7qJlfr
o7hiD/wg4xubtGntBCui+jdrRoDwcVQvk5+pZYNchK26oT4qjY57YhXLWByesg+c
MUSDTgtU//DHfAq6VyJqMbnP68RwW1MRMg1lbiMUTJ8wTsKgBm6fZ1E2+Jg/wJtq
FhLTBjiB6Fa2m6aZQdqaN/RMu9mUNjE45b4GVIFayouS1ejGoNNsC2rv0n4CQhNj
+wIDAQABo0UwQzAdBgNVHQ4EFgQU2YAwObuT5NpEivgwO+j+Z/qSn1gwEgYDVR0T
AQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQENBQADggIB
AB+18g/YjUYpLbBrQgzvHhNP0THKUzVQ5ze15JZfaQHdOf6XYhAMpnHm/mBrXM2B
5ZHwkQzCuHfZmSikeVnr/thloh1+2NHdf9s4XQtxZiouZxzNnbX3Hf/gviX/lvm2
p8twSqxsMnI00x7jPDGDmZBF5bX7Mtp3c7gD3K4goz8InHdn8j1jxhYg0fQdX8j3
ryoy+hWCkaW1PPYxGgrmdJg7kiffTBw3jx9+Md11EXb7+ryZeUEsI6MqGQSg6F94
U+nPWV/qBErEbe5iNISdOkUK8wcjk/IOVAps1CJs8BECcCaReGwVpC8twxx9c8BY
DEW76Y8J9syxgxUZeEywoqguxc80SSuTLXcNkAoifdReUeW/b08cJffP55nztfbE
Fhg4E5vYj41q2Z5iOI1sZsY22Z4VszW0Fl11DcloM4/088W6O3Lp3Jo0rMu/k14/
DNr5AM8Lrgno947S1OWZ87Q1IF8zlayM+c5XWRHO64jBHouTo1HvDodudaF+XYxv
F7xRVUehnnACQExy2OYeOkjtxmsinQDfZcvvj7b7NfCqytM3IjB1jk9GxeoFgKYj
/n9WFjHWtSnC+nsyZo2c37XeaHbBEtls3LHXb6+OmOtiTzw0C8TEQJ4AcqaPNk1k
knHg6PPWBenWskC9KH898c6vvhZ5/VHSWXJG6f8GxWya
-----END CERTIFICATE-----
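For reference, the librdkafka configuration doc linked above describes ssl.certificate.pem / ssl.certificate.location as the client's own certificate (used for mutual TLS), while the broker CA certificate belongs under ssl.ca.location (or ssl.ca.pem). A minimal Go sketch along those lines, assuming plain server-side TLS with no client authentication; the broker address and file name are the placeholders from above, and this is an untested guess, not a confirmed fix:

package main

import (
    "fmt"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    // ssl.ca.location points at the broker CA certificate (ca.pem here);
    // ssl.certificate.location / ssl.key.location would only be needed
    // for mutual TLS (client authentication).
    conf := kafka.ConfigMap{
        "bootstrap.servers": "my-cluster-bootstrap.example.com:443", // placeholder
        "security.protocol": "SSL",
        "ssl.ca.location":   "ca.pem",
    }

    producer, err := kafka.NewProducer(&conf)
    if err != nil {
        fmt.Printf("Failed to create producer: %v\n", err)
        return
    }
    defer producer.Close()
    fmt.Println("Producer created")
}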
Related
I have no trouble connecting Go's pgxpool to a PostgreSQL database in a Docker container, but I can't figure out how to write a connection URL for a Linode managed PostgreSQL database. Specifically, what goes after the "postgres://" prefix? I can't find any example of a connection URL other than for a local db, and no code examples for a DSN connection.
Can somebody please help me out with either a connection URL or DSN for these details?
Here is my current connection string, which returns "host is invalid" (ssl_mode is also reported as invalid).
config, err := pgxpool.ParseConfig("user=linpostgres, password=secret, host=lin-9930-2356-pgsql-primary.servers.linodedb.net, port=5432 dbname=mydb, pool_max_conns=10")
This psq connect string times out: psql --username=linpostgres --host=lin-9930-2356-pgsql-primary.servers.linodedb.net port=5432 --password
You can check the official GitHub repository code here: link
// See Config for definitions of these arguments.
//
// # Example DSN
// user=jack password=secret host=pg.example.com port=5432 dbname=mydb sslmode=verify-ca pool_max_conns=10
//
// # Example URL
// postgres://jack:secret@pg.example.com:5432/mydb?sslmode=verify-ca&pool_max_conns=10
So your connection string should be:
postgres://jack:secret@pg.example.com:5432/mydb?sslmode=verify-ca&pool_max_conns=10
Hope this helps.
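To make that concrete, here is a minimal Go sketch using the details from the question (assuming pgx v5; sslmode=require is an assumption, since Linode managed databases typically enforce TLS). Note the DSN uses spaces between key=value pairs, with no commas:

package main

import (
    "context"
    "fmt"

    "github.com/jackc/pgx/v5/pgxpool"
)

func main() {
    // DSN form: space-separated key=value pairs, no commas between them.
    // Host and credentials are the placeholders from the question.
    dsn := "user=linpostgres password=secret " +
        "host=lin-9930-2356-pgsql-primary.servers.linodedb.net " +
        "port=5432 dbname=mydb sslmode=require pool_max_conns=10"

    pool, err := pgxpool.New(context.Background(), dsn)
    if err != nil {
        fmt.Printf("unable to create pool: %v\n", err)
        return
    }
    defer pool.Close()

    if err := pool.Ping(context.Background()); err != nil {
        fmt.Printf("ping failed: %v\n", err)
        return
    }
    fmt.Println("connected")
}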
I'm using the aws-sdk-go package in Golang to connect to Amazon S3 to provide a cloud-based storage pool. I have this working well. I would like to be able to support bulk high-speed transfers using Snowball, so I got a Snowball Edge to test this in my lab. I have not figured out how to get this working, and the documentation for Snowball Edge doesn't seem complete. This configuration may be impacted by having ordered a Snowball Edge and not just a Snowball.
What makes the Edge more problematic is that a normal Snowball requires an application called snowballAdapter to be running, which looks like it handles some port-mapping issues. But this application seems to be incompatible with the Edge device, as it reports that it doesn't work with a "Snowball Edge Manifest file".
I looked at the ports that are available on the real AWS S3 and nmap reports:
nmap -v -sT -Pn s3.us-east-1.amazonaws.com
...
Scanning s3.us-east-1.amazonaws.com (52.216.161.53) [1000 ports]
Discovered open port 443/tcp on 52.216.161.53
Discovered open port 80/tcp on 52.216.161.53
Whereas on Snowball Edge, the ports are:
nmap -v -sT -Pn 192.168.1.4
....
Scanning 192.168.1.4 [1000 ports]
Discovered open port 8080/tcp on 192.168.1.4
Discovered open port 22/tcp on 192.168.1.4
Discovered open port 9091/tcp on 192.168.1.4
Discovered open port 8443/tcp on 192.168.1.4
....
PORT STATE SERVICE
22/tcp open ssh
8080/tcp open http-proxy
8443/tcp open https-alt
9091/tcp open xmltec-xmlmail
So, it seems to me that the issue may be that I have to make the aws package use port 8443 for the Snowball Edge instead of 443 for the real S3. The code for connecting to S3 is pretty straightforward:
creds := credentials.NewStaticCredentials(s3Config.S3AccessKey, s3Config.S3SecretAccessKey, s3Config.S3Token)
_, err := creds.Get()
if err != nil {
    return nil, nil, err
}

if len(baseFolder) > 0 {
    baseFolder = baseFolder + "/"
}

cfg := aws.NewConfig().WithRegion(s3Config.S3Region).WithCredentials(creds)
svc := s3.New(session.New(), cfg)

params := &s3.ListObjectsInput{
    Bucket:    aws.String(s3Config.S3BucketName),
    Prefix:    aws.String(baseFolder),
    Delimiter: aws.String("/"),
}
resp, err := svc.ListObjects(params)
So, the question is: how do I change the code to point to the Snowball Edge? I've tried mapping the Amazon S3 endpoint to the Snowball Edge in /etc/hosts; I understand why this didn't work after discovering that the ports are different. I've also played around with adding different forms of WithEndpoint("...host...") with no success. Or am I on the completely wrong track, and should I be able to get the snowballAdapter to work with a Snowball Edge?
By the way, all the snowballEdge commands work as expected, so the device seems to be working fine, e.g.:
./snowballEdge list-access-keys
{
  "AccessKeyIds" : [ "..." ]
}
./snowballEdge get-secret-access-key --access-key-id ....
[snowballEdge]
aws_access_key_id = ...
aws_secret_access_key = ...
And, I've used the correct keys associated with the device, and it does have the S3 Service configured in it:
./snowballEdge list-services
{
  "ServiceIds" : [ "s3" ]
}
Snowball Edge is a very different beast than AWS S3. In addition to the access key and secret access key, it requires an endpoint and additional credentials in the form of a certificate. The real AWS S3 has a valid certificate backed by the public certificate authorities, but Snowball Edge has a self-signed certificate.
The configuration can be created with this:
cfg := aws.NewConfig().WithRegion(s3Config.S3Region).WithCredentials(creds).WithEndpoint(s3Config.S3Endpoint).WithHTTPClient(httpClient)
svc = s3.New(session.New(), cfg)
The endpoint looks like this (you see Snowball Edge responding on port 8443 in the question text above):
https://192.168.1.4:8443
The certificate must be formatted exactly as it would appear in a file; that is, the string you pass in has to include the newline character after every line. It looks like this:
-----BEGIN CERTIFICATE-----
MIIC7zCCAdegAwIBAgIJBJZB/gkBP0B5MA0GCSqGSIb3DQEBCwUAMBYxFDASBgNV
BAMMCzE3Mi4yMC4xLjE3NB4XDTE3MDQyODIwMTMwOFoXDTE5MDQxODIwMTMwOFow
FjEUMBIGA1UEAwwLMTcyLjIwLjEuMTcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
ggEKAoIBAQCrLWlTfrSj9R1of5Z98EHYIEEPBgnWxnlrvA+ryAzPmiXbYomI4Tpl
PsuIA+7hGXG10H0zwlz0n22EUv4pE79toYcd3czOJUAHEuSelhtP7u91vM4GguFx
A00gosu04RFUD+BYNeaLTQfd7vdmQB3bY3KEbn7Dfrs/1MYFhKb8J77mgCuUbAPu
PNvwLoV+hBL+ndgs+bIu4MtXjUJDiigRZkacpMQaduDMqEq6seoc+JwrKNBjRBRu
3l/fcQoWf+g902oZJaXXnVGqqb7o2YAQFehUAbmCJfuKFSl5tu0B+3KvQQni7lK+
SV8WItdrPumS98BBlt6NpzgC5fTwCmapAgMBAAGjQDA+MAwGA1UdEwQFMAMBAf8w
HQYDVR0OBBYEFMAvKzKgKI+izqPX6DJjJz/0fELtMA8GA1UdEQQIMAaHBKwUAREw
DQYJKoZIhvcNAQELBQADggEBAGwyzmI+9psQu9/N/oClN7Lej7e4E8cC8vymVfPz
fdW45IMNVEYHxHbu9+JzLOtLxfuDmD6B6fEYVoPubb6tsnacuwOSMZhTvhhy9nv2
f+2Pslgj/kYTeMePbHOPTyQ4sd1BE7ALdNiL/hd08ZNhqObagixNYw9eYeHEStBy
tOADKcY9gOxek1k9t+96nATgSy0WIytwra0uEgyipKQ2gXKpgg15SI4nDxQLLEgG
lb3FtRk+PfJxQ4zbHZe/cRNflcGwVCefycLQOA2Sdr8pgHW7gvETu9i9ywF0UV6f
b9wsPcDmg3EaxBa+wrLlYSzaPhI+rZYh6bpnTn311QIFZ+s=
-----END CERTIFICATE-----
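The answer doesn't show how this certificate reaches the httpClient passed to WithHTTPClient above. Here is a minimal sketch of one way to build that client, assuming the certificate has been saved to a local PEM file (the helper name and file path are made up):

package main

import (
    "crypto/tls"
    "crypto/x509"
    "log"
    "net/http"
    "os"
)

// newSnowballHTTPClient returns an *http.Client that trusts the Snowball
// Edge's self-signed certificate. The PEM could equally be passed in as
// the string shown above instead of read from a file.
func newSnowballHTTPClient(certPath string) *http.Client {
    certPEM, err := os.ReadFile(certPath)
    if err != nil {
        log.Fatalf("reading certificate: %v", err)
    }

    pool := x509.NewCertPool()
    if !pool.AppendCertsFromPEM(certPEM) {
        log.Fatal("failed to parse Snowball Edge certificate")
    }

    // Verify the device's TLS certificate against only this CA pool.
    return &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        },
    }
}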
We found the Snowball Edge to be pretty fast storage over S3: we were able to achieve a copy rate of about 1,100 MBytes/second.
I've now tried for several hours to set up GitLab, and especially gitlab-shell. After being trolled by the documentation I found a sample config that fitted my needs, but I get an API 500 error:
Running /home/git/gitlab-shell/bin/check
Check GitLab API access: FAILED. code: 500
gitlab-shell self-check failed
Try fixing it:
Make sure GitLab is running;
Check the gitlab-shell configuration file:
sudo -u git -H editor /home/git/gitlab-shell/config.yml
Please fix the error above and rerun the checks.
To explain my current setup:
#/home/git/gitlab-shell/config.yml
user: git
gitlab_url: https://[myfqdn]/
http_settings:
  ca_file: "/etc/gitlab-ssl/git-mydomain-chain.pem #This file contains my public key and the ca key
  ca_path: "/etc/gitlab-ssl"
  self_signed_cert: false
repos_path: "/home/git/repositories/"
auth_file: "/home/git/.ssh/authorized_keys"
redis:
  bin: "/usr/bin/redis-cli"
  namespace: resque:gitlab
  host: localhost
  port: 6379
log_level: INFO
audit_usernames: false
In the /etc/gitlab-ssl directory are two files:
* my private key, git-mydomain-key.pem
* the combined public key and CA key, git-mydomain-chain.pem
In addition, I added the CA key to the ca-certificates store (it's a CAcert-signed one).
Can anyone help me and tell me what went wrong?
This error has nothing to do with GitLab. It is a pure YAML parser (Psych, in your case) error.
Line 5 column 3 is:
ca_path:
⇑ HERE
That said, you have a strange unterminated string right above it:
⇓⇓⇓ WTF?!
ca_file: "/etc/gitlab-ssl/git-mydomain-chain.pem #This file contains my public key and the ca key
Remove everything from the hash onward and close the string's quotes.
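That is, the fixed line should read:

ca_file: "/etc/gitlab-ssl/git-mydomain-chain.pem"

(with the comment either deleted or moved onto its own line).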
Hope it helps.
I have been looking at options to ship logs from Windows. I already have logstash set up, and I currently ship logs from Linux (CentOS) servers to my ELK stack using logstash-forwarder and SSL encryption.
For compliance reasons encryption is pretty much essential in this environment.
I was hoping to use logstash-forwarder on Windows as well, but after compiling it with Go I ran into issues shipping Event Logs. I found some people saying that it isn't possible because of file-locking issues, which the logstash-forwarder people appear to be working on, but I can't really wait.
Eventually I found that nxlog seems to be able to ship logs in an encrypted format using SSL. I've found a few posts on similar topics, and while I've learned quite a bit about how to ship the logs across and how to set up nxlog, I am still at a loss as to how to set up logstash to accept the logs so I can process them.
I've asked in the #nxlog and #logstash IRC channels, and got confirmation in #nxlog that it is possible, but no further information on how it should be configured.
Anyway, I have taken the crt file created for use with my logstash-forwarder (I will create a new one if needed once I am happy this works) and renamed it with a .pem extension, which I believe should work as it is readable in ASCII format. I have created the %CERTDIR% environment variable and put my file in there, and I have written the following nxlog config file based on the other articles I have read. I think it is right, but I am not 100% sure:
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
# Enable json extension
<Extension json>
    Module      xm_json
</Extension>

# Nxlog internal logs
<Input internal>
    Module      im_internal
    Exec        $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>

# Windows Event Log
<Input eventlog>
    # Uncomment im_msvistalog for Windows Vista/2008 and later
    Module      im_msvistalog
    # Uncomment im_mseventlog for Windows XP/2000/2003
    # Module    im_mseventlog
    Exec        $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>

<Output sslout>
    Module          om_ssl
    Host            lumberjack.domain.com
    Port            5000
    CertFile        %CERTDIR%/logstash-forwarder.crt
    AllowUntrusted  TRUE
    OutputType      Binary
</Output>

<Route 1>
    Path    eventlog, internal => sslout
</Route>
What I want to know is what input format to use in logstash. I have tried shipping logs into a lumberjack input (using the same config my logstash-forwarders use) with the following config:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
But when the service starts, I get the following in the nxlog logfiles:
2014-11-06 21:16:20 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:20 INFO nxlog-ce-2.8.1248 started
2014-11-06 21:16:21 INFO successfully connected to lumberjack.domain.com:5000
2014-11-06 21:16:22 INFO remote closed SSL socket
2014-11-06 21:16:22 INFO reconnecting in 1 seconds
2014-11-06 21:16:23 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:24 INFO reconnecting in 2 seconds
2014-11-06 21:16:24 ERROR couldn't connect to ssl socket on lumberjack.domain.com:5000; No connection could be made because the target machine actively refused it.
When I turned the logging up to DEBUG I see a massive amount of logs flying through, but I think the key part is:
2014-11-06 21:20:18 ERROR Exception was caused by "rv" at om_ssl.c:532/io_err_handler(); [om_ssl.c:532/io_err_handler()] -; [om_ssl.c:501/om_ssl_connect()] couldn't connect to ssl socket on lumberjack.domain.com:5000; No connection could be made because the target machine actively refused it.
I assume this points to me using the wrong input method on logstash, but I guess it could also be an issue with my SSL certs or the way they are configured. I don't appear to be getting any logs generated on the logstash server at the time I make the connection from my Windows machine.
Thanks to b0ti for the help. There were a number of issues: my logstash config was crashing the service, and there were also problems with my nxlog setup and with the way my SSL certs were set up.
I found this post about creating SSL certs, which covers really nicely the way self-signed certs are set up for use with a web service.
The main thing wrong with nxlog was, as b0ti pointed out, that I was trying to ship in binary, which only works when shipping to another nxlog server. I also noticed in the docs that the default for AllowUntrusted is false, so I could just delete that line once I was happy SSL was working.
<Output sslout>
    Module      om_ssl
    Host        lumberjack.domain.com
    Port        5001
    CAFile      %CERTDIR%\nxlog-ca.crt
    OutputType  LineBased
</Output>
Create the CA key, and secure it, as it needs to be kept secret (cd to /etc/pki/tls first):
certtool --generate-privkey --bits 2048 --outfile private/nxlog-ca.key
chown logstash:logstash private/nxlog-ca.key
chmod 600 private/nxlog-ca.key
Then create the self-signed CA cert, which will need to be transferred to your clients:
certtool --generate-self-signed --load-privkey private/nxlog-ca.key --bits 2048 --template nxlog-ca-rules.cnf --outfile certs/nxlog-ca.crt
The cnf file is standard, with only this option modified:
# Whether this is a CA certificate or not
ca
The logstash input method:
input {
  tcp {
    port => 5001
    type => "nxlogs"
    ssl_cacert => "/etc/pki/tls/certs/nxlog-ca.crt"
    ssl_cert => "/etc/pki/tls/certs/nxlog.crt"
    ssl_key => "/etc/pki/tls/private/nxlog.key"
    ssl_enable => true
    format => 'json'
  }
}
Generate the private key:
certtool --generate-privkey --bits 2048 --outfile private/nxlog.key
chown logstash:logstash private/nxlog.key
chmod 600 private/nxlog.key
Generate the CSR (Certificate Signing Request):
certtool --generate-request --bits 2048 --load-privkey private/nxlog.key --outfile private/nxlog.csr
Sign the cert with the CA private key:
certtool --generate-certificate --bits 2048 --load-request private/nxlog.csr --outfile certs/nxlog.crt --load-ca-certificate certs/nxlog-ca.crt --load-ca-privkey private/nxlog-ca.key --template nxlog-rules.cnf
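As a quick sanity check before deploying (assuming openssl is available alongside certtool), you can verify the signed cert against the CA:
openssl verify -CAfile certs/nxlog-ca.crt certs/nxlog.crt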
Again, the only important changes over the standard inputs in the cnf file are:
# Whether this certificate will be used to encrypt data (needed
# in TLS RSA ciphersuites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key
# Whether this certificate will be used for a TLS client
tls_www_client
I've tested this and it works well; I just need to get the filters set up now.
The binary data format is nxlog-specific; you should only use it if you are sending to another nxlog instance.
OutputType Binary
If this doesn't help, check the logstash logs, since it's the remote end (logstash) that closes the connection.
When deploying an APNS certificate in a .wlapp file in MFP 7.0, I'm seeing a null-pointer exception when it validates the end date, even though the certificate has one (openssl pkcs12 -in apns-certificate-sandbox.p12 | openssl x509 -noout -enddate returns a valid date in the future).
It seems others have made this work, so I'm guessing it must be something I am doing wrong. Has anyone else resolved similar issues with valid Apple Push Notification Service certs failing to deploy on MFP?
Relevant lines from the log:
947: "com.ibm.worklight.admin.services.ApplicationService E FWLSE3000E: A server error was detected.",
"948: com.ibm.worklight.admin.common.util.exceptions.ValidationException: FWLSE3119E: APNS certificate validation failed. See additional messages for details.",
"949: at com.ibm.worklight.admin.util.PushEnvironmentUtil.validateApnsConfiguration(PushEnvironmentUtil.java:232)",
"950: at com.ibm.worklight.admin.util.PushEnvironmentUtil.validatePushConfiguration(PushEnvironmentUtil.java:220)",
[ ... lots more trace here .. ]
"1030: Caused by: java.lang.NullPointerException",
"1031: at java.io.ByteArrayInputStream.(ByteArrayInputStream.java:117)",
"1032: at com.ibm.worklight.admin.util.PushEnvironmentUtil.getCertificateExpiryDate(PushEnvironmentUtil.java:362)",
"1033: at com.ibm.worklight.admin.util.PushEnvironmentUtil.validateApnsConfiguration(PushEnvironmentUtil.java:230)",
The initial hurdle was that the .wlapp file was not being built, so no APNS certificate was in the file (it is just a .zip with a meta directory that should hold the .p12 file). The underlying issue was that the pushSender tag's password field in application-descriptor.xml wasn't exactly right: it was following the example from "Push Notifications in iOS applications" at https://developer.ibm.com/mobilefirstplatform/documentation/getting-started-7-0/notifications/push-notifications-native-ios-applications/ :
<pushSender password="apns-certificate-p12 password"/>
when it really should just have the password:
<pushSender password="password"/> </code></pre>
with the file named either apns-certificate-sandbox.p12 or apns-certificate-production.p12 depending on which server is to be used.
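A quick way to confirm the certificate actually made it into the built .wlapp is to list the archive's contents, since it is just a zip (the file name here is hypothetical):
unzip -l myApp.wlapp
The meta directory in the listing should contain the .p12 file.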
Double dumbass on me for not checking the official docs at http://www-01.ibm.com/support/knowledgecenter/SSHS8R_7.0.0/com.ibm.worklight.dev.doc/devref/c_the_application_descriptor.html , which describe it correctly.
Moral: "When in doubt, RTFM"