I always get a 401 Unauthorized response with xAPI for Learning Locker using the community EC2 AMI machine - lrs

I've configured Learning Locker on AWS EC2 using the pre-built community AMI with Ubuntu 16.04. I can access the URL, log into the system, and play with it. I created a client and am using the default organization.
I'm passing the Authorization token in each of my requests as per the documentation, but I still get 401 Unauthorized.
I even followed the support videos shared in the documentation for statements and state, but they didn't work for me either.
I've been struggling with this for two days now, so assistance is required. I tried it using cURL and using the Insomnia client, but the response stayed the same. As this is a test setup, I don't mind sharing the exact tokens and cURL requests.
Here is the cURL request that I used:
curl -X POST "http://ec2-18-185-127-9.eu-central-1.compute.amazonaws.com/data/xAPI/activities/state" \
  -H "Authorization: Basic NDk0MTdjYmUzMDQ3YzkyOWJkOTIzMWUxOWM2YmYwZjZhNzMyMmE0YTpjMzYyZWFlYTU5ZjgwMjAxY2VjYjQ4NjIxY2EyMGQ2NmIwNmU4ZDE4" \
  -H "X-Experience-API-Version: 1.0.3" \
  -H "Content-Type: application/json" \
  --data "activityId=http%3A%2F%2Fwww.example.org%2Factivity&agent=%7B%22mbox%22%3A%20%22mailto%3Atest%40example.org%22%7D&stateId=example_state_id&registration=361cd8ef-0f6a-40d2-81f2-b988865f640c"
Here is the response: {"errorId":"7fe46a1d-e46e-4a22-ad21-399c6bb16e6a","message":"Unauthorised"}
The only call that succeeds is the call to /data/xAPI/about, which returns the response below:
{
"X-Experience-API-Version": "1.0.3",
"version": [
"1.0.3"
]
}
Learning Locker Status
ubuntu@ip-172-31-33-77:~$ sudo su learninglocker
learninglocker@ip-172-31-33-77:/home/ubuntu$ pm2 status
┌─────┬──────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ API │ default │ 2.0.0 │ cluster │ 1501 │ 47h │ 0 │ online │ 0.3% │ 105.7mb │ lea… │ enabled │
│ 3 │ Scheduler │ default │ 2.0.0 │ cluster │ 1949 │ 47h │ 0 │ online │ 0% │ 78.0mb │ lea… │ enabled │
│ 1 │ UIServer │ default │ 2.0.0 │ cluster │ 1502 │ 47h │ 0 │ online │ 0.3% │ 80.2mb │ lea… │ enabled │
│ 2 │ Worker │ default │ 2.0.0 │ cluster │ 1910 │ 47h │ 0 │ online │ 0.3% │ 106.3mb │ lea… │ enabled │
│ 4 │ xAPI │ default │ 0.0.0-… │ cluster │ 1978 │ 47h │ 0 │ online │ 0% │ 70.9mb │ lea… │ enabled │
│ 5 │ xAPI │ default │ 0.0.0-… │ cluster │ 2027 │ 47h │ 0 │ online │ 0.3% │ 71.6mb │ lea… │ enabled │
└─────┴──────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Learning Locker Logs
learninglocker@ip-172-31-33-77:/home/ubuntu$ pm2 logs xAPI
[TAILING] Tailing last 15 lines for [xAPI] process (change the value with --lines option)
/var/log/learninglocker/xapi_stdout-4.log last 15 lines:
4|xAPI | 2020-04-22 10:28:57:549 - info: Listening on port 8081
4|xAPI | 2020-04-22 10:28:57:553 - info: Process ready
4|xAPI | 2020-04-22 10:28:57:600 - info: Created new Mongo connection
4|xAPI | 2020-08-26 12:13:23:946 - info: Listening on port 8081
4|xAPI | 2020-08-26 12:13:23:952 - info: Process ready
4|xAPI | 2020-08-26 12:13:24:008 - info: Created new Mongo connection
4|xAPI | 2020-08-26 19:57:44:805 - info: Created new Mongo connection
/var/log/learninglocker/xapi_stdout-5.log last 15 lines:
5|xAPI | 2020-04-22 10:28:59:426 - info: Listening on port 8081
5|xAPI | 2020-04-22 10:28:59:429 - info: Process ready
5|xAPI | 2020-04-22 10:28:59:468 - info: Created new Mongo connection
5|xAPI | 2020-08-26 12:13:23:943 - info: Listening on port 8081
5|xAPI | 2020-08-26 12:13:23:952 - info: Process ready
5|xAPI | 2020-08-26 12:13:24:014 - info: Created new Mongo connection
5|xAPI | 2020-08-26 20:11:38:514 - info: Created new Mongo connection
5|xAPI | 2020-08-27 15:01:13:192 - info: Created new Mongo connection
/var/log/learninglocker/xapi_stderr-5.log last 15 lines:
5|xAPI | 2020-08-26 21:06:39:195 - error: 17200f4e-d98d-48ba-b09b-ea447fa68b05: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 21:14:59:778 - error: e9ae78fd-943a-4647-b310-ad101ba913d4: xapi-statements handled - Method (undefined) is invalid for alternate request syntax
5|xAPI | 2020-08-26 21:42:05:999 - error: 078ef7b8-b33b-4280-8565-747994ed1e73: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 21:56:08:157 - error: c6a21e87-1215-4b04-91dd-b510b52d6364: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 22:02:50:626 - error: fd56c95b-b9c5-4178-8d18-6f35141490d6: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 22:03:25:201 - error: 1b38c501-449a-411e-a347-4dc9b2d642ca: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 22:11:56:776 - error: 651244b9-31ee-437c-abf5-7dbf2210e2a4: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 22:12:18:698 - error: d6cd8b7a-bd15-4692-84df-a5f3c3ec9b9f: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 22:13:04:239 - error: e9a37180-da2e-456b-8408-92817f78c9e3: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 22:24:04:922 - error: 7fe46a1d-e46e-4a22-ad21-399c6bb16e6a: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 22:33:49:707 - error: 49c9671e-b029-408b-8b81-cfcd38308fdb: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 22:46:18:820 - error: 4955d956-8b33-4be6-b3bd-3931f9249bae: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 22:49:33:257 - error: 4f0fadbf-122e-4412-8967-c5995bf74b35: jscommons handled - Unauthorised
5|xAPI | 2020-08-26 23:48:56:248 - error: ed0c1692-f6b2-4190-9645-d09389c9cf9b: jscommons handled - Unauthorised
5|xAPI | 2020-08-27 15:01:13:225 - error: 32028e87-e844-4a01-8136-a6a2a2ee53b9: jscommons handled - Unauthorised
/var/log/learninglocker/xapi_stderr-4.log last 15 lines:
4|xAPI | 2020-08-26 21:08:32:861 - error: 4311e4ac-bdcd-4765-98db-a9292e8d921b: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 21:14:16:597 - error: 4a3dda4a-d5f9-49d5-b78f-1ab7b4bac731: xapi-statements handled - Method (undefined) is invalid for alternate request syntax
4|xAPI | 2020-08-26 21:52:58:594 - error: 2dda9bd1-c1f2-4635-9ff6-f7865163824a: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 21:58:22:651 - error: f45b92c0-6403-439e-9783-0384d22a352b: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 22:03:16:466 - error: f4b494e2-6daf-485e-9c63-65febf4d1dab: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 22:11:45:122 - error: 0dcbcc49-2b69-45a2-91e8-e8d8422cbcc4: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 22:12:45:131 - error: d662c18e-8c0a-4efd-b3b5-9c13cc236c57: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 22:13:11:537 - error: da431a2c-a157-4695-a808-90381bf99113: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 22:25:27:984 - error: 53926e82-0d1f-45a2-8a70-f4c3b7c43e64: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 22:33:27:589 - error: 93e21ba1-0c06-4721-868f-6205b13a0009: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 22:35:41:287 - error: 7512a719-d25b-4852-a40e-9a8c6f5d2300: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 22:47:03:308 - error: 7a5aab9e-8675-4588-b720-ccf422f61f42: jscommons handled - Unauthorised
4|xAPI | 2020-08-26 23:27:07:862 - error: c54e4396-33f3-4cfa-8141-79db6cfa130b: jscommons handled - Unauthorised
Can anyone tell me what mistake I'm making here? Please also go through the screenshots.

You'll need to create at least one Store and specify it in the client settings.
Log into your organization.
Select Settings > Stores.
Click Add New (to add a new record store).
Fill in the Name/Description.
Then go back to your Clients menu.
Specify the LRS and Scope in the client details.

I'm not familiar with Learning Locker specifically, but I don't believe the displayed value of the Authorization header is correct, at least not what I would expect it to be based on the displayed key and secret. The base64-encoded value of:
NDk0MTdjYmUzMDQ3YzkyOWJkOTIzMWUxOWM2YmYwZjZhNzMyMmE0YTpjNDk0MTdjYmUzMDQ3YzkyOWJkOTIzMWUxOWM2YmYwZjZhNzMyMmE0YTpjMzYyZWFlYTU5ZjgwMjAxY2VjYjQ4NjIxY2EyMGQ2NmIwNmU4ZDE4
Decodes to:
49417cbe3047c929bd9231e19c6bf0f6a7322a4a:c49417cbe3047c929bd9231e19c6bf0f6a7322a4a:c362eaea59f80201cecb48621ca20d66b06e8d18
Which appears to have an extra segment c49417cbe3047c929bd9231e19c6bf0f6a7322a4a
You might try a header with the following value:
NDk0MTdjYmUzMDQ3YzkyOWJkOTIzMWUxOWM2YmYwZjZhNzMyMmE0YTpjMzYyZWFlYTU5ZjgwMjAxY2VjYjQ4NjIxY2EyMGQ2NmIwNmU4ZDE4
which is the key and secret, joined by a colon and then base64 encoded:
49417cbe3047c929bd9231e19c6bf0f6a7322a4a:c362eaea59f80201cecb48621ca20d66b06e8d18
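To double-check, you can reproduce the encoding yourself. A minimal sketch in Python, using the key and secret shown in the question:

```python
import base64

key = "49417cbe3047c929bd9231e19c6bf0f6a7322a4a"
secret = "c362eaea59f80201cecb48621ca20d66b06e8d18"

# HTTP Basic auth is base64("key:secret") with exactly one colon separator.
token = base64.b64encode(f"{key}:{secret}".encode("ascii")).decode("ascii")
print(f"Authorization: Basic {token}")
```

If the value your tool sends doesn't match this output, the header is the problem rather than the LRS configuration.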

Related

How to connect Trino/Presto to Elasticsearch in a GCP VM?

I am running Elasticsearch on my GCP VM based on the doc below - install Elastic in GCP VM. I also have a Trino Docker image running in my GCP VM.
I have enabled the firewall in GCP and I am able to access the Elasticsearch URL http://localhost:9200/ from my Mac browser. But when I try to integrate Elasticsearch with Trino, I get a connection-refused error.
Below are my elasticsearch.properties file for Trino and the error log.
connector.name=elasticsearch
elasticsearch.host=localhost
elasticsearch.port=9200
elasticsearch.default-schema-name=default
elasticsearch.ignore-publish-address=true
Error log:
2022-08-09T13:44:12.578Z INFO main io.trino.metadata.StaticCatalogStore -- Loading catalog etc/catalog/elasticsearch.properties --
2022-08-09T13:44:12.927Z INFO main Bootstrap PROPERTY DEFAULT RUNTIME DESCRIPTION
2022-08-09T13:44:12.927Z INFO main Bootstrap jmx.base-name ---- ----
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.backoff-init-delay 500.00ms 500.00ms Initial delay to wait between backpressure retries
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.backoff-max-delay 20.00s 20.00s Maximum delay to wait between backpressure retries
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.connect-timeout 1.00s 1.00s Elasticsearch connect timeout
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.default-schema-name default default Default schema name to use
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.host ---- localhost
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.http-thread-count 6 6 Number of threads handling HTTP connections to Elasticsearch
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.ignore-publish-address false true
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.tls.keystore-password [REDACTED] [REDACTED]
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.tls.keystore-path ---- ----
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.max-http-connections 25 25 Maximum number of persistent HTTP connections to Elasticsearch
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.max-retry-time 30.00s 30.00s Maximum timeout in case of multiple retries
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.node-refresh-interval 1.00m 1.00m How often to refresh the list of available Elasticsearch nodes
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.port 9200 9200
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.request-timeout 10.00s 10.00s Elasticsearch request timeout
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.scroll-size 1000 1000 Scroll batch size
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.scroll-timeout 1.00m 1.00m Scroll timeout
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.security ---- ----
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.tls.enabled false false
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.tls.truststore-path ---- ----
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.tls.truststore-password [REDACTED] [REDACTED]
2022-08-09T13:44:12.927Z INFO main Bootstrap elasticsearch.tls.verify-hostnames true true
2022-08-09T13:44:13.348Z ERROR main io.trino.plugin.elasticsearch.client.ElasticsearchClient Error refreshing nodes
io.trino.spi.TrinoException: Connection refused
at io.trino.plugin.elasticsearch.client.ElasticsearchClient.doRequest(ElasticsearchClient.java:733)
at io.trino.plugin.elasticsearch.client.ElasticsearchClient.fetchNodes(ElasticsearchClient.java:293)
at io.trino.plugin.elasticsearch.client.ElasticsearchClient.refreshNodes(ElasticsearchClient.java:176)
at io.trino.plugin.elasticsearch.client.ElasticsearchClient.initialize(ElasticsearchClient.java:158)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at io.airlift.bootstrap.LifeCycleManager.startInstance(LifeCycleManager.java:241)
at io.airlift.bootstrap.LifeCycleManager.addInstance(LifeCycleManager.java:212)
I tried generating .pem files with the command below and added the corresponding Elasticsearch properties, but I kept getting a different error. I need help with the exact configuration.
openssl req -x509 -newkey rsa:4096 -keyout http.pem -out elastic-certificates.pem -days 365
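One likely cause (an assumption, not confirmed by the question): since Trino runs inside a Docker container, `localhost` in elasticsearch.properties resolves to the Trino container itself, not to the VM where Elasticsearch listens, hence "Connection refused". A sketch of the catalog file pointing at the host instead, where `10.128.0.2` is a hypothetical placeholder for the VM's internal IP:

```properties
connector.name=elasticsearch
# Use the VM's internal IP (hypothetical value below) or host.docker.internal
# instead of localhost; inside the container, localhost is the container.
elasticsearch.host=10.128.0.2
elasticsearch.port=9200
elasticsearch.default-schema-name=default
elasticsearch.ignore-publish-address=true
```

Alternatively, running the Trino container with `--network host` would make `localhost` refer to the VM itself.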

SonarQube - Failed to get CE Task status - HTTP code 502

I am trying to run SonarQube (hosted remotely) via the SonarScanner command from my local machine for a Magento (PHP) application, but I get the error below every time. I tried to find a solution for this, but didn't find much related to my issue.
Does anyone have any idea about this error?
13:22:38.820 INFO: ------------- Check Quality Gate status
13:22:38.820 INFO: Waiting for the analysis report to be processed (max 600s)
13:22:38.827 DEBUG: GET 200 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=7ms
13:22:43.845 DEBUG: GET 200 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=11ms
13:22:48.854 DEBUG: GET 200 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=9ms
13:22:53.866 DEBUG: GET 200 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=12ms
13:22:58.871 DEBUG: GET 502 https://example.com/api/ce/task?id=AYE-URS3o9NiO9ce0vrw | time=5ms
13:22:58.899 DEBUG: eslint-bridge server will shutdown
13:23:04.549 DEBUG: stylelint-bridge server will shutdown
13:23:09.571 INFO: ------------------------------------------------------------------------
13:23:09.571 INFO: EXECUTION FAILURE
13:23:09.571 INFO: ------------------------------------------------------------------------
13:23:09.571 INFO: Total time: 12:09.000s
13:23:09.688 INFO: Final Memory: 14M/50M
13:23:09.688 INFO: ------------------------------------------------------------------------
13:23:09.689 ERROR: Error during SonarScanner execution
Failed to get CE Task status - HTTP code 502: <html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
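The 502 page in the log is the stock nginx error page, which suggests the failure comes from a reverse proxy in front of SonarQube rather than from SonarQube itself: the CE status poll hit the proxy while the backend was busy, restarting, or exceeding a proxy timeout. A hedged sketch of proxy settings, assuming nginx and a default SonarQube port; the address and values are assumptions to adapt, not a confirmed fix:

```nginx
location / {
    proxy_pass http://127.0.0.1:9000;  # assumed SonarQube backend address
    proxy_read_timeout 600s;           # outlast long background (CE) tasks
    proxy_connect_timeout 60s;
    proxy_send_timeout 600s;
}
```

Checking the proxy's error log at the timestamp of the 502 should confirm whether the backend dropped the connection or timed out.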

Preprocessing a message containing multiple log records

TL;DR. Is it possible to preprocess a message by splitting on the newlines, and then have each message go through the fluentd pipeline as usually?
I'm receiving these log messages in fluentd:
2018-09-13 13:00:41.251048191 +0000 : {"message":"146 <190>1 2018-09-13T13:00:40.685591+00:00 host app web.1 - 13:00:40.685 request_id=40932fe8-cd7e-42e9-af24-13350159376d [info] Received GET /alerts\n"}
2018-09-13 13:00:41.337628343 +0000 : {"message":"199 <190>1 2018-09-13T13:00:40.872670+00:00 host app web.1 - 13:00:40.871 request_id=40932fe8-cd7e-42e9-af24-13350159376d [info] Processing with Api.AlertController.index/2 Pipelines: [:api]\n156 <190>1 2018-09-13T13:00:40.898316+00:00 host app web.1 - 13:00:40.894 request_id=40932fe8-cd7e-42e9-af24-13350159376d [info] Rendered \"index.json\" in 1.0ms\n155 <190>1 2018-09-13T13:00:40.898415+00:00 host app web.1 - 13:00:40.894 request_id=40932fe8-cd7e-42e9-af24-13350159376d [info] Sent 200 response in 209.70ms\n"}
The problem with these logs is the second message: it contains multiple application log lines.
This is, unfortunately, what I have to deal with: the system I'm working with (hello, Heroku logs!) buffers logs and then spits them out as a single chunk, making it impossible to know the number of records in the chunk upfront.
This is a known property of Heroku log draining.
Is there a way to preprocess the log message, so that I get a flat stream of messages to be processed normally by subsequent fluentd facilities?
This is how the post-processed stream of messages should look:
2018-09-13 13:00:41.251048191 +0000 : {"message":"146 <190>1 2018-09-13T13:00:40.685591+00:00 host app web.1 - 13:00:40.685 request_id=40932fe8-cd7e-42e9-af24-13350159376d [info] Received GET /alerts\n"}
2018-09-13 13:00:41.337628343 +0000 : {"message":"199 <190>1 2018-09-13T13:00:40.872670+00:00 host app web.1 - 13:00:40.871 request_id=40932fe8-cd7e-42e9-af24-13350159376d [info] Processing with Api.AlertController.index/2 Pipelines: [:api]\n"}
2018-09-13 13:00:41.337628343 +0000 : {"message":"156 <190>1 2018-09-13T13:00:40.898316+00:00 host app web.1 - 13:00:40.894 request_id=40932fe8-cd7e-42e9-af24-13350159376d [info] Rendered \"index.json\" in 1.0ms\n"}
2018-09-13 13:00:41.337628343 +0000 : {"message":"155 <190>1 2018-09-13T13:00:40.898415+00:00 host app web.1 - 13:00:40.894 request_id=40932fe8-cd7e-42e9-af24-13350159376d [info] Sent 200 response in 209.70ms\n"}
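In principle the split is mechanical, because each frame in the chunk is octet-counted (RFC 6587 style): a decimal byte count, a space, then that many bytes of syslog message. A minimal sketch of the splitting logic in Python (a hypothetical helper to illustrate the idea, not an existing fluentd plugin):

```python
import re

def split_heroku_frames(message):
    """Split an octet-counted syslog chunk ("<len> <frame><len> <frame>...")
    into a list of individual frames."""
    frames = []
    i = 0
    while i < len(message):
        m = re.match(r"(\d+) ", message[i:])
        if not m:
            break  # malformed trailing data; stop rather than loop forever
        start = i + m.end()
        end = start + int(m.group(1))
        frames.append(message[start:end])
        i = end
    return frames
```

Each returned frame could then be re-emitted as its own record, e.g. from a custom fluentd filter/parser plugin wrapping logic like this.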
P.S. My current config is super basic, but I'm posting it just in case. All I'm trying to do is understand whether it's possible, in principle, to preprocess the message.
<source>
  @type http
  port 5140
  bind 0.0.0.0
  <parse>
    @type none
  </parse>
</source>
<filter **>
  @type stdout
</filter>
How about https://github.com/hakobera/fluent-plugin-heroku-syslog?
fluent-plugin-heroku-syslog has been unmaintained for about four years, but it should still work with Fluentd v1 through the compatibility layer.

Docker with Elasticsearch non-root permission issue

I am trying to run Elasticsearch on Docker through docker-compose up. Whenever I try to start my containers, I get this error:
Running as non-root...
elasticsearch | 2017-01-06 00:08:23,861 main ERROR Could not register mbeans java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
elasticsearch | at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
elasticsearch | at java.lang.SecurityManager.checkPermission(SecurityManager.java:585)
elasticsearch | at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanTrustPermission(DefaultMBeanServerInterceptor.java:1848)
elasticsearch | at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:322)
elasticsearch | at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
elasticsearch | at org.apache.logging.log4j.core.jmx.Server.register(Server.java:389)
elasticsearch | at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:167)
elasticsearch | at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
elasticsearch | at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:541)
elasticsearch | at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:258)
elasticsearch | at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:206)
elasticsearch | at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:220)
elasticsearch | at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:197)
elasticsearch | at org.elasticsearch.common.logging.LogConfigurator.configureStatusLogger(LogConfigurator.java:125)
elasticsearch | at org.elasticsearch.common.logging.LogConfigurator.configureWithoutConfig(LogConfigurator.java:67)
elasticsearch | at org.elasticsearch.cli.Command.main(Command.java:59)
elasticsearch | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89)
elasticsearch | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82)
elasticsearch |
elasticsearch | 2017-01-06 00:08:25,813 main ERROR RollingFileManager (/data/elasticsearch.log) java.io.FileNotFoundException: /data/elasticsearch.log (Permission denied) java.io.FileNotFoundException: /data/elasticsearch.log (Permission denied)
This is my docker-compose.yml file:
elasticsearch:
container_name: elasticsearch
image: "itzg/elasticsearch:5.1.1"
ports:
- "9200:9200"
- "9300:9300"
volumes:
- "./data/elasticsearch/:/data"
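The FileNotFoundException (Permission denied) on /data/elasticsearch.log suggests the bind-mounted ./data/elasticsearch/ directory on the host is not writable by the non-root user the container runs as. One sketch of a fix is to let Docker manage the storage with a named volume instead of a host bind mount; this assumes Compose file format v2 and is illustrative, not a confirmed fix for this image:

```yaml
version: "2"
services:
  elasticsearch:
    container_name: elasticsearch
    image: "itzg/elasticsearch:5.1.1"
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - esdata:/data   # named volume avoids host-ownership mismatches
volumes:
  esdata:
```

Alternatively, you can keep the bind mount and chown the host directory to the UID the container runs as (often 1000, but check the image's documentation).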

Error starting remote Bamboo agent: HTTP status code 500 received in response to fingerprint request

I have the following error when starting remote Bamboo agent:
INFO | jvm 1 | 2012/11/20 01:15:58 | 2012-11-20 01:15:58,235 INFO [WrapperSimpleAppMain] [RemoteAgentHomeLocatorForBootstrap] Agent home located at '/Users/user9066/bamboo-agent-home'
INFO | jvm 1 | 2012/11/20 01:15:58 | 2012-11-20 01:15:58,248 INFO [WrapperSimpleAppMain] [AgentUuidInitializer] Found agent UUID <snip> in temporary UUID file '/Users/user9066/bamboo-agent-home/uuid-temp.properties'
INFO | jvm 1 | 2012/11/20 01:15:58 | 2012-11-20 01:15:58,378 INFO [WrapperSimpleAppMain] [AgentContext] Requesting fingerprint, url: http://<bamboo-server-ip>:8090/bamboo/AgentServer/GetFingerprint.action?hostName=<remote-agent-ip>&version=3&agentUuid=<snip>
ERROR | wrapper | 2012/11/20 01:15:58 | JVM exited while starting the application.
INFO | jvm 1 | 2012/11/20 01:15:58 | Exiting due to fatal exception.
INFO | jvm 1 | 2012/11/20 01:15:58 | com.atlassian.bamboo.agent.bootstrap.RemoteAgentHttpException: HTTP status code 500 received in response to fingerprint request.
INFO | jvm 1 | 2012/11/20 01:15:58 | at com.atlassian.bamboo.agent.bootstrap.AgentContext.initFingerprint(AgentContext.java:131)
The ports 8085 and 54663 are open. Enabling log4j does not provide any additional information either.
Has anyone seen this error? Any pointers to resolve this please?
I had a similar error. I fixed it by downloading the alternative remote agent installer called bamboo-agent-4.2.0.jar; you can find it as a small link underneath the main remote agent download button.
Once I had run this jar and successfully authenticated, the original jar worked.
