I am currently running Parse Server 2.2.18 on AWS Elastic Beanstalk. I am attempting to use Cloud Code to send push notifications from an afterSave function to a particular channel. However, the notifications never reach any devices, even though the push calls appear to succeed.
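For context, my afterSave hook is essentially a stripped-down version of the sketch below (the "Message" class, the hard-coded channel, and the alert text are placeholders for my real values):

// Cloud Code (cloud/main.js) -- simplified sketch of my afterSave hook.
// "Message", the channel name, and the alert are placeholders.
Parse.Cloud.afterSave('Message', function(request) {
  var message = request.object;

  Parse.Push.send({
    channels: ['E22'], // in the real code the channel comes from the saved object
    data: {
      alert: message.get('text'),
      e: 'e',
      badge: 'Increment'
    }
  }, {
    useMasterKey: true,
    success: function() {
      console.log('Push queued for channel E22');
    },
    error: function(error) {
      console.error('Push failed: ' + JSON.stringify(error));
    }
  });
});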
This is the _PushStatus entry generated by Parse in my MongoDB database...
{
  "_id": "eEaI2eReAi",
  "pushTime": "2016-09-06T18:29:01.172Z",
  "_created_at": {
    "$date": "2016-09-06T18:29:01.172Z"
  },
  "query": "{\"channels\":{\"$in\":[\"E22\"]}}",
  "payload": "{\"alert\":\"Test Group - Aye.\",\"e\":\"e\",\"badge\":\"Increment\"}",
  "source": "rest",
  "status": "pending",
  "numSent": 0,
  "pushHash": "675bcfb564807cdfc24085528c2cac39",
  "_wperm": [],
  "_rperm": []
}
Push notifications are sent properly through the web UI on the Parse.com dashboard, where I can target a particular audience with the push of a button. The issue seems to lie in my self-hosted Parse Server's communication with APNS. I have configured my push certificates correctly and initialized my Node.js Parse Server with the local .p12 file and application bundle ID.
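For reference, the push section of my Parse Server initialization looks roughly like this (the certificate path, bundle ID, and environment values are placeholders, not my real ones):

// index.js -- rough sketch of the relevant parse-server options.
// All paths, IDs, and keys below are placeholders.
var ParseServer = require('parse-server').ParseServer;

var api = new ParseServer({
  databaseURI: process.env.MONGODB_URI,
  cloud: __dirname + '/cloud/main.js',
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  serverURL: process.env.SERVER_URL,
  push: {
    ios: {
      pfx: __dirname + '/certs/push-cert.p12', // the local .p12 certificate
      bundleId: 'com.example.myapp',           // placeholder bundle ID
      production: true                         // false for a development certificate
    }
  }
});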
Any support is greatly appreciated.
-
Specifications:
Shared MongoDB cluster running Mongo 3.0.12.
Parse Server running 2.2.18 on AWS EB.
I am trying to move my backend API app (a Node.js Express server) from Heroku to AWS Elastic Beanstalk. I had not realized how many features Heroku was providing automatically that I now have to set up manually in AWS.
So here is the list of features I discovered were missing in AWS and the solutions I have implemented.
Could you please let me know if I am missing anything needed to run my APIs smoothly in AWS and get the equivalent of what I had on Heroku?
auto-restart server when it crashes: I am using PM2 to automatically restart my server in case of a critical error
SSL certificate: I am using an AWS ACM certificate
logging: I have installed the Datadog agent in order to receive logs in Datadog
logging response time: I have added the "morgan-body" package to get each request's duration and response code (I had to manually filter out the AWS health checks and search-engine bots, because AWS gave me an IP address that was visited constantly by Baidu bots; see the sketch after this list)
server timeout: I have implemented a 1,200,000 ms (20-minute) timeout on the whole app (any better option?)
auto deploy from GitHub: I have implemented a GitHub automation to deploy code automatically (better options?)
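For reference, the logging and timeout parts of my setup look roughly like the sketch below (the health-check/bot filter is simplified, and I am assuming morgan-body's skip option here):

// app.js -- simplified sketch of the logging and timeout setup described above.
const express = require('express');
// depending on the morgan-body version you may need require('morgan-body').default
const morganBody = require('morgan-body');

const app = express();
app.use(express.json());

// Log each request/response with duration and status code,
// but skip AWS health checks and known crawler traffic.
morganBody(app, {
  skip: (req) =>
    (req.headers['user-agent'] || '').startsWith('ELB-HealthChecker') ||
    /baiduspider/i.test(req.headers['user-agent'] || '')
});

app.get('/health', (req, res) => res.sendStatus(200));

const server = app.listen(process.env.PORT || 3000);

// 20-minute timeout on the whole app (1,200,000 ms), as mentioned above.
server.setTimeout(1200000);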
Am I missing something? This app is already live, so I do not want to put my customers at risk when I move from Heroku to AWS...
Thanks for your help!
I believe you are covered. For comparison, here is what Heroku provides:
Heroku Dynos restart after crashing or raising an error (Heroku Restarting Policy)
SSL certificates are provided for free
logging: Heroku supports various plugins, including Datadog
response time (in milliseconds) is logged automatically
the HTTP request timeout is 30 seconds (it cannot be changed)
deploying from GitHub is possible (by connecting the accounts), and Docker deployment is also supported. Better options? Use GitHub Actions to deploy a new version after a push or a tag.
If you are migrating a production environment, I strongly suggest first setting up a (free) Heroku dyno to test and verify that all your needs are satisfied.
Databricks documentation shows how to get the cluster's hostname, port, HTTP path, and JDBC URL parameters from the JDBC/ODBC tab in the UI.
[Screenshot of the JDBC/ODBC tab from the Databricks docs (source: databricks.com)]
Is there a way to get the same information programmatically? I mean using the Databricks API or Databricks CLI. I am particularly interested in the HTTP path which contains the Workspace Id.
You can use the Get operation of the SQL Analytics REST API (maybe together with List); it returns the JDBC connection string as part of the response (the jdbc_url field):
{
  "id": "123456790abcdef",
  "name": "My SQL endpoint",
  "cluster_size": "Medium",
  "min_num_clusters": 1,
  "max_num_clusters": 10,
  "auto_stop_mins": 30,
  "num_clusters": 5,
  "num_active_sessions": 30,
  "state": "RUNNING",
  "creator_name": "user@example.com",
  "jdbc_url": "jdbc:spark://<databricks-instance>:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/protocolv1/o/0123456790abcdef;",
  "odbc_params": {
    "host": "<databricks-instance>",
    "path": "/sql/protocolv1/o/0/123456790abcdef",
    "protocol": "https",
    "port": 443
  }
}
The HTTP path is also there, as the path field of the odbc_params object.
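For example, a minimal Node.js sketch of the Get call (I am assuming the /api/2.0/sql/endpoints/{id} path used by the SQL Analytics API and a personal access token for authentication; the environment variable names are mine):

// Fetch a SQL endpoint definition and print the JDBC URL and HTTP path.
// DATABRICKS_HOST, DATABRICKS_TOKEN, and ENDPOINT_ID are assumed to be set by you.
const https = require('https');

const host = process.env.DATABRICKS_HOST;    // e.g. adb-1234512345123456.2.azuredatabricks.net
const token = process.env.DATABRICKS_TOKEN;  // personal access token
const endpointId = process.env.ENDPOINT_ID;  // id of the SQL endpoint (from the List operation)

https.get(
  {
    host: host,
    path: '/api/2.0/sql/endpoints/' + endpointId, // assumed Get path of the SQL Analytics API
    headers: { Authorization: 'Bearer ' + token }
  },
  (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => {
      const endpoint = JSON.parse(body);
      console.log('JDBC URL :', endpoint.jdbc_url);
      console.log('HTTP path:', endpoint.odbc_params.path);
    });
  }
);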
Another way is to go to the Databricks console:
Click Compute in the sidebar.
Choose a cluster to connect to.
Navigate to Advanced Options.
Click the JDBC/ODBC tab.
Copy the connection details.
More details here
It's not available directly from the Databricks API, but this is the template for a cluster JDBC connection string:
jdbc:spark://<db-hostname>:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/<workspace-id>/<cluster-id>;AuthMech=3;UID=token;PWD=<personal-access-token>
db-hostname is the hostname from your instance URL
workspace-id is the long number in your hostname (e.g. https://adb-1234512345123456.2.azuredatabricks.net/). It is available as workspaceId in the output of az databricks workspace list, or you can parse it from the hostname
cluster-id is the ID of the cluster you want the connection string for
personal-access-token is the token used for authentication
You already have all of the above, or can get it programmatically, and you can substitute it into the template. It's a bit cumbersome, but that's the best we can do.
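As a rough sketch of the substitution (all four inputs are placeholders you would supply yourself, e.g. from az databricks workspace list and the Clusters API):

// Build the JDBC connection string from the template above.
function buildJdbcUrl(dbHostname, workspaceId, clusterId, personalAccessToken) {
  return 'jdbc:spark://' + dbHostname + ':443/default;transportMode=http;ssl=1;' +
    'httpPath=sql/protocolv1/o/' + workspaceId + '/' + clusterId + ';' +
    'AuthMech=3;UID=token;PWD=' + personalAccessToken;
}

console.log(
  buildJdbcUrl(
    'adb-1234512345123456.2.azuredatabricks.net', // db-hostname
    '1234512345123456',                           // workspace-id
    '0123-456789-abc123',                         // cluster-id (placeholder)
    '<personal-access-token>'                     // token
  )
);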
I am learning how to use Spring Cloud Data Flow. A lot of the tutorials make use of the shell, so I am trying to get that set up. I am able to start the shell, but I get server-unknown. I have been trying to point the shell at my locally running instance of the server (dataflow config server http://localhost:9393), but I keep getting the errors listed below. I am able to navigate to my server and run applications, so I know that it is working on port 9393; I am not sure why the shell cannot see it. I am running version 1.0.0.M3 of the shell.
I have tried the following.
server-unknown:>dataflow config server http://localhost:9393
Unable to contact Data Flow Server at 'http://localhost:9393': 'java.lang.IllegalArgumentException: Deployments relation is required'.
server-unknown:>dataflow config server 'http://localhost:9393'
Unable to contact Data Flow Server at 'http://localhost:9393':
'java.lang.IllegalArgumentException: Deployments relation is required'.
server-unknown:>dataflow config server http://localhost:9393
Unable to contact Data Flow Server at 'http://localhost:9393': 'java.lang.IllegalArgumentException: Deployments relation is required'.
server-unknown:>dataflow config server --uri http://localhost:9393
Unable to contact Data Flow Server at 'http://localhost:9393': 'java.lang.IllegalArgumentException: Deployments relation is required'.
server-unknown:>dataflow config server http://localhost:9393/
Unable to contact Data Flow Server at 'http://localhost:9393/': 'java.lang.IllegalArgumentException: Deployments relation is required'.
server-unknown:>dataflow config server \http://localhost:9393/
Unable to contact Data Flow Server at '\http://localhost:9393/': 'java.lang.IllegalArgumentException: Illegal character in scheme name at index 0: \http://localhost:9393/'.
server-unknown:>dataflow config server https://localhost:9393/
Unable to contact Data Flow Server at 'https://localhost:9393/': 'org.springframework.web.client.ResourceAccessException: I/O error on GET request for "https://localhost:9393/": Unrecognized SSL message, plaintext connection?; nested exception is javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?'.
server-unknown:>dataflow config server --uri http://localhost:9393/ --skip-ssl-validation true
Option 'skip-ssl-validation' is not available for this command. Use tab assist or the "help" command to see the legal options
The 1.0.0.M3 version is at least two years old.
Please upgrade to the latest GA release for both SCDF and the shell applications. You can get the latest release coordinates for both from the getting-started guide.
When I try running these example functions to connect to Cloud Datastore, I get a 401 Invalid Credentials error.
I'm running the Go code from a VM within a Google Cloud project. I have enabled the Datastore API and generated a JSON key, which is loaded by the example code.
This question is very similar and even mentions the same repo, but does not use the same authentication shown in the examples, and was related to a 403 Unauthorized error.
For some reason, the Datastore documentation does not mention Go outside the context of App Engine.
I just deployed a .NET application using SQL Server via Elastic Beanstalk.
It seems like my newly deployed application can't connect to my database. I just followed this video: http://www.youtube.com/watch?v=z-N0z5K_WFI (except that I encountered issues during deployment and had to untick incremental deploy).
I was able to connect to the DB using SQL Server Management Studio. I also tried running the app locally while connecting to the Amazon RDS database, and it worked. After deployment, when checking the site and trying to log in/register, I get this error:
No data received
Unable to load the webpage because the server sent no data.
Here are some suggestions:
Reload this webpage later.
Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
The only thing on my mind right now is that my EC2 instance or application can't connect to the database.
Is this a CIDR issue?
A couple of things to consider:
Is the port for your RDS database instance being blocked?
When you deployed your app, you should have seen a page in the wizard asking if you wanted the EC2 security group for your deployed Elastic Beanstalk instance to be added to the RDS security group for your database instance. You also need to check the relevant RDS security group there.
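If you missed that step, you can open the port afterwards. Here is a rough sketch with the AWS SDK for Node.js (both security group IDs are placeholders, and this assumes your RDS instance uses a VPC security group and SQL Server's default port 1433):

// Allow the Elastic Beanstalk instances' security group to reach
// the RDS security group on SQL Server's default port 1433.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' }); // use your region

ec2.authorizeSecurityGroupIngress(
  {
    GroupId: 'sg-rds-placeholder', // the RDS instance's security group
    IpPermissions: [
      {
        IpProtocol: 'tcp',
        FromPort: 1433,
        ToPort: 1433,
        UserIdGroupPairs: [
          { GroupId: 'sg-eb-instances-placeholder' } // the EB EC2 security group
        ]
      }
    ]
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log('Ingress rule added:', data);
  }
);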
There's also an updated video from last year's AWS re:Invent conference that shows deployment of a SQL Server based app to RDS/Elastic Beanstalk - http://youtu.be/5N352oeYmqE
Hope this helps.