In my LoopBack application, once I create the access token (after login), it remains valid until the application stops. After a restart, the previous access token is no longer accepted. How can I keep previous access tokens valid even after restarting the application?
Your access token is stored by default in LoopBack's in-memory datasource, so it persists only until the application is restarted.
Open server/model-config.json:
"AccessToken": {
  "dataSource": "db",
  "public": false
}
This is the default configuration for access tokens. Note that the dataSource here is db, which is LoopBack's in-memory datastore. You need to store access tokens in a persistent database rather than in memory, such as MongoDB or another storage backend. For example, let's store them in MongoDB.
Assuming you already have MongoDB installed on your system, install the MongoDB connector. In the console, type:
npm install loopback-connector-mongodb
Now configure the server/datasources.json file by adding this entry:
"mongodb": {
  "host": "0.0.0.0",
  "port": 27017,
  "database": "MONGODB DATABASE NAME",
  "password": "MONGODB PASSWORD",
  "name": "MONGODB NAME",
  "connector": "mongodb",
  "user": "YOUR USER NAME"
}
Open server/model-config.json again and change db to mongodb:
"AccessToken": {
  "dataSource": "mongodb",
  "public": false
}
Now run the LoopBack server. Access tokens will persist even after restarting the application.
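As a side note, the same connection settings can also be expressed as a single MongoDB connection URI (the loopback-connector-mongodb connector also accepts a url property as an alternative to individual fields). A minimal sketch of the equivalence, using hypothetical placeholder values:

```python
# Sketch: build the equivalent MongoDB connection URI from the same fields
# used in server/datasources.json (all values below are placeholders).
def mongo_uri(cfg):
    return "mongodb://{user}:{password}@{host}:{port}/{database}".format(**cfg)

config = {
    "host": "localhost",
    "port": 27017,
    "database": "loopback_app",
    "password": "secret",
    "user": "dbuser",
}
print(mongo_uri(config))  # mongodb://dbuser:secret@localhost:27017/loopback_app
```

Either form works; the individual-field form shown above is what the LoopBack generator produces by default.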
Is it possible to write APM data to a file and send that data to the APM server via Logstash or Filebeat?
For security reasons, I cannot reach the APM server directly from my ASP.NET Core application.
This is my APM configuration; the server HTTP address is the only relevant configuration option I can see:
"ElasticApm": {
  "SecretToken": "",
  "ServerUrls": "http://localhost:8200",
  "ServiceName": "projectname",
  "Environment": "development"
}
The Databricks documentation shows how to get the cluster's hostname, port, HTTP path, and JDBC URL parameters from the JDBC/ODBC tab in the UI (screenshot omitted; source: databricks.com).
Is there a way to get the same information programmatically, i.e. using the Databricks API or Databricks CLI? I am particularly interested in the HTTP path, which contains the workspace ID.
You can use the Get operation of the SQL Analytics REST API (possibly together with List). It returns the JDBC connection string as part of the response (the jdbc_url field):
{
  "id": "123456790abcdef",
  "name": "My SQL endpoint",
  "cluster_size": "Medium",
  "min_num_clusters": 1,
  "max_num_clusters": 10,
  "auto_stop_mins": 30,
  "num_clusters": 5,
  "num_active_sessions": 30,
  "state": "RUNNING",
  "creator_name": "user@example.com",
  "jdbc_url": "jdbc:spark://<databricks-instance>:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/protocolv1/o/0123456790abcdef;",
  "odbc_params": {
    "host": "<databricks-instance>",
    "path": "/sql/protocolv1/o/0/123456790abcdef",
    "protocol": "https",
    "port": 443
  }
}
The HTTP path is also there, as the path field of the odbc_params object.
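If you fetch that response programmatically, the HTTP path can also be pulled straight out of the jdbc_url field. A small sketch, parsing the sample response shown above (the REST call itself is omitted):

```python
import re

# Extract the httpPath parameter from a JDBC connection string such as the
# jdbc_url field in the sample response above.
def http_path(jdbc_url):
    match = re.search(r"httpPath=([^;]+)", jdbc_url)
    return match.group(1) if match else None

url = ("jdbc:spark://<databricks-instance>:443/default;transportMode=http;"
       "ssl=1;AuthMech=3;httpPath=/sql/protocolv1/o/0123456790abcdef;")
print(http_path(url))  # /sql/protocolv1/o/0123456790abcdef
```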
Another way is to use the Databricks console:
1. Click Compute in the sidebar.
2. Choose the cluster you want to connect to.
3. Navigate to Advanced Options.
4. Click the JDBC/ODBC tab.
5. Copy the connection details.
More details here
It's not available directly from the Databricks API, but this is the template for a cluster JDBC connection string:
jdbc:spark://<db-hostname>:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/<workspace-id>/<cluster-id>;AuthMech=3;UID=token;PWD=<personal-access-token>
db-hostname is the hostname from your instance URL.
workspace-id is the long number in your hostname (e.g. https://adb-1234512345123456.2.azuredatabricks.net/). It is also available as workspaceId in the output of az databricks workspace list, or you can parse it from the hostname.
cluster-id is the ID of the cluster you want the connection string for.
personal-access-token is the token used for authentication.
All of the above you either already have or can obtain programmatically, so you can substitute the values into the template. It's a bit cumbersome, but that's the best we can do.
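The substitution itself is trivial once the four values are known. A sketch, with hypothetical placeholder values in the example call:

```python
# Fill the JDBC connection-string template from the answer above.
# All argument values in the example call below are placeholders.
TEMPLATE = ("jdbc:spark://{hostname}:443/default;transportMode=http;ssl=1;"
            "httpPath=sql/protocolv1/o/{workspace_id}/{cluster_id};"
            "AuthMech=3;UID=token;PWD={token}")

def jdbc_url(hostname, workspace_id, cluster_id, token):
    return TEMPLATE.format(hostname=hostname, workspace_id=workspace_id,
                           cluster_id=cluster_id, token=token)

print(jdbc_url("adb-1234512345123456.2.azuredatabricks.net",
               "1234512345123456", "0123-456789-abcdefgh",
               "<personal-access-token>"))
```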
I have two pods of Redmine deployed with Kubernetes. Because of this, session management breaks and some users are unable to log in, so I came up with the idea of storing the cache of both pods centrally in a Redis server on Kubernetes.
I am adding the configuration below inside the Redmine pod, in /opt/bitnami/redmine/config/application.rb:
config.cache_store = :redis_store, {
  host: "redis-headless.redis-namespace", # service name of the Redis service
  port: 6379,
  db: 0,
  password: "xyz",
  namespace: "redis-namespace",
  expires_in: 90.minutes
}
But this is not working as expected. Where am I going wrong?
Redmine doesn't store any session data in its cache. Thus, configuring your two Redmines to use the same cache won't help.
By default Redmine stores the user sessions in a signed cookie sent to the user's browser without any server-local session storage. Since the session cookie is signed with a private key, you need to make sure that all installations using the same sessions also use the same application secret (and code and database).
Depending on how you have setup your Redmine, this secret is typically either stored in config/initializers/secret_token.rb or config/secrets.yml (relative to your Redmine installation directory). Make sure that you use the same secret here on both your Redmines.
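For illustration, with a config/secrets.yml based setup the relevant key looks like this (the value below is a placeholder; generate your own and deploy the identical value to both pods):

```yaml
# config/secrets.yml
# The secret must be byte-for-byte identical on every Redmine pod so that
# session cookies signed by one pod validate on the other.
production:
  secret_key_base: "REPLACE_WITH_A_SHARED_SECRET"
```

With the same secret (and the same code and database) on both pods, sessions work without any shared server-side session store.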
I am using CodeDeploy to deploy my code to a server. Three days ago it was working fine, but suddenly it fails to assume the role, although it worked previously.
error:
{
  "Code": "AssumeRoleUnauthorizedAccess",
  "Message": "EC2 cannot assume the role Ec2Codedeploy",
  "LastUpdated": "2017-07-10T06:49:59Z"
}
My trust relationship is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "codedeploy.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
There is also a contradiction between these two documentation pages:
http://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-service-role.html
http://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_iam-ec2.html#troubleshoot_iam-ec2_errors-info-doc
No. 1 says the service should be "codedeploy.amazonaws.com"; no. 2 says the service should be "ec2.amazonaws.com".
The issue persists even after a reboot. Kindly help me with this issue.
It appears that you have a role designed for use by AWS CodeDeploy, but you have assigned it to an Amazon EC2 instance. This is indicated by the error message: EC2 cannot assume the role Ec2Codedeploy
From Create a Service Role for AWS CodeDeploy:
The service role you create for AWS CodeDeploy must be granted the permissions to access the instances to which you will deploy applications. These permissions enable AWS CodeDeploy to read the tags applied to the instances or the Auto Scaling group names associated with the instances.
The permissions you add to the service role specify the operations AWS CodeDeploy can perform when it accesses your Amazon EC2 instances and Auto Scaling groups. To add these permissions, attach an AWS-supplied policy, AWSCodeDeployRole, to the service role.
This is separate from the role that you would assign to your Amazon EC2 instances, which generates credentials that can be used by applications on the instances.
These should be two separate roles with different assigned permissions.
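For reference, the trust policy on the role attached to the EC2 instance must name EC2 itself as the trusted principal, which resolves the apparent contradiction between the two documentation pages: the service role trusts codedeploy.amazonaws.com, while the instance role trusts ec2.amazonaws.com. The standard shape of the instance role's trust policy looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```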
I am currently running Parse Server 2.2.18 on AWS Elastic Beanstalk. I am attempting to use Cloud Code to send push notifications from an afterSave function to a particular channel. However, these notifications never reach any devices, even though they are reported as successful.
This is the _PushStatus entry generated by Parse in my MongoDB database:
{
  "_id": "eEaI2eReAi",
  "pushTime": "2016-09-06T18:29:01.172Z",
  "_created_at": {
    "$date": "2016-09-06T18:29:01.172Z"
  },
  "query": "{\"channels\":{\"$in\":[\"E22\"]}}",
  "payload": "{\"alert\":\"Test Group - Aye.\",\"e\":\"e\",\"badge\":\"Increment\"}",
  "source": "rest",
  "status": "pending",
  "numSent": 0,
  "pushHash": "675bcfb564807cdfc24085528c2cac39",
  "_wperm": [],
  "_rperm": []
}
Push notifications are sent properly through the web UI on the Parse.com dashboard, where I can target a particular audience with the push of a button. The issue lies with my hosted Parse Server and its communication with APNs. I have correctly configured my push certificates and initialized my Node.js Parse Server with the local .p12 file and application bundle.
Any support is greatly appreciated.
Specifications:
Shared MongoDB cluster running Mongo 3.0.12.
Parse Server running 2.2.18 on AWS EB.